This is a report / collection of information regarding the first ever JUCE Summit, 19-20 November 2015 at The Proud Archivist in Hoxton, London. I was invited to give a talk, with the title 'Developing multitrack audio effect plugins for music production research'. See below for slides and more info, chronologically where possible. 

Besides my own Twitter feed, see also the @JUCELibrary feed and @AbletonDev's feed, and the #JUCESummit hashtag stream.

Videos of the talks, workshops and keynotes can be found here. 
The workshops and morning talks were on two separate tracks in two rooms, so there are two workshops and six talks I haven't seen and can't comment on. 

Speakers and others are very welcome to send me more info and links to resources, and notify of any errors I may have made. 

Day 1 - Thursday 19 November

8:00 AM: Coffee and breakfast, Registration
9:00 AM: Welcome address

9:15 AM: Developing Audio Applications with JUCE (Fabian Renn-Giles and Timur Doumler) /  Developing Graphical User Interfaces with JUCE (Julian Storer)
11:30 AM: Using the FAUST DSP language and the libfaust JIT compiler in JUCE (Oli Larkin) / Mixing it up: Audio plugin development in C++ and Lua (Costas Calamvokis)

12:00 PM: Web-based Remote User Interfaces for MIDI Devices (Zoltán Jánosy) / Developing multitrack audio effect plugins for music production research (Brecht De Man)
Multitrack digital audio effects (Digital Audio Workstation plugins with more than two inputs and/or outputs) are a class of signal processing tools that provide significant opportunities for researchers and artists, as they allow audio features from one track to affect the processing of others in real time, within an otherwise conventional digital production environment. However, developing such plugins comes with many interesting challenges, not least because most DAWs do not natively support them. This presentation focuses on both the possibilities of multitrack plugins and the obstacles encountered when developing them, with JUCE in particular.
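As a minimal illustration of the cross-adaptive idea described in the abstract, here is a sketch of my own (not code from the talk): the level of a 'side-chain' track controls the gain applied to another track, which is the essence of a multitrack effect. The plain std::vector buffers stand in for a real plugin's audio buffers.

```cpp
#include <cmath>
#include <vector>

// Extract a simple audio feature: the RMS level of a block of samples.
float rmsLevel(const std::vector<float>& block)
{
    if (block.empty())
        return 0.0f;
    float sum = 0.0f;
    for (float s : block)
        sum += s * s;
    return std::sqrt(sum / block.size());
}

// Cross-adaptive "ducking": attenuate 'target' in place whenever the
// RMS level of 'sideChain' (a different track) exceeds a threshold.
void duck(std::vector<float>& target, const std::vector<float>& sideChain,
          float threshold = 0.1f, float reduction = 0.5f)
{
    if (rmsLevel(sideChain) > threshold)
        for (float& s : target)
            s *= reduction;
}
```

In a real multitrack plugin the side-chain buffer would come from another track's input bus, which is exactly where the DAW-support obstacles mentioned above come in.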

Some resources referenced in the talk: 
Download the handouts here
(in order of appearance)
  • S. Mansbridge, S. Finn, and J. D. Reiss, “Implementation and evaluation of autonomous multi-track fader control,” in 132nd Convention of the Audio Engineering Society, April 2012.
  • E. Perez-Gonzalez and J. D. Reiss, “Automatic gain and fader control for live mixing,” IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, October 2009.
  • E. Perez-Gonzalez and J. D. Reiss, “Automatic equalization of multi-channel audio using cross-adaptive methods,” 127th Convention of the Audio Engineering Society, October 2009.
  • S. Hafezi and J. D. Reiss, “Autonomous multitrack equalisation based on masking reduction,” to appear in Journal of the Audio Engineering Society, 2015.
  • A. Clifford and J. D. Reiss, “Calculating time delays of multiple active sources in live sound,” in 129th Convention of the Audio Engineering Society, 2010.
  • A. Clifford and J. D. Reiss, “Reducing comb filtering on different musical instruments using time delay estimation,” Journal of the Art of Record Production, no. 5, July 2011.
  • N. Jillings, A. Clifford, and J. D. Reiss, “Performance optimization of GCC-PHAT for delay and polarity correction under real world conditions,” in 134th Convention of the Audio Engineering Society, 2013.
  • Z. Ma, B. De Man, P. D. Pestana, D. A. A. Black, and J. D. Reiss, “Intelligent multitrack dynamic range compression,” Journal of the Audio Engineering Society, vol. 63, pp. 412–426, June 2015.
  • T. Wilmering, G. Fazekas, and M. B. Sandler, “High-level semantic metadata for the control of multitrack adaptive digital audio effects,” in 133rd Convention of the Audio Engineering Society, October 2012.
  • A. McPherson, “TouchKeys: Capacitive multi-touch sensing on a physical keyboard,” in Proc. NIME, 2012.
  • R. Stables, S. Enderby, B. De Man, G. Fazekas, and J. D. Reiss, “SAFE: A system for the extraction and retrieval of semantic audio descriptors,” in 15th International Society for Music Information Retrieval Conference (ISMIR 2014), October 2014.
  • J. D. Reiss and A. McPherson, Audio Effects: Theory, Implementation and Application. CRC Press, 2015.
  • S. Mansbridge, S. Finn, and J. D. Reiss, “An autonomous system for multi-track stereo pan positioning,” in 133rd Convention of the Audio Engineering Society, October 2012.
  • E. Perez-Gonzalez and J. D. Reiss, “Automatic mixing: Live downmixing stereo panner,” in 10th International Conference on Digital Audio Effects (DAFx-07), 2007.
  • E. Perez-Gonzalez and J. D. Reiss, “A real-time semiautonomous audio panning system for music mixing,” EURASIP Journal on Advances in Signal Processing, 2010.
12:30 PM: MIDI control distribution (Richard Foss) / Porting and Expanding the JUCE Host into a Workflow Enhancing Audio Plugin (Thomas Klebanoff)

1:00 PM: Lunch break

2:00 PM: Keynote: Obsessive Coding Disorder (Julian Storer)
         "Only an idiot would change code that works perfectly well. So I'll be that idiot."
        Jules took apart zlib and showed many examples of inelegant code and crimes against best practice. Interestingly, the code didn't get faster, but it certainly ended up more readable, elegant, and shorter. 
        On the topic of making his improvements public, he wondered at which point completely transformed code is no longer 'plagiarised'. Zlib is freely available and its license is fairly liberal. 
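To give a flavour of the kind of clean-up discussed (a generic before/after example of my own, not one of Jules's actual zlib changes):

```cpp
#include <algorithm>
#include <cstddef>

// Before: manual loop with index bookkeeping, in old C style.
size_t countZeroBytesOldStyle(const unsigned char* data, size_t len)
{
    size_t n = 0, i;
    for (i = 0; i < len; ++i)
        if (data[i] == 0)
            ++n;
    return n;
}

// After: identical behaviour (and typically identical speed), but
// shorter and clearer using the standard library.
size_t countZeroBytes(const unsigned char* data, size_t len)
{
    return static_cast<size_t>(std::count(data, data + len, 0));
}
```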
3:00 PM: JUCE for Education C++ and Audio Development (Martin Robinson)
        Martin is the author of 'Getting Started with JUCE' (Twitter feed)
3:30 PM: Squeezing JUCE out of your laptop, and into the Real World (Ryan McGill)
        Presentation (Github)
4:00 PM: Combining the JUCE audio engine with a Xamarin/CocosSharp game UI on iOS and Android (Leo Olivers)
        Sample source code on GitHub
4:30 PM: The SAFE JUCE module: A System for Managing Music Production Metadata (Sean Enderby)
Sean discussed the SAFE Project, a collaboration between researchers at Queen Mary University of London's Centre for Digital Music and Birmingham City University's Digital Media Technology Lab, including Sean and me. Specifically, his talk focussed on the SAFE JUCE class (downloadable here), which allows a DAW plugin to collect data on incoming and outgoing audio features and parameter settings, along with metadata about genre, instrument, user background and experience, and more. 
The plugins can be downloaded for free here. Please play around with them, we need more data! 
The presentation featured a video demonstrating the use of the plugins, which was very well received by the audience. 
There is also another introductory video. 
And while we await the recording of this talk, Sean gave a similar talk at the AES Midlands event on Intelligent Music Production. 
More about SAFE next week at DAFx! 
5:15 PM: Guest talk: Becoming a Better (Audio) Programmer (Pete Goodliffe)
        Spectacularly high-paced talk on how to be a better, more productive coder. It was filled with genius advice like 'Optimise the right thing', 'Less code, more software' (source), 'Improve code by removing it' and so on, but it was so fast it was impossible to take notes. I'm sure it inspired me though. 
        Pete on Blogspot: 'Speaking: JUCE Summit'
        Pete on Twitter

6:00 PM: Banquet at ROLI
        ROLI, the company that has owned JUCE for over a year and famously employs two full-time chefs to cater its vegetarian lunches, hosted us in their offices not far from the conference venue, where those very chefs treated speakers and other delegates to an amazing, seven-course vegetarian dinner. Most noteworthy: the first item on the menu was 'Gin 'n JUCE', with grapefruit juice (Grapefruit being the name of the latest major JUCE release, JUCE 4). 

Day 2 - Friday 20 November

8:00 AM: Coffee and breakfast
9:00 AM: Opening Day 2 (Julian Storer and Jean-Baptiste Thiebaut)

9:15 AM: Developing Android Apps with High Performance Audio (Google's Don Turner, Phil Burk, Ian Ni-Lewis, Glenn Kasten and JUCE's Fabian Renn-Giles) / Working with the Projucer on Mac (Timur Doumler and Joshua Gerrard)
        The Projucer's most impressive feature is its JIT compilation ('live coding'), which makes it possible to tweak parameters in real time while you watch the interface or hear the audio change. See here for demo examples. You can get the Projucer (the sequel to the Introjucer, which remains available) from the JUCE website, but its JIT capabilities expire after a while unless you subscribe or buy a license. 

11:30 AM: Mobile development with JUCE and native APIs (Adam Wilson) / C++ IDE to make you more productive – myth or reality? (Anastasia Kazakova)
        Download Adam's slides from Slideshare

12:00 PM: Using C++11 to Improve Code Clarity: Braced Initialisers (David Rowland) /  Integrating Juce-based GUI in Max/MSP, OpenMusic or other computer music environments. (Thibaut Carpentier)
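As a minimal illustration of David's topic (an example of my own, not taken from the talk):

```cpp
#include <vector>

struct Note { int midiNumber; float velocity; };

// C++11 braced initialisers: aggregates and containers can be filled in
// one expression, narrowing conversions are rejected at compile time,
// and the initialised state is explicit at the call site.
std::vector<Note> makeChord()
{
    return { { 60, 0.8f }, { 64, 0.8f }, { 67, 0.8f } }; // C major triad
}
```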

12:30 PM: A FM Synth in Iterations (Casper Ravenhorst) / Introduction to Native Instrument's Native Kontrol Standard (NKS) (Tim Adnitt and Tobias Baumbach)
        An introduction to the Native Kontrol Standard, an extension of the VST standard that enables integration with Native Instruments' controller keyboards. 

        Code of the FM synthesiser developed by Casper Ravenhorst can be found on his Bitbucket page
1:00 PM: Lunch break

2:00 PM: Keynote: Developing Max/MSP with JUCE 
(David Zicarelli)
        Rare insights into the history of Max/MSP from the CEO of Cycling '74, which was one of the 'early adopters' of JUCE. 
3:00 PM: C++ in the Audio Industry, Episode II: Floating Atomics (Timur Doumler)
        JUCE's Timur Doumler delivered a sequel to his CppCon 2015 talk, this time on the use of atomic types, and of atomic floats in particular. 
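In short (my own summary, not Timur's slides; the GainParameter struct is a made-up example): a GUI thread writes a parameter while the real-time audio thread reads it, and a plain float shared that way is a data race in C++. An atomic float gives well-defined, typically lock-free access without blocking the audio thread.

```cpp
#include <atomic>

// A parameter shared between a GUI thread (writer) and the audio
// thread (reader). std::atomic<float> makes the accesses race-free;
// relaxed ordering suffices when no other data is synchronised by it.
struct GainParameter
{
    std::atomic<float> value { 1.0f };

    void setFromGui(float newGain)  { value.store(newGain, std::memory_order_relaxed); }
    float readOnAudioThread() const { return value.load(std::memory_order_relaxed); }
};
```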
3:30 PM: Maximilian Library (Mick Grierson)
        Mick started his presentation by saying it might be 'shambolic' (and that as an academic, that was allowed and even expected), and boy was he right. I don't know how many examples he demonstrated and coded live, but there were a LOT. 
        Get Maximilian from his GitHub page. 
        Another interesting project he touched on briefly is TextCircle, a website that allows 'live coding together', if I understood correctly. 
4:00 PM: Virtual Analog Audio Effects Simulation with JUCE (Ivan Cohen)
        The most impressive aspect of Ivan's presentation was that it was written using JUCE - i.e. it was not a Microsoft PowerPoint, LaTeX Beamer or Prezi presentation, but an actual 'app' that read from a text file and even included a music player with built-in audio effects that allowed a demonstration of his work within the presentation. 
        See also Ivan's website and Twitter feed
4:30 PM: Evolution of Audio Plugins and Best Usability Practices (Gebre Waddell)
        Gebre gave his view on the history and future of audio plugins, from his perspective as a mastering engineer and plugin developer. For this, he interviewed renowned mixing and mastering engineers at the 139th Convention of the Audio Engineering Society earlier this month in New York (see my previous post). 
        He also visited the Centre for Digital Music prior to the Summit, and gave a shoutout to the SoundSoftware projects page for its many interesting pieces of code. 
5:15 PM: Guest talk: Creative Coding in C++ (Andrew Bell)
        Andrew comes from a primarily visual background, so not so much an audio coder, which made his talk all the more interesting, as he talked about his Cinder library (GitHub), perhaps the equivalent of JUCE in the parallel creative coding / visuals realm. Many cool examples of projects for multinational corporations and nonprofits. 
        With regards to the modelling of 'groups of organisms' (e.g. a CGI school of fish), he referred to this paper (essentially: attraction at large distances, repulsion when too close, and neither in between). 
        Andrew's Twitter feed
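The attraction/repulsion rule he described can be sketched as follows (a toy version of my own, not the paper's actual model):

```cpp
// Force between two organisms as a function of their distance:
// repulsion when too close, attraction when far apart, neutral between.
float pairwiseForce(float distance, float nearDist = 1.0f, float farDist = 5.0f)
{
    if (distance < nearDist) return -1.0f; // too close: repel
    if (distance > farDist)  return  1.0f; // far away: attract
    return 0.0f;                           // comfortable: no force
}
```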
6:00 PM: Q&A Session with the JUCE Team
        The most popular topic was the new licensing model, which among other things means:
  1. One license per developer, not per company
  2. A license each for 'desktop' (Windows, OS X and Linux development), 'iOS' and 'Android' - i.e. if you're developing for multiple platforms, you may need up to three licenses. 
  3. The Projucer is not freely available, even for personal or academic use, except for a grace period. 

More information on the JUCE website
7:00 PM: Closing and drinks


Following this very enjoyable and inspiring conference, with talks and food invariably of very high quality, I spent part of this Sunday afternoon updating the code for the book 'Audio Effects: Theory, Implementation and Application' by Joshua D. Reiss and Andrew McPherson, for which I had initially refactored and/or written the JUCE code examples. 
I mainly replaced some deprecated functions (thanks Dr Rob Toulson for flagging this on Friday), homogenised the default dependency paths, and removed some redundant lines like unused variables. 
I'm sure there's more scope for improvement, but it doesn't currently feature in my list of priorities - I did want it to at least run straight out of the box, though. 

Download the audio effects examples here, on the SoundSoftware page.