Real-Time Audio-Reactive Visuals in Ableton Using Max, OpenGL, and Jitter
I’ve always been fascinated by the idea of creating both sound and visuals together in a seamless way. While separate software is still sometimes necessary, I was very happy to discover that Ableton Live can manipulate images and video — and create audio-reactive visuals for live performances — through Max for Live.
Before we get started...
If you are interested in how to sync video and images in your Ableton file, here is a quick tutorial from accusonus on YouTube.
What is Max for Live?
Max for Live is a system that allows you to build custom plugins for Ableton Live, called “devices”. These devices aren’t necessarily all related to editing or generating animations and video; they can also be instruments, patches, or audio effects.
Any device inside Max for Live can be customized and edited, or you can build your own from scratch — and the majority of community devices are available as free downloads from M4L developers.
Of course, there is a bit of a learning curve to building your own device, but I found the environment intuitive. It is somewhat similar to the workspace of Spark AR, for example, which also uses a “patching” system; for those of you familiar with analog synthesizers, it’s the same concept of patching, except entirely in a digital environment.
Also, Max for Live can completely change how Live interacts with all things external. You can:
- reconfigure connections to hardware controllers and synthesizers,
- route audio from your Live project to multiple sets of speakers,
- use Live to control physical objects like motors and lights via Arduino, OSC, and other Internet of Things technologies,
- process the input signal from external devices such as a webcam or an analog instrument (e.g. a guitar or a pedal).
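To make the OSC routing above a little more concrete, here is a minimal Python sketch of how an OSC 1.0 message is laid out at the byte level before being sent over UDP. The `/motor/speed` address and the values are my own hypothetical examples, not part of any specific device:

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying a single float (OSC 1.0 layout)."""
    def pad(data: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return data + b"\x00" * (4 - len(data) % 4)
    # address string, then the type-tag string ",f", then a big-endian float
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# A hypothetical message telling an Arduino bridge to set a motor's speed
packet = osc_message("/motor/speed", 0.5)

# Sending it over UDP (the usual OSC transport) would look like:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("192.168.0.42", 9000))
```

In practice you would use a ready-made OSC library rather than hand-packing bytes, but seeing the framing helps when debugging what Max’s `udpsend` object actually puts on the wire.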
What is a Max for Live Jitter Patch?
By using Jitter, one can create a basic audio-reactive patch that works within Ableton Live.
From my understanding (please correct me if I’m wrong), a Jitter patch captures the audio signal that normally goes to the left and right channels and analyzes it, while also making a copy of the signal that is sent to a series of OpenGL objects (or functions, if you will), each of which draws shapes on the canvas according to the signals it receives.
For example:
- jit.gl.render is needed to allow OpenGL drawing and rendering; it renders OpenGL objects to the destination window.
- jit.catch~ captures the audio and transforms it into the matrices needed for the visualization.
- jit.gl.gridshape creates predefined shapes such as a sphere, torus, cube, plane, or circle.
- jit.gl.mesh, through its attributes, gives you different draw modes such as polygons, lines, points, triangles, quads, and line loops.
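Max patches aren’t text code, but the data flow described above — audio buffer in, matrix out, geometry parameter driven by loudness — can be sketched in Python as a rough analogy. The reshaping mimics what jit.catch~ does; the `base`/`depth` constants and the scale mapping are my own made-up illustration, not Max’s actual internals:

```python
import math

def audio_to_matrix(samples, rows):
    """Reshape a 1-D audio buffer into a 2-D matrix, roughly the way
    jit.catch~ hands matrices of samples to downstream Jitter objects."""
    cols = len(samples) // rows
    return [samples[r * cols:(r + 1) * cols] for r in range(rows)]

def rms(samples):
    """Root-mean-square amplitude: a simple 'loudness' value for a buffer."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def shape_scale(samples, base=0.5, depth=1.5):
    """Map loudness to a scale parameter, the way a patch might drive a
    gridshape's scale attribute (base/depth are arbitrary constants)."""
    return base + depth * rms(samples)

# A fake 512-sample 440 Hz sine buffer standing in for one audio vector
buffer = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(512)]
matrix = audio_to_matrix(buffer, rows=16)   # a 16 x 32 "jit.matrix"
scale = shape_scale(buffer)                 # louder audio -> bigger shape
```

The point is only the shape of the pipeline: every audio vector becomes a matrix, and a single number derived from it (here RMS) modulates a drawing parameter every frame.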
Each OpenGL object can have another series of objects attached to it (controlling color, density, etc.), used to further generate, render, or transform sounds into visuals.
Max for Live has a few Jitter/video plugins available here that can be customized or extended further, making the visual possibilities vast and interesting.
The challenge is capturing the rendered visuals, since the concept is meant for a live performance setting, not necessarily for recording. But this can easily be overcome by using a secondary monitor, or by screen-recording your visuals and syncing the audio in post-production.
Below is another snippet of visuals I created for one of my songs using Jitter, OpenGL and available devices for M4L.
Klangdefekt – Untergang / Dream of the Endless
Thank you so much for reading. Don’t forget to sign up for the newsletter for random ramblings, behind-the-scenes material, exclusive goodies, and unreleased content — like this song, which will be sent as a free download to all subscribers! You’ll never receive spam, and your data will never be sold.