though it may feel real this is all just a memory
programmatic art with musical data input
In collaboration with Molly Trueman and Alexander Moravscik
Have you ever listened to a live music performance and relived it in your dreams? Over and over, until sounds blend together, until all that you’re left with is a faint memory?
Welcome to that memory.
The intent of this project is to transform sound into visuals and atmosphere, using music production techniques and creative coding. Our aim is to combine poetry, acoustic guitar performance, digital synthesis, and coding in a way that transforms live music into a complete audiovisual experience.
Live music is processed in Ableton Live, and data is extracted from the sound using Max MSP. A connected MIDI controller can also be manipulated directly to further process both the sound and the visuals. Data from Max MSP and the MIDI controller is sent to a p5.js sketch that transforms these numbers into visuals. The p5.js sketch, which forms the visual side of the experience, is separated into five mini-sketches that the user can switch between using the arrow keys.
From the sound, we extract five kinds of data:
- Estimated fundamental pitch
- Peak amplitude
- Chromatic note
- Pitch deviation in cents
- Onset detection
Each of these data sources is used in different ways throughout the five mini-sketches, controlling properties such as color, shape size, and the speed of moving objects.
We also adjust the visuals in response to four parameters, controlled by knobs on the connected MIDI device:
- Auto-pan (speed & amount)
- Soft bit reduction & high-pass filter
- Reverb decay time (inverted)
- Resonator (tonic pitch, +7, +12)
These processes also adjust visuals like speed, color, size, and opacity of objects.
You can view the technical demo here.