The system combined GPU-driven particle fields, custom GLSL shaders, depth-augmented footage, and layered video feedback to translate the track's dynamics into evolving visual structures. Iterating live on set, I adjusted turbulence, distortion, and palette in real time against different edits of the song, keeping the motion language tightly aligned with the band's vision.
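
To give a sense of how the feedback layer worked, here is a minimal GLSL sketch of a feedback pass, assuming a standard ping-pong FBO setup. It is illustrative rather than the production shader: the uniform names (`uPrevFrame`, `uVideo`, `uTurbulence`, `uDistortion`, `uPaletteShift`) and all constants are assumptions, but the structure (warped re-sampling of the previous frame, blended with fresh footage, then a palette remap) is the general technique the text describes.

```glsl
#version 330 core

// Sketch of a feedback pass, assuming a ping-pong FBO where uPrevFrame
// holds last frame's output and uVideo holds the incoming footage.
// Uniform names and constants are illustrative, not the project's
// actual interface.

uniform sampler2D uPrevFrame;   // previous frame of the feedback chain
uniform sampler2D uVideo;       // current (depth-augmented) footage
uniform float uTime;            // elapsed seconds, drives the turbulence
uniform float uTurbulence;      // live-controlled noise strength
uniform float uDistortion;      // live-controlled feedback warp amount
uniform vec3  uPaletteShift;    // live-controlled per-channel phase

in vec2 vUv;
out vec4 fragColor;

// Cheap value noise; a real build would likely use simplex noise.
float hash(vec2 p) {
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

float noise(vec2 p) {
    vec2 i = floor(p);
    vec2 f = fract(p);
    f = f * f * (3.0 - 2.0 * f);  // smoothstep interpolation
    return mix(mix(hash(i),              hash(i + vec2(1, 0)), f.x),
               mix(hash(i + vec2(0, 1)), hash(i + vec2(1, 1)), f.x),
               f.y);
}

// Cosine palette: maps a scalar to a colour ramp whose phase
// can be shifted live per channel.
vec3 palette(float t) {
    return 0.5 + 0.5 * cos(6.28318 * (t + uPaletteShift));
}

void main() {
    // Warp the lookup into the previous frame: a slight zoom toward
    // the centre plus a noise-driven offset produces the trailing,
    // smearing feedback structures.
    vec2 centered = vUv - 0.5;
    vec2 warp = centered * (1.0 - 0.01 * uDistortion);
    float n = noise(vUv * 6.0 + uTime * 0.3);
    warp += uTurbulence * 0.01 * vec2(cos(n * 6.28318), sin(n * 6.28318));

    vec3 prev  = texture(uPrevFrame, warp + 0.5).rgb;
    vec3 video = texture(uVideo, vUv).rgb;

    // Blend fresh footage over decaying feedback; the decay constant
    // controls trail length and would itself be a live parameter.
    vec3 color = mix(prev * 0.97, video, 0.15);

    // Remap luminance through the palette so colour tracks intensity.
    float lum = dot(color, vec3(0.299, 0.587, 0.114));
    fragColor = vec4(mix(color, palette(lum), 0.35), 1.0);
}
```

Exposing turbulence, distortion, and palette phase as uniforms is what makes the on-set workflow possible: each parameter can be bound to an external controller and adjusted in real time against the edit, without recompiling the shader.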