This is a thing I made in Max/MSP/Jitter for an assignment on the theme of translation. The original program took a webcam feed of someone standing in front of a black curtain, scanned the image from left to right, converted the image data into sound in real time, and projected the scanning line onto the person's body. The patch extracts four pieces of data from each pixel and assigns them to sound parameters: hue translates to panning, saturation to volume, lightness to frequency, and the pixel's y-position to an index number in the bank of oscillators. In other words, each pixel in a given column sounds an oscillator whose character depends on the pixel's hue, saturation, lightness, and vertical position.
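The Max patch itself isn't shown here, but the column-to-oscillator mapping is simple enough to sketch outside Max. Here's a minimal Python illustration of that mapping; the 80–2000 Hz frequency range and the linear scalings are my assumptions for the sketch, not values taken from the patch:

```python
import colorsys

def column_to_params(column_rgb):
    """Map one column of RGB pixels (values in 0..1) to per-oscillator
    parameters: hue -> pan, saturation -> volume, lightness -> frequency,
    row -> oscillator index, as described above."""
    params = []
    for y, (r, g, b) in enumerate(column_rgb):
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        params.append({
            "osc_index": y,        # row number selects the oscillator in the bank
            "pan": h,              # hue: 0.0 = hard left, 1.0 = hard right
            "volume": s,           # saturation: grey pixels fall silent
            "freq_hz": 80.0 + l * (2000.0 - 80.0),  # lightness scaled into an assumed audible range
        })
    return params

# Example: a short column of three pixels.
column = [
    (1.0, 0.0, 0.0),  # pure red
    (0.5, 0.5, 0.5),  # grey -> zero saturation -> silent
    (0.0, 0.0, 1.0),  # pure blue
]
for p in column_to_params(column):
    print(p)
```

In the actual installation this mapping would run once per scan position, feeding each row's parameters to the corresponding oscillator as the line sweeps across the frame.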
I used this thing along with some feedback samples and a drum machine to make this composition: