3 Phase is both an instrument and a performance piece that studies three methods of integrating moving image and music into a cohesive stage presence. Each method is represented by a system consisting of an interface, a computer, a projector, a screen, and an animation. Each of the three systems has its own interactive strategy:
- Sound level and frequency influencing the image
- The performer interacting with the image and thus influencing the sound
- The spectral characteristics of a sound file influencing the image
The first system uses a microphone to listen to the dominant noise characteristics of the room. Data from an FFT frequency analysis changes parameters that affect the appearance of, and the interplay between, a series of still photographs.
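A minimal Processing sketch of this idea is shown below. It assumes the Minim library for audio input and FFT; the band ranges, file names, and the brightness/crossfade mapping are illustrative placeholders, not the piece's actual parameters.

```processing
// System one (sketch): live FFT analysis driving photo blending.
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;
PImage photoA, photoB;

void setup() {
  size(1024, 768);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 1024);
  fft = new FFT(in.bufferSize(), in.sampleRate());
  photoA = loadImage("photoA.jpg");  // hypothetical file names
  photoB = loadImage("photoB.jpg");
}

void draw() {
  background(0);
  fft.forward(in.mix);

  // Sum low and high bands as two rough measures of the room's noise character.
  float low = 0, high = 0;
  for (int i = 0; i < 16; i++)               low  += fft.getBand(i);
  for (int i = 200; i < fft.specSize(); i++) high += fft.getBand(i);

  // Low-band energy sets overall brightness; high-band energy crossfades the photos.
  float brightness = constrain(map(low, 0, 100, 50, 255), 50, 255);
  float blend = constrain(map(high, 0, 50, 0, 255), 0, 255);

  tint(brightness, 255);
  image(photoA, 0, 0, width, height);
  tint(brightness, blend);   // alpha taken from high-band energy
  image(photoB, 0, 0, width, height);
}
```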
The second system is a virtual rotary 8-track sequencer made up of black elements on a white field. The leading edge of an image element triggers a sample, while the length of that element determines how long the sample sounds. The height of the ring of elements determines the gain of the triggered samples. The performer interacts with the sequencer through a standard MIDI control interface: the knobs set the rate of each track in the sequencer and the sliders set the height of the ring.
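The sketch below outlines one ring of such a sequencer in Processing, with MIDI control input and OSC triggers sent to the sound engine. It assumes the themidibus and oscP5 libraries; the CC numbers, device index, OSC address, and port are assumptions, and element length (sample duration) is omitted for brevity.

```processing
// System two (sketch): one 8-step ring of the rotary sequencer.
import themidibus.*;
import oscP5.*;
import netP5.*;

MidiBus midi;
OscP5 osc;
NetAddress soundEngine;

int steps = 8;
boolean[] active = {true, false, true, true, false, true, false, true};
float rate = 0.01;        // revolutions per frame, knob-controlled
float ringHeight = 100;   // element height -> gain, slider-controlled
float phase = 0;
int lastStep = -1;

void setup() {
  size(600, 600);
  midi = new MidiBus(this, 0, 1);                  // assumed device indices
  osc = new OscP5(this, 12000);
  soundEngine = new NetAddress("127.0.0.1", 6449); // assumed sound-engine port
}

void draw() {
  background(255);
  phase = (phase + rate) % 1.0;
  int step = int(phase * steps);

  // Draw the rotating ring of black elements; taller elements mean louder samples.
  translate(width/2, height/2);
  fill(0);
  noStroke();
  for (int i = 0; i < steps; i++) {
    if (!active[i]) continue;
    pushMatrix();
    rotate(TWO_PI * i / steps - TWO_PI * phase);
    rect(-15, -200, 30, ringHeight);
    popMatrix();
  }

  // When an element's leading edge reaches the playhead, trigger its sample over OSC.
  if (step != lastStep && active[step]) {
    OscMessage m = new OscMessage("/seq/trigger");
    m.add(step);
    m.add(map(ringHeight, 20, 200, 0.0, 1.0));     // ring height mapped to gain
    osc.send(m, soundEngine);
  }
  lastStep = step;
}

// MIDI control surface: knobs set the track rate, sliders set the ring height.
void controllerChange(int channel, int number, int value) {
  if (number == 16) rate = map(value, 0, 127, 0.001, 0.05);    // knob CC 16 (assumed)
  if (number == 0)  ringHeight = map(value, 0, 127, 20, 200);  // slider CC 0 (assumed)
}
```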
The third system uses a series of animations derived from the spectral frequency diagrams of a set of samples. The performer triggers the animations with a MIDI keyboard. Once an animation is activated, it plays across the screen for the duration of the sample. Several samples can be layered to create a shifting, ethereal soundscape and animation.
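A sketch of this trigger-and-layer behaviour in Processing follows. The note-to-sample mapping, sample durations, image files, and OSC address are assumptions for illustration; the actual spectral diagrams would be pre-rendered from the sample set.

```processing
// System three (sketch): MIDI keys start spectrogram animations and
// tell the sound engine to play the matching samples.
import themidibus.*;
import oscP5.*;
import netP5.*;

MidiBus midi;
OscP5 osc;
NetAddress soundEngine;

PImage[] spectra = new PImage[4];          // pre-rendered spectral diagrams
float[] durations = {4.0, 6.5, 8.0, 5.0};  // sample lengths in seconds (assumed)
float[] startTimes = {-1, -1, -1, -1};     // -1 = animation not active

void setup() {
  size(1024, 768);
  midi = new MidiBus(this, 0, 1);
  osc = new OscP5(this, 12000);
  soundEngine = new NetAddress("127.0.0.1", 6449);
  for (int i = 0; i < spectra.length; i++) {
    spectra[i] = loadImage("spectrum" + i + ".png");   // hypothetical files
  }
}

void draw() {
  background(0);
  float now = millis() / 1000.0;
  // Layer every active animation, scrolling each across the screen
  // for exactly the duration of its sample.
  for (int i = 0; i < spectra.length; i++) {
    if (startTimes[i] < 0) continue;
    float t = (now - startTimes[i]) / durations[i];
    if (t > 1) { startTimes[i] = -1; continue; }     // sample finished
    float x = lerp(-spectra[i].width, width, t);
    tint(255, 180);                                  // translucent layering
    image(spectra[i], x, i * height / spectra.length);
  }
}

// MIDI keyboard: each key maps to one sample and its animation.
void noteOn(int channel, int pitch, int velocity) {
  int i = pitch % spectra.length;
  startTimes[i] = millis() / 1000.0;
  OscMessage m = new OscMessage("/phase3/play");
  m.add(i);
  m.add(velocity / 127.0f);
  osc.send(m, soundEngine);
}
```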
The systems are programmed in Processing, ChucK and Isadora, and the programs communicate with one another over OpenSoundControl (OSC).
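For completeness, the receiving end of that link looks roughly like the Processing listener below. The port and address pattern are placeholders matching the earlier sketches, not the piece's actual OSC scheme.

```processing
// Receiving side of the OSC link (sketch): listen for trigger messages.
import oscP5.*;
import netP5.*;

OscP5 osc;

void setup() {
  size(400, 400);
  osc = new OscP5(this, 6449);   // listen on an agreed port (assumed)
}

void draw() {
  background(0);
}

// oscP5 calls this for every incoming message.
void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/seq/trigger") && m.checkTypetag("if")) {
    int track = m.get(0).intValue();
    float gain = m.get(1).floatValue();
    println("trigger track " + track + " at gain " + gain);
  }
}
```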
Source Code