Mar 03, 2013
The minimum viable aegis concept must include both a visual and a sonic component that react to movement in the area beneath a pyramidal object suspended from a ceiling at a height of 8-10 feet.
The following constraints must also be met:
- There must be a “sleep mode” during which the object’s audio is muted and visual stimulus is minimized. Sleep mode disengages when activity is detected within the sensor array’s range; a minimal sketch of this behavior follows this list.
- All electronic components must return to function after a power-down cycle with no human intervention or “setup”.
- The object enclosure must allow easy access for maintenance.
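As a rough illustration of the sleep-mode constraint, here is a minimal timeout state machine in Arduino-style C++ for the Teensy. The activity threshold and the five-minute idle timeout are illustrative guesses, not part of the spec, and in the real system the audio mute would actually be carried out by the PC rather than the microcontroller.

```cpp
// Sketch of sleep mode as a timeout state machine. Threshold and
// timeout values are placeholders, not spec. Only one sensor is
// shown; the real device would consider the whole array.

const unsigned long IDLE_TIMEOUT_MS = 5UL * 60UL * 1000UL; // 5 min (a guess)
const int ACTIVITY_THRESHOLD = 8; // minimum reading change that counts

unsigned long lastActivityMs = 0;
bool asleep = false;
int lastReading = 0;

void updateSleepState(int reading) {
  if (abs(reading - lastReading) > ACTIVITY_THRESHOLD) {
    lastActivityMs = millis();
    if (asleep) {
      asleep = false;
      // wake: restore audio volume and LED output here
    }
  } else if (!asleep && millis() - lastActivityMs > IDLE_TIMEOUT_MS) {
    asleep = true;
    // sleep: mute audio, minimize LED output here
  }
  lastReading = reading;
}

void setup() {}

void loop() {
  updateSleepState(analogRead(A0)); // one rangefinder, for brevity
  delay(50);
}
```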
The current prototype of the aegis project consists of the following components:
- Sensor array. A matrix of MaxSonar-EZ1 ultrasonic rangefinders will be connected to the analog inputs of a Teensy 2.0 microcontroller. The readings from the sensor array will be calibrated and translated into MIDI continuous controller data, and the Teensy will be programmed to present itself to the host operating system as a generic USB MIDI device (see the first sketch after this list).
- Data transformation layer. The MIDI controller information from the sensor array is sent to a mini-ITX PC motherboard running a Linux host operating system. Incoming controller information will be analyzed over a window of samples to extract gestural information such as direction and speed of motion (a sketch of this analysis follows the list).
- Audio manipulation engine. All control data will be passed into a PureData patch running on the Linux OS. This patch will use a database of small segments of prerecorded multitrack audio, multiple audio effects, and synths in tandem with incoming control data to create a dynamic audio mix in quadraphonic sound.
- LED output. A cluster of LEDs will be connected to the Teensy 2.0. Controller data used to set the LED colors will be passed back from PureData to the Teensy 2.0 over the same USB MIDI interface (see the LED sketch after this list).
- Enclosure. The four components listed above will be enclosed in a 24″ x 24″ 4-sided pyramidal structure. Symbols are cut into the top portion of the side triangles to provide airflow for the computer and to let light from the LED cluster reflect off the ceiling.
- Sound output. Four individually amplified speakers will be placed on the ceiling, facing downward, in alignment with the corners of the enclosure.
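A minimal sketch of the sensor-array firmware, assuming Teensyduino with the USB type set to MIDI. The pin assignments, CC numbers, and four-sensor count are placeholders rather than final hardware decisions; `usbMIDI.sendControlChange()` is the stock Teensy USB MIDI call.

```cpp
// Sketch for Teensy 2.0 (USB Type set to "MIDI" in Teensyduino).
// Reads each MaxSonar-EZ1 analog output and sends it as a MIDI CC.
// Pins and CC numbers below are placeholder assumptions.

const int NUM_SENSORS = 4;
const int sensorPins[NUM_SENSORS] = {A0, A1, A2, A3}; // EZ1 AN outputs
const int ccNumbers[NUM_SENSORS]  = {20, 21, 22, 23}; // arbitrary CC choices
const int MIDI_CHANNEL = 1;

int lastValue[NUM_SENSORS];

void setup() {
  for (int i = 0; i < NUM_SENSORS; i++) {
    lastValue[i] = -1; // force an initial send
  }
}

void loop() {
  for (int i = 0; i < NUM_SENSORS; i++) {
    // 10-bit ADC reading (0-1023) scaled down to the 7-bit MIDI range.
    int value = analogRead(sensorPins[i]) >> 3;
    if (value != lastValue[i]) { // send only on change to keep the stream sparse
      usbMIDI.sendControlChange(ccNumbers[i], value, MIDI_CHANNEL);
      lastValue[i] = value;
    }
  }
  delay(50); // roughly match the EZ1's ~20 Hz reading cycle
}
```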
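On the Linux side, the gestural analysis might look something like the following ring-buffer tracker: smooth each CC stream with a moving average and estimate its rate of change across the window. The window size, the sample values, and the omission of real MIDI I/O (e.g. via ALSA) are all simplifications for illustration.

```cpp
// Per-controller gesture analysis sketch: smooth incoming CC values
// and estimate velocity over the last few samples. Window size is an
// untuned guess; MIDI input plumbing is omitted.
#include <array>
#include <cstdio>

class GestureTracker {
  static const int WINDOW = 8;       // samples considered per gesture
  std::array<int, WINDOW> history{}; // ring buffer of raw CC values
  int head = 0, count = 0;

public:
  void addSample(int ccValue) {
    history[head] = ccValue;
    head = (head + 1) % WINDOW;
    if (count < WINDOW) count++;
  }

  // Moving average smooths out sensor jitter.
  double smoothed() const {
    if (count == 0) return 0.0;
    long sum = 0;
    for (int i = 0; i < count; i++) sum += history[i];
    return double(sum) / count;
  }

  // Net change across the window: sign gives direction of motion,
  // magnitude gives speed (in CC units per window).
  int velocity() const {
    if (count < 2) return 0;
    int oldest = history[(head - count + WINDOW) % WINDOW];
    int newest = history[(head - 1 + WINDOW) % WINDOW];
    return newest - oldest;
  }
};

int main() {
  GestureTracker tracker;
  // Stand-in for a CC stream from one sensor: a hand moving
  // steadily closer to it.
  for (int v : {90, 84, 77, 70, 61, 55, 48, 40}) {
    tracker.addSample(v);
  }
  std::printf("smoothed=%.1f velocity=%d\n",
              tracker.smoothed(), tracker.velocity());
  return 0;
}
```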
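The LED side of the same Teensy could be handled with a receive callback, as in this sketch. Pin choices and CC numbers are again placeholders and would have to match whatever the PureData patch actually emits; `usbMIDI.setHandleControlChange()` and `usbMIDI.read()` are the stock Teensy receive API.

```cpp
// Companion sketch for the LED output: PureData sends CC messages back
// over USB MIDI, and each CC drives one PWM channel of the LED cluster.
// Pins and CC numbers are placeholder assumptions.

const int redPin = 9, greenPin = 10, bluePin = 14; // Teensy 2.0 PWM pins

void onControlChange(byte channel, byte control, byte value) {
  int brightness = value * 2; // 7-bit MIDI (0-127) to 8-bit PWM (0-254)
  switch (control) {
    case 30: analogWrite(redPin, brightness);   break;
    case 31: analogWrite(greenPin, brightness); break;
    case 32: analogWrite(bluePin, brightness);  break;
  }
}

void setup() {
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
  usbMIDI.setHandleControlChange(onControlChange);
}

void loop() {
  usbMIDI.read(); // dispatch incoming MIDI to the handler above
}
```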
There are still a number of unknowns about the feasibility of these component systems. Questions to be answered:
- Can the ultrasonic sensors be run simultaneously without cross-sensor interference? (One candidate mitigation is sketched after this list.)
- What is the effective range of the ultrasonic sensors in practice?
- Is the latency of the ultrasound sensors acceptable?
- Do the gestural mathematics make the device control too complex?
- Is the computational load of the gestural math plus the audio manipulation too high for the existing equipment?
- Is multitrack audio too cumbersome to work with in PureData/Linux given the scope/deadline of the project?
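On the first question, one candidate mitigation (sketched below, under assumed pin assignments) is to stop the EZ1s from free-running and instead strobe their RX (range-enable) pins one at a time, so only one sensor is pinging at any moment. The trade-off feeds directly into the latency question: a four-sensor round robin at the EZ1's roughly 50 ms ranging cycle means a full scan takes on the order of 200 ms.

```cpp
// Round-robin triggering sketch: hold each EZ1's RX pin low to keep it
// idle, then pulse it high to command a single range reading. Pin
// numbers are placeholders; timing follows the EZ1 datasheet's ~50 ms
// ranging cycle.

const int NUM_SENSORS = 4;
const int rxPins[NUM_SENSORS] = {0, 1, 2, 3};  // EZ1 RX (range-enable) pins
const int anPins[NUM_SENSORS] = {A0, A1, A2, A3};

void setup() {
  for (int i = 0; i < NUM_SENSORS; i++) {
    pinMode(rxPins[i], OUTPUT);
    digitalWrite(rxPins[i], LOW); // LOW holds the sensor idle
  }
}

int readSensor(int i) {
  digitalWrite(rxPins[i], HIGH);  // >= 20 us high commands one range cycle
  delayMicroseconds(25);
  digitalWrite(rxPins[i], LOW);
  delay(50);                      // wait out the ranging cycle
  return analogRead(anPins[i]);   // AN output holds the latest reading
}

void loop() {
  for (int i = 0; i < NUM_SENSORS; i++) {
    int range = readSensor(i);
    // ...feed into the MIDI CC pipeline from the sensor sketch above
    (void)range;
  }
}
```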
Engineering logs will follow.