Beandolin
Designing a physical interface for music making, with an emphasis on gesture and physicality
- Physical interaction designer
- Musical instrument designer
- Sound designer
- Arduino
- Faust
- Microcontrollers & sensors
- Prototyping
- Usability testing
- Electronics
This was a project for Stanford's MUSIC 250A: Physical Interaction Design for Music. Shout-outs to everyone at CCRMA, my classmates, and my instructor and TA for their inspiration, motivation, and generous help.
Why build an instrument? What should we consider?
Designing a physical instrument is a unique task; we are essentially designing an interface for artistic creation. When we design things with straightforward utility, like a chair or a website, the possible use cases are strongly constrained (that isn't to say nobody will turn a chair into art by placing it next to the definition of a chair and a picture of a chair).
When designing a musical instrument, on the other hand, the utility is less straightforward. So how do we find our "why" when we can't pin down utility?
A new musical instrument can:
- Excite an audience
- Engage new modes of physicality from the musician
- Shift and challenge a musician's cognitive mappings of gesture to sound
Visual gestures as part of an artist's performance
In MUSIC 351: Seminar in Music Cognition and Perception, we discussed how pianists physically move their torsos to signal a cadence in a way that is external to the technical execution (striking keys). Indeed, if you watch professional musicians, there is a lot of physical expression external to the production of sound. Violinists lean, pianists float their hands, guitarists contort their faces.
My idea for this project was to expose the physicality of music-making itself: to encourage large gestures from the musician as part of the performance, and to let the viewer peek at the physical elements producing the sound. Contemplation on musical interfaces aside, I also wanted it to be fun to play, watch, and listen to!
From sand to beans
I had an idea of how I wanted the instrument to feel, but I was not settled on a particular look or functionality. Early on, I planned to fill the glass containers with colored sand. After testing it, I quickly realized that sand has qualities that did not align with the goals of the instrument:
- Sand did not produce a strong physical impulse to translate into digital sound
- Sand moved as a fluid unit rather than a collection of visual particles that have the potential to lag or move independently
- Sand did not provide strong physical feedback for a musician to feel during performance
I quickly sent my sand back and drove to Safeway for a bag of dried chickpeas, which made a stronger sound, could move independently of one another, and gave the musician strong physical feedback.
Placing restrictions to guide the user
If we want the musician to explore the physicality of music-making, we can remove complexity from other areas of sound production. This led me to constrain the instrument's tonal potential to diatonic chords and notes, nudging the musician to explore its timbral qualities instead. Without being distracted or overwhelmed by tonal options, the musician can focus specifically on the gesture of shaking, listening to the timbre and adjusting their performance as they go.
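To make that restriction concrete, here is a minimal Faust sketch of the kind of diatonic mapping this implies. The C-major root, the degree control, and the offsets are illustrative assumptions, not the instrument's exact mapping:

```
import("stdfaust.lib");

// Hypothetical diatonic mapping: a degree selector (0-6) indexes the
// semitone offsets of a major scale above an assumed root of middle C.
offsets = waveform{0, 2, 4, 5, 7, 9, 11};
degree  = nentry("degree", 0, 0, 6, 1);
midikey = 60 + (offsets, int(degree) : rdtable);
process = os.osc(ba.midikey2hz(midikey)) * 0.3;
```

Because the player can only ever land on a scale tone, every gesture sounds consonant, and attention shifts from pitch choice to timbre and motion.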
Designing sound
To synthesize the sound, I used Faust, a functional programming language for audio digital signal processing.
The bass sound was built from a single oscillator shaped with an ADSR envelope. These sounds were triggered by button presses.
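As a rough sketch of that bass voice in Faust (the sawtooth waveform, envelope times, and frequency range below are illustrative stand-ins, not my exact values):

```
import("stdfaust.lib");

// Bass voice: a single oscillator shaped by an ADSR, gated by a button.
gate = button("bass");                       // one of the instrument's buttons
freq = hslider("freq", 110, 40, 400, 0.1);   // assumed bass range
env  = en.adsr(0.01, 0.15, 0.7, 0.4, gate);  // attack, decay, sustain, release
process = os.sawtooth(freq) * env * 0.5;
```

Pressing the button opens the envelope; releasing it lets the note fade over the release time.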
Due to time constraints, for the "mandolin" sound I used an existing physical modelling synthesis tool: it took the signal of the beans hitting the bottom of the lid, captured by a piezoelectric microphone, and turned those impacts into sound.
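The tool did the heavy lifting, but the underlying idea is to treat each bean impact as an excitation that rings a resonant body. Here is a much-simplified stand-in for that idea in Faust, using a small bank of resonant bandpass filters rather than the actual tool I used; the mode frequencies, Qs, and gains are invented for illustration:

```
import("stdfaust.lib");

// Simplified modal body: the piezo signal excites three resonant
// bandpass filters in parallel. All parameter values are made up.
mode(f, q, g) = fi.resonbp(f, q, g);
body = _ <: mode(392, 40, 0.6), mode(784, 60, 0.4), mode(1319, 80, 0.25) :> _;
process = body;  // piezo impacts in, resonated "pluck" out
```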
Microcontroller
To carry out the computation needed for sound production, I used a Teensy microcontroller. This means that, given a power source, the digital sounds can be synthesized from physical input without a computer in the loop.
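If memory serves, the Faust toolchain makes this kind of deployment fairly direct: a helper script can compile a Faust program into an object for the Teensy Audio Library, roughly like so (the filename here is hypothetical, and the exact flags may differ by Faust version):

```
faust2teensy -lib beandolin.dsp
```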
Bringing it together and performing
Our class capped off the quarter by demonstrating and presenting at CCRMA. There I stood, in front of my peers, my educators, and Roger Linn, with nothing but beans and a dream. But it went well! There were a lot of smiles and laughs from the audience, as well as movement in response to the physicality of my demonstration.
Afterwards, I received a lot of great feedback, and several people wanted to try the instrument themselves. I watched as strangers interacted with my instrument, exploring the different gestures and sounds made available to them by my design.
Ultimately, I accomplished what I set out to do, which was to create a musical interface that exposes the physicality of sound, guides the instrumentalist to lean into gesture, and encourages musicians and listeners alike to have a good time.