My past roles
User experience consultant
Audio research faculty
Physical interaction designer/developer
User experience deputy manager
User interface designer
Haptic interaction researcher
Web/interaction design intern
Interaction/game design intern
Research – Combining UX and MIR
My work until mid-2017 had been primarily in the field of user experience (UX) design, with a special interest in musical applications.
At Georgia Tech, I am exploring ways in which UX and musical artificial intelligence (AI) could meld into each other. Could practical aspects of UX influence a computer’s understanding of music? Could such an understanding, in turn, improve the UX of audio software and hardware?
By combining UX and music information retrieval (MIR) in previously unexplored ways, I am attempting to bridge the gap between how the computer thinks and what we experience.
The UX bit:
After more than 8 years of working in UX, across a wide spectrum of areas such as tangible/haptic/ultrasonic interfaces, musical interfaces and instruments, audio/visual interfaces, mixed reality, and human-machine interaction, I have realized that design-thinking can, and should, be applied at multiple stages of software/hardware product-building to produce extraordinary results, faster. Tools such as Processing, Arduino, RPi, Sketch+InVision, MaxMSP/pd, and Unity allow design-thinking to percolate throughout product-building, and somewhat merge the roles of designer and developer.
How could UX be applied at the core of product-building? What is the least that could be done at every point in a product’s design+development to quickly test it with users? In other words, how could development be made as agile as design, thus blurring the separation between design and development?
Since I’m obsessed with music and the techniques used to analyze and generate it (MIR), I think about ways in which design-thinking can be applied at the deepest levels of music software/hardware. Constantly paying attention to UX can not only save enormous time, but also lead to elegantly built products that users love to experience.
The MIR bit:
Borrowing the notion of a prototype from UX, I am identifying MIR micro-tasks that could be combined to build fairly advanced MIR prototypes. Typical micro-tasks include creating blocks of audio samples, extracting features from a recording, training a support-vector machine, and so on. Combining several such micro-tasks could yield a prototype that is capable of, say, identifying an instrument technique such as guitar bends/slides, or detecting a recording’s musical key, chord pattern, or rhythm structure. Micro-tasks make it possible to quickly create and compare multiple design options, conduct early user-testing, and have feature-level design discussions and speedy iterations.
Should micro-tasks be code snippets, dataflow objects or something else? How could they facilitate easy user-testing on mobile/wearable devices?
We are perhaps heading towards a point where computers will be capable of writing their own efficient code, while we use abstractions such as micro-tasks to design user-friendly applications. Just as we moved from low-level programming languages to high-level ones, this is a natural transition from building products that work to creating experiences that delight.