CAIRA – A Creative Artificially-Intuitive and Reasoning Agent in the Context of Ensemble Music Improvisation

Project Investigators

Jonas Braasch (PI), Selmer Bringsjord (Co-PI), Pauline Oliveros (Co-PI), Doug Van Nort (Post-Doc)

Student Investigators

Nikhil Deshpande, Simon Ellis, Colin Kuebler, Anthony Parks, Naveen Sundar G., M. Torben Pastore, Joe Valerio

Project Overview

The aim of this project was to develop CAIRA, a Creative Artificially-Intuitive and Reasoning Agent. CAIRA is a computer-based system that listens to other musical performers and improvises music based on what they play. Our goal was to better understand human creativity and to apply this understanding to machines. To achieve this goal, we simulated several functional brain stages related to auditory processing and decision making.

Media

Here are two takes from a recording session with CAIRA at EMPAC:

Jonas Braasch: Soprano Saxophone

Doug Van Nort: granular-feedback expanded instrument system (GREIS)

CAIRA: real-time music improvisation agent, using audio material from Pauline Oliveros (V-Accordion) for this recording

Here is our first demo of the CAIRA jazz version (more to come soon):

Nikhil Deshpande: guitar

 


CAIRA architecture

How CAIRA works
For CAIRA, we developed an architecture with four functional layers. At the bottom layer, CAIRA performs an auditory scene analysis to extract basic musical features, including pitch, tempo, and loudness. In the second layer, machine-learning tools analyze sequences of events so that CAIRA can make use of the time-based structure of music, for example to capture melodies. Two techniques, Empirical Mode Decomposition and Hidden Markov Models, were used for this process. It was important to us that CAIRA analyze music from the actual sound rather than from a symbolic notation such as a musical score. We also incorporated features of Pauline Oliveros’ Deep Listening Practice into the learning algorithm to enable CAIRA to discriminate between global and focal listening.
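As a rough illustration of the first analysis layer, the sketch below extracts pitch, tempo, and loudness estimates from a recording. It is only a minimal example, not CAIRA’s actual code: it assumes the open-source librosa and numpy libraries, and the input file name is hypothetical.

```python
# Minimal sketch of layer-1 feature extraction (pitch, tempo, loudness).
# NOT CAIRA's implementation; assumes the librosa library and a
# hypothetical recording "ensemble_take.wav".
import librosa
import numpy as np

y, sr = librosa.load("ensemble_take.wav", mono=True)

# Loudness proxy: frame-wise RMS energy, converted to decibels.
rms = librosa.feature.rms(y=y)[0]
loudness_db = librosa.amplitude_to_db(rms, ref=np.max)

# Pitch: probabilistic YIN fundamental-frequency tracking (NaN where unvoiced).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Tempo: onset-based beat tracking.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

print(f"estimated tempo: {float(tempo):.1f} BPM")
print(f"median pitch of voiced frames: {np.nanmedian(f0):.1f} Hz")
print(f"mean level: {loudness_db.mean():.1f} dB (re. max)")
```

In CAIRA, feature streams of this kind feed the second layer, where sequence models capture how the features evolve over time.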

CAIRA’s third layer resembles aspects of human cognition. Here, it was important to us to enable CAIRA to select between different strategies. CAIRA can reason using the logic-based musical calculus of HANDLE to make decisions, but it also employs artificial intuition in its musical performance. CAIRA can trade the two techniques off against each other to perform in different ways, giving insight into how humans develop different performance styles based on internal strategies. We demonstrated that CAIRA benefited from using artificial intuition in cases where it had insufficient time to respond to other musicians through reasoning. A subsystem of CAIRA, FILTER – the Freely Improvising, Learning and Transforming Evolutionary Recombination system – simulates a highly intuitive approach to music improvisation, building on Pauline Oliveros’ Deep Listening Practice. In other instances, CAIRA uses the fundamentals of jazz theory to accompany a human soloist or to perform with two musicians in a jazz setting. The fourth layer is the output stage of CAIRA. The system’s performance is based on a database of audio samples that CAIRA uses to create its own compositions to perform with others. CAIRA can also conduct an ensemble using an adaptive video score.
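The following sketch is a hypothetical illustration of the time-based trade-off between reasoning and intuition described above. It does not reproduce the HANDLE calculus or FILTER; all function names, gesture labels, and thresholds are invented for the example.

```python
# Hypothetical sketch of choosing between deliberative reasoning and
# fast intuition under a response-time budget. Names and values are invented.
import random
import time

def intuitive_response(features):
    """Fast, heuristic choice: pick a gesture loosely matching the input."""
    gestures = ["sustain_drone", "echo_phrase", "textural_burst", "rest"]
    # Crude heuristic: quiet input -> sustained response, loud input -> more active.
    return gestures[0] if features["loudness_db"] < -30 else random.choice(gestures[1:])

def reasoned_response(features, deadline_s):
    """Slower, deliberative choice; falls back to intuition if time runs out."""
    start = time.monotonic()
    best = None
    for candidate in ["call_and_response", "counter_melody", "harmonic_pad"]:
        if time.monotonic() - start > deadline_s:
            return intuitive_response(features)   # out of time: act on intuition
        time.sleep(0.01)  # placeholder for evaluating the candidate against a musical logic
        best = candidate
    return best

def decide(features, time_budget_s):
    # With a generous budget, reason; under pressure, respond intuitively.
    if time_budget_s < 0.05:
        return intuitive_response(features)
    return reasoned_response(features, deadline_s=time_budget_s)

print(decide({"loudness_db": -12.0}, time_budget_s=0.02))  # fast exchange -> intuition
print(decide({"loudness_db": -12.0}, time_budget_s=0.5))   # slower passage -> reasoning
```

The point of the sketch is only the control flow: when the musical exchange leaves too little time for deliberation, the agent falls back on the faster intuitive path, mirroring the behavior reported for CAIRA.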


Triple Point Trio performing with CAIRA/FILTER

CAIRA and FILTER have been featured in concerts and recordings, including New Interfaces for Musical Expression (NIME) 2012; Eyebeam, New York City, 2014; and the Deep Listening Conference, Troy, New York, 2013.

Publications

Braasch, J. (2011). A cybernetic model approach for free jazz improvisations. Kybernetes, 40(7/8), 984–994.

Braasch, J. (2013). A precedence effect model to simulate localization dominance using an adaptive, stimulus parameter-based inhibition process. Journal of the Acoustical Society of America, 134(1), 420–435.

Braasch, J. (2013). The Microcosm Project: An Introspective Platform to Study Intelligent Agents in the Context of Music Ensemble Improvisation. In: Bader, R. (ed.), Sound – Perception – Performance. Springer, Berlin, Heidelberg, New York, 257–270.

Braasch, J., Van Nort, D., Oliveros, P. and Krueger, T. (2013). Telehaptic interfaces for interpersonal communication within a music ensemble. 21st International Congress on Acoustics. Montreal, Canada.

Braasch, J., Blauert, J., Parks, A.J. and Pastore, M.T. (2013). A cognitive approach for binaural models using a top-down feedback structure. 21st International Congress on Acoustics. Montreal, Canada.

Ellis, S., Sundar Govindarajulu, N., Valerio, J., Bringsjord, S., Braasch, J. and Oliveros, P. (2013). Creativity in Artificial Intelligence as a Hybrid of Logic and Spontaneity. Computational Creativity, Concept Invention, and General Intelligence (C3GI) Workshop at the 23rd International Joint Conference on Artificial Intelligence (IJCAI). Beijing, China.

Ellis, S., Haig, A., Sundar G., N., Bringsjord, S., Valerio, J., Braasch, J. and Oliveros, P. (2015). Handle: Engineering Artificial Musical Creativity at the “Trickery” Level. In: Besold, T. R., Schorlemmer, M. and Smaill, A. (eds.), Computational Creativity Research: Towards Creative Machines. Springer, Heidelberg, New York, Dordrecht, London, in press.

Van Nort, D., Braasch, J. and Oliveros, P. (2012). Sound Texture Recognition through Dynamical Systems Modeling of Empirical Mode Decomposition. Journal of the Acoustical Society of America, 132, 2734–2744.

Van Nort, D., Oliveros, P. and Braasch, J. (2013). Developing Systems for Improvisation based on Listening. Journal of New Music Research, 42(4), 303–324.


This material is based upon work supported by the National Science Foundation under Grant Number 1002851. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.