Bias II for piano and interactive music system is part of a series of works engaging with the materiality and limitations of Machine Learning (ML) algorithms and data. During its interactions with different pianists, the computer music system collects data on how performers navigate a set of seven timbral clusters (pools of timbrally similar musical actions). Each second of the performance, a Recurrent Neural Network (RNN) trained on these data predicts how the performance might continue (i.e., which timbre is likely to follow next) and plays back sound material based on its predictions. Through this ML process, the work sets performers in an explicit dialogue with its interpretative history (i.e., interpretative choices made by themselves and by other pianists in past performances).
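The prediction step described above can be sketched as follows: a minimal, hand-rolled Elman RNN that consumes a sequence of timbral-cluster IDs (0-6) and returns the most likely next cluster. This is an illustrative assumption, not the actual system; the hidden size, weights, and function names are invented here, and in the piece the network would be trained on the collected performance data rather than randomly initialized.

```python
# Sketch only: an untrained Elman RNN over 7 timbral clusters.
import math
import random

N_CLUSTERS = 7   # the piece uses seven timbral clusters
HIDDEN = 8       # hypothetical hidden-state size

random.seed(0)
def mat(rows, cols):
    """A rows x cols matrix of small random weights (stand-in for trained weights)."""
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

W_in = mat(HIDDEN, N_CLUSTERS)    # input-to-hidden weights
W_h = mat(HIDDEN, HIDDEN)         # hidden-to-hidden (recurrent) weights
W_out = mat(N_CLUSTERS, HIDDEN)   # hidden-to-output weights

def step(h, cluster_id):
    """One RNN step: consume a cluster ID, return the updated hidden state."""
    x = [1.0 if i == cluster_id else 0.0 for i in range(N_CLUSTERS)]  # one-hot input
    return [math.tanh(sum(W_in[j][i] * x[i] for i in range(N_CLUSTERS))
                      + sum(W_h[j][k] * h[k] for k in range(HIDDEN)))
            for j in range(HIDDEN)]

def predict_next(sequence):
    """Feed a cluster sequence through the RNN; return the most likely next cluster."""
    h = [0.0] * HIDDEN
    for cluster_id in sequence:
        h = step(h, cluster_id)
    logits = [sum(W_out[j][k] * h[k] for k in range(HIDDEN)) for j in range(N_CLUSTERS)]
    return max(range(N_CLUSTERS), key=lambda j: logits[j])

# e.g. one observed cluster per second of performance
next_cluster = predict_next([0, 3, 3, 5, 1])
print(next_cluster)  # an integer in 0..6
```

In the work itself, the predicted cluster would then select the sound material the system plays back, closing the loop between the performer's choices and the system's response.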
Historical data, collected by the computer music system in past performances, influence the system’s future behavior, establishing a reciprocal relationship between individual performances of the work and its inscriptions (data). Rather than being an independent, self-contained event, each performance of this piece is a link in a chain of co-creative acts that both instantiate and rewrite the work. Bias II explores musical creativity distributed among actors (composer, performers, computer music system) across space and time, and reframes musical authorship in collective and posthuman terms.
The ML algorithm used in this piece was trained on data collected with pianists Magda Mayas and Xenia Pestova-Bennett.
This work was funded by ZKM Karlsruhe and the ERC advanced grant “MusAI – Music and Artificial Intelligence: Building Critical Interdisciplinary Studies” (European Research Council grant agreement no. 101019164).
Artemi-Maria Gioti (GR) is a composer and artistic researcher working in the field of artificial intelligence. Her research explores the transformative potential of technology for musical thinking and seeks to redefine notions of musical authorship. She holds a doctoral degree in Music Composition from the University of Music and Performing Arts Graz. She is currently a lecturer in New Media and Digital Technologies for Music at the University of Music Carl Maria von Weber Dresden and a Research Fellow in Music and AI at University College London (UCL), working on the ERC project MusAI.