RSA FG and Mozarteum invited “Music Technologists” and musicologists to a kick-off event in St. Gilgen
An overview of the international state of digital musicology and music technology was provided on 23 January 2020 by the international conference “Interactive Music Technologies”, organized by the Research Studios Austria Forschungsgesellschaft (RSA FG) and the University Mozarteum Salzburg.
The conference was attended by leading representatives of the international community of music technologists – most notably Michela Magas, founder of the Stockholm Music Tech Fest, Sergi Jorda, inventor of the “Reactable” (an interactive ‘musical table’ for sound generation), Maria Mannone with her “CubeHarmonic”, and Stefania Serafin. Helmi Vent, professor emeritus of the Mozarteum, demonstrated a refreshingly provocative, alternative approach to sounds and sound generators, decontextualizing them and placing them in a new context. One question raised was to what extent the “Aware Systems” of the RSA FG’s Research Studio Pervasive Computing Applications (eye trackers, sensors, etc.) could also be used to produce body-directed (context-controlled) music.
In addition to the “Music Technologists”, the conference offered an overview of current research efforts to classify music using the latest approaches in deep learning and artificial intelligence. As Petr Knoth, the new studio director of the RSA FG’s Research Studio Data Science, pointed out, a distinction should be made between Music Information Retrieval (MIR) and (Automated) Music Generation. “Music Generation is not Information Retrieval,” said Knoth, who can well imagine the application of big data science and deep learning algorithms to the classification and analysis of music as a future field of work for his studio. For example, an open archive of performances at the Mozarteum University in Salzburg could be created; he also believes that data science could help to optimise music education.
Peter Knees from the Technical University of Vienna argued in his lecture that co-creation between humans and machines is crucial: composing is not left to software alone, but AI can help human composers be more creative. “Co-creation is a central task for all future AI systems,” said Knees, who uses recurrent neural networks (RNNs) to analyse rhythm sequences. Tillman Weyde from City, University of London demonstrated what big data analysis can already show today: for example, the quantitative distribution of jazz, rock and pop music from 1950 to the present. A persistent problem is the “semantic gap”, i.e. the difference between the music itself and its encoding within a category system.
Hauke Egermann from the University of York addressed the perspective of music consumers in his lecture: proprietary FaceReader software, for example, can measure facial expressions while listeners hear sad or happy pieces of music. A presentation of the blockchain project “Bloxberg” by Sandra Vengadasalam from the Max Planck Digital Labs rounded off the conference.
In her introductory statement, Mozarteum Rector Elisabeth Gutjahr dealt with the relationship between the arts and (digital) technologies. In her contributions to the discussion, she pointed out the advantages and disadvantages of digitisation and spoke in favour of clear objectives for digitisation strategies.
Peter A. Bruck, managing director and overall scientific director of the RSA FG, discussed with the speakers the key points of a possible roadmap for digital projects in musicology, which could build on the competencies of the Studios Pervasive Computing Applications (PCA) and Data Science (DSc). Benedikt Gollan of the Research Studio PCA presented a five-step model of attention along with examples from industrial applications. Alois Ferscha, also from Research Studio PCA, traced the historical arc from explicit to implicit interaction with communication technologies and drew a parallel to the development of our interaction with musical instruments. At the end of his lecture he played three pieces of music and let the audience guess who might have composed them. Shostakovich? Ravel? The answer, to the audience’s astonishment: in all three cases it was artificial intelligence.
In the photo, from left to right: Sergi Jorda, Alois Ferscha, Michela Magas, Mozarteum Rector Elisabeth Gutjahr, Peter A. Bruck (Photo: Alexander Killer)
Media coverage: