From 2–6 September, AIM PhD students Jordie Shier, Shuoyang Zeng, and Teresa Pelinski will be at the International Conference on New Interfaces for Musical Expression (NIME), which will take place in Utrecht, the Netherlands.
Jordie will present his paper Real-time Timbre Remapping with Differentiable DSP, written in collaboration with Charalampos Saitis (C4DM, QMUL), Andrew Robertson (Ableton) and Andrew McPherson (Imperial College London). The paper presents a method for mapping audio from percussion instruments onto synthesiser controls in real time using neural networks, enabling nuanced, audio-driven timbral control of a musical synthesiser. You can read the paper here and explore the project website and presentation here.
Shuoyang will also present a paper, Building sketch-to-sound mapping with unsupervised feature extraction and interactive machine learning, written in collaboration with AIM PhD student Bleiz Del Sette, Charalampos Saitis (C4DM, QMUL), Anna Xambó (C4DM, QMUL) and Nick Bryan-Kinns (CCI, University of the Arts London). The paper explores the interactive, personalised construction of mappings between visual sketches and sound controls as an expressive approach to musical composition and performance.
Teresa will co-lead two workshops. The first, First- and second-person perspectives for ML in NIME, has been organised in collaboration with Théo Jourdan (Sorbonne Université) and Hugo Scurto (independent artist and researcher). It focuses on autoethnographic methods for articulating insights and experiences surrounding new instrument building with AI – you can read more here. The second, Building NIMEs with Embedded AI, has been organised in collaboration with Charles Patrick Martin (Australian National University). It is a hands-on tutorial on embedding lightweight deep learning models on Raspberry Pi and Bela – you can read more here.
See you at NIME!