Category: Programme announcements

Call for Late Breaking Demo Papers: First AES International Conference on AI and Machine Learning for Audio (AIMLA 2025)

The AES International Conference on Artificial Intelligence and Machine Learning for Audio (AIMLA 2025), hosted at the Centre for Digital Music of Queen Mary University of London and taking place on 8-10 September 2025, is calling for Late Breaking Demo Paper submissions.

We are seeking 2-page extended abstracts showcasing prototype systems and early research results that are highly relevant to the conference theme. At least one author must register for the conference and present their work as a poster in person during the main track poster session.

Submissions open on July 1, 2025, and close on August 1, 2025. Submissions will be reviewed on a rolling basis.

For more information on submission guidelines, templates, and technical requirements, please visit: https://aes2.org/contributions/2025-1st-aes-international-conference-on-artificial-intelligence-and-machine-learning-for-audio-call-for-contributions/


AIM involvement in MIREX 2025

MIREX (Music Information Retrieval Evaluation eXchange) is a prominent evaluation platform in the field of music information retrieval. Researchers are invited to submit novel algorithms for a variety of music-related tasks and receive standardized evaluation results, with the opportunity to present posters during the annual ISMIR conference. For more details on submissions, please see the MIREX website.

This year, AIM PhD students Yinghao Ma and Huan Zhang introduced new tasks to the platform, including Multimodal Music QA and Expressive Piano Performance Rendering, alongside traditional MIR challenges and emerging understanding/generation tasks. Specifically, we are coordinating the following tasks:

Music Reasoning QA
Task Captain: Yinghao Ma
The MIREX 2025 Music Reasoning Question Answering (QA) Task challenges participants to develop models capable of answering natural language questions that require understanding of, and reasoning over, musical audio. The task seeks to advance the frontier of machine music intelligence by evaluating models on their ability to reason about many kinds of musical information, such as musical structure, instrument presence, melody, vocal content, and environmental context, along with knowledge of music theory and music history.
Participants will build systems that answer multiple-choice questions grounded in audio inputs. The task includes questions from four curated subsets (Music, Music-Speech, Sound-Music, Sound-Music-Speech) of the MMAR benchmark, as well as the music subset with image captions from the OmniBench benchmark. Each question is paired with an audio clip and two to four answer choices.
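To make the setup concrete, here is a minimal sketch of how such a multiple-choice set might be scored against a trivial random-guess baseline. The JSON-style record layout (audio path, question, choices, answer) is an assumption for illustration only; the official MIREX submission and evaluation format may differ.

```python
import random

# Minimal sketch of a random-guess baseline for a multiple-choice music QA set.
# The record layout below (audio path, question, choices, answer) is assumed for
# illustration; the official MIREX format may differ.

def random_baseline(questions):
    """Pick a choice uniformly at random for each question and report accuracy."""
    correct = 0
    for q in questions:
        prediction = random.choice(q["choices"])  # a real system would analyse q["audio"]
        if prediction == q["answer"]:
            correct += 1
    return correct / len(questions)

if __name__ == "__main__":
    example = [
        {
            "audio": "clips/example_001.wav",
            "question": "Which instrument plays the main melody?",
            "choices": ["Violin", "Trumpet", "Piano", "Flute"],
            "answer": "Piano",
        }
    ]
    print(f"Random-baseline accuracy: {random_baseline(example):.2f}")
```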

RenCon: Expressive Piano Performance Rendering Contest
Task Captain: Huan Zhang
Expressive Performance Rendering (https://ren-con2025.vercel.app/) is a task that challenges participants to develop systems capable of rendering expressive musical performances from symbolic scores in MusicXML format. We accept systems that generate symbolic (MIDI) or audio (WAV) renderings; the output should contain human-like expressive deviations from the MusicXML score.
Similar to the AI Song Contest, the evaluation of expressive rendering is subjective and requires human judges. The competition therefore has a two-phase structure:
Phase 1 - Preliminary Round (Online): Participants submit performances of assigned and free-choice pieces. The submission period runs from May 30, 2025 to August 20, 2025. After the submission deadline, the preliminary round page will be finalized with the list of participants and their submissions, and the online evaluation will take place.
Phase 2 - Live Contest at ISMIR (Daejeon, Korea): Top systems from the preliminary round will be invited to render a surprise piece live at ISMIR, using their systems in real time. The live contest is open to all ISMIR attendees as well as the general public; the audience will be able to listen to the live performances and vote for their favourite system.

Audio Beat Tracking
Task Captain: Wenye Ma & Yinghao Ma
The aim of the automatic beat tracking task is to track the beat locations in a collection of sound files. Unlike the Audio Tempo Extraction task, whose aim is to estimate the tempo of each file, the beat tracking task aims to detect all beat locations in the recordings. Algorithms will be evaluated on the accuracy with which they predict beat locations annotated by a group of listeners.
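As an illustration of this kind of accuracy measure, the sketch below computes a tolerance-window F-measure between annotated and estimated beat times using the open-source mir_eval library. This is not the official MIREX scoring code, and MIREX reports several beat metrics; F-measure is shown here only as one common example, with made-up beat times.

```python
import numpy as np
import mir_eval

# Illustrative tolerance-window F-measure for beat tracking (not the official
# MIREX evaluation code). Beat times below are invented for the example.
reference_beats = np.array([5.50, 6.00, 6.50, 7.00, 7.50, 8.00])   # annotated beats (s)
estimated_beats = np.array([5.52, 6.01, 6.55, 7.10, 7.48, 8.05])   # system output (s)

# Beats in the first five seconds are conventionally trimmed before scoring.
reference_beats = mir_eval.beat.trim_beats(reference_beats)
estimated_beats = mir_eval.beat.trim_beats(estimated_beats)

# An estimated beat counts as correct if it falls within 70 ms of an annotation.
f_measure = mir_eval.beat.f_measure(reference_beats, estimated_beats)
print(f"Beat-tracking F-measure: {f_measure:.3f}")
```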

Audio Key Detection
Task Captain: Wenye Ma & Yinghao Ma
Audio Key Detection aims to identify the musical key (e.g., C major, A minor) of an audio recording. This involves determining both the tonic (root pitch) and the mode (major or minor) from the audio signal.
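One classical way to approach this, shown purely for illustration, is Krumhansl-Schmuckler template matching: a chroma (pitch-class) profile extracted from the audio is correlated against major and minor key templates rotated to each of the twelve tonics. The chroma vector in the sketch below is a hypothetical example, and submitted systems may of course use any method.

```python
import numpy as np

# Illustrative key detection via Krumhansl-Schmuckler template matching.
# The chroma vector is a made-up example; this is not the MIREX evaluation code.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(chroma):
    """Return the (tonic, mode) whose rotated template best correlates with the chroma."""
    best = (None, None, -np.inf)
    for tonic in range(12):
        for mode, template in (("major", MAJOR), ("minor", MINOR)):
            r = np.corrcoef(chroma, np.roll(template, tonic))[0, 1]
            if r > best[2]:
                best = (PITCH_CLASSES[tonic], mode, r)
    return best[0], best[1]

if __name__ == "__main__":
    # Hypothetical chroma profile with most energy on C, E and G, suggesting C major.
    chroma = np.array([1.0, 0.1, 0.3, 0.1, 0.8, 0.4, 0.1, 0.9, 0.1, 0.3, 0.1, 0.2])
    print(estimate_key(chroma))
```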


AIM at the RITMO Workshop on Music and AI

From 3rd to 5th March 2025, AIM researchers will participate in the RITMO Workshop on Music and AI. The event, hosted by the RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion at the University of Oslo, brings together scholars from AIM, RITMO, and the MUSAiC project at KTH Royal Institute of Technology.

MUSAiC at KTH explores AI’s role in music through listening, composition, performance, and critique, aiming to develop human-AI partnerships. RITMO, a Centre of Excellence at the University of Oslo, investigates rhythm as a structuring mechanism in human life, drawing on expertise from musicology, psychology, and informatics.

The programme includes presentations from RITMO, KTH, and AIM/C4DM researchers, hands-on sessions and discussions on AI-driven music technologies, and a visit to RITMO’s facilities.

This event fosters interdisciplinary collaboration and strengthens connections between leading AI and music research institutions.

Group picture with researchers from the participating institutes

More pictures from the event are available on the RITMO webpage.

Monday, 3 March 2025 - Arrival
Morning/afternoon: Arrival
15:00: Visit to the Munch Museum
18:30: Social dinner (Barcode Street Food)
Tuesday, 4 March 2025 - Day 1
09:00-09:45: Coffee and poster installation (RITMO Kitchen)
09:45-11:00: Block 1: RITMO (Forsamlingssalen)
11:00-12:20: Block 2: KTH Speech, Music and Hearing (Forsamlingssalen)
12:20-12:30: Group photo
12:30-13:30: Lunch
13:40-15:00: Block 3: AIM/C4DM (Forsamlingssalen)
15:00-16:30: Block 4: Experiencing RITMO (fourMs Lab)
16:45-18:00: Relocation and pizza
18:00-20:00: C2HO Workshop on Image Sonification in Biology Research (NOTAM)
20:30: Social meetup
Wednesday, 5 March 2025 - Day 2
09:30-11:00: Block 1: 1-1 meetings / visit to the city of Oslo or other institutions
11:00-12:00: Block 2: 1-1 meetings / visit to the city of Oslo or other institutions
12:00-13:00: RITMO lunch seminar "Food & Paper" (RITMO Kitchen)
13:30-14:00: Opening of the Makerspace and Modular Synthesizer Systems (ZEB, Department of Musicology)
14:00-16:00: Block 3: 1-1 meetings / visit to the city of Oslo or other institutions