Scroll
Northwestern · ELP · 2026

AI-generated music is bad.

Generative systems trained on human creativity without consent have given machine-made music a reputation it has rightfully earned.

But machine learning can be a tool,

not to replace artists or generate music from stolen data, but to help composers explore new timbres, map the body's response to sound, and surface patterns that musicians already sense intuitively.

Timbre, not harmony, predicts emotional response.

Random Forest models trained on biosignals (EDA, PPG, ECG) from real listening sessions achieved R² = 0.937 for valence. Every top predictive feature was spectral, not harmonic.
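A minimal sketch of that kind of analysis, using scikit-learn on synthetic data: fit a Random Forest regressor on a mix of spectral and harmonic features, score it with R², and rank feature importances. The feature names and the toy valence target here are illustrative assumptions, not the study's actual dataset.

```python
# Hypothetical sketch: Random Forest valence regression + feature ranking.
# All data and feature names below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["spectral_centroid", "spectral_flux", "spectral_rolloff",
                 "key_clarity", "mode_major"]
X = rng.normal(size=(300, len(feature_names)))
# Toy valence target driven mostly by the spectral columns.
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2]
     + 0.05 * X[:, 3] + rng.normal(scale=0.1, size=300))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("R^2:", round(r2_score(y_test, model.predict(X_test)), 3))
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:18s} {imp:.3f}")
```

In a pipeline like this, the importance ranking (not the R² alone) is what supports the "spectral, not harmonic" claim: the spectral columns dominate because they drive the target.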

Interactive Research

Explore the project.

Eight modules spanning the full design-build-test-learn cycle. Listen, interact, and contribute your own ratings to train the next model.

Modules

Choose any section to explore.

Selected Works
10 contemporary pieces spanning all four emotional quadrants
ML Demo
Pick a quadrant or play at random to hear spectral gestures from the model's library
Bitalino
How biosignals were captured: EDA, PPG, ECG, and accelerometer
Valence / Arousal
The circumplex model and its spectral correlates
Waveform Viewer
Interactive spectrogram and waveform for each piece
Timbre & Spectralism
Grisey's overtone philosophy and the spectral hypothesis
Rate Gestures
Place music on the valence/arousal plane; your ratings help train the next model
Interviews
Composer and scientist perspectives, coming soon