The artists behind it? Over 1,000 musicians, including legends like Kate Bush, Damon Albarn, Annie Lennox, and Imogen Heap. Their message was loud and clear: stop using our music to train AI without consent.
This wasn’t just performance art. It was a protest directed at the UK government’s proposed legislation that would allow AI companies to freely scrape copyrighted music to train generative models unless artists opted out. The silent album, released in February 2025, served as a creative act of resistance and a call for awareness.
A Protest Without Words
Each track title contributed a piece of a sentence. When read in order, they formed the demand:
“The British Government must not legalise music theft to benefit AI companies.”
The album wasn’t for profit. Proceeds were donated to Help Musicians, a UK-based charity supporting struggling artists. But the deeper value lay in the statement it made: that silence, when intentional, can be louder than noise.
This wasn’t the first time artists voiced their opposition to unauthorized AI training. But it was one of the most elegant. It showed that even without litigation or legislation, artists can still reclaim agency through creativity and community.
A Growing Problem for Musicians
Behind the poetry of the protest is a real and pressing concern: AI models are being trained on music without permission, and the artists who created that music are often left out of the conversation (and the compensation).
Generative music models are improving fast. Many of them train on massive datasets pulled from the web, sometimes without regard for copyright or consent. The result? AI-generated songs that mimic the style of existing musicians without credit or royalties.
That’s not just frustrating. It’s existentially threatening to working artists who rely on their music to make a living.
Fighting Back, Creatively and Technically
The silent album reminds us that musicians are resourceful. But silence isn’t the only weapon.
If you're an artist concerned about your music being used without your knowledge or consent, there are now technical tools that can help you fight back without compromising your sound. One such approach is the use of adversarial noise: subtle, carefully designed alterations to your audio files that disrupt AI training models while remaining effectively imperceptible to human listeners.
This means your fans still hear your music exactly as you intended, while AI systems are left with unusable data.
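To make the idea concrete, here is a minimal, hypothetical sketch of how an adversarial perturbation can be crafted. It assumes a small stand-in "surrogate" feature extractor and a synthesized tone in place of a real recording; actual protection tools target the feature spaces used by real generative-music training pipelines, but the core idea is the same: nudge every sample by an amount too small to hear, chosen so the model's view of the audio is distorted.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# 1 second of a 440 Hz tone at 16 kHz stands in for a real recording.
sample_rate = 16_000
t = torch.arange(sample_rate) / sample_rate
waveform = torch.sin(2 * math.pi * 440 * t).view(1, 1, -1)  # (batch, channel, samples)

# Placeholder surrogate feature extractor (hypothetical; real tools target
# the feature spaces actually used by generative-music training systems).
surrogate = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, stride=16),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 8),
)

def craft_perturbation(audio, decoy, epsilon=1e-3):
    """Single-step, FGSM-style perturbation bounded by epsilon per sample.

    Nudges the audio so its surrogate features resemble the decoy's,
    masking the original 'style' from models trained on the file.
    """
    decoy_features = surrogate(decoy).detach()
    perturbed = audio.clone().requires_grad_(True)
    loss = F.mse_loss(surrogate(perturbed), decoy_features)
    loss.backward()
    # Step against the gradient to pull features toward the decoy,
    # with an amplitude small enough to be inaudible.
    return -epsilon * perturbed.grad.sign()

decoy = torch.randn_like(waveform) * 0.1        # arbitrary "other" audio
delta = craft_perturbation(waveform, decoy)
protected = (waveform + delta).clamp(-1.0, 1.0)  # release this version publicly
print(f"max per-sample change: {delta.abs().max().item():.6f}")
```

In practice, real systems run many small optimization steps under a perceptual constraint rather than a single one, but the trade-off is the same: the tighter the audible budget, the harder the protection has to work to confuse the model.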
Here’s what artists are doing today to protect their work:
- Creating intentional silence (like the Is This What We Want? album) as symbolic protest
- Exploring legal action or licensing frameworks that limit unauthorized AI training