Kanye West claimed this week that his new album Bully was made with significant AI assistance. Then, three days later, he said it was not. The reversal was quick enough that it barely registered as news, but the original claim landed with enough force to revive a conversation that the music industry has been having loudly and then quietly and then loudly again for the last two years: what happens when artists start using AI to make music, and what does it mean when they do?
The honest answer is that the conversation has been muddled because everyone in it is using the word “AI” to mean different things. There is generative AI, which can produce entirely new audio from prompts. There is AI-assisted production, which uses machine learning to suggest arrangements, complete melodies, or clean up recordings. There is AI mastering, which has been in mainstream use for years. These are not the same thing, and treating them as equivalent produces a lot of noise without much signal.
The anxiety is real, though, and it points at something legitimate. What musicians fear is not the technology in isolation. It is what the technology represents in an industry that has spent the last decade finding new ways to make the economics of making music worse for the people making it. Streaming platforms that pay fractions of a cent per play. Sync fees that have not kept pace with the cost of living. A recording industry that consolidated into three major labels controlling access to most of the infrastructure. AI tools that can approximate a musician’s voice or style without compensation sit inside that context, not outside it.
The regulatory response has been slow and largely performative. Several states have passed right-of-publicity laws that offer artists some protection against AI voice cloning. The European Union’s AI Act includes provisions requiring disclosure when AI-generated content is used commercially. But enforcement is inconsistent, and the technology moves faster than legislation ever has. A label can generate fifty backing tracks using AI tools, use the best one, and there is currently no requirement to tell anyone that is what happened.
What is more interesting, and less discussed, is how working musicians are actually engaging with these tools. The conversation at the label and industry-media level tends toward extremes: AI will destroy music, or AI is just another instrument like the drum machine. Both of these framings protect whoever is saying them from having to think carefully. The reality that producers and engineers are living with is more granular. AI-assisted mixing is genuinely useful for small studios that cannot afford experienced engineers. Melody generation tools help songwriters who are stuck on a bridge or a pre-chorus. These are low-stakes, practical uses that mostly accelerate existing workflows without replacing the humans directing them.
Where the genuine threat lies is in what happens downstream of that. If a major label can produce genre-appropriate content at scale using AI, the market compresses for mid-tier professional musicians: the session players and staff songwriters who make their living filling out the middle of the industry. This has already been happening for different reasons, and AI accelerates the trend without creating it. The automation of routine creative labor follows the same pattern as the automation of routine labor everywhere else: the people at the very top remain valuable, the people at the very bottom remain cheap, and everyone in between faces real pressure.
Kanye’s Bully retraction tells a small story about how artists navigate this moment. The claim of AI involvement was at least partly a provocation, a way to generate attention in a news cycle where the album had to compete with everything else. The walkback was equally strategic. The fact that neither statement can be fully verified says something about where we are: in a moment where AI disclosure is a rhetorical choice rather than an industry standard, and where listeners are expected to form opinions about something they have no reliable way to confirm or deny.
That is the actual problem. Not whether Bully has AI on it. The problem is that nobody is required to tell you, and most of the arguments about what to do about that are still happening in conference rooms rather than contracts.