The conversation about artificial intelligence in the recording studio has been happening for long enough now that we’ve moved past the initial panic and into something more complicated: a period where the fear has become granular, where the defenses have gotten more sophisticated, and where the actual use of these tools by actual artists has started to produce enough data to have a real argument about.

What’s emerging is not what either side predicted.

The maximalist tech argument, that AI would simply replace musicians wholesale, has not materialized in any meaningful way at the commercial level. The records people are excited about in 2026 were still made by humans who can explain every decision they made. Robyn’s Sexistential, Flea’s Honora, Melanie Martinez’s HADES all landed this week, and none of them were made by a machine. Whatever tools were in the room were tools, not authors.

But the more interesting argument isn’t about replacement. It’s about friction.

For decades, the recording studio has been a place where friction was the point. You had to book time. You had to show up. You had to work through the gap between what you heard in your head and what you could actually produce with the musicians and gear in front of you. That friction produced accidents. The accidents produced discoveries. The discoveries became songs. This is how “Waterloo Sunset” happened. It’s how “Rapper’s Delight” happened. It’s how nearly everything happened.

AI tools, at their most capable, eliminate friction. They can generate a passable hook on demand. They can take a vocal performance and correct not just pitch but timing, feel, breath, all of it, so that the emotional content of the take gets smoothed into something technically perfect and spiritually inert. They can write lyrics in the style of whoever you specify. They can produce a full backing track overnight that would have taken a session band a week.

The question is what happens to the music when you remove the friction.

The answer, based on what’s actually coming out of studios that have integrated these tools most aggressively, is: it gets efficient. It gets consistent. It loses the quality of having been made by someone who had to struggle to make it, and that quality turns out to be one of the things listeners are actually responding to, even when they can’t articulate why.

This is not an argument against technology. Every technology changes the friction. The electric guitar changed it. The four-track changed it. Pro Tools changed it so dramatically that it’s almost impossible to remember what the studio looked like before. The question with each of these shifts was never whether to use the new tools, but how to use them without losing the things that made the old friction productive.

The artists figuring this out most interestingly right now are, predictably, the ones who were already comfortable with discomfort. Producers who grew up in bedroom studios, who understand that constraint produces creativity, tend to use AI tools as one constraint among many rather than as an escape from constraint. They’ll use a generative system to produce something unexpected, then treat that output as raw material rather than finished product. The human judgment comes in determining what, out of a hundred generated options, is actually interesting, and why, and what to do with it next.

That’s not so different from what a musician does with an instrument they don’t fully control. The difference is scale and speed, and scale and speed have their own pathologies.

The legal picture remains genuinely unresolved. The Drake-Kendrick lawsuit metastasized into something involving UMG and streaming economics and intellectual property in ways that nobody has fully mapped yet. The FKA Twigs NDA litigation raised questions about who owns a performer’s voice, questions that grew considerably more pressing the moment voice cloning became a household tool. The music industry spent 2024 and 2025 alternating between issuing statements condemning AI-generated music and quietly signing deals with the companies making it. That contradiction has not resolved itself.

What seems clear, though, is that the next five years will not produce a world without AI in music. They will produce a world where the distinction that matters is not “AI or not AI” but “does this sound like it was made by someone who cared about making it?” That’s a harder distinction to fake than the technologists assume and easier to perceive than they’d like. Listeners are not always able to explain what’s missing from music that has been over-optimized. But they feel it. They’ve always felt it. The tools change. That doesn’t.