A few weeks ago, we wrote about Sienna Rose, the “anonymous” neo-soul act with three songs in the Spotify Top 50, millions of monthly listeners, and no discernible evidence of being an actual person. Deezer’s detection tools flagged her music as computer-generated. Pretty much everyone on the internet has figured it out by now. And yet the social media accounts tied to the project are still posting as if she’s real—still fielding fan questions, still teasing a potential tour, still doing the whole bit—because, of course, why would they stop? They have no reason to. They’re making plenty of cash as is. Which brings us, with impeccable timing, to Apple Music, which announced this week that it is introducing something called “Transparency Tags”: a new metadata system that asks record labels and distributors to please let the platform know when the music they upload was made by a computer. Yeah, I’m sure the Sienna Rose people will get right on that.
The system, laid out in a newsletter to industry partners on March 4, covers four categories: sound recordings, compositions (including lyrics), artwork, and music videos. Labels can tag any or all of those elements when they deliver content to Apple Music, and Apple has framed the whole thing as “a concrete first step toward the transparency necessary for the industry to establish best practices and policies that work for everyone.” Which sounds great until you get to the fine print—or, rather, the part where there is no fine print, because the tags are entirely optional. Apple’s own technical spec notes that if the tags are omitted, the platform simply assumes no AI was involved. There is no detection mechanism on Apple’s end, no verification process, no consequence for nondisclosure. Apple isn’t even defining what counts as AI-generated; that determination is left to the labels themselves, “similar to genres, credits, and other metadata.” In other words: if a label doesn’t feel like telling you its latest signing is a chatbot, it doesn’t have to, and Apple will happily look the other way.
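To see just how little the scheme actually asserts, here is a toy sketch of the stated default behavior. All field names here are hypothetical, invented for illustration; Apple's actual delivery spec is not quoted, and the only logic modeled is the one Apple itself describes: a missing tag is read as "no AI involved."

```python
# Toy model of an optional, self-reported tag scheme (hypothetical
# field names; not Apple's actual spec). The point: omission and
# honesty are indistinguishable to the platform.

# The four taggable elements described in the newsletter.
AI_TAGGABLE = ("sound_recording", "composition", "artwork", "music_video")

def interpret_delivery(metadata: dict) -> dict:
    """Return the platform's assumed AI status for each element.

    Per the described default, an omitted tag is treated as
    'no AI involved' -- there is no detection or verification
    step to contradict whatever the label chooses (not) to say.
    """
    tags = metadata.get("transparency_tags", {})
    return {element: tags.get(element, "no_ai_assumed") for element in AI_TAGGABLE}

# A label that discloses an AI-generated recording:
honest = interpret_delivery({"transparency_tags": {"sound_recording": "ai_generated"}})

# A fully AI-generated track delivered with no tags at all
# looks exactly like a human one:
silent = interpret_delivery({})
```

In other words, the scheme can only ever surface what a label volunteers; the silent delivery and a genuinely human one produce identical metadata on Apple's end.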
This is especially dispiriting when you consider what other platforms are actually doing. Deezer, the French streaming service, has spent the past year building its own AI detection infrastructure: tools that can identify tracks generated by models like Suno and Udio without relying on anyone to self-report. The numbers it's finding are staggering: as of January, Deezer was receiving over 60,000 fully AI-generated tracks per day, up from 10,000 when it first launched its detection tool a year earlier. Synthetic content now accounts for roughly 39% of all music delivered to the platform daily, and up to 85% of the streams on that music are fraudulent—not listeners enjoying a song, but bots farming royalties. Deezer strips those streams from the royalty pool and demotes the tracks in its recommendations. Bandcamp, meanwhile, has simply banned AI-generated music outright and asked its users to help flag violations. iHeartRadio launched a “Guaranteed Human” initiative committing to zero AI singers or AI DJs on air. These are imperfect solutions, but they are solutions—actual attempts to draw a line rather than politely requesting that the people profiting from the problem identify themselves.
God knows Spotify isn’t an escape hatch here, either. Sure, last year it added back-end fields so labels could mark where AI showed up in vocals, instrumentation, or polishing, built with help from an industry standards group. But on the front end, the service is still happily feeding machine-made tracks into people’s recommendations this year—and let us not forget this is the same platform that recently turned listener attention into a recruitment funnel for ICE. The throughline isn’t responsibility, it’s plausible deniability: as long as there’s a checkbox somewhere and a policy PDF to point to, the largest streamers get to keep treating AI as a growth lever instead of something they’re actually accountable for.
In the meantime, Apple gets what it needs. It can tell regulators in Brussels and Washington that it has a framework for AI transparency in place, point to a neat set of metadata fields in Spec 5.3.25, and talk about “concrete first steps” in the same measured corporate language it uses for everything else. It can assure major labels that it's not going to kneecap their synthetic side hustles or their AI-assisted A&R experiments, because nothing in the spec says those tracks will be demoted, demonetized, or even meaningfully separated from the regular catalogue. And, most of all, it can keep cashing subscription checks from users who assume, naturally, that “Transparency Tags” mean someone, somewhere, is actually checking. But hey, better than nothing, I guess.

