Music streaming services have fundamentally changed how people access and enjoy music. The convenience of on-demand libraries has made platforms like Spotify, Apple Music, and Deezer the primary listening method for millions worldwide. However, this digital shift also opens the door to new forms of content manipulation, particularly through the rapid rise of generative artificial intelligence. Recently, Deezer, a global music streaming service, released startling data regarding the influx of AI-generated music on its platform: a staggering 44% of all new music uploads are now created by artificial intelligence, and the vast majority of streams for those tracks are the product of fraudulent activity rather than genuine human listening.
This revelation comes from Deezer's ongoing efforts to combat AI-generated content and streaming fraud. The company has invested heavily in developing proprietary technology to detect AI-created tracks, a move that sets it apart from many competitors that have been slower to address the issue. According to Deezer's internal findings, its users have a remarkably difficult time distinguishing AI-produced music from that made by humans. In a survey, listeners were asked to evaluate three songs, two of which were AI-generated. Fully 97% of participants could not correctly identify which song was created by a human artist. This highlights the profound challenge facing the music industry: as AI models improve, the line between machine-made and human-made art blurs, threatening the livelihoods of genuine musicians and the integrity of streaming catalogs.
The scale of the problem is immense. Deezer reports that it now sees approximately 75,000 new AI-generated tracks uploaded every day. These uploads account for the 44% figure, a dramatic increase over previous years. The company's detection system, which it licenses to third parties, boasts an extremely low false positive rate of less than 0.01%, giving it confidence in identifying AI content. However, despite the high upload volume, Deezer emphasizes that listeners are unlikely to encounter these AI tracks organically. The platform deliberately excludes any content flagged as AI from recommendation algorithms and editorial playlists. As a result, AI music constitutes only about 1–3% of total streams on Deezer. The primary motivation for uploading such vast quantities of AI tracks appears to be fraudulent: bad actors aim to generate revenue by exploiting the streaming payment model, which pays artists based on the number of legitimate human listens. Deezer has responded by demonetizing approximately 85% of streams associated with AI-generated content.
The Technology Behind Detection
Deezer's approach to identifying AI-generated music is multifaceted. While the company does not reveal the full details of its proprietary system, it likely combines audio fingerprinting, metadata analysis, and machine learning models trained to recognize the subtle artifacts and patterns unique to generative audio. The system must be adept at distinguishing between genuine human-created music that may use AI tools (e.g., AI-assisted production) and entirely AI-generated compositions. The low false positive rate suggests that Deezer's models are highly refined, though the company acknowledges that detection is an arms race. As generative AI models evolve, detection systems must constantly adapt. Competitors like Spotify have also invested in content moderation but face similar challenges; Spotify's recent crackdown on a separate form of fraud—stream manipulation—indicates that the streaming ecosystem is under siege from multiple angles.
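Deezer's actual pipeline is proprietary, so any concrete example is necessarily speculative. Still, one ingredient that audio classifiers of this kind commonly rely on is spectral feature extraction: summarizing the frequency content of a signal into numbers a model can learn from. The sketch below computes one such classic feature, spectral flatness, on synthetic signals; it is purely illustrative and assumes nothing about Deezer's real detector.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum.
    Values near 1 indicate noise-like audio; values near 0, tonal audio.
    Features like this are one possible input to an audio classifier."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # floor avoids log(0)
    geometric_mean = np.exp(np.mean(np.log(spectrum)))
    arithmetic_mean = np.mean(spectrum)
    return float(geometric_mean / arithmetic_mean)

# Toy demonstration on one second of audio at 22.05 kHz
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)          # pure tone: flatness near 0
noise = np.random.default_rng(0).standard_normal(sr)  # white noise: flatness well above 0

print(spectral_flatness(tone))   # very small (tonal)
print(spectral_flatness(noise))  # much larger (noise-like)
```

A production system would compute dozens of such features over short windows (or feed spectrograms to a neural network) and train on labeled human and AI tracks; the point here is only the general shape of feature-based detection.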
Watermarking is another front in this battle. Major AI music generators, including Google's Lyria (used by Gemini) and platforms like Suno and Udio, embed watermarking technologies such as Google's SynthID to tag songs as AI-generated. However, these watermarks are not foolproof. Deezer's report notes that it is becoming increasingly easy to strip watermarks from audio files or to generate music using custom models that never include them. As AI inference costs drop, the barriers to creating high-volume, low-quality AI music—often called 'AI slop'—continue to fall. This democratization of music creation, while positive in some respects, creates a flood of content that threatens to drown out genuine artistry and exploit the platform's payment systems.
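SynthID's actual scheme is not public, but the general principle behind many audio watermarks, and the reason post-processing can erase them, can be shown with a toy spread-spectrum sketch. A secret key seeds a pseudo-random pattern that is added to the audio at low amplitude; detection correlates the audio against the same pattern. The code and thresholds below are invented for illustration only.

```python
import numpy as np

def embed(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    """Add a low-amplitude pseudo-random pattern derived from a secret key."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * pattern

def detect(audio: np.ndarray, key: int, threshold: float = 0.005) -> bool:
    """Correlate against the key's pattern; high correlation means watermarked."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.shape)
    score = float(np.dot(audio, pattern)) / audio.size
    return score > threshold

track = np.random.default_rng(1).standard_normal(48000) * 0.1  # stand-in for audio
marked = embed(track, key=42)

print(detect(marked, key=42))  # watermark present
print(detect(track, key=42))   # clean audio, no mark

# Lossy re-processing (here, heavy smoothing) averages the pattern away,
# which is why stripping watermarks is feasible in practice.
degraded = np.convolve(marked, np.ones(64) / 64, mode="same")
print(detect(degraded, key=42))  # mark no longer detectable
</```

Real schemes are far more robust than this toy (they survive compression and moderate filtering), but the underlying cat-and-mouse dynamic the report describes is the same: any transformation aggressive enough to scramble the embedded pattern defeats detection.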
Implications for the Music Industry
The rise of AI-generated music raises profound questions about copyright, artist compensation, and the definition of creativity. Current copyright laws in many jurisdictions do not recognize AI-generated works as eligible for protection, complicating the legal landscape. If an AI track sounds indistinguishable from a human composition, who owns the rights? And if streaming platforms pay out royalties based on streams, do AI tracks that fool listeners deserve the same compensation as those created through human effort? Deezer's decision to demonetize flagged AI music is a proactive step, but it is not universally adopted. Other streaming services may not have the technology or the will to implement similar measures, creating an uneven playing field. Furthermore, legitimate artists who use AI as a creative tool—for instance, to generate backing tracks or experimental sounds—may find their work incorrectly flagged. Deezer claims its system minimizes false positives, but the tension between supporting innovation and preventing abuse remains unresolved.
The economic impact is significant. Streaming royalties are already notoriously low for many artists, with a fraction of a cent earned per stream. If AI-generated tracks siphon even a small percentage of the revenue pool, the loss to human creators can be substantial. Deezer's data suggests that without its detection and demonetization efforts, AI fraud could have diluted payments by a much larger margin. The company's CEO, Alexis Lanternier, stated, 'Thanks to our technology and the proactive measures we put in place more than a year ago, we have shown that it’s possible to reduce AI-related fraud and payment dilution in streaming to a minimum.' This statement reflects optimism but also underscores the ongoing need for vigilance.
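The pro-rata payout model that makes this fraud profitable is easy to see with toy numbers. In this model, a fixed royalty pool is split across all counted streams, so every fraudulent stream directly shrinks the per-stream rate paid to human artists. All figures below are hypothetical, not Deezer's actual pool or rates.

```python
# Illustrative pro-rata royalty pool (hypothetical numbers)
pool = 1_000_000.00           # monthly royalty pool, dollars
human_streams = 250_000_000   # legitimate human streams
fraud_streams = 7_500_000     # ~3% of total streams from fraudulent AI tracks

honest_rate = pool / human_streams                    # rate with no fraud
diluted_rate = pool / (human_streams + fraud_streams) # rate once fraud is counted
loss = (honest_rate - diluted_rate) * human_streams   # revenue diverted from humans

print(f"per-stream payout: {honest_rate:.6f} -> {diluted_rate:.6f}")
print(f"revenue diverted from human artists: ${loss:,.0f}")
```

Even a low single-digit share of fraudulent streams diverts tens of thousands of dollars from this modest hypothetical pool each month, which is why demonetizing flagged streams, rather than merely hiding the tracks, is the step that actually protects artist payouts.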
The Broader AI Music Ecosystem
Deezer is not alone in facing this challenge. The entire music streaming industry is grappling with the influx of AI content. YouTube Music, for example, has policies against synthetic content that misleads users, but enforcement is uneven. Spotify has launched its own AI-powered features, such as AI DJs and personalized playlists, but has not publicly shared data on AI upload percentages. The difference may lie in detection capabilities: Deezer appears to be ahead of the curve, perhaps due to its smaller scale and focused investment. As generative audio models proliferate, other platforms will likely need to follow suit to protect their catalogs and their bottom line.
In conclusion, the music industry stands at a crossroads. The tools for creating AI music are becoming more accessible and sophisticated, while detection methods must keep pace. Deezer's report serves as a wake-up call: nearly half of all new music uploads are synthetic, and the majority are generated with fraudulent intent. While Deezer has taken steps to mitigate the damage, the long-term solution will require collaboration across the industry, legal reforms, and perhaps a redefinition of what it means to be a musician in the age of AI. The challenge is not going away—it is only accelerating.
Source: Ars Technica News