In only a few years, artificial intelligence has moved from the margins of music production to the center of one of the industry’s biggest controversies. Tools such as Suno now let users generate complete songs from simple natural-language prompts, with vocals, instrumentation, structure, and style produced in minutes. At the same time, streaming platforms are being flooded with synthetic content. Deezer said in January 2026 that it was receiving more than 60,000 fully AI-generated tracks per day, representing roughly 39% of all daily uploads on the platform. This is no longer a niche experiment. It is a structural transformation that raises difficult questions about authorship, artist remuneration, copyright, transparency, and even the meaning of art itself.

What makes this moment especially disruptive is not just that AI can assist musicians, but that it can imitate a whole production chain that used to require writers, composers, singers, instrumentalists, engineers, and studios. Suno openly markets the fact that users can describe a genre, mood, lyrics, instrumentation, tempo, and structure, and receive a finished track in return. In advanced modes, users can refine sections, create variations, extract stems, and edit outputs further. The technological barrier to producing a polished song has therefore fallen dramatically. That can be empowering for amateurs, content creators, and independent artists with limited resources. But it also means that the supply of “music” can now expand at industrial scale, far faster than human demand, attention, or curation can absorb it.
This has immediate consequences for artist remuneration. Music streaming already operates in a crowded attention economy where visibility is scarce and royalty pools are finite. Deezer’s data suggests the problem is not merely aesthetic saturation but economic manipulation: although AI-generated tracks accounted for only a small share of total platform streams, up to 85% of streams on AI-generated music were detected as fraudulent in 2025 and removed from the royalty pool. In other words, AI music is not only competing with human artists for audience attention; in many cases it is being mass-uploaded and artificially streamed to siphon money away from real creators. That makes the debate over AI in music not just a cultural issue, but a distribution-of-income issue.
The broader financial outlook is also worrying for creators. A 2024 CISAC study projected that by 2028 music creators could have 24% of their revenues at risk, with generative AI music expected to capture around 20% of traditional streaming revenues and about 60% of music library revenues. CISAC’s argument is that creators face a double loss: first, their works may be used without authorization to train models; second, AI-generated outputs may then substitute for human-made works in the market. Even if some of these projections prove too pessimistic, the direction is clear. If AI systems can produce cheap, endless, “good enough” music for background playlists, ads, creator videos, and stock libraries, the sectors most dependent on volume rather than superstar identity are especially exposed.
That leads to the next crucial question: how are these systems trained? The answer is still partly opaque, and that opacity is one of the industry’s core complaints. The European Commission has acknowledged that general-purpose AI models are trained on large quantities of data while there is still limited information available about the origin of that data. WIPO has similarly noted that AI datasets are often assembled from publicly available internet sources, which can include copyright-protected works such as music. In June 2024, major record companies sued Suno and Udio, alleging that the services copied vast quantities of copyrighted recordings without permission in order to build systems capable of generating convincing imitations of human recordings. These are allegations, not final court findings, but they show why training data has become the legal fault line of the entire debate.
Once a song is generated, another question appears: can it be protected? In the United States, current guidance is relatively clear on one point: purely AI-generated output does not receive copyright protection simply because a user typed a prompt. The U.S. Copyright Office’s 2025 report states that prompts alone do not provide sufficient control under current technology, and that simple prompting is generally insufficient to make the user the author of the generated result. However, the Office also explains that copyright can subsist in the human parts of an AI-assisted work: for example, when a creator contributes original expressive material, or creatively selects, arranges, edits, or modifies AI-generated content. The practical consequence is that AI can be a tool inside a protected work, but a fully machine-generated song with no meaningful human authorship is on much shakier ground.
In Europe, the situation is less settled. A 2025 study commissioned by the European Parliament described the status of AI-generated content in EU copyright law as uncertain and argued that the current legal framework was not designed for the expressive and synthetic nature of generative AI training. The same study emphasized the EU’s human-centered conception of authorship and recommended clear rules on transparency, opt-outs, fair remuneration, and licensing. Separately, the EU AI Act has already begun imposing transparency and copyright-related obligations on providers of general-purpose AI models, including requirements connected to summarizing training content. That does not solve the ownership question of every AI-generated song, but it marks a shift: regulators are increasingly focusing not only on outputs, but also on the data pipelines and business models behind them.
The hardest question may be the one least suited to legal analysis: is AI-generated music still art? The answer depends on what one thinks art is. If art is simply the production of pleasant patterns of sound, then AI can certainly produce something art-like. But if art involves intention, lived experience, vulnerability, interpretation, and the expression of a consciousness situated in the world, then AI-generated music remains derivative rather than fully artistic. It may simulate style, but it does not experience heartbreak, faith, grief, political anger, or joy. That does not mean AI has no place in artistic practice. It means the human role becomes decisive. Significantly, music fans themselves still place high value on that human role: IFPI’s global consumer research found strong support for restrictions on AI cloning and for requiring permission before artists’ music or vocals are used by AI systems.

That said, treating AI only as a threat would be too simplistic. AI can also function as an assistive technology. The U.S. Copyright Office explicitly states that using AI as a tool does not by itself eliminate copyright protection for a larger human-authored work. In practice, musicians may use AI to prototype melodies, generate demo arrangements, test genres, separate stems, brainstorm lyrics, or overcome technical and financial constraints. For independent creators, that can mean faster iteration and lower production costs. For disabled artists or creators without formal training, it can increase access to musical expression. The real divide, then, may not be between “AI music” and “real music,” but between AI used as a creative instrument under human direction and AI used as an automated substitute for creative labor.
Looking ahead, the most likely future for the sector is not total prohibition, but segmentation. One part of the market will keep pushing low-cost, mass-produced synthetic content, especially where music serves as functional background rather than authored expression. Another part will move toward licensing, filtering, attribution, and controlled commercial ecosystems. Deezer is already tagging and excluding AI-generated music from recommendations while selling detection technology to the broader industry. Universal Music Group, meanwhile, announced in late 2025 that it had settled litigation with Udio and would collaborate on a licensed AI music creation platform trained on authorized music. These developments suggest that the music business is slowly moving from a simple “AI yes or no” debate toward a more concrete model: licensed inputs, transparent systems, filtered outputs, and clearer revenue-sharing mechanisms.
This shift is happening inside a music industry that is still growing. IFPI reported that global recorded music revenues reached $31.7 billion in 2025, the eleventh consecutive year of growth. That matters because AI is not entering a collapsing market with nothing left to defend; it is entering a successful, streaming-led ecosystem in which rights, catalogues, recommendation systems, and audience trust already carry enormous value. The central question is therefore not whether AI will be part of music’s future. It already is. The question is whether that future will be organized around extraction or around consent. If the rules remain weak, AI may devalue music by overwhelming platforms with cheap supply and undermining creator income. If the rules become clearer, AI could remain a powerful tool without destroying the economic and moral foundations of musical creation.
In the end, AI forces the music industry to confront what it truly wants to protect. If music is treated only as content, then automated generation will always look like efficient progress. But if music is understood as culture, labor, identity, and human expression, then the industry cannot accept a model in which machines are trained on creators’ works without permission and then used to outcompete them. The future of the sector will likely depend on four pillars: transparency about training data, enforceable authorization rules, protection for genuine human authorship, and remuneration systems that prevent value from being transferred wholesale from artists to AI firms. The debate is therefore not really about whether AI can make music. It is about whether the industry can build a future in which technology serves creativity instead of replacing it.
Written by Antoine