
In a significant policy reversal, the UK government has stepped back from controversial proposals that would have allowed artificial intelligence companies to freely use copyrighted books, music, films, and other creative works to train their systems. This decision follows intense pressure from the nation’s creative community, which argued the original plans threatened their livelihoods and intellectual property rights.
Technology Secretary Liz Kendall announced that the government no longer has a “preferred option” on copyright reform, walking back a position that had triggered one of the most unified campaigns of protest the creative industries had mounted in recent memory. The government’s change of course marks a significant victory for artists, authors, and performers who had mobilized against what they saw as an existential threat to their professions.
A Controversial Proposal Meets Fierce Resistance
Earlier this year, the UK’s Department for Science, Innovation and Technology had signaled support for a so-called “text and data mining” exception to copyright law. This legal provision would have permitted AI developers to scrape and utilize copyrighted material from the internet and other sources without seeking permission from or compensating the rights holders. The proposed model operated on an opt-out basis, meaning creators would have needed to actively register their works to prevent their use — a framework many argued was impractical and unjust.
The reaction from the creative sectors was swift and formidable. An alliance of musicians, novelists, screenwriters, actors, and visual artists launched a coordinated campaign, arguing the policy would effectively hand over the accumulated creative output of generations of British artists to technology companies at no cost. High-profile figures from the music, film, and publishing worlds publicly condemned the proposals, lending significant media visibility to the campaign.
The Economics of Creativity vs. AI Development
The debate sits at the intersection of two major economic priorities for the British government: supporting the thriving creative industries, which contribute over £100 billion annually to the UK economy, and positioning the country as a global hub for artificial intelligence development and investment. These two priorities had appeared to be in direct conflict under the previous policy approach.
AI companies argue that access to vast datasets — including copyrighted works — is essential for training the powerful models that underpin their products. Without broad training data, they contend, the quality and capability of AI systems would be significantly constrained, hampering innovation and competitive positioning against US and Chinese rivals. This argument has found sympathetic ears in government quarters focused on economic competitiveness.
What Rights Holders Are Demanding
Creative industry representatives are calling for a system in which AI companies must seek permission before using copyrighted works for training, and in which rights holders receive fair compensation when their work is used. Many point to licensing frameworks in other sectors — such as music streaming royalties — as models that could be adapted for the AI training context.
They also argue for greater transparency from AI companies about what data has been used to train their models, enabling rights holders to determine whether their works have been incorporated and to seek redress where they were used without permission. This transparency demand aligns with provisions in the European Union’s AI Act, which requires documentation of training data sources.
Global Context: A Battle Being Fought on Multiple Fronts
The UK’s policy reversal is one episode in a global struggle over the legal and ethical status of AI training data. In the United States, a series of high-profile lawsuits brought by publishers, news organizations, and individual creators against major AI companies is working its way through the courts, testing whether using copyrighted works without permission to train AI systems constitutes copyright infringement.
The European Union, through its AI Act and existing copyright directives, has taken a more prescriptive approach, requiring documentation of training data and providing opt-out mechanisms for rights holders. How these different national and regional frameworks evolve will significantly shape the global AI industry’s operating environment.
What Comes Next for UK Policy
With the government having stepped back from its previous position, a period of renewed consultation and policy deliberation lies ahead. Technology Secretary Kendall indicated that all options for balancing AI development needs against creators’ rights would be considered. This open-ended approach, while welcomed by the creative industries as a significant improvement over the previous stance, leaves considerable uncertainty about the eventual regulatory framework.
Industry bodies representing musicians, authors, and visual artists have called for a collaborative process that gives creators a genuine voice in shaping the outcome. AI companies and technology industry groups, meanwhile, will be lobbying hard to ensure that whatever framework emerges does not impose overly burdensome restrictions on their ability to develop and improve AI systems.
Conclusion
The UK government’s U-turn on AI copyright policy represents a meaningful acknowledgment that the concerns of the creative community cannot be dismissed in the rush to embrace artificial intelligence. The road ahead will require difficult negotiations and trade-offs, but the reversal has at least established that creators’ rights must be a central consideration in how AI development is regulated. The global AI community will be watching closely as the UK works toward a new framework — one that may well influence policy approaches in other jurisdictions.
