
© Indic Pacific Legal Research LLP. 

The works published on this website are licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International.

For articles published in VISUAL LEGAL ANALYTICA, you may refer to the editorial guidelines for more information.

The UK Government Brief on AI and Copyright Law (2024), Explained

Updated: Feb 5

The author of this insight was a Research Intern at the Indian Society of Artificial Intelligence and Law.


 
Made via Luma AI.

The UK economy is driven by its creative industries, including TV and film, advertising, the performing arts, music publishing and video games, which together contribute nearly £124.8 billion in gross value added (GVA) to the economy annually. The rapid development of AI in recent years has sparked a debate, globally and within the UK, about the challenges and opportunities it brings. It has led to serious concerns within the creative and media industries about work being used to train AI without permission, and about media organisations being unable to secure remuneration through licensing agreements. There has also been a lack of transparency from AI developers about the content used to train their models, while those firms raise their own concerns about the lack of clarity over how they can legally access training data. These concerns are hindering AI adoption, stunting innovation, and holding the UK back from fully realising the potential AI holds. The UK government's consultation document highlights the need to work in partnership with both the AI and media sectors, and to ensure greater transparency from AI developers, in order to build trust between developers and the creative industries.


Focus Areas of the Consultation


The key pillars of the UK government's approach to copyright and AI policy are transparency, technical standards, contracts and licensing, labelling, computer-generated works, digital replicas and emerging issues. The government aims to tackle copyright challenges by ensuring that AI developers are transparent about the training data used for their models, and it seeks views on the level of transparency required to build trust between AI companies and creative-industry organisations. Establishing technical standards would improve and standardise the available tools, making it easier for creators to reserve their rights and for developers to respect those reservations. Licensing frameworks also need to be strengthened so that creators receive fair remuneration while AI developers gain access to the training material they need. Labelling measures would help distinguish AI-generated content from human-created work, fostering clarity for consumers. Additionally, the protection of computer-generated works needs to be aligned with modern AI capabilities so that fairness is ensured. Finally, addressing digital replicas, such as deepfakes, is essential to protect individuals' identities from misuse.

 

 

Figure 1: Key pillars of Copyright and AI policy

Overcoming Challenges in AI Training and Copyright Protection


The government's consultation document examines the problem of using copyrighted works to train AI models. AI developers use large amounts of data, including copyrighted works, to train their models, but many creators are not paid for the use of their work. The consultation highlights the issue of transparency, as creators often do not know whether their work appears in AI training datasets. The government acknowledges the conflict between copyright law and AI development, especially where AI outputs reproduce substantial parts of copyrighted works without permission, which could amount to copyright infringement. The Getty Images v Stability AI case is testing these questions in court, but it may take years to resolve. The government is considering legislation to clarify the rules around AI training and outputs and to strike the right balance between creators and AI developers.



Figure 2: A Venn Diagram discussing intersectional aspects around AI Training & Data Mining and Copyright Ownership & Creator Rights

 


Exceptions with rights reservation


Key features and scope


The data mining exception and rights reservation package under consideration would require increased transparency from AI firms about the training data they use, ensure that right holders are fairly paid when their work is used, and address the need for licensing. The proposed measures aim to regulate data mining activities and ensure lawful access to data, building trust and partnership between AI firms and media and creative organisations.


Figure 3: Proposed exceptions to Data Mining and its Scope.

Addressing Challenges in Developing and Implementing Technical Standards


There is a growing need for standardisation around copyright and AI so that publishers of content on the internet can reserve their rights while AI developers retain access to training data that does not infringe those rights. Regulation is needed to support the adoption of such standards and to ensure that the resulting protocols are recognised and complied with. Several generative AI web crawlers already flag to developers which data is unavailable for training.

Many firms and dataset owners also allow organisations to notify them directly if they do not want their work used to train an AI model. However, even the most widely adopted standard, robots.txt, cannot provide the granular control over the use of works that right holders seek.

robots.txt offers only coarse control because it cannot distinguish the purpose of crawling: content a publisher is happy to have indexed for search, or used for language training, may be swept up for generative AI under the same rules. The consultation therefore proposes standardisation that ensures developers have lawful access to training data while protocols protecting the rights over content are respected.
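As a minimal sketch of this limitation, the snippet below uses Python's standard `urllib.robotparser` to evaluate a hypothetical robots.txt file. The crawler names (`GPTBot`, `Googlebot`) are real user-agent strings, but the file itself is an illustrative assumption: the protocol can only key on a crawler's self-declared name, with no way to say "indexing allowed, AI training reserved" for the same agent, and any crawler not listed is permitted by default.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt trying to reserve rights against an AI
# training crawler while still admitting a search-engine crawler.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Rules match only on the crawler's self-declared user-agent string;
# there is no field expressing the *purpose* of the crawl.
print(parser.can_fetch("GPTBot", "/articles/essay.html"))      # False
print(parser.can_fetch("Googlebot", "/articles/essay.html"))   # True

# A new, unlisted AI crawler is allowed by default - the right holder's
# reservation simply does not reach it.
print(parser.can_fetch("UnlistedAIBot", "/articles/essay.html"))  # True
```

This default-allow behaviour for unlisted agents is one reason the consultation treats robots.txt alone as insufficient for rights reservation.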

 

Figure 4: Key focus areas to achieve technical standardisation

Contracts and licensing


Contracts and licensing for AI training often involve creators licensing their works through collective management organizations (CMOs) or directly to developers, but creators sometimes lack control over how their work is used. Broad or vague contractual terms and industry expectations can make it challenging for creators to protect their rights. CMOs play a crucial role in efficiently licensing large collections of works, ensuring fair remuneration for creators while simplifying access for AI developers. However, new structures may be needed to aggregate and license data for AI training. The government aims to support good licensing practices, fair remuneration, and mechanisms like text and data mining (TDM) exceptions to balance the needs of right holders and AI developers. Additionally, copyright and AI in education require consideration to protect pupils’ intellectual property while avoiding undue burdens on educators.


Ensuring Transparency: Tackling Challenges in Openness and Accountability


Transparency is crucial for building trust in AI and copyright frameworks. Right holders face challenges in determining whether their works are used for AI training, as some developers do not disclose or provide limited information about training data sources. Greater transparency can help enforce copyright law, assess legal liabilities, and foster consumer confidence in AI systems. Potential measures include requiring AI firms to disclose datasets, web crawler details, and compliance with rights reservations. However, transparency must be balanced with practical challenges, trade secret protections, and proportionality. International approaches, such as the EU's AI Act and California's AB 2013, offer insights into implementing effective transparency standards, which the UK will consider for global alignment.


Enhancing Accountability Through Effective AI Output Labelling Standards


Labelling AI-generated outputs enhances transparency and benefits copyright owners, service providers, and consumers by providing clear attribution and informed choices. Industry initiatives like Meta’s ‘AI info’ label exemplify current efforts, but consistent regulation may be needed to ensure uniformity and effectiveness. Challenges include defining the threshold for labelling, scalability, and preventing manipulation or removal of labels. International developments, such as the EU AI Act’s rules for machine-readable labels, offer valuable insights. The UK government will explore supporting research and development for robust labelling tools to promote transparency and facilitate copyright compliance.
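The paragraph above turns on what a machine-readable label actually looks like. The sketch below is a hypothetical illustration, not any published standard (real efforts such as C2PA's Content Credentials are far richer, covering cryptographic signatures and provenance chains); the field names are assumptions. It shows the basic shape: a structured label the generating service attaches, and a consumer-side check that reads it.

```python
import json

def make_label(generator: str, model: str) -> str:
    # Produce a machine-readable provenance label as a JSON string.
    # Field names here are illustrative, not a published schema.
    return json.dumps(
        {"ai_generated": True, "generator": generator, "model": model},
        sort_keys=True,
    )

def is_ai_generated(label: str) -> bool:
    # Consumer-side check: parse the label and read the flag,
    # treating malformed or unlabelled content as not AI-declared.
    try:
        return bool(json.loads(label).get("ai_generated", False))
    except json.JSONDecodeError:
        return False

label = make_label("ExampleService", "example-model-1")
print(is_ai_generated(label))  # True
```

The weakness the consultation flags is visible even in this toy: the label is plain text, so it can be stripped or altered in transit, which is why robust labelling research focuses on tamper-evident metadata and watermarking.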

 

Figure 5: AI Labelling, depicted.

 

Navigating Challenges in Regulating Digital Replicas


The use of AI to create "digital replicas" of actors and singers (realistic images, videos, and audio replicating their voices or appearances) has raised significant concerns within the creative industries. These replicas are often made without consent, using AI tools trained on an individual's likeness or voice. Existing protections in the UK, such as intellectual property rights, performers' rights under the CDPA 1988, and data protection laws, offer some control over the misuse of personal data or unauthorised reproductions. However, concerns remain about AI's ability to imitate performances or create synthetic reproductions, prompting calls for stronger legal protections, such as the introduction of personality rights. The government acknowledges these concerns and is exploring whether the current legal framework adequately protects individuals' control over their personality and likeness, while monitoring international developments such as proposed federal laws in the US.


Policy Analysis and The Way Ahead


The UK government's Copyright and AI consultation is a critical moment for policy to strike a balance between technological innovation and the protection of the creative industries. Broadly, the proposal aims to untangle a complicated thicket of legal issues around AI model training: it would allow AI developers access to copyrighted works unless rights holders specifically opt out, addressing the considerable grey areas of uncertainty that still hang over AI development. The consultation accepts that fast-moving technology no longer fits well within the existing copyright framework, putting the UK at risk of losing its edge in global AI innovation.

An opt-out mechanism in the copyright rules would give policymakers a way to protect intellectual property while preserving an environment conducive to technological progress. The creative industries express grave concerns that unlicensed use of their works by AI firms, justified by a notion of fair-use-style protection, will undermine their ownership rights.

AI companies counter that without broad access to training data, whether through licensing or exceptions, they will be unable to build sophisticated machine learning models. The consultation intends to find common ground: a solution that ensures AI's continued development while giving content creators some control and possible remuneration, helping to de-escalate the conflict between the two groups. Taking a longer-term view, the consultation represents the beginning of an attempt to get ahead of the curve in shaping copyright law, technology development, and IP policy in an increasingly AI-governed world.

 


