The author is currently a Research Intern at the Indian Society of Artificial Intelligence and Law.
What is Character.AI and What is the Mass Deletion Event?
Imagine having your personal Batman, Superman, Iron Man, or even Atticus Finch: someone you can interact with at any moment. Character.AI has turned this dream into reality for many, especially within fandom communities. Character.AI is an artificial intelligence (AI) platform through which users interact with and create AI-powered chatbots based on either fictional characters or real people.
Since its launch in 2021, the platform has gained significant traction among fandom communities and has become a go-to platform for fans to explore interactions with their favorite, often fictional, characters. However, the platform's user base is not limited to fandom communities; it also extends to people interested in history, philosophy, and literature, and to those with other niche interests.
Character.AI also enjoys an advantage available to very few platforms: a genuinely diverse user base, spanning everyone from users with serious interests to casual explorers. Users from fandom communities saw the platform as a new way to engage with their favorite characters. The platform also enjoys a demographic advantage: the majority of its users are located in the United States, Brazil, India, Indonesia, and the United Kingdom.
However, Character.AI has also faced its fair share of controversies, the latest being a mass deletion drive involving copyrighted characters, which raised concerns over copyright infringement, platform liability, and platform ethics in the context of AI-generated content.
Overview of Character.AI’s platform and user base
Character.AI’s core value proposition lies in enabling users to interact with AI-powered chatbots designed to simulate lifelike conversation. These chatbots reflect the diverse personalities, conversational styles, and traits unique to the characters on which they were trained, making the platform particularly popular for role-playing and storytelling with favorite characters. At its heart, Character.AI is a conversational AI platform that hosts a wide range of chatbots and gives users the ability to interact with existing characters or create their own, customizing each character's personality and responses.
Character.AI boasts a diverse user base, with a large share falling within the 18-24 age group. The composition of its user demographics is visually represented in the following figure:
The platform hosts a wide range of characters, including historical figures, celebrities, fictional characters, and even dungeon masters. This makes it accessible to people of different age groups. The majority of its user base clearly stems from the 18-24 age group, and users under the age of 44 together account for 89.84 percent of the total.
Summary of the mass deletion of copyrighted characters
In November 2024, Character.AI carried out a mass deletion drive of AI chatbots based on copyrighted characters from various franchises, including "Harry Potter," "Game of Thrones," and "Looney Tunes." The company attributed the deletions to the Digital Millennium Copyright Act (DMCA) and copyright law. However, it did not explain what prompted the drive or whether it had proactively engaged in a dialogue with the copyright holders, most notably Warner Bros. Discovery. Interestingly, users were not officially notified about the deletions and only learned of the situation through a screenshot circulating online.
The removals met with a strong backlash from the user community, particularly from those within fandom cultures who had invested time and emotional energy in interacting with these AI characters. The removal of popular, familiar figures such as Severus Snape, who had clocked 47.3 million user chats, threw the fandom community into turmoil and, at the same time, cast doubt on the future of Character.AI and its relationship with copyrighted content.
Initial user reactions and impact on the fandom community
Initial user reactions ranged from frustration and disappointment to outright anger. Some users considered migrating to other AI platforms, and the deletions sparked discussions about the balance between copyright protection and creative expression on AI platforms. Many users expressed disappointment over the lack of prior notice regarding the deletion drive. One user remarked:
“at least a prior notice would be nice. This allows us to archive or download the chats at the very least. Also, I earnestly hope you finally listen to your community. Thank you!”.
Others criticized the unprofessional handling of the situation, with the platform communicating the news two days after the deletion drive had already occurred. While some users acknowledged, and in some ways anticipated, the likely reasons behind the drive, recognizing Warner Bros. Discovery's need to protect its IP from potential controversies, they were mostly concerned about the lack of transparent communication and the absence of any heads-up.
Copyright Law and AI-Generated Content
The mass deletion on Character.AI highlights the complex legal issues at the intersection of copyright law and AI-generated content. The use of copyrighted characters in AI chatbots raises concerns around copyright infringement, fair use, and the responsibilities of AI platforms regarding intellectual property rights.
Analysis of copyright infringement claims in AI-generated chatbots
Intellectual property law, and copyright law in particular, grants exclusive rights to copyright holders, including the rights to reproduce, distribute, license, and create derivative works based on their original creative works. The emergence of AI chatbots, and conversational AI in general, presents a complex conundrum: such systems potentially infringe upon these exclusive rights when they reproduce the protected elements of characters, their personalities, appearances, storylines, conversational styles, and ideologies; simply put, the characters in their entirety.
However, dealing with copyright infringement in the realm of AI-generated content is not an easy legal problem to overcome: matters in this area are still pending in the courts, and there are limited precedents on which to build a responsible discourse. All of this is further complicated by the fact that the Large Language Models (LLMs) powering these AI systems do not simply copy and present content. Instead, they analyze vast numbers of data points to learn patterns and generate works inspired not by a single copyright holder's work but by many. Courts will need to consider factors such as the extent to which an AI chatbot copies protected elements of copyrighted characters, the purpose of the use, and the potential impact on the market for the original work. The mind map below gives a comprehensive examination of the fair use arguments with respect to AI training.
Discussion of the Digital Millennium Copyright Act (DMCA) Implications
The Digital Millennium Copyright Act (DMCA) provides a safe harbor framework that protects online platforms from liability for copyright infringement by their users, provided that certain conditions are met. These conditions are illustrated for your reference in Figure 3. The DMCA also carries significant implications for platforms like Character.AI, requiring them to establish mechanisms for addressing infringement claims. This includes responding to takedown notices from copyright holders and proactively implementing measures to prevent potential infringements. However, the application of the DMCA to AI-generated content remains underdeveloped, leaving unanswered questions about how notice-and-takedown systems can effectively address the unique challenges posed by AI-generated content.
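To make the notice-and-takedown mechanism concrete, here is a minimal illustrative sketch of how a platform might validate and process a DMCA-style takedown notice. The class names, fields, and workflow are hypothetical simplifications for illustration, not Character.AI's actual system or a complete statement of the statutory requirements.

```python
from dataclasses import dataclass, field

@dataclass
class TakedownNotice:
    """Hypothetical, simplified model of a DMCA takedown notice."""
    copyrighted_work: str       # identification of the work claimed to be infringed
    infringing_material: str    # identification of the allegedly infringing material
    contact_info: str           # claimant's contact information
    good_faith_statement: bool  # statement of good-faith belief of infringement

    def is_valid(self) -> bool:
        # A facially complete notice must identify both works, provide
        # contact details, and include the good-faith statement.
        return all([self.copyrighted_work, self.infringing_material,
                    self.contact_info, self.good_faith_statement])

@dataclass
class Platform:
    """Hypothetical platform that removes content 'expeditiously' on valid notice."""
    hosted_content: dict = field(default_factory=dict)  # content_id -> uploader
    audit_log: list = field(default_factory=list)

    def process_notice(self, notice: TakedownNotice, content_id: str) -> str:
        if not notice.is_valid():
            return "rejected: incomplete notice"
        if content_id not in self.hosted_content:
            return "rejected: content not found"
        uploader = self.hosted_content.pop(content_id)  # expeditious removal
        self.audit_log.append(f"removed {content_id}, notified uploader {uploader}")
        return "removed"
```

A platform retaining safe harbor protection would also need to notify the uploader and honor counter-notices, steps omitted here for brevity.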
Platform Liability and Content Moderation
The mass deletion on Character.AI raises pertinent questions about the legal duties of AI platforms to moderate content and prevent harm. As AI chatbots become ever more capable of producing lifelike, immersive experiences, platforms face the tremendous challenge of ensuring user safety, protecting intellectual property rights, and living up to various legal and ethical standards.
Exploration of Character.AI’s legal responsibilities as a platform.
Character.AI, like other online platforms, bears legal responsibilities toward its users and society at large. These include protecting user privacy, preventing harm, and complying with the law of the land. The policies and guidelines in Character.AI's terms of service set out the dos and don'ts regarding user behavior, content, and intellectual property rights.
However, the specific legal obligations of such platforms, and the extent to which they should be held liable for content generated by their users or the actions of their chatbots, are still evolving. A recent lawsuit against Character.AI, a wrongful death case concerning a teenager's suicide after he formed a deep emotional attachment to a 'Daenerys Targaryen'-inspired chatbot, underscores the potential risks of conversational AI and, specifically, character-based conversational AI.
The lawsuit alleges negligence, wrongful death, product liability, and deceptive trade practices, claiming that Character.AI had a responsibility to inform users of the dangers related to the service, particularly the threat it posed to children.
Aside from its legal responsibilities, Character.AI also grapples with ethical issues: bias within training data, preventing its conversational AI models from becoming black boxes, and establishing accountability for the actions and impacts of its AI systems. These ethical concerns are critical in their own right and must be addressed as proactively as we seek to innovate. The figure below evaluates proactive versus reactive content moderation strategies.
Comparison with other AI platforms approaches to copyrighted content
Different AI platforms have adopted differing approaches to managing copyrighted content. Some strictly enforce policies against the use of copyrighted characters, whereas others have taken a more permissive approach, allowing users to create and interact with AI chatbots based on copyrighted characters under certain conditions.
For example, Replika and Chai have focused on the creation of novel AI companions rather than replicating pre-existing characters, minimizing copyright issues. NovelAI, on the other hand, has implemented features that let users generate content based on copyrighted works, but within limitations and safeguards designed to avoid copyright violations.
User Rights and Expectations in AI Fandom Spaces
The scheme of things is complex: copyrighted content is used to train large language models (LLMs), whose output is arguably derivative of the original works, and users then further refine these models through prompting to create personalized experiences and interact with figures they could never meet in real life. A new dynamic emerges, one in which expectations are set unreasonably high. This becomes even more critical when companies do not do their part in making users aware of the limitations of the conversational AI models those users want to experience. Users then invest significant time, creativity, and emotional energy in fine-tuning and interacting with these models.
All the interactions people have had with these models have helped the models improve. Users have contributed to the success of the chatbots and helped create personalized experiences for others. The initial reaction to the abrupt deletions highlighted the basic expectations of core users: some form of control or say over the deletion of chatbots and the data generated during interactions, and prior notice, so that they could archive their conversations before removal. It is crucial to understand that it is not just about the energy users spent crafting personalized conversations, but also the comfort they sought, the ideas they had, and the brainstorming they did with those chatbots.
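The archiving ability users asked for is technically simple, which is part of why its absence stung. The sketch below shows one plausible way a platform (or a user-facing export tool) could serialize a chat transcript to JSON before a character is deleted; the function name and message structure are illustrative assumptions, not an actual Character.AI feature.

```python
import json
from datetime import datetime, timezone

def export_chat(character_name: str, messages: list[dict]) -> str:
    """Serialize a chat transcript so the user keeps a copy after deletion.

    `messages` is assumed to be a list of {"role": ..., "text": ...} dicts;
    the real platform's internal format is unknown and would differ.
    """
    archive = {
        "character": character_name,
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "message_count": len(messages),
        "messages": messages,
    }
    # indent=2 keeps the archive human-readable, not just machine-readable
    return json.dumps(archive, indent=2)
```

A prior-notice policy paired with an export endpoint like this would have let users preserve conversations such as the 47.3 million Snape chats before removal.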
Examination of user generated content ownership in AI environments
A major open question for future technology-law jurisprudence, and a major concern for users of conversational AI, is whether users are, in part, copyright holders of the chats between themselves and the characters they interact with. Platforms like Character.AI allow users to have private, personalized conversations that are often unique to their input prompts; users can now also share their chats with others, arguably giving those chats the status of published works and complicating the question of ownership even further.
Character.AI's Terms of Service (TOS) provide that users retain ownership of their characters and, by extension, the generated content. However, the platform reserves a broad and sweeping license to use this content for any purpose, including commercial use. This convenient arrangement allows Character.AI to benefit commercially from user-generated content without compensating or crediting either the users who created the derivative content or, for that matter, the holders of the original copyrighted works.
Discussion of user expectations for persistence of AI characters
When it comes to the deletion of characters, Character.AI's TOS is broad and sweeping. It states that Character.AI reserves the right to terminate accounts and delete content for a variety of reasons, including inactivity or violation of the TOS, often without prior notice. This lack of transparency in content moderation is especially consequential when there can be severe emotional fallout for those who rely on these characters for emotional and mental support. The ethical implications of such an opaque policy are amplified in the context of fandom, where fans often depend on the parasocial relationships they enjoy with their fictional characters.
In addition, the TOS contains the following provision:
“You agree that Character.AI has no responsibility or liability for the deletion or failure to store any data or other content maintained or uploaded to the Services”.
Such terms only exacerbate the asymmetry between the control, influence, and certainty users expect and the powers the company wants to exercise unquestioned. They not only neglect user rights but also fail to address ethical concerns such as transparency and fair moderation.
Analysis of potential terms of service and user agreement issues.
Character.AI's terms of service contain several contentious provisions, depicted in the figure below:
These provisions raise several legal and policy concerns. Their broad, sweeping disregard of user expectations highlights the need for a more balanced approach, one that protects user rights while still allowing for innovation and the responsible use of conversational AI. This is all the more pertinent for conversational AI systems, where users rely on platforms for emotional validation, support, and interaction, and where the consequences of a dispute fall far more heavily on the user than on the platform.
Ethical Considerations in AI-Powered Fandom Interactions
Exploration of parasocial relationships with AI characters
One significant concern that has emerged since the advent of conversational AI, and especially personalized, personality-based conversational AI, is the development of parasocial relationships. Parasocial relationships are one-sided attachments in which individuals develop emotional connections to fictional or media personalities. The development of such bonds is even more common in fandom spaces. Within fandom communities, where people are already emotionally invested in their favorite characters and universes, these relationships can rival the reality they live in, sometimes exceeding real-life relationships.
The introduction of conversational AI further intensifies these relationships and dynamics, as the interactions become personalized, interactive, and more lifelike. Character.AI even offers the option to call your personal 'Batman', 'Harvey Specter', 'Harley Quinn', or a random 'mentorship coach'. Imagine interacting with them and feeling intimately close to the figures you admire through this feature. The increasing sophistication of AI characters and their ability to mimic human-like conversation only blurs the line between the real and the simulated.
For many people, these interactions feel real and have real-world consequences. AI companies and their developers have an ethical responsibility to be transparent about the limitations of AI characters and to ensure that they do not mislead users about their capabilities or simulate emotions those systems cannot experience. Minors and the elderly are especially vulnerable to manipulative conversational AI systems which, if left unchecked, risk leaving people in distorted realities and alienated worlds of their own making, or, simply put, worlds the AI systems manipulated them into.
Discussion of potential psychological impacts on users, especially minors
The psychological implications of early and excessive exposure to conversational AI are significant, particularly for children. Much like social media, these systems could hinder the development of social skills and the ability to build meaningful, real-world relationships, hurting children's prospects of becoming mature, reasonable adults who can navigate the challenges of complex human dynamics.
Research suggests that users, particularly children, may be vulnerable to the "empathy gap" of AI chatbots. Children are likely to treat AI characters as friends and to misinterpret their responses due to a limited understanding of the technologies they are interacting with. Studies have also suggested that interactions with AI systems can increase loneliness, sleep disturbances, alcohol consumption, and depression.
Moreover, early introduction to AI systems, with limited awareness and in the absence of effective regulatory and support mechanisms, would promote unhealthy behaviors detrimental not only to human interaction but also to mental and physical health and emotional intelligence. This could have second-order effects on careers and real-world interactions, where users might develop the unreasonable expectation that other people will do as they say and expect (something LLMs are known to do).
Ethical implications of AI characters mimicking real or fictional personas
AI characters that mimic real-life or fictional personalities raise a whole range of ethical dilemmas whose consequences we are not yet prepared to understand. Issues of identity, authenticity, consent, lifelike conversational mimicry, and manipulation demand a nuanced understanding, set against a backdrop of disagreement over even the definition of what AI actually is.
For example, using AI to create personas of real people without their explicit consent can be seen as a gross violation of their privacy. Additionally, actors or creators associated with the original characters might face unintended consequences, such as users displacing attachment, love, anger, pain, and distress onto them, creating real-world consequences and second-order effects that are hard to mitigate.
The potential for misrepresentation and manipulation by AI characters is equally troubling. Technologies like deepfakes have already illustrated the potential for misinformation, reputational damage, and legal consequences for those whose AI personas commit or abet such manipulation. Fictional personas may likewise reinforce unsuitable or inappropriate narratives or behaviors drawn from the data on which the chatbots were trained. For example, an AI character based on a fictional antagonist could reinforce negative stereotypes or behaviors when the users interacting with it do not understand how the technology functions and the safeguards needed to protect them are absent.
To address these risks, companies developing AI characters must adopt widely accepted ethical standards. It is crucial to educate users about the limitations of AI systems and to implement transparent practices that prevent harm.
Intellectual Property Strategies for Media Companies in the AI Era
The rise of AI presents both challenges and opportunities for media companies that seek to protect their intellectual property portfolios while embracing innovation. Traditional IP frameworks need to be reimagined and redesigned to address the unique set of challenges that AI-generated content and AI-powered fandom bring to the table.
It is crucial to highlight that AI systems enjoy an asymmetrical advantage over the IP rights holders whose creative works are often used to train their LLMs. While LLM companies rapidly ideate, scale, and distribute the fruits of their models, the analysis of the core issues central to shaping future discourse remains tied up in court for a significant while. Adding to the stagnant nature of policymaking is governments' hesitance to rapidly adopt effective policies and legislation, for fear of completely stifling innovation.
The owners of those exclusive works face a slower process of defending their rights through the courts, and they are often ill-equipped with strategies to enforce their rights over their creative works. The incentive structures for AI companies encourage them to develop and scale their products quickly and to enjoy the revenue from commercializing their LLMs, often leaving IP holders scrambling to even claim rights over their own creative works.
Meanwhile, governments hesitate, not wanting to stifle innovation or the potentially helpful use cases of these systems, yet they rarely move beyond a whack-a-mole approach to shaping the policy discourse around AI and law.
Analysis of Warner Bros. Discovery’s approach to protecting IP
Warner Bros. Discovery is a media and entertainment company facing the challenge of protecting its vast, mature IP portfolio in the age of AI. The company's approach involves a combination of legal strategies, protective measures, and proactive engagement with AI platforms. The rapid ideation, scaling, and implementation advantage of AI companies requires copyright holders in media and creative works to adopt a variety of measures, both ex ante and ex post. A key component of this approach involves monitoring AI platforms and communities for unauthorized use of intellectual property in training chatbots, taking legal measures against infringements, negotiating licensing opportunities, and exploring the future of media entertainment. In the present context, Warner Bros. Discovery appears to have devised a proactive strategy for dealing with infringements in the digital environment, enforcing its claims over its IP rights without resorting to litigation.
Warner Bros. Discovery and other media and entertainment companies have a once-in-a-decade opportunity to collaborate with AI platforms to develop tools and technologies that protect their intellectual property portfolios while furthering innovation; curbing misinformation and unauthorized access; addressing ethical concerns; and enabling AI platforms to put in place compliance measures that reduce their liabilities.
These collaborations could pave the way for industry standards and best practices for IP protection at a stage when these technologies are still developing. Such unprecedented collaborations could also help educate the public about misinformation, consent, and unauthorized access, and help set user expectations. Media and entertainment companies could assist AI platforms in explaining their terms of service, privacy policies, and user agreements in a story format with the help of AI characters. This would foster a healthier and more effective approach to the ethical concerns raised time and again by the various stakeholders shaping the discourse around AI systems and content creation.
Exploration of Licensing Models for AI Character Creation
Recent cases, such as Dow Jones and NYP Holdings v. Perplexity AI and Bartz v. Anthropic, mark a significant turning point in the potential relationship between AI companies and the owners of the creative works on which LLMs are trained. In both cases, the owners of exclusive intellectual property have expressed a willingness to collaborate and explore licensing strategies that provide fair compensation for the use of their works in training LLMs. This marks a change in the approach IP holders want to take to earn an additional source of revenue, and it highlights that they are not opposed to the use of their copyrighted content as such, but are concerned about the piracy of content over which they are the sole rights holders.
There are various licensing strategies that AI companies and media entertainment companies could explore: exclusive licenses, non-exclusive licenses, revenue-sharing models, and usage-based licenses. These models could be adopted depending on the context in which AI companies use the copyrighted content. The pros and cons of these models are explained hereinafter in the form of a mind map:
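To make the trade-off between these licensing models concrete, here is a small numeric sketch comparing a flat non-exclusive license fee against a usage-based royalty. All function names, fees, and per-interaction rates are hypothetical assumptions for illustration, not terms from any real negotiation.

```python
def flat_license_cost(annual_fee: float, years: int) -> float:
    """Non-exclusive flat license: cost is independent of usage volume."""
    return annual_fee * years

def usage_based_cost(interactions: int, rate_per_interaction: float) -> float:
    """Usage-based license: cost scales with the number of chatbot interactions."""
    return interactions * rate_per_interaction

def cheaper_model(interactions: int, annual_fee: float,
                  rate: float, years: int = 1) -> str:
    """Return which model costs the AI platform less at a given usage level."""
    flat = flat_license_cost(annual_fee, years)
    usage = usage_based_cost(interactions, rate)
    return "flat" if flat <= usage else "usage-based"
```

Under these assumptions, a low-traffic character favors usage-based terms while a heavily used character (like a 47-million-chat Snape) favors a flat fee, which is why the choice of model depends on the context of use.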
Conclusion and Recommendations
To conclude, potential collaborations between IP holders and AI platforms will shape how users and owners of creative works view the incentive structures, and what other forms of entertainment are yet to be explored. The 'tabooisation' of AI systems in creative fields will only be detrimental to media companies. Instead, by choosing to embrace a future that is already here and is here to stay, media companies would be able to develop interactive narratives, personalized experiences, postscript bites, and other new entertainment forms that work in collaboration with, and not in isolation from, AI systems.
Here are some mind maps reflecting suggestions for balancing copyright protection and innovation in the context of AI use.