The Supreme Court of Singapore has issued an intriguing circular on the use of Generative AI, or "Rough Draft AI" (a term coined by AI expert Gary Marcus), by stakeholders in the courts. The guidance set out in the circular merits a careful breakdown.
To begin with, the circular itself shows that the Court does not regard GenAI tools as indispensable for court work; it treats Generative AI as a mere productivity enhancement tool, contrary to what many AI companies in India and abroad have claimed. This insight covers the circular in detail.
On 23rd September 2024, the Supreme Court of Singapore issued Registrar’s Circular No. 1 of 2024, providing a detailed guide on the use of Generative Artificial Intelligence (AI) tools in court proceedings. This guide is set to take effect from 1st October 2024 and will apply across various judicial levels, including the Supreme Court, State Courts, and Family Justice Courts. The document provides nuanced principles for court users regarding the integration of Generative AI in preparing court documents, but it also places a heavy emphasis on maintaining traditional legal obligations, ensuring data accuracy, and protecting intellectual property rights.
Scope and Application
The circular begins by defining its scope. It applies to all matters within the Supreme Court, State Courts (including tribunals such as the Small Claims Tribunals, Employment Claims Tribunals, and the Community Disputes Resolution Tribunals), and Family Justice Courts. All categories of court users, including prosecutors, legal professionals, litigants-in-person, and witnesses, fall within the ambit of the circular. It clarifies that while the use of Generative AI tools is not outright prohibited, users are still bound by existing legislation, professional codes, and practice directions.
Key Definitions
Several key definitions are provided to frame the guide:
Artificial Intelligence: Defined broadly, it encompasses technology that can perform tasks requiring intelligence, such as problem-solving, learning, and reasoning. However, it excludes more basic tools such as grammar-checkers, which do not generate content.
Court Documents: Includes all written, visual, auditory, and other materials submitted during court proceedings. This extends to written submissions, affidavits, and pleadings, placing emphasis on accurate and responsible content generation.
Generative AI: Described as software that generates content based on user prompts. This can encompass text, audio, video, and images. Examples include AI-powered chatbots and tools using Large Language Models (LLMs).
General Principles on the Use of Generative AI Tools
The Supreme Court maintains a neutral stance on the use of Generative AI tools. The circular is clear that Generative AI is merely a tool, and court users assume full responsibility for any content generated using such tools. Notably, the Court does not require pre-emptive declarations about the use of Generative AI unless the content is questioned. However, court users are encouraged to independently verify any AI-generated content before submitting it.
Responsibility and Verification: Court users, whether they are legal professionals or self-represented litigants, are required to ensure that AI-generated content is accurate, appropriate, and verified independently. For lawyers, this falls under their professional duty of care. Similarly, self-represented individuals are reminded of their obligation to provide truthful and reliable content.
Neutral Stance: The court clarifies that its stance on Generative AI remains neutral. While users may employ these tools for drafting court documents, the onus for the content lies solely with the user. This emphasises that Generative AI tools are not infallible and could generate inaccurate or misleading content. Users must ensure that all submissions are factual and comply with court protocols.
Generative AI: Functional Explanation
The document goes further to explain how Generative AI tools work, outlining their reliance on LLMs to generate responses that appear contextually appropriate based on user prompts. It compares the technology to a sophisticated form of predictive text but highlights that it lacks true human intelligence. While these tools may produce outputs that appear tailored, they do not engage in genuine understanding, posing risks of inaccuracies, especially in the legal context.
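The circular's "sophisticated predictive text" analogy can be made concrete with a toy sketch. The example below is purely illustrative and is not how production LLMs are built: it trains a simple bigram frequency model on a few sentences and picks the statistically most likely next word. Real LLMs use neural networks trained on vast corpora, but the core point the circular makes carries over: the model predicts plausible continuations without any genuine understanding of the words.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on billions of documents.
corpus = (
    "the court held that the contract was void "
    "the court held that the claim was dismissed "
    "the court found that the evidence was insufficient"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, with no
    understanding of what either word means."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("court"))  # "held" (it follows "court" more often than "found")
print(predict_next("the"))    # "court" (the most frequent continuation)
```

The model will happily emit a fluent-sounding continuation whether or not it is true, which is exactly the hallucination risk the circular warns about.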
The circular provides a cautious explanation of the limitations of these tools:
Accuracy issues: It warns that Generative AI chatbots may hallucinate, i.e., generate fabricated or inaccurate responses, including non-existent legal cases or incorrect citations.
Inability to provide legal advice: Court users are reminded that Generative AI cannot serve as a substitute for legal expertise, especially in matters requiring legal interpretation or advice. The circular advises caution in using such tools for legal research, as they may not incorporate the latest developments in the law.
Use in Court Documents
Generative AI tools can assist in the preparation of court documents, but the court mandates careful oversight. The following guidelines are provided:
Fact-checking and Proofreading: Users are instructed to fact-check and proofread AI-generated content. Importantly, users cannot solely rely on AI outputs for accuracy and must verify all references independently.
Relevance and IP Considerations: The court stresses that all content, whether generated by AI or not, must be relevant to the case and should not infringe on intellectual property rights. The guide cautions users against submitting material that lacks attribution or infringes copyright.
Prohibited Uses: While the use of AI for drafting preliminary documents, such as a first draft of an affidavit, is allowed, the circular strictly prohibits using AI to generate evidence. It also emphasises that AI-generated content should not be fabricated, altered, or tampered with to mislead the court.
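The fact-checking duty above can be framed as a simple workflow: treat every AI-cited authority as unverified until it has been matched against a trusted source. The sketch below is a hypothetical illustration; the case names are invented, and the "verified" set stands in for a manual check against an authoritative database such as Singapore Statutes Online or the eLitigation GD Viewer, which the circular points users to.

```python
# Hypothetical illustration: flag AI-cited authorities that cannot be
# matched against an independently verified list. In practice the
# "verified" set would be built by checking each citation by hand
# against an authoritative source; the case names below are invented.

ai_cited_cases = [
    "Tan v Lim [2019] SGHC 101",    # assume this citation checks out
    "Wong v Chua [2023] SGCA 999",  # assume this one was hallucinated
]

verified_citations = {
    "Tan v Lim [2019] SGHC 101",
}

def unverified(citations, verified):
    """Return every citation not found in the verified set; each must be
    independently confirmed before the document is filed."""
    return [c for c in citations if c not in verified]

for c in unverified(ai_cited_cases, verified_citations):
    print(f"UNVERIFIED: {c} - confirm against a trusted source before filing")
```

The point of the sketch is the discipline, not the code: nothing AI-generated goes into a filing until a human has traced it to a real source.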
Accuracy and Verification
A major focus of the circular is the need for court users to ensure accuracy in their submissions. The following are key responsibilities outlined for users:
Fact-checking: AI-generated legal research or citations must be fact-checked using trusted and verified sources. Self-represented litigants are provided guidance on using resources like Singapore Statutes Online and the eLitigation GD Viewer for such verification.
Accountability: If questioned by the court, users must be able to explain and verify the content generated by AI. They are expected to provide details on how the content was produced and how it was verified. The court retains the authority to question submissions and demand further explanations if the content raises doubts.
Intellectual Property Concerns
One of the key concerns when using Generative AI tools is ensuring that any content generated does not infringe upon the intellectual property rights of third parties. This involves adhering to copyright, trademark, and patent laws, especially when AI tools generate text, images, or other content based on user prompts.
Proper Attribution and Compliance with Copyright Laws
The circular mandates that court users must ensure proper attribution of sources when using AI-generated content. This includes accurately citing the original source of any material referenced or used in court documents. For instance, if a passage from a legal article or a textbook is included in an AI-generated draft, the user must provide the author’s name, title of the work, and year of publication. Failure to do so may not only lead to copyright infringement but can also affect the credibility of the court submissions.
The circular further clarifies that Generative AI tools should not be relied upon to generate evidence or content meant to represent factual claims, as AI can potentially fabricate information. If AI-generated content includes case law, statutes, or quotes, it is the responsibility of the court user to ensure the accuracy and proper citation of such references. This applies to both lawyers and self-represented litigants.
Generative AI and Copyright Infringement Risks
A key issue with Generative AI tools is that they are trained on vast datasets, which may include copyrighted material without proper licensing. While the AI itself may generate new content, the underlying data on which it is trained may pose risks of copyright violations if not properly addressed. For example, AI-generated text could inadvertently reproduce language from a copyrighted source, which may lead to legal disputes if the original source is not acknowledged.
Court users must be vigilant about verifying that the content generated by AI does not infringe on existing copyright protections. This is especially important when submitting legal documents to the court, as any infringement could lead to penalties, legal action, and damage to professional reputations. The circular reminds users that the responsibility for checking these issues lies with them, not with the AI tool.
Confidentiality Concerns
The circular also highlights the importance of maintaining confidentiality and safeguarding sensitive information when using Generative AI tools. This concern is particularly pressing because AI platforms may not always guarantee that the data inputted will remain confidential. In fact, many AI tools store user inputs for training purposes, which could result in unintentional disclosure of private information.
Risks of Inputting Confidential Data
The court warns that entering personal, confidential, or sensitive information into Generative AI platforms can lead to unintended consequences. Since most AI tools are cloud-based and developed by third-party providers, any data inputted could potentially be accessed or stored by the AI provider. This raises several issues, particularly with respect to legal privilege, client confidentiality, and data protection.
For example, if a lawyer inputs sensitive case details into an AI tool to draft a legal document, those details could be stored by the AI provider. This storage may inadvertently lead to the exposure of confidential information, potentially breaching data privacy laws or client confidentiality agreements. This is particularly concerning in cases where non-disclosure agreements (NDAs) are in place, or where the data falls under privileged communication between a lawyer and their client.
Compliance with Data Protection Laws
The circular emphasises that court users must comply with the relevant personal data protection laws and any confidentiality orders issued by the court. In Singapore, this would involve adhering to the provisions of the Personal Data Protection Act (PDPA), which regulates the collection, use, and disclosure of personal data. Failure to safeguard confidential data may lead to legal consequences, including fines, civil lawsuits, and disciplinary actions.
Legal Privilege and Sensitive Information
Additionally, the court reminds users that documents obtained through court orders must not be used for any purposes beyond the proceedings for which the order was granted. This reinforces the need for discretion when handling privileged documents and ensures that such documents are not exposed to Generative AI platforms, which could compromise their confidentiality.
The circular advises court users to refrain from sharing confidential case details with AI tools. Instead, users should take extra caution when deciding what information to include in AI prompts. The document acknowledges the potential for unauthorised disclosure, noting that information input into Generative AI tools could be stored or misused.
Therefore, users must take proactive steps to avoid breaching confidentiality obligations, particularly in cases involving sensitive personal data, trade secrets, or other proprietary information.
Intellectual Property Rights and Legal Implications
Court users are also reminded that existing laws on intellectual property rights, including provisions related to court proceedings, remain fully applicable. This means that while Generative AI tools can be used to generate drafts of legal documents, any content included in those documents must comply with IP laws.
Court Order Documents: If a court has granted a production order for specific documents, these materials must not be shared with Generative AI tools or used outside the proceedings for which they were obtained.
Respect for Privilege: Users must ensure that any data shared with Generative AI tools does not violate legal privilege. This includes ensuring that privileged communications between lawyers and clients remain confidential and are not disclosed to third-party AI providers.
Enforcement of IP and Confidentiality Rules
Failure to comply with the guidelines set out in the circular can result in significant penalties, including:
Cost orders: Users may be ordered to pay costs to the opposing party, particularly if AI-generated content is found to infringe IP rights or violate confidentiality rules.
Disciplinary actions: Lawyers who fail to comply with these rules could face disciplinary measures, including reprimands, suspensions, or fines.
Reduction in evidentiary weight: The court may also choose to disregard AI-generated submissions or reduce their evidentiary weight if they fail to meet accuracy, attribution, or confidentiality standards.
Conclusion
The Singapore Supreme Court's Registrar's Circular No. 1 of 2024 provides a pragmatic yet cautious approach to the use of Generative AI in court proceedings. While the court acknowledges the utility of such tools, it emphasises that responsibility for accuracy, relevance, and appropriateness remains squarely with the court user. Generative AI is positioned as a useful aid, but not a replacement for human judgment, legal expertise, or verification processes. Users of Generative AI are held to the same standards of accuracy, truthfulness, and integrity as in any other court submission.
Nevertheless, it seems clear that even the Supreme Court of Singapore does not deify Generative AI tools and remains quite cautious, which only increases trust in its judicial system. This cautious approach is further validated by recent findings from an Australian government regulator, which discovered that generative AI text solutions can actually increase workload rather than reduce it.
In a trial conducted by the regulator, it was found that AI-generated summaries of information were often less accurate and comprehensive than those produced by human analysts, requiring additional time and effort to correct and verify. This highlights the importance of the Singapore Supreme Court's emphasis on human oversight and responsibility when using generative AI in legal proceedings. While these tools may offer some efficiency gains, they are not a panacea and can introduce new challenges and risks if not used judiciously.
However, it would be unreasonable to dismiss the possibility that Generative AI, or Rough Draft AI, tools will be used extensively and could displace some legal work. As the technology continues to evolve and improve, it is likely that generative AI will play an increasingly significant role in various aspects of legal practice, from research and document preparation to predictive analytics and decision support. The key, as emphasised in the Singapore Supreme Court's circular, is to strike a balance between leveraging the capabilities of these tools and maintaining the human expertise, judgment, and accountability that are essential to the integrity of the legal system.
By setting clear guidelines and expectations for the responsible use of generative AI, the Singapore Supreme Court seems to have laid the groundwork for a future in which these technologies can be harnessed to enhance, rather than replace, the work of legal professionals.
Thanks for reading this insight.
Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train.
We offer special discounts for technology teams who may be interested in enrolling in the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.