
In a development that underscores the perils of AI in legal proceedings, a Stanford University professor's expert testimony was recently excluded by a Minnesota federal court after it was discovered that his declaration contained fake citations generated by AI.
The case, Kohls v. Ellison, which challenges Minnesota's deepfake law, has become a cautionary tale about the intersection of artificial intelligence and legal practice.
Professor Jeff Hancock, Director of Stanford's Social Media Lab and an expert on AI and misinformation, inadvertently included AI-hallucinated citations in his expert declaration. The irony was not lost on Judge Laura M. Provinzino, who noted that an AI misinformation expert had "fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less."
The incident has sparked broader discussions about evidence reliability, professional responsibility, and the need for robust verification protocols in an era where AI tools are increasingly common in legal practice.
This legal-policy analysis examines the incident and, since it is one of many similar episodes, draws out the caution it offers for how we should approach AI-related questions in evidence law.
The Ironic Incident

The deepfake-related lawsuit in Minnesota took an unexpected turn with the filing of two expert declarations—one from Professor Jevin West and another from Professor Jeff Hancock—for Attorney General Keith Ellison in opposition to a motion for a preliminary injunction. As noted, “[t]he declarations generally offer background about artificial intelligence (“AI”), deepfakes, and the dangers of deepfakes to free speech and democracy”.
Plaintiffs promptly moved to exclude both declarations, asserting they were “conclusory and contradicted by the experts’ prior writings”. However, the controversy escalated when it was revealed that the Hancock Declaration contained fabricated references, courtesy of an inadvertent reliance on GPT-4o.
According to the record, “[a]fter reviewing Plaintiffs’ motion to exclude, Attorney General Ellison’s office contacted Professor Hancock, who subsequently admitted that his declaration inadvertently included citations to two non-existent academic articles, and incorrectly cited the authors of a third article”.
As explained, “These errors apparently originated from Professor Hancock’s use of GPT-4o—a generative AI tool—in drafting his declaration. GPT-4o provided Professor Hancock with fake citations to academic articles, which Professor Hancock failed to verify before including them in his declaration”.
Although Professor Hancock offered a “detailed explanation of his drafting process” and stated “he stands by the substantive propositions in his declaration”, the court could not overlook the severity of filing “a declaration made under penalty of perjury with fake citations”.
Plaintiffs further argued that “the fake citations in the Hancock Declaration taint the entirety of Professor Hancock’s opinions and render any opinion by him inadmissible”.
The Court’s Scathing Response
In evaluating both declarations, the court first observed that it would assess the “competence, personal knowledge and credibility” of each submission rather than apply a full Daubert analysis at the preliminary-injunction stage. Regarding the West Declaration, the court found that while Plaintiffs deemed it “conclusory”, its overall “competence, personal knowledge and credibility” made it admissible for the limited purpose of preliminary-injunction proceedings.
Moreover, the court noted that “an expert may not testify as to whether ‘a legal standard has been met,’ [but] … may offer his opinion as to facts that, if found, would support a conclusion that the legal standard at issue was satisfied”, a standard the West Declaration met. Additionally, whether counterspeech is effective in combatting deepfakes was treated as “a fact relevant to the ultimate legal inquiry,” not as “a legal standard”.
In stark contrast stood the Hancock Declaration. Labelling it “particularly troubling”, the court underscored the “irony” that a “credentialed expert on the dangers of AI and misinformation” had “fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less”.
The court stressed that, regardless of whether the mistakes were innocent, “the fact remains that Professor Hancock submitted a declaration made under penalty of perjury with fake citations”. This lapse in judgment, especially from someone who “typically validates citations with a reference software when he writes academic articles” but failed to do so here, “shatters his credibility” before the court.
Beyond the specific repercussions for Professor Hancock, the court’s admonition carried broader implications. It reminded counsel of the “personal, nondelegable responsibility” under Rule 11 of the US Federal Rules of Civil Procedure to “validate the truth and legal reasonableness of the papers filed” in any action and proclaimed the now-familiar warning that attorneys and experts must “verify AI-generated content in legal submissions!”.
Ultimately, the court excluded Professor Hancock’s testimony in its entirety for the preliminary-injunction analysis, emphasizing that “signing a declaration under penalty of perjury” should never be treated as a “mere formality”. Instead, reliability and trustworthiness remain paramount for the judicial process, as “citing to fake sources imposes many harms, including ‘wasting the opposing party’s time and money, the Court’s time and resources, and reputational harms to the legal system’”.
Legal and Professional Implications
The court’s order in Kohls v. Ellison emphasises that the submission of expert declarations under penalty of perjury must remain a solemn undertaking, and that “citing to fake sources imposes many harms, including ‘wasting the opposing party’s time and money, the Court’s time and resources, and reputational harms to the legal system (to name a few).’”
As a result, professionals in both the legal and academic realms are under increasing pressure to ensure all cited materials—especially those derived from AI tools—are thoroughly verified to avoid undermining credibility.
Impact on Expert Witness Credibility
Expert witnesses play a pivotal role by helping courts grasp technically complex or scientific issues, yet “Professor Hancock’s citation to fake, AI-generated sources in his declaration…shatters his credibility with this Court.”
Even if the remaining portions of an expert’s testimony carry legitimate insights, any inclusion of unreliable citations can effectively negate their value. Because trust is paramount, courts are inclined to exclude testimony once credibility is compromised.
Responsibilities When Filing in Court
Rule 11 of the US Federal Rules of Civil Procedure places a “personal, nondelegable responsibility” on attorneys to verify that filings are factually and legally sound. This means counsel must conduct a “reasonable inquiry under the circumstances” before submitting an expert’s declaration. When AI tools like GPT-4o are used in drafting, attorneys must confirm that witnesses have checked the authenticity of any references generated by AI, ensuring the final declarations are not contaminated by fictitious sources.
Verification Requirements for AI-Generated Content
In granting in part the motion to exclude the expert declarations, the court “adds its voice to a growing chorus of courts around the country declaring the same message: verify AI-generated content in legal submissions!”
This clarion call underscores the need for robust protocols—such as cross-referencing AI-furnished citations in academic databases—to guard against fabricated content that may slip into legal filings.
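By way of illustration only, the sketch below shows one way such a cross-check could be automated: resolving an AI-supplied DOI against the public Crossref REST API and flagging citations that do not resolve or whose recorded title differs from the one claimed. The DOI and title in the example are hypothetical placeholders, and the snippet assumes the `requests` library and network access; it is a minimal sketch, not a complete verification workflow.

```python
# Minimal sketch (illustrative only): cross-checking an AI-supplied citation
# against the public Crossref database by DOI. A real verification workflow
# would also confirm authors, venue, and pagination, and consult other
# databases (e.g., PubMed, SSRN, HeinOnline) where relevant.
from typing import Optional

import requests

CROSSREF_API = "https://api.crossref.org/works/"

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves to a record in Crossref."""
    resp = requests.get(CROSSREF_API + doi, timeout=10)
    return resp.status_code == 200

def recorded_title(doi: str) -> Optional[str]:
    """Return the title Crossref has on record, or None if the DOI is unknown."""
    resp = requests.get(CROSSREF_API + doi, timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json().get("message", {}).get("title", [])
    return titles[0] if titles else None

if __name__ == "__main__":
    # Hypothetical citation pulled from an AI-assisted draft.
    claimed_doi = "10.1000/example.0000"
    claimed_title = "Deepfakes and the Erosion of Trust"

    if not doi_exists(claimed_doi):
        print("FLAG: DOI not found in Crossref; citation may be fabricated.")
    else:
        on_record = recorded_title(claimed_doi)
        if on_record and on_record.strip().lower() != claimed_title.strip().lower():
            print(f"FLAG: title mismatch; Crossref records '{on_record}'.")
        else:
            print("DOI resolves and the recorded title matches the claim.")
```

A check of this kind complements, rather than replaces, human review of the cited source itself.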
Evidence Law in the AI Era
Trust Mechanisms in Legal Proceedings
Declarations under penalty of perjury benefit from “indicia of truthfulness” that courts rely on to assess credibility.
Where AI is involved, these traditional trust mechanisms are tested: the chain of verifying the accuracy and reliability of reference materials is complicated when the “author” is a software algorithm, rather than a person who can be cross-examined.
Challenges to Traditional Evidence Standards
Traditional evidence rules—rooted in witness testimony, document authentication, and cross-examination—did not anticipate a party’s reliance on AI-generated documents containing fabricated citations or invented studies. As a result, courts face unprecedented procedural stress, such as how to handle novel forms of “fictitious” or “hallucinatory” references inserted by AI tools.
Need for New Verification Protocols
Because AI integration into legal practice continues to expand, judges, lawyers, and experts must develop fresh or updated reference-checking procedures. The court admonished that “the Court should be able to trust the ‘indicia of truthfulness’ that declarations made under penalty of perjury carry, but that trust was broken here.”
This highlights the urgent need for multi-step validation protocols, peer reviews, and external verification software to safeguard procedural integrity.
Future Guidelines and Best Practices
Proposed Standards for AI Use in Legal Documents
Going forward, courts may set baseline requirements for the use of AI, such as compulsory disclosure of AI assistance and mandatory cross-checking of citations. The order in Kohls v. Ellison suggests that attorneys should explicitly ask their experts if they have relied on AI—and if so, what steps they took to verify any AI-generated content. Failure to comply could invite adverse rulings and sanctions.
Expert Witness Responsibilities
Expert witnesses, especially those testifying on topics like deepfakes and AI, owe heightened duties of diligence. As the court stated, “One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles.”
Experts must now demonstrate to judges that they mitigate the risk of AI “hallucinations” by employing reference management systems and meticulously confirming each cited source.
Attorney Due Diligence Requirements
For attorneys, professional accountability includes overseeing the drafting process to confirm that any AI-derived references have been verified. This may involve:
Asking explicit questions about the extent of AI usage.
Insisting on documentary proof (e.g., PDF copies, DOIs, or credible database links) for each citation.
Staying abreast of emerging precedential guidelines, as courts progressively adapt their evidentiary rules to the realities of AI.
Broader Implications for Courts in India and the Indo-Pacific
Courts across India and the wider Indo-Pacific region increasingly face legal submissions involving AI-generated content, compelling them to define verification standards for evidence.
India’s recent judicial initiatives—including digitisation drives and pilot AI projects—mirror broader Asia-Pacific trends, where lawmakers and courts debate how best to balance innovation with accuracy and due process. In particular, concerns about fabricated citations resonate with legislative and judicial authorities seeking to maintain public trust in the judicial process while welcoming AI’s potential for legal research and document automation.
Impact on Academic-Legal Collaborations
Such incidents place academic-legal collaborations under scrutiny, pushing universities to mandate tighter guidelines when faculty serve as expert witnesses. Enhanced training on AI’s limitations—and transparent disclaimers about the reliability of AI-derived sources—can mitigate risks while preserving fruitful collaboration.
Trust in Expert Testimony
Courts traditionally extend professional deference to expert witnesses, presuming their diligence in source verification. Yet the discovery of bogus citations “shatters credibility” and erodes the trust integral to expert testimony. This dynamic is prompting courts to demand detailed methodologies from experts, including any use of AI.
For instance, requiring supplemental affidavits attesting to source verification can help restore the damaged “indicia of truthfulness” once guaranteed by penalty-of-perjury statements.
Indian Evidence Law Framework
The Bharatiya Sakshya Adhiniyam (BSA) introduces Section 61, which explicitly recognises electronic and digital records and treats them on par with documentary evidence. Section 63 of the BSA expands the scope to include electronic records stored in semiconductor memories, alongside traditional paper and optical/magnetic media storage.
For AI-generated content to be admissible as evidence, it must satisfy stringent authentication requirements. Under the BSA, such content would likely be classified as 'digital' or 'electronic evidence,' requiring authentication through certificates signed by both the person in charge of the computer/communication device and an expert.
Authentication Challenges
The multi-contributor nature of AI systems presents unique verification challenges:
Multiple persons involved in data collation, model training, and testing
Complex self-learning algorithms making certification cumbersome
Difficulty in explaining functioning of advanced AI systems, especially those involving deep learning
Singapore Supreme Court’s AI Guidelines
The Supreme Court of Singapore issued comprehensive guidelines for generative AI use in court proceedings through Registrar's Circular No. 1 of 2024, effective October 1, 2024. Key provisions include:
Core Principles
Maintains a "neutral stance" on GenAI tools
Treats AI as a tool, with users bearing full responsibility
No pre-emptive declaration required unless specifically questioned
Specific Requirements
Document Preparation:
AI can be used for drafting but not for generating evidence
All content must be fact-checked and independently verified
References must be authenticated using trusted sources
Verification Protocol:
Users must verify citations through official channels
Cannot use one AI tool to verify another's output
Must be prepared to identify AI-generated portions
Professional Responsibility:
Lawyers retain professional obligations
Self-represented persons bear responsibility for accuracy
Violations may result in costs orders or other sanctions
Future Implications for Legal Systems
Evidence Authentication Systems
Courts must develop comprehensive verification protocols for AI-generated content. This includes establishing chain-of-custody requirements specific to AI outputs and implementing multi-step validation processes. The Singapore Supreme Court's approach provides a model, requiring users to "independently verify AI-generated content" and authenticate "references through official channels". Such systems should incorporate both technical and procedural safeguards, moving beyond simple human verification to include specialised software tools and expert review processes.
Professional Standards Evolution
The legal profession must adapt its ethical guidelines and practice standards. As demonstrated in Kohls v. Ellison, even experienced professionals can fall prey to AI hallucinations, necessitating new professional responsibilities. Legal practitioners must now implement specific verification protocols for AI-generated content, including:
Documentation Requirements
Mandatory disclosure of AI use in legal submissions
Detailed records of verification methods employed
Clear attribution of human oversight and responsibility
Verification Protocols
Practitioners must establish robust systems that go beyond traditional citation checking. This includes using specialized software for AI content detection and maintaining clear documentation of AI use in legal submissions.
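As a purely illustrative aid, and not a format prescribed by any court or bar, the sketch below outlines how such documentation might be structured in practice: a record of which AI tools assisted the drafting, which citations they suggested, and who verified each one and by what method. All names and fields are hypothetical.

```python
# Illustrative sketch of a structured AI-use and verification log for a legal
# filing. Field names are hypothetical; adapt to local practice rules and any
# court-specific disclosure requirements.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CitationCheck:
    citation: str        # Full citation as it appears in the filing
    ai_suggested: bool   # Whether an AI tool proposed this source
    verified_by: str     # Human reviewer responsible for the check
    method: str          # e.g., "retrieved PDF", "Crossref DOI lookup"
    verified_on: date

@dataclass
class AIUseRecord:
    document: str                    # Filing the record relates to
    ai_tools_used: list[str]         # e.g., ["GPT-4o (drafting assistance)"]
    human_reviewer: str              # Attorney with final responsibility
    checks: list[CitationCheck] = field(default_factory=list)

    def unverified(self) -> list[CitationCheck]:
        """Citations still lacking a documented human verification step."""
        return [c for c in self.checks if not c.verified_by]

# Example usage with placeholder data.
record = AIUseRecord(
    document="Expert Declaration (Draft 3)",
    ai_tools_used=["GPT-4o (drafting assistance)"],
    human_reviewer="Supervising attorney",
)
record.checks.append(
    CitationCheck(
        citation="Doe & Roe, 'Synthetic Media and Elections' (2023)",
        ai_suggested=True,
        verified_by="Research associate",
        method="Retrieved PDF from publisher and confirmed DOI",
        verified_on=date(2025, 1, 10),
    )
)
print(f"Citations still awaiting verification: {len(record.unverified())}")
```

Keeping such a log would also make it straightforward to answer a court’s question about the extent of AI assistance, should disclosure ever be required.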
Judicial Framework Development
Courts must establish clear standards for:
Evidence Admissibility
The current approach to AI-generated evidence remains fragmented. Courts need standardised criteria for evaluating the reliability and authenticity of AI-generated content. This includes:
Technical standards for validating AI outputs
Requirements for expert testimony regarding AI systems
Clear protocols for challenging AI-generated content
Expert Testimony Guidelines
The Minnesota case demonstrates the need for updated standards governing expert testimony about AI systems. Courts must establish:
Qualification requirements for AI experts
Standards for validating AI-generated research
Protocols for verifying AI-assisted expert declarations
These changes require a fundamental shift in how legal systems approach technology-generated evidence, moving beyond traditional authentication methods to embrace new technical and procedural safeguards.
As Judge Provinzino noted, “when attorneys and experts abdicate their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers, the quality of our legal profession and the Court’s decisional process suffer”.
List of Quotes from Kohls v. Ellison
“The declarations generally offer background about artificial intelligence (“AI”), deepfakes, and the dangers of deepfakes to free speech and democracy. ECF No. 23 ¶¶ 7–32; ECF No. 24 ¶¶ 7–23.”
“Plaintiffs moved to exclude these declarations, arguing that they are conclusory and contradicted by the experts’ prior writings.”
“After reviewing Plaintiffs’ motion to exclude, Attorney General Ellison’s office contacted Professor Hancock, who subsequently admitted that his declaration inadvertently included citations to two non-existent academic articles, and incorrectly cited the authors of a third article. ECF No. 37 at 3–4. These errors apparently originated from Professor Hancock’s use of GPT-4o—a generative AI tool—in drafting his declaration. ECF No. 39 ¶¶ 11, 21. GPT-4o provided Professor Hancock with fake citations to academic articles, which Professor Hancock failed to verify before including them in his declaration. Id. ¶¶ 12–14.”
“Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim… the fact remains that Professor Hancock submitted a declaration made under penalty of perjury with fake citations.”
“Plaintiffs continue to maintain that the fake citations in the Hancock Declaration taint the entirety of Professor Hancock’s opinions and render any opinion by him inadmissible. ECF No. 44 at 8–9.”
“Rather, the Court will evaluate the ‘competence, personal knowledge and credibility’ of the West and Hancock Declarations.”
“As for the West Declaration, Plaintiffs argue that it is conclusory because it lacks a reliable methodology under Daubert. ECF No. 30 at 17–21… the Court is satisfied that the ‘competence, personal knowledge and credibility’ of Professor West’s testimony weigh in favor of admitting his declaration at this early stage.”
“Although an expert may not testify as to whether ‘a legal standard has been met,’ an expert ‘may offer his opinion as to facts that, if found, would support a conclusion that the legal standard at issue was satisfied.’”
“But whether counterspeech is effective in combatting deepfakes is not a legal standard; rather, it is a fact relevant to the ultimate legal inquiry here: the First Amendment means-fit analysis.”
“Professor Hancock… has fallen victim to the siren call… shatters his credibility with this Court… verify AI-generated content in legal submissions!”
“Signing a declaration under penalty of perjury is not a mere formality; rather, it ‘alert[s] declarants to the gravity of their undertaking and thereby have a meaningful effect on truth-telling and reliability.’”
“Moreover, citing to fake sources imposes many harms… Courts therefore do not, and should not, ‘make allowances for a [party] who cites to fake, nonexistent, misleading authorities.’”
Thanks for reading this insight.
Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train.
We offer special discounts for technology teams who may be interested in enrolling for the training programmes on bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.