AI Hallucinations in Court: Judge Sanctions Lindell’s Lawyers for Filing Fake Case Law
A federal judge fined Mike Lindell’s attorneys $6,000 for submitting an AI-generated court brief riddled with hallucinated citations and misquoted case law. The case underscores the legal profession’s growing struggle to use generative AI responsibly.
7/16/2025 · 10 min read


In a cautionary tale for the legal profession, two attorneys representing MyPillow founder Mike Lindell were fined $3,000 each by a federal judge after submitting a court brief containing nearly 30 fabricated or misquoted legal citations—many of them generated by artificial intelligence. U.S. District Judge Nina Y. Wang, presiding over Lindell’s defamation lawsuit in Denver, issued the sanctions on July 7, 2025, citing the lawyers’ “gross carelessness” and failure to meet basic professional duties. As AI becomes more prevalent in law, this high-profile blunder underscores a growing crisis: attorneys using generative tools without understanding their limitations or verifying their outputs.
The case in question was a high-profile defamation lawsuit brought by Eric Coomer – a former Dominion Voting Systems employee – who accused Lindell of spreading baseless claims that Coomer helped rig the 2020 presidential election. In June, a Colorado jury found Lindell liable for defamation and awarded Coomer roughly $2 million in damages (far less than the $62.7 million originally sought). It was during the lawsuit that Lindell’s legal team submitted an error-strewn opposition brief, unwittingly turning themselves into an object lesson in the perils of unverified AI in legal practice.
An Error-Strewn Filing Exposed in Court
The trouble began after Lindell’s attorneys – Christopher Kachouroff and Jennifer DeMaster – filed a February 25, 2025 opposition motion in the case. On its face, the brief looked like a routine legal filing. But at a pretrial conference on April 21, Judge Wang revealed a “troubling pattern” in the document: case citations and quotes that simply didn’t add up. One by one, the judge pressed Kachouroff about anomalies in the brief. Would he be surprised, she asked, to learn that a certain cited case “did not exist?” Kachouroff admitted he “would be surprised.” When confronted with a misquote from a real case, he conceded, “Your Honor I may have made a mistake…I wasn’t intending to mislead the Court,” even insisting the misquoted text was “not far off” from the real thing.
At first, Kachouroff fumbled to explain the situation. He suggested the document on file was just a draft that had been filed “by mistake,” hinting that another attorney (his co-counsel DeMaster) had been tasked with checking the citations. But Judge Wang’s scrutiny only intensified. According to her written order, “time and time again, when Mr. Kachouroff was asked for an explanation of why citations to legal authorities were inaccurate, he declined to offer any explanation, or suggested that it was a ‘draft’ pleading.” Finally, after repeated questioning, the truth came out: Kachouroff admitted the brief had been drafted with the help of AI.
In a dramatic exchange, Judge Wang asked whether the filing had been run through an AI program – a question Kachouroff initially did not expect. “Not initially,” he answered. “Initially, I did an outline for myself, and I drafted a motion, and then we ran it through AI.” Pressed on whether he double-checked the AI’s citations, Kachouroff admitted: “Your Honor, I personally did not check it. I am responsible for it not being checked.”  The courtroom transcript makes clear that only when directly confronted did the lawyer acknowledge using the AI tool – and acknowledge that he failed to verify the accuracy of the citations.
Hallucinated Cases, Misquotes, and “Draft” Excuses
What Judge Wang uncovered was startling. In her order, she wrote that the brief contained “misquotes of cited cases; misrepresentations of principles of law…including discussions of legal principles that simply do not appear within such decisions… and most egregiously, citations of cases that do not exist.” Nearly 30 citations in the document were defective in one way or another. For example, one citation in the brief attributed a ruling to the 10th Circuit Court of Appeals that was entirely fabricated – the case “Perkins v. Fed. Fruit & Produce Co.” was listed with a 2019 citation that “did not exist as an actual case,” as the judge pointed out. (There was a different Perkins case from years earlier in a lower court, but it “does not stand for the proposition asserted by Defendants” in the AI-written brief.) In other instances, real cases were cited but quoted incorrectly or out of context to support legal points those cases simply never addressed.
Kachouroff struggled to explain how his opposition brief became a patchwork of fiction. He claimed the garbled filing was an old draft that somehow got filed instead of the final, corrected version. In their formal response to the court, the attorneys maintained “it was inadvertent, an erroneous filing that was not done intentionally, and was filed mistakenly through human error.” They insisted that they had reviewed an AI-generated draft and fixed its mistakes, only for the earlier uncorrected draft to be submitted by accident.
Judge Wang was unpersuaded. After the April 21 hearing, she ordered the defense team to produce all versions of the brief, along with metadata and correspondence, to verify their claims. The subsequent investigation only heightened her skepticism. The supposedly “corrected” version that Kachouroff belatedly offered to file turned out to contain “several of the same substantive errors” as the original, and metadata showed it was edited after the court had flagged the issue. Emails between Kachouroff and DeMaster revealed that their drafts already contained the fake citations before the brief was ever filed. The judge found the lawyers’ explanations “contradictory” and lacking any corroboration, leading her to conclude the fiasco was not an inadvertent one-off at all. In fact, Judge Wang noted these attorneys had filed “similarly defective” court documents in at least one other case, suggesting a pattern of carelessness (or over-reliance on AI) beyond this incident.
At one point Kachouroff accused the court of trying to “blindside” him with the citation errors – a stance Judge Wang called “troubling and not well-taken.” “Neither Mr. Kachouroff nor Ms. DeMaster provided the Court any explanation as to how those citations appeared in any draft of the Opposition absent the use of generative artificial intelligence or gross carelessness,” Wang wrote, pointedly rejecting the lawyers’ explanations. In other words, either they used an AI tool that hallucinated fake cases, or they demonstrated “gross carelessness” – there was no benign excuse that she felt fit the facts.
Violations of Professional Duties: Rule 11 and Beyond
Beyond the embarrassment of citing fictional cases, the incident raised serious questions of professional responsibility. By signing and submitting the faulty brief, Kachouroff certified that, to the best of his knowledge, the legal arguments were grounded in valid law after reasonable inquiry – as required under Rule 11 of the Federal Rules of Civil Procedure. Yet he admitted in open court that he “failed to cite check the authority…after [AI] use before filing it with the Court – despite understanding his obligations under Rule 11.” That admission put him in clear breach of his duty of candor and diligence. Judge Wang’s order explicitly reminded Lindell’s counsel that they are “bound by the standards of professional conduct” set by the Colorado Rules of Professional Conduct, the code of ethics for attorneys. Those standards include the duty of competence (which now in many jurisdictions encompasses a duty to understand the benefits and risks of relevant technology) and the duty of candor toward the tribunal – duties that were arguably violated when an unvetted AI-generated brief was passed off as legal analysis.
Initially, Judge Wang not only considered monetary sanctions but also raised the specter of professional discipline. In late April, she issued an order for Kachouroff and DeMaster to show cause why they should not be referred to disciplinary authorities for potential violations of ethics rules. The lawyers scrambled to respond by the May 5 deadline, filing lengthy declarations defending their conduct. They apologized for the chaos – though the judge noted Kachouroff’s written “apology” came off as “a bit hostile” in tone. They insisted they had no intent to mislead and had been “wholly unprepared” when the judge sprang the issue on them at the hearing. Crucially, they argued that using AI itself was not wrongdoing: “There is nothing wrong with using AI when used properly,” they wrote, claiming they had no reason to think an unverified draft had been filed until it was too late.
In the end, Judge Wang stopped short of referring the matter for formal discipline, opting instead for the $6,000 in total fines as a measured punishment. Notably, she did not sanction Lindell himself or his company, since there was no evidence the client knew his lawyers were employing AI in their drafting. But the judge made clear that the primary fault lay squarely with the lawyers: their failure to verify what they were filing and their lack of forthrightness when problems surfaced. The message was unmistakable – every attorney has a non-delegable duty to ensure their filings are accurate and grounded in real law, no matter whether an associate or an algorithm helped prepare them.
A Wider Trend: Warnings from the Bench and Beyond
This incident is part of a growing pattern of lawyers learning the hard way that AI-generated legal work can go disastrously wrong if left unchecked. I have written about this many times here and it keeps happening. “More and more lawyers are getting caught – and punished – for including AI hallucinations in their work,” tech outlet The Verge observed, “and this trend will likely only continue to grow.”   In fact, just a year earlier, a pair of attorneys in New York made headlines for submitting a ChatGPT-written brief that cited a string of fake court decisions, prompting a judge’s ire and sanctions. In that 2023 episode, the AI tool had confidently fabricated six case citations complete with bogus quotes, leading the judge to lament that “technological advances are commonplace and of great benefit, but [lawyers] are responsible for ensuring the accuracy of their filings.” Courts across the country have since been “really cracking down on lawyers who submit pleadings with AI-hallucinated legal authority,” as one legal ethics commentator noted.
Legal technology experts say the Lindell case highlights a “critical need for the legal profession to establish AI competency standards” and protocols for law firms.  Put simply, modern language models like OpenAI’s GPT-4 or other generative AI systems have a well-known tendency to “hallucinate” – to fabricate plausible-sounding information (such as fake case law) in response to a prompt. These AI hallucinations are not rare glitches; they are an inherent limitation of how the technology predicts text without any grounding in truth. That means lawyers cannot treat an AI’s outputs as trustworthy legal research. As Colorado attorney and legal blogger Robin Shea quipped after reviewing the Lindell debacle, “If you’re doing something more important than writing a thank-you note…like submitting a brief on behalf of your client…you need to check behind the AI.” Lawyers “should know this,” she added, “but because they get burned for it all the time, apparently they don’t.” 
Even before this latest sanction, bar associations and ethics panels had been urging caution. The American Bar Association recently issued Formal Opinion 512, stressing that using generative AI tools is permissible only if attorneys ensure accuracy, protect confidentiality, and adhere to all their ethical duties – ultimately, the lawyer remains the responsible party, not the AI. Some state bars have gone further, requiring lawyers to stay educated about technology as part of their competence obligation. The Lindell case has now put a very public spotlight on these issues. It serves as a “wake-up call for the entire legal profession,” as USA Herald’s Samuel Lopez wrote, warning that “as AI tools become increasingly accessible, the temptation to use them without proper safeguards will only grow.” And the consequences are real: “Client representation suffers, court resources are wasted, and public confidence in legal institutions erodes when attorneys fail to maintain basic professional standards,” Lopez noted. In short, the integrity of the justice system takes a hit when lawyers abdicate their quality-control responsibilities to a machine.
Lessons and Tips for Lawyers Using AI
For lawyers, the professional fallout from relying on unvetted AI should serve as an unequivocal warning. “Always check your citations,” legal columnist Bob Ambrogi wrote – only half-jokingly adding, “and always keep your pants on in court.”  (Kachouroff, it turns out, had previously made news for appearing pantless during a Zoom court hearing – a separate embarrassment that, while unrelated, underscores the scrutiny attorneys can face). The silver lining of such fiascos is that they yield clear lessons on how to use AI tools responsibly in legal practice:
AI is a tool, not a substitute for your own professional judgment. Treat anything an AI produces as a draft – a starting point to be carefully reviewed and edited by a human lawyer. No matter how sophisticated the software, it cannot be trusted to know the law or the facts reliably. The Lindell case underscores that generative AI will confidently output text that looks convincing but may be completely false. It is the attorney’s duty to rigorously vet and correct that output.
Never delegate verification to AI (or to junior staff without oversight). In this incident, Kachouroff attempted to delegate the cite-checking to his colleague, and both trusted an AI-generated draft without double-checking every reference. This lack of oversight was a key failure. Critical tasks like citation and fact verification must be performed by a human who understands what to look for – be it the lead attorney or someone under their close supervision. Blaming an associate or an algorithm after the fact won’t absolve the lawyer who signs the filing. As the court put it, delegation without appropriate oversight is essentially a dereliction of an attorney’s duty.
Understand AI’s limitations – and use safeguards when prompting. If you do choose to use AI in drafting, be specific in your prompts to mitigate hallucinations, and still assume the output may contain errors. For example, instruct the AI not to fabricate case law and to flag any uncertain information. A prompt might say: “Include accurate, verbatim quotations from cited cases and do not invent any citations. Mark any statements that need verification.” Such instructions can help, but they are not foolproof – nothing replaces a manual check. After generating text, compare every quote to the source, confirm every case is real and cited correctly, and ensure the AI hasn’t put words in a court’s mouth. In practice, this means using AI for efficiency (say, to improve the writing or organization) while you remain the fact-checker and researcher before anything goes out the door; a minimal sketch of such a cite-check pass follows this list.
Get training and stay educated. The mishap also highlights a broader “competency gap.” Attorneys should seek out training on how generative AI works and how to use it ethically. This might involve CLE (continuing legal education) courses on legal technology or firm-wide protocols for AI usage. The legal profession is even considering formal AI competency certifications to ensure lawyers understand the tools’ capabilities and pitfalls. As one commentator noted, had Lindell’s lawyers “understood the core principles of prompt engineering and AI limitations,” they might have been able to harness the tool’s benefits without falling into its traps. Not every lawyer needs to be a programmer, but basic technological competence is quickly becoming part of the duty of competence in law.
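To make the verification step above concrete, here is a minimal sketch of what a cite-check pass might look like in practice. It is illustrative only: the prompt wording, the function names, and the simple reporter-style citation pattern are assumptions rather than a recommended standard; it calls no AI service; and the sample citation is invented purely for demonstration. A script like this can only surface what a human must verify, not confirm that any case exists or is quoted accurately.

```python
import re

# Illustrative drafting instruction; the wording is an assumption, not a
# quoted standard. It asks the model to avoid invented authority and to
# flag anything it is unsure about.
DRAFTING_PROMPT = (
    "Include accurate, verbatim quotations from cited cases and do not "
    "invent any citations. If you are not certain a case exists, say so "
    "and mark the statement [NEEDS VERIFICATION]."
)

# Rough pattern for reporter-style citations such as
# "Doe v. Example Corp., 123 F.3d 456 (10th Cir. 2019)". It is deliberately
# loose and will miss or over-match some formats (e.g. "F. Supp." reporters).
CITATION_RE = re.compile(
    r"(?:[A-Z&][\w.&'\-]*\s+)+v\.\s+"      # party names up to "v."
    r"(?:[A-Z&][\w.&'\-]*\s*)+,\s*"        # opposing party, ending at the comma
    r"\d+\s+[A-Za-z0-9.]+\s+\d+"           # volume, reporter, first page
    r"(?:\s*\([^)]*\d{4}\))?"              # optional court/year parenthetical
)


def extract_citations(draft_text: str) -> list[str]:
    """Pull candidate case citations out of a draft for manual checking."""
    return [m.group(0).strip() for m in CITATION_RE.finditer(draft_text)]


def verification_checklist(draft_text: str) -> str:
    """Turn every candidate citation into an item a human must sign off on."""
    lines = ["CITE-CHECK BEFORE FILING (a person confirms each item):"]
    for cite in extract_citations(draft_text):
        lines.append(f"  [ ] Case exists and is quoted accurately: {cite}")
    if "[NEEDS VERIFICATION]" in draft_text:
        lines.append("  [ ] Resolve every [NEEDS VERIFICATION] flag in the draft.")
    return "\n".join(lines)


if __name__ == "__main__":
    # The citation below is invented purely to exercise the pattern.
    sample_draft = (
        "As held in Doe v. Example Corp., 123 F.3d 456 (10th Cir. 2019), "
        "the standard is well settled. [NEEDS VERIFICATION]"
    )
    print("Example drafting instruction:\n" + DRAFTING_PROMPT + "\n")
    print(verification_checklist(sample_draft))
```

Run against a near-final draft, this produces a checklist that still has to be worked through against the actual reporters or a research service; the point of the design is that the tool only flags what needs checking, and a lawyer does the checking.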
In sum, the future of law will include AI – from research to drafting to data analysis – but only those attorneys who use these tools responsibly and knowledgeably will avoid disaster. By treating AI output with skepticism, verifying everything, and never relinquishing your role as final editor and fact-checker, you can safely tap into AI’s advantages (speed, efficiency, linguistic polish) without undermining your integrity or your clients’ cases. As Judge Wang’s ruling makes clear, no clever software will save an attorney who forgets their fundamental obligations. The onus is on us to ensure that “garbage in, garbage out” doesn’t make it into official filings. And if an AI tool is used, you must be willing to stand behind the result – because ultimately, the judge and client will hold you accountable, not the algorithm.
Helpful Sources
Olivia Prentzel, The Colorado Sun – “MyPillow CEO’s lawyers fined for AI‑generated court filing in Denver defamation case” (Jul 7, 2025)
Economic Times (India) – “Mike Lindell’s lawyers fined $3,000 for using AI that created fake court citations” (Jul 9, 2025)
Ravi Hari, Livemint/Today News (Bloomberg) – “Denver judge fines MyPillow CEO Mike Lindell’s lawyers over AI‑generated fake citations: ‘This court derives no joy…’” (Jul 8, 2025)