Linda Oliver v. Christian Dribusch, U.S. District Court, Northern District of New York (Nov. 21, 2025)
Case Citation: Linda Oliver v. Christian Dribusch, No. 1:25-CV-724 (AJB/DJS)
Court: U.S. District Court, Northern District of New York
Date: 21 November 2025
Litigants: Linda Oliver (Plaintiff) and Christian Dribusch (Defendant)
Country: United States (New York)
The Court’s Recognition of AI-Generated Citations
The case of Linda Oliver v. Christian Dribusch demonstrates how federal courts are beginning to identify and respond to artificial intelligence-generated materials in filings.
Linda Oliver, who represented herself, filed a civil rights complaint against Christian Dribusch, a former Chapter 7 bankruptcy trustee, alleging unlawful eviction, constitutional violations, and various torts. The case proceeded in the Northern District of New York before Judge Anthony Brindisi.
While the substance of the case concerned bankruptcy administration and the limits of federal jurisdiction under the Barton doctrine, the court noted an emerging procedural problem. In response to a sur-reply, the defendant observed that one of Oliver's filings appeared to include an inaccurate citation that did not correspond to any real authority.
The court did not reproduce the false citation in its opinion but expressly acknowledged the dispute and cautioned both parties. The decision included a warning that the use of “hallucinated or fake case citations in legal submissions to a federal court is sanctionable conduct”.
Although the judge declined to make a formal finding that an AI system was used, the order’s language makes clear that the court considered this a real possibility. The reference to “hallucinated” citations, a term specifically associated with generative AI systems that fabricate information, reflects the court’s understanding of the phenomenon.
How the AI Issue Emerged in the Litigation
The suggestion that Oliver used AI arose from the defendant’s response brief, which argued that one of her citations was fictitious. The defendant’s counsel noted that the citation did not appear in any federal or state database and resembled examples of AI-generated case law that have surfaced in other jurisdictions.
This prompted the judge to insert a clarifying footnote in the final order. The court stated that it would “refrain from reproducing the citation” to avoid perpetuating the error, but used the opportunity to reaffirm procedural expectations about accuracy and verification in pleadings.
Importantly, the court did not need to determine whether Oliver intentionally used AI or whether the citation error was accidental. Instead, the ruling treated the issue as a matter of professional and procedural responsibility.
The warning applied equally to both parties, reflecting a broader judicial principle: regardless of intent, any party submitting false or fabricated authorities risks sanctions.
The court placed the burden squarely on the filer to verify every citation, regardless of whether it was produced by an AI tool, a research platform, or manual drafting.
The Judicial Approach to AI Hallucination
Judge Brindisi addressed the AI issue with precision and restraint. The opinion focused primarily on jurisdictional questions under the Barton doctrine and the lack of federal subject matter jurisdiction, but still took the opportunity to address the procedural integrity of filings.
The judge's use of the phrase "hallucinated or fake case citations" marks one of the earliest instances in the Northern District of New York in which a federal court directly connected citation errors with artificial intelligence.
The court’s treatment of the matter reflects a new judicial standard. Rather than dismissing the filing solely on the basis of the AI issue, the court emphasised that such conduct could lead to sanctions if repeated or proven intentional.
This positions AI hallucination as a serious procedural concern—one that may not yet warrant immediate penalties in every instance, but one that courts are clearly tracking.
The decision also reveals how courts are identifying AI-related errors through contextual review. The mention of an “inaccurate legal citation” that could not be traced through conventional research databases suggests that judges and counsel are increasingly alert to the distinctive patterns of AI hallucination.
These include realistic citation formatting, plausible party names, and references to legitimate-sounding appellate departments or reporters, all of which mimic authentic legal authority.
The Broader Procedural Context
While the AI issue was procedural, the case itself turned on jurisdictional doctrines that highlight the limits of federal court power. The court dismissed Oliver’s claims because she failed to seek leave from the bankruptcy court before suing the trustee, as required by the Barton doctrine.
Even if jurisdiction had been proper, the court found that the defendant was not a state actor under 42 U.S.C. § 1983 and that a Bivens action could not be extended to a bankruptcy trustee. The federal claims were dismissed with prejudice, and the state claims were dismissed without prejudice for lack of jurisdiction.
The AI issue, though secondary, reflects how courts are enforcing procedural accuracy alongside substantive reasoning. In this case, the inclusion of the cautionary note on hallucinated citations within a dismissal order shows that judges view AI misuse as part of a broader obligation of candour to the tribunal.
The ruling does not isolate AI misuse as a technological problem, but embeds it within existing professional conduct and procedural compliance doctrines.
The court’s approach maintains the integrity of the judicial process without overreacting to emerging technology. It acknowledges the presence of AI-generated material in filings while reasserting that verification remains the responsibility of the litigant or lawyer.
The refusal to replicate the hallucinated citation in the opinion itself shows careful judicial handling—preserving transparency while preventing further spread of misinformation within official records.
Conclusion
Linda Oliver v. Christian Dribusch illustrates how federal courts are now encountering and documenting AI hallucinations within filed submissions. The plaintiff, acting pro se, appears to have relied on an AI-generated source that produced a fabricated citation.
Although the court stopped short of confirming which tool was used or imposing sanctions, it explicitly warned that the use of fake or hallucinated authorities in federal pleadings is sanctionable.