Wheat v. Vichie, 2025 WL 3089438 (N.Y. Sup. Ct. 2025)
Case Citation: Wheat v. Vichie, 2025 WL 3089438 (N.Y. Sup. Ct. 2025)
Court: Supreme Court of New York
Date: 3 November 2025
Litigants: Thomas Wheat (Plaintiff) and Trent Vichie (Defendant)
Country: United States (New York)
The Court’s Identification of AI Use
In Wheat v. Vichie, the Supreme Court of New York confronted a procedural irregularity that exposed the apparent use of artificial intelligence in legal filings. The plaintiff, Thomas Wheat, represented himself and filed an amended complaint that contained fabricated case citations.
The court explicitly stated that the plaintiff’s submissions “misstate the holdings of cases cited to and include two citations to what appears to be entirely fictional cases.”
The opinion named one of these fabricated cases, Sosnovska v. Belle World Beauty, Inc., which appeared twice, falsely attributed to different New York appellate departments.
The court noted that the fictional citations and misstatements might have been “the result of reliance on AI or otherwise” and issued a formal caution to the plaintiff.
It warned that “further citations to non-existent cases or misrepresentations of case holdings will result in sanctions.” This statement shows how the judiciary is learning to recognise and address AI hallucinations within filings, even when the litigant is self-represented.
Although the court did not conduct a forensic inquiry into which software generated the hallucinated authorities, its language indicates that it viewed the plaintiff’s errors as consistent with AI-assisted drafting.
The repetition of a fabricated citation, attributed each time to a different appellate department, was a key signal that the content did not originate from legitimate research in authoritative legal databases.
The Nature of the AI Hallucination
The term “AI hallucination” refers to a situation in which an artificial intelligence tool generates information that appears authentic but is wholly fabricated. In this case, the hallucination manifested through invented judicial precedents, falsely formatted citations, and misrepresented holdings.
The errors were deceptive because they followed recognisable legal citation formats, yet the cases themselves did not exist in any public record or database.
The judge’s decision to preserve the invalid citations in the official opinion demonstrates how seriously the court took the issue. The citations were maintained “as written since they are part of the official record,” a choice that turns the decision itself into an evidentiary document showing how AI hallucinations appear in judicial proceedings.
The court did not accept the filings as innocent mistakes. It treated them as misrepresentations to a tribunal, which is sanctionable as frivolous conduct under New York’s court rules.
However, because the plaintiff was not an attorney and there was no evidence of intent to deceive, the court opted for a warning rather than imposing penalties. The explicit reference to sanctions in future filings marks the judiciary’s effort to deter careless or unverified use of AI.
Judicial Reasoning on the AI Component
Justice Lyle E. Frank took a balanced approach. The opinion recognised that the plaintiff was self-represented and therefore may not have fully understood the technical risks of relying on AI-generated text. Nonetheless, the court held him responsible for the content of his filings.
The judge noted that “whether the result of reliance on AI or otherwise,” misrepresenting legal authority to a court is unacceptable. This phrasing establishes that a litigant’s reliance on AI does not excuse them from procedural responsibility.
The “or otherwise” clause leaves no room for the argument that AI tools serve as intermediaries whose mistakes relieve users of accountability. In essence, the court reaffirmed a foundational rule of litigation: the party who files a document bears full responsibility for its contents, regardless of how it was prepared.
The court’s reasoning also illustrates how judges identify AI use indirectly. Unlike in plagiarism or fraud investigations, there is rarely direct evidence that a party used an AI tool.
Instead, judges look for textual patterns that are improbable in human-authored legal writing: perfectly formatted but non-existent citations, doctrinal inconsistencies, and mischaracterised case law. In this decision, the same fictitious case cited to two separate appellate departments was the decisive indicator.
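As a purely illustrative sketch, not anything drawn from the opinion, the short Python example below shows how those two signals, a well-formatted citation that resolves to no known case and a single case attributed to two different courts, could be checked mechanically. The regex, the “verified” set, and the sample filing text are all invented for demonstration; real verification would query an authoritative legal database.

```python
import re
from collections import defaultdict

# Illustrative sketch only, not taken from the opinion: the regex, the
# "verified" set, and the sample text are hypothetical stand-ins. Real
# verification would query an authoritative legal database.

TOKEN = r"[A-Z][\w.,&'-]*"  # a capitalised word in a party name, e.g. "Inc."

# Simplified pattern: "Party v. Party (Court Year)".
CITATION_RE = re.compile(
    rf"(?P<case>{TOKEN}(?:\s{TOKEN})*\sv\.\s{TOKEN}(?:\s{TOKEN})*)"
    r"\s*\((?P<court>[^)]+?)\s+\d{4}\)"
)

VERIFIED_CASES = {"Acme Corp. v. Jones"}  # hypothetical citation database

def flag_suspect_citations(text: str) -> list[str]:
    """Flag citations that fail verification or are attributed inconsistently."""
    flags = []
    courts_by_case = defaultdict(set)
    for match in CITATION_RE.finditer(text):
        case = match.group("case")
        court = match.group("court")
        courts_by_case[case].add(court)
        if case not in VERIFIED_CASES:
            flags.append(f"unverified citation: {case} ({court})")
    # The same case attributed to two different courts is a strong
    # warning sign on its own.
    for case, courts in courts_by_case.items():
        if len(courts) > 1:
            flags.append(f"inconsistent attribution: {case} -> {sorted(courts)}")
    return flags

filing = (
    "Plaintiff relies on Sosnovska v. Belle World Beauty, Inc. (1st Dept 2019) "
    "and Sosnovska v. Belle World Beauty, Inc. (2d Dept 2020)."
)
for flag in flag_suspect_citations(filing):
    print(flag)
```

Run against the sample text, the sketch flags both unverified citations and the inconsistent court attribution, the same pattern that stood out in this filing.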
The decision also placed the AI issue in a procedural context. The case itself involved employment and discrimination claims, which were largely dismissed on conventional grounds.
The mention of AI was isolated as a separate procedural matter within the discussion section, signalling that the court viewed it as distinct from the substantive merits but still relevant to the integrity of the proceedings.
The Broader Judicial Response
The ruling in Wheat v. Vichie stands out because it represents an early instance of a state trial court explicitly recognising AI hallucination in a published decision.
The court’s direct acknowledgement of the phenomenon, and its decision to document the fabricated citations within the official record, set a clear example of transparency for other courts.
Although no sanction was imposed, the judicial warning performs a regulatory function. It puts litigants and lawyers alike on notice that artificial intelligence output cannot be cited as authority without verification.
It also shows that courts are developing methods to detect AI misuse without relying on expert testimony or forensic analysis.
The choice to retain the hallucinated citations in the reported version of the case has documentary importance. Future courts, disciplinary committees, and legal educators can refer to this decision as an authentic record of how AI-generated hallucinations appear in real litigation documents.
The judge’s statement linking these fictional cases to possible AI use supplies the evidentiary bridge between the fabricated citations and the technology that may have produced them.
Conclusion
Wheat v. Vichie illustrates how the judiciary is responding to the infiltration of AI-generated misinformation into formal legal proceedings.
The plaintiff, Thomas Wheat, acting without a lawyer, submitted filings that contained fabricated cases and misrepresented authorities, conduct that the court attributed to possible reliance on AI. Justice Lyle E. Frank identified the irregularities through citation analysis, raised the possibility that artificial intelligence produced them, and issued a formal warning.
No AI tool was named, and the decision did not seek to identify one. The ruling is important because it confirms that courts can and will identify AI hallucinations through internal textual evidence, hold litigants accountable for the results, and document the misuse directly within the case record.
The judgment represents an early and clear articulation of the judiciary’s awareness of AI-assisted filings and sets a procedural framework for how future courts may handle similar issues.