I.H. v. O.K., 2025 WL 3064403 (Ind. Ct. App. 2025)

Case Citation: I.H. v. O.K., No. 25A-PO-1355, 2025 WL 3064403 (Ind. Ct. App. Nov. 3, 2025)
Court: Court of Appeals of Indiana
Date: November 3, 2025
Litigants: I.H. (Appellant-Respondent) and O.K. (Appellee-Petitioner)
Country: United States (Indiana)

The Background to the Dispute

The case of I.H. v. O.K. was heard by the Indiana Court of Appeals in November 2025. It arose from a domestic dispute in which a husband, acting without legal representation, challenged a protective order issued against him.

His appeal rested on several procedural and substantive grounds, including allegations of fraud and due process violations, as well as a claim of newly discovered evidence.

The court ultimately rejected his claims and affirmed the protective order. What set this case apart was not the nature of the order itself but the way the appellant presented his arguments.

The decision became notable for exposing how artificial intelligence had been used to generate false or irrelevant legal citations within his submissions.

How the Use of Artificial Intelligence Became Evident

In its memorandum decision, the appellate court observed that the appellant’s written submissions contained numerous citations to authorities that did not support the propositions they were supposed to substantiate.

The court noted that these references appeared to be fabricated or mismatched. The judges concluded that the appellant had likely used artificial intelligence tools to prepare the filings, inserting machine-generated references that lacked legal accuracy.

The court made this finding without any formal investigation into which system was used. Instead, it drew inferences from the nature of the citations themselves.

The references included cases that either did not exist or were unrelated to the points made. This was characteristic of what courts and legal commentators now refer to as “AI hallucination” — a situation where an artificial intelligence system fabricates information that appears authentic but is entirely false or misleading.

The court described the appellant’s brief as “littered with citations to authorities that clearly do not support the propositions for which he cites them.”

This language indicated that the issue went beyond minor referencing errors. It was systemic within the submission, reflecting reliance on an automated text-generation system that produced legally unsound material.

The judges expressed concern that this conduct risked undermining the integrity of the judicial process.

The Court’s Response to the AI-Generated Filings

Rather than imposing immediate sanctions, the appellate panel took a cautionary stance. It advised the appellant to “exercise great caution” when using artificial intelligence to draft legal documents.

The court warned that continued misuse of such systems could lead to dismissal of filings, restrictions on future submissions, or monetary sanctions.

This statement echoed similar warnings issued by other U.S. courts earlier in the decade, particularly following the Mata v. Avianca, Inc. incident in 2023, where fabricated AI-generated citations were submitted by counsel in federal court.

The Indiana Court of Appeals thus aligned itself with a growing judicial stance: while courts acknowledge that AI tools may assist in drafting, the responsibility for verifying accuracy remains entirely with the filing party.

The court did not attribute bad faith to the appellant. It recognized that he was acting pro se and may not have fully understood the implications of using AI tools for legal drafting.

Nevertheless, the ruling made clear that litigants—whether represented or not—bear full responsibility for ensuring that authorities cited in legal filings are genuine and relevant.

The decision also referred to Williams v. Kirch (Ind. Ct. App. Aug. 18, 2025), another Indiana appellate case where similar issues arose. That cross-reference suggested that the judiciary in Indiana had already confronted more than one instance of AI-generated legal hallucinations by 2025, enough to justify formal judicial warnings.

The Broader Judicial Reasoning

The main appeal in I.H. v. O.K. failed on conventional legal grounds before the court ever reached the issue of artificial intelligence.

The appellant sought to vacate a protective order under Indiana Trial Rule 60(B), claiming newly discovered evidence, fraud upon the court, and due process violations. The appellate judges determined that his motion lacked factual support.

He had listed categories of “evidence” but had attached no documentation or proof. Consequently, the trial court was not required to hold an evidentiary hearing.

It was within this procedural context that the AI issue became relevant. The court noted that not only were the evidentiary claims unsupported, but the legal authorities relied upon to justify them were themselves unreliable.

The submission’s deficiencies in content, combined with its use of non-existent legal citations, rendered the appeal insubstantial. The court therefore affirmed the lower court’s decision without further analysis.

The reference to AI hallucination appeared at the end of the opinion, signaling that it was not the primary issue but a judicial observation meant to safeguard the integrity of the appellate process.

The court made its reasoning explicit: artificial intelligence cannot substitute for the legal judgment required in preparing filings. Misuse of these systems can mislead courts, waste judicial resources, and expose the user to procedural penalties.

The case also reflected the court’s awareness that artificial intelligence tools have become accessible to self-represented litigants. It acknowledged that individuals using such technology may not recognize when it generates fabricated authority.

The court’s response was therefore balanced. It warned against misuse while maintaining the fairness owed to an unrepresented party.

Summary

I.H. v. O.K. (Ind. Ct. App. 2025) is among a growing number of appellate decisions in the United States to explicitly acknowledge artificial intelligence hallucination in a party’s legal filings.

The appellant, acting without counsel, relied on an AI system that inserted fictitious or irrelevant case citations into his brief.

The court identified this pattern and concluded that it was the product of AI drafting. It affirmed the lower court’s decision, rejected the appeal, and issued a warning about the potential consequences of submitting AI-generated material without verification.