Alexey Dubinin v. Varsenik Papazian, Case No. 25-CV-23877-RAR

Case Citation: Alexey Dubinin v. Varsenik Papazian, Case No. 25-CV-23877-RAR
Court: United States District Court, Southern District of Florida
Date: 21 November 2025
Litigants: Alexey Dubinin (Plaintiff) and Varsenik Papazian, Miami Asylum Office Director, USCIS (Defendant)
Country: United States (Florida)

Identification of AI Hallucination

This case presents one of the clearest federal examples of a court identifying hallucinated legal authorities in a filed brief. The plaintiff’s attorney, Missiva Khacer, submitted a response to a motion to dismiss that contained fabricated case citations and invented quotations.

The defendant’s reply first drew attention to the errors, noting that the response contained quotes “that do not appear in binding Eleventh Circuit opinions” and a citation to a decision “that does not exist”.

The court then conducted its own review. Judge Rodolfo Ruiz found at least ten citations and quotations that were fictional, prompting an Order to Show Cause.

The order asked counsel to explain whether she “intentionally, and with the use of generative artificial intelligence tools, made misrepresentations to the Court”. That framing reflects the court’s recognition that the pattern of errors was consistent with AI hallucination rather than ordinary human error or sloppy research.

The presence of invented opinions, fabricated quotations exhibiting typical generative-AI phrasing, and citations that resembled but did not match real cases led the court to conclude that a generative tool had been used in the drafting.

The judge confirmed that another recent case involving the same counsel had also raised concerns about AI-generated filings, including formatting errors and references to authorities that the opposing party had never cited. The contextual pattern across multiple cases reinforced the court’s finding of AI involvement.

During the hearing, attorney Nataliya Gavlin admitted that she delegated drafting to a legal assistant who used artificial intelligence. The court considered this admission significant and treated the filings as AI-generated. Although Khacer denied her own use of AI, the court held her fully responsible because she signed the filings and failed to review them.

Who Used AI and How the Court Determined It

The attorney of record, Missiva Khacer, filed the complaint and the response containing the hallucinated material. Her explanation was that she relied on another lawyer, Gavlin, who in turn relied on a legal assistant who used AI.

The court accepted that AI was used in the preparation of the filings and held Khacer responsible. The judge emphasised that an attorney cannot delegate away the duty to ensure accuracy, stating that her name on the signature block made her accountable for the submission.

The court’s determination rested on three points:

  1. Inherent textual patterns of AI hallucination. The citations did not exist, the quotations were not derivable from any real authority, and the structure of the authorities reflected known generative patterns. This included mismatched reporter citations and plausible-sounding but fictitious holdings.
  2. Replication of the issue across multiple cases. The same attorney had previously been warned in a similar action. Another federal judge in the district had flagged the use of AI, noting formatting anomalies and odd references that did not correspond to any legal argument in the case. The recurrence strengthened the court’s conclusion.
  3. Admission at the hearing. While Khacer did not personally admit using AI, she acknowledged delegating drafting to another attorney, and that attorney acknowledged delegating to a legal assistant who used a generative tool. The court took this as confirmation of the use of AI.

The court therefore treated the submission as AI-generated and subject to sanction.

The Court’s Response to the AI Hallucination

The use of fabricated legal authorities prompted significant sanctions. The court struck the complaint, dismissed the case, and ordered Khacer to pay the defendant’s attorneys’ fees. The judge also referred her to the Florida Bar and to the court’s grievance committee. Gavlin, who drafted the submission but was not admitted to practice in the district, was also referred to the Bar for investigation.

The order was grounded in Rule 11, the court’s inherent authority, and 28 U.S.C. § 1927. The court noted that the inaccurate content reflected reckless and frivolous conduct, and that the repeated presence of hallucinated material demonstrated bad faith. The judge also cited the Florida Bar’s professional competence rule, which requires lawyers to understand the risks associated with technology, including generative artificial intelligence.

The court emphasised that reliance on a paralegal or legal assistant using AI does not reduce an attorney’s responsibility. The attorney who signs the pleading is responsible for verifying the accuracy of all cited authorities, regardless of how the document was drafted.

The Role of AI in the Procedural History

The procedural record shows that hallucinated material entered both the complaint and the response. While the complaint contained fewer errors, the court flagged it as suspicious and consistent with AI generation.

The response, however, was replete with fabricated cases and inaccurate quotations. The defendant’s reply prompted the court’s investigation, but the court had already begun to notice a pattern in prior cases filed by the same counsel.

The judge carefully documented the misuse, linking the inaccuracies to a broader pattern of conduct and emphasising the importance of attorney oversight. The opinion stands as a detailed example of a federal court addressing the misuse of generative AI, documenting how AI hallucinations are identified, attributed, and sanctioned within federal practice.