Are You Hallucinating? Attorneys Sanctioned for the “Unprecedented” Act of Submitting Nonexistent Case Law Provided by ChatGPT
On June 22, 2023, Judge P. Kevin Castel of the United States District Court for the Southern District of New York sanctioned a law firm and two of its attorneys after they submitted fabricated judicial citations and opinions generated by the popular artificial intelligence (AI) chatbot, ChatGPT.
After plaintiff’s counsel filed an affirmation with the court (drafted by one attorney but signed by another at the same firm), defense counsel advised that he had “been unable to locate most of the case law cited in [the Affirmation], and the few cases which the undersigned has been able to locate do not stand for the propositions for which they are cited.” The court “conducted its own search for the cited cases but was unable to locate multiple authorities cited in the Affirmation.” Accordingly, Judge Castel issued an order to show cause why sanctions should not be imposed, emphasizing the “unprecedented circumstance” presented to the court.
The court held a hearing on whether sanctions ought to be imposed. Following submissions, it made several findings and ultimately sanctioned plaintiff’s counsel. First, Judge Castel found that the attorney who signed the Affirmation “violated Rule 11 in not reading a single case cited in his … Affirmation and taking no other steps on his own to check whether any aspect of the assertions of law were warranted by existing law.” In the court’s view, an adequate review of the Affirmation would have revealed that (1) at least one case could not be found; (2) many of the cases were excerpts rather than full opinions; and (3) reading only the opening passages of one of the cases “would have revealed that it was internally inconsistent and nonsensical.” Such conduct, according to the court, was “an act of subjective bad faith.”
Second, the court sanctioned the attorney who drafted the Affirmation based on (1) his failure to disclose his own suspicions about one of the decisions after he could not confirm its authenticity; and (2) his “untruthful assertion that ChatGPT was merely a ‘supplement’ to his research.” That attorney admitted in testimony, for instance, that he “couldn’t find” one of the decisions cited to the court when he ran a search on another legal research website. Judge Castel rejected his “dubious” explanation, as the record included “screenshots taken from a smartphone in which [the drafting attorney] questioned ChatGPT about the reliability of its work (e.g., ‘Is Varghese a real case’ and ‘Are the other cases you provided fake’).”
Finally, finding no exceptional circumstances under Fed. R. Civ. P. 11(c)(1), Judge Castel sanctioned the plaintiff’s law firm as well as the individual attorneys, ordering the following:
- The firm will conduct a mandatory Continuing Legal Education program on technological competence and artificial intelligence tools, and will hold mandatory training for all lawyers and staff on notarization practices.
- The drafting and signing attorneys will “inform their client and the judges whose names were wrongfully invoked of the sanctions imposed.”
- The sanctioned counsel will pay a $5,000 penalty into the Registry of the Court.
Numerous lessons can be drawn from this matter, all pointing to the great care that must be taken when relying on artificial intelligence to any degree in the practice of law. While “[t]echnological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” this decision illustrates the significant issues that arise when attorneys abdicate their “gatekeeping role … to ensure the accuracy of their filings.” Plaintiff’s counsel maintained, for instance, that he was “operating under the false assumption and disbelief [sic] that this website could produce completely fabricated cases.” Yet the phenomenon of “artificial intelligence hallucinations” is well documented: it arises when AI programs like ChatGPT generate seemingly realistic information “that do[es] not correspond to any real-world input.” In other words, practitioners must understand that current forms of generative-text AI can “fabricate” facts, holdings, and other legal authority.
Practitioners should also be aware of the ethical challenges that arise from unquestioning reliance on these AI tools. For example, Judge Castel noted that, under New York’s Rules of Professional Conduct, attorneys are prohibited from knowingly making “a false statement of fact or law to a tribunal.” As noted above, counsel was on notice that the authenticity of the decisions submitted to the court was in doubt. Likewise, practitioners should be mindful of the professional duty of competence, which “requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” Here, counsel’s failure to review the Affirmation, even after learning of its inaccuracies, cannot be considered competent representation under the model professional rules.
While AI has immense power to support legal work, attorneys should remember that these tools cannot replace traditional legal research and writing, or the ultimate responsibility to confirm that anything submitted to the court is accurate. By delegating these tasks to AI, practitioners invite significant ethical issues, no matter how well intentioned they may be. Before utilizing AI technology, attorneys should develop and implement AI guidelines consistent with the applicable Rules of Professional Conduct. Specifically, practitioners and organizations should consider:
- Holding parties accountable for impermissible uses of AI and developing training programs on the ethical use of AI;
- Developing administrative controls to ensure that AI programs are subject to human oversight;
- Instituting procedures to carefully cross-check and confirm the results of generative AI research.