
A B.C. lawyer submitted 'fictitious' cases generated by ChatGPT to the court. Now, she has to pay for that mistake.

The ChatGPT app is displayed on an iPhone in New York, Thursday, May 18, 2023. (AP Photo/Richard Drew)

A B.C. lawyer has been ordered to review all of her files after she made the "serious mistake" of citing "fictitious" cases created by ChatGPT to the court, according to a recent judgment.

"Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court. Unchecked, it can lead to a miscarriage of justice," Justice David Masuhara wrote in his Feb. 20 ruling.

The issue arose during a "high conflict" family court dispute in which Chong Ke was representing a father seeking an order that would allow his children to travel to China so he could have parenting time, according to the ruling. The ruling also ordered Ke to pay opposing counsel the cost of uncovering that the cases she cited in support of the application were "legal hallucinations."

Ke cited only two cases in support of her client's application, both of which found that allowing weeks-long overseas travel to spend time with a parent was in the best interest of the child.

But when the lawyers for the mother in the dispute tried to track down the cases, they could not find them and asked for copies. When those weren't provided, they hired a researcher to try to dig them up.

"The researcher could not locate the cases. The researcher came to the view that they did not exist," the decision says.

'I had no idea'

When the matter was set to be heard, Ke was unavailable but asked the associate appearing in her stead to provide a copy of an email from her to the judge and opposing counsel.

"I made a serious mistake when preparing a recent notice of application for my client by referring to two cases suggested by Chat GPT (an artificial intelligent tool) without verifying the source of information. I had no idea that these two cases could be erroneous," it read

"I will not repeat the same mistake again. I had no intention to mislead the opposing counsel or the court and sincerely apologize for the mistake that I made."

But the matter was adjourned and the email was not handed over to the other lawyers on that day.

When the hearing did proceed, Ke provided the court with an affidavit in which she described the experience of realizing what she had done as "mortifying" and said she was "deeply embarrassed."

She also said she was naïve about the potential for AI to create entirely fake cases and reiterated that she did not intend to deceive the court.

"I acknowledge that I should have been aware of the dangers of relying on Al-generated resources, and been more diligent and careful in preparing the materials for this application. I wish to apologize again to the court and to opposing counsel for my error," she wrote.

Worthy of rebuke?

The court found that even though the cases were withdrawn and were never presented to the court during the hearing, their inclusion in the initial application did result in "additional effort and expense" for opposing counsel. Because the mistake was Ke's, she was ordered to pay the costs associated with the time and resources it took to discover that the cases were "non-existent."

Opposing counsel also asked the court to award special costs in the case, arguing that Ke's conduct was "reprehensible and deserving of rebuke," the decision says.

The judge disagreed.

"It is an extraordinary step to award special costs against a lawyer. It requires a finding of reprehensible conduct or an abuse of process by the lawyer," Masuhara wrote, before concluding that although the incident was "alarming," Ke had no "intention to deceive or misdirect."

He did, however, say that it was also "unfortunate" that Ke did not seem to have been aware of the Law Society of B.C.'s warnings about the potential pitfalls of relying on AI-generated material. The society had advised lawyers that they are ultimately responsible for ensuring accuracy, and also advised them that it would be "prudent" to inform the court if ChatGPT or AI was used in the preparation of documents.

The judge closed with a final comment on the issues raised by the case.

"As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers. Competence in the selection and use of any technology tools, including those powered by AI, is critical," he wrote.

"The integrity of the justice system requires no less."
