
B.C. lawyer reprimanded for citing fake cases invented by ChatGPT


Chong Ke ordered to pay costs for opposing counsel to discover precedent was AI 'hallucination'

A B.C. lawyer has been ordered to pay costs for opposing counsel for the time they took to discover that two cases she cited as precedent were created by ChatGPT. (Dado Ruvic/Reuters)

The cases would have provided compelling precedent for a divorced dad to take his children to China had they been real.

But instead of savouring courtroom victory, the Vancouver lawyer for a millionaire embroiled in an acrimonious split has been told to personally compensate her client's ex-wife's lawyers for the time it took them to learn the cases she hoped to cite were conjured up by ChatGPT.

In a decision released Monday, a B.C. Supreme Court judge reprimanded lawyer Chong Ke for including two AI "hallucinations" in an application filed last December.

The cases never made it into Ke's arguments; they were withdrawn once she learned they were non-existent.

Justice David Masuhara said he didn't think the lawyer intended to deceive the court, but he was troubled all the same.

"As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers," Masuhara wrote in a "final comment" appended to his ruling.

"Competence in the selection and use of any technology tools, including those powered by AI, is critical."

WATCH | Some U.S. lawyers have been caught using false legal briefs created by ChatGPT:

How A.I. is already impacting courtroom proceedings

Duration 0:44
Lawyers in the United States have been caught using false legal briefs created by ChatGPT. But that doesn't mean that artificial intelligence can't help in justice proceedings.

'Discovered to be non-existent'

Ke represents Wei Chen, a businessman whose net worth according to Chinese divorce proceedings is said to be between $70 and $90 million. Chen's ex-wife, Nina Zhang, lives with their three children in an $8.4 million home in West Vancouver.

Last December, the court ordered Chen to pay Zhang $16,062 a month in child support after calculating his annual income at $1 million.

The fake cases surfaced in an application for the couple's children to get permission to travel to China. The lawyer withdrew the cases from the application once they proved to be fake. (Sean Kilpatrick/The Canadian Press)

Shortly before that ruling, Ke filed an application on Chen's behalf for an order permitting his children to travel to China.

The notice of application cited two cases: one in which a mother took her "child, aged 7, to India for six weeks" and another granting a "mother's application to travel with the child, aged 9, to China for four weeks to visit her parents and friends."

"These cases are at the centre of the controversy before me, as they were discovered to be non-existent," Masuhara wrote.

The problem came to light when Zhang's lawyers told Ke's office they needed copies of the cases to prepare a response and couldn't locate them by their citation identifiers.

Ke gave a letter of apology along with an admission the cases were fake to an associate who was to appear at a court hearing in her place, but the matter wasn't heard that day and the associate didn't give Zhang's lawyers a copy.

Masuhara said the lawyer later swore an affidavit outlining her "lack of knowledge" of the risks of using ChatGPT and "her discovery that the cases were fictitious, which she describes as being 'mortifying.'"

"I did not intend to generate or refer to fictitious cases in this matter. That is clearly wrong and not something I would knowingly do," Ke wrote in her deposition.

"I never had any intention to rely upon any fictitious authorities or to mislead the court."

WATCH | ChatGPT's effect on academia:

How ChatGPT is rewiring university for students and profs

Duration 3:00
University campuses everywhere are facing the same problem: how to deal with ChatGPT and other AI-powered programs that can complete assignments in seconds. The CBC's Carolyn Stokes looks for answers at Memorial University.

No intent to deceive

The incident appears to be one of the first reported instances of ChatGPT-generated precedent making it into a Canadian courtroom.

The issue made headlines in the U.S. last year when a Manhattan lawyer begged a federal judge for mercy after filing a brief relying solely on decisions he later learned had been invented by ChatGPT.

A B.C. Supreme Court judge has ordered Chong Ke to pay costs for work opposing counsel undertook to figure out a pair of cases cited in an application were made up by artificial intelligence. (Ben Nelms/CBC)

Following that case, the B.C. Law Society warned of the "growing level of AI-generated materials being used in court proceedings."

"Counsel are reminded that the ethical obligation to ensure the accuracy of materials submitted to court remains with you," the society said in guidance sent out to the profession.

"Where materials are generated using technologies such as ChatGPT, it would be prudent to advise the court accordingly."

Zhang's lawyers were seeking special costs that can be ordered for reprehensible conduct or an abuse of process. But the judge declined, saying he accepted the "sincerity" of her apology to counsel and the court.

"These observations are not intended to minimize what has occurred, which to be clear I find to be alarming," Masuhara wrote.

"Rather, they are relevant to the question of whether Ms.Ke had an intent to deceive.In light of the circumstances, I find that she did not."

But the judge said Ke should have to bear the costs for the steps Zhang's lawyers had to take to remedy the confusion created by the fake cases.

He also ordered the lawyer to review her other files: "If any materials filed or handed up to the court contain case citations or summaries which were obtained from ChatGPT or other generative AI tools, she is to advise the opposing parties and the court immediately."