
Alberta courts issue warning about the use of artificial intelligence in courtrooms


Experts say AI in law comes with opportunity and risk

Artificial intelligence tools, such as OpenAI's ChatGPT, are facing increasing scrutiny over their use in settings including post-secondary education and the legal system. (Marco Bertorello/AFP via Getty Images)

Alberta's top judges are warning lawyers and the public to be cautious when using artificial intelligence tools to help prepare cases for the courtroom.

The province's courts and a handful of other Canadian jurisdictions have issued notices following a high-profile American case earlier this year where a New York federal judge fined two lawyers for filing a ChatGPT-generated brief that cited legal precedents that don't exist.

ChatGPT is an AI language program that generates text in response to prompts. It has raised concern, particularly in academic contexts, about students relying on the technology to write essays or answer questions for them.

In an announcement this month, the chief justices of Alberta's Court of Justice, Court of King's Bench and the Court of Appeal of Alberta all urged court participants to keep a "human in the loop" when pulling references they plan to bring to court.

"In the interest of maintaining the highest standards of accuracy and authenticity, any AI-generated submissions must be verified with meaningful human control," the justices said.

During court cases, lawyers or self-represented litigants refer to previous cases and decisions as part of their arguments, often urging a judge to take those past decisions into consideration when making findings of fact or determining a sentence.

How A.I. is already impacting courtroom proceedings

Lawyers in the United States have been caught using false legal briefs created by ChatGPT. But that doesn't mean that artificial intelligence can't help in justice proceedings.

The Alberta courts' announcement directs all lawyers and self-represented litigants using AI tools to double check that any cases they plan to reference do in fact exist by consulting authoritative sources, such as court websites or reliable services like CanLII.

The courts recognize that new technologies are coming along and changing how lawyers work and prepare for court, said Darryl Ruether, executive legal counsel to the Court of King's Bench.

The announcement doesn't discourage the use of AI; it simply urges caution.

"At the end of the day, there has to be a human element," Ruether said. "The submissions have to be accurate and that's consistent with lawyers' professional responsibilities."

So far, none of the Alberta courts have reported instances of AI-generated bogus case law being filed.

Quebec recently made an announcement similar to Alberta's. Manitoba and Yukon were the first Canadian jurisdictions to weigh in, though both went further than Alberta, issuing directives that lawyers must disclose if, how and why they used any AI tools while preparing their cases.

Accepting responsibility

Gideon Christian is an assistant professor in AI and law at the University of Calgary's Faculty of Law. (Gideon Christian)

That's going too far, said Gideon Christian, an assistant professor at the University of Calgary's faculty of law who specializes in the ethics of artificial intelligence and law.

He said even a Google search for a courthouse's address, to be included in materials filed as part of a case, is technically a use of AI, and that requiring disclosure of all such uses would waste the court's time.

He said the direction from Alberta's courts is better because it puts lawyers on guard and warns them to be careful.

"As a lawyer, you should be or you will be accepting responsibility for whatever output is generated by ChatGPT," Christian said. "You have to take that supervisory step to ensure that whatever is generated there is accurate."

Christian said there is great potential for AI to help self-represented litigants prepare the technical and sometimes complicated paperwork that's required for court, but recognizing when the end product includes fabricated legal references is much trickier for a lay person than a lawyer.

"Some of the problems we're having now may actually improve with time as newer versions of this technology begin to be deployed," Christian said.

Christian said that for lawyers, there are also many ethical issues with relying on AI, such as the risk of inputting a client's private information into a massive third-party database while using one of the programs.

Cheating the client?

Criminal defence lawyer Brian Beresh says he worries about AI programs being used to fabricate evidence that could end up being used in court. (Sam Martin/CBC News)

Senior Edmonton criminal defence lawyer Brian Beresh said he has no plans to start letting AI write his arguments for him.

"In my work in criminal law I don't think that I could generate an AI product that would take into account all the unique factors of my case or my client's cause," he said, though he believes there could be some benefit to using AI tools to help with organization and developing arguments.

"It is in some ways potentially cheating the client. The client's paying us good money for proper, professional representation. They're not paying some machine to generate the argument."

As a trial lawyer, Beresh also worries about the potential for AI to be used to fabricate evidence, such as fake photos that end up being used in court.

He thinks the Alberta judges' caution is a good starting point as the justice system grapples with how AI ends up being used, because getting it wrong would undermine trust in the legal system.

"We're held to a very high standard and I think we should be to be accurate and not mislead, not misrepresent," Beresh said.