Opinion

While courts still use fax machines, law firms are using AI to tailor arguments for judges

If courts and administrative decision makers used this same AI to identify their own biases and confront them, the justice system could be less vulnerable to those biases, writes Robyn Schleihauf.

AI can read a judge's entire history of decision-making and spit out an argument based on what it finds

Equal Before the Law, a 2012 sculpture by Eldon Garnet that depicts a lion and lamb face-to-face, is pictured outside the Ontario Court of Justice in Toronto. Artificial intelligence can 'read' a judge's entire record of decisions, so lawyers can tailor their arguments to align with the judge's worldview, writes Robyn Schleihauf. (Evan Mitsui/CBC)

This column is an opinion by Robyn Schleihauf, a writer and a lawyer based in Dartmouth, N.S. For more information about CBC's Opinion section, please see the FAQ.

It is no secret that the courts and other bodies, such as provincial and federal human rights commissions, landlord and tenant boards, workers' compensation boards, and utility and review boards, are behind the times when it comes to technology.

For decades, these bodies have repeatedly failed to adopt new technologies. Many courts still rely primarily on couriers and fax machines. The COVID-19 pandemic forced a suite of changes in the justice system, bringing things like virtual hearings to reality, but as we move back to in-person appearances, some courts and administrative decision makers are showing their continued resistance to adopting technology, debating things like whether to allow people to submit their divorce applications via email post-COVID.

Meanwhile, law firms and private sector lawyers are more technologically enabled than ever.

Law firms and lawyers can subscribe to legal analytics services that use artificial intelligence (AI) to "read" a judge's entire record of decisions, so lawyers can tailor their arguments to align with the judge's preferred word use and, arguably, their worldview.
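For a sense of what this kind of analysis might involve, here is a minimal sketch in Python that surfaces the most frequent terms across a judge's published decisions. The directory layout, the stopword list and the premise that raw term frequency approximates "preferred word use" are all illustrative assumptions, not a description of any vendor's actual product.

    # Minimal sketch: approximate a judge's "preferred word use" by counting
    # frequent terms across their published decisions. The decision files and
    # stopword list are hypothetical placeholders, not a real product's method.
    from collections import Counter
    from pathlib import Path
    import re

    STOPWORDS = {"the", "and", "that", "with", "this", "court", "shall"}

    def frequent_terms(decision_dir: str, top_n: int = 20) -> list[tuple[str, int]]:
        counts: Counter[str] = Counter()
        for path in Path(decision_dir).glob("*.txt"):
            words = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
            counts.update(w for w in words if len(w) > 3 and w not in STOPWORDS)
        return counts.most_common(top_n)

    # A lawyer could then echo the surfaced terms in written argument, e.g.:
    # frequent_terms("decisions/justice_example")

Real analytics products are presumably far more sophisticated, but even a crude frequency count like this hints at how quickly a judge's verbal habits can be extracted at scale.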

What this means is that legal analytics can root out bias, and law firms can exploit it.

While the use of AI to understand a judge may seem alarming, it has always been the case that lawyers could exploit some judges' biases. Lawyers have become increasingly specialized over the years and familiarity with the system and the people within it is part of what some clients are paying for when they hire a lawyer.

The difference is the scale

Lawyers practising family law know which judges will never side entirely with the mother. Lawyers practising criminal law know who is generally sympathetic to arguments about systemic discrimination and who is not. Lawyers aren't supposed to "judge-shop," but stay in any circle of the law for long enough and you'll know which way the wind is blowing when it comes to certain decision makers. The system has always been skewed to favour those who can afford that expertise.

What is different with AI is the scale by which this knowledge is aggregated. While a lawyer who has been before a judge three or four times may have formed some opinions about them, these opinions are based on anecdotal evidence. AI can read the judge's entire history of decision-making and spit out an argument based on what it finds.

The common law has always used precedents, but what is being used here is different: it's figuring out how a judge likes an argument to be framed, what language they like using, and feeding it back to them.

And because the legal system builds on itself, with judges using prior cases to determine how a decision should be made in the case before them, these AI-assisted arguments from lawyers could have the effect of further entrenching a judge's biases in the case law, as the judge's words are repeated verbatim in more and more decisions. This is particularly true if judges are unaware of their own biases.

Use AI to confront biases

Imagine instead if courts and administrative decision makers took these legal analytics seriously. If they used this same AI to identify their own biases and confront them, the justice system could be less vulnerable to those biases.

Issues like sexism and racism do not typically manifest suddenly and unexpectedly; there are always subtle or not-so-subtle cues, some harder to pinpoint than others, but obvious when stacked on top of each other. But the body charged with judicial accountability, the Canadian Judicial Council, relies, for the most part, on individual complaints before it looks at a judge's conduct.

AI-generated data could help bring the extent of the problem of bias to light in a way that relying on individual complainants to come forward never could. AI has the capacity to review hundreds of hours of trial recordings or tens of thousands of pages of court transcripts, something that was previously inconceivable because of the human labour involved.
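As a toy illustration of how such a review might be automated, the sketch below scans transcript files for phrases on a watchlist and flags the passages for human review. The watchlist and file layout are hypothetical, and a real system would rely on a trained model rather than keyword matching.

    # Minimal sketch: flag transcript passages that match a crude, hypothetical
    # watchlist of phrases so a human reviewer can assess them. A real system
    # would use a trained model, not keywords, and would not act automatically.
    import re
    from pathlib import Path

    WATCHLIST = [r"\bhysterical\b", r"\breal victims\b"]  # hypothetical cues

    def flag_passages(transcript_dir: str) -> list[tuple[str, int, str]]:
        hits = []
        for path in sorted(Path(transcript_dir).glob("*.txt")):
            lines = path.read_text(encoding="utf-8").splitlines()
            for lineno, line in enumerate(lines, start=1):
                if any(re.search(p, line, re.IGNORECASE) for p in WATCHLIST):
                    hits.append((path.name, lineno, line.strip()))
        return hits  # every flagged passage still needs human judgment

The point is not that a keyword scan settles anything on its own; it is that the first pass over tens of thousands of pages, the part that once made such reviews inconceivable, can now be done by a machine.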

AI could help make evident the biases of judges that were known among the legal profession, but difficult to prove. And then bias and discrimination could be dealt with, ideally before those decision makers cause immeasurable and unnecessary harm to those in the justice system, and before hundreds of thousands of dollars in appeal costs are spent to overturn bad law.

AI is here to stay, and there is little doubt that judges will find bespoke arguments compelling. The question is not whether AI should be used; AI is already being used. The question is whether our court systems will continue to struggle with technology from the 1980s and '90s while 21st-century tech is rewriting our case law.

