
Federal government use of AI in hundreds of initiatives revealed by new research database


'There needs to be more public information available about how these systems are being used,' says expert

Prime Minister Justin Trudeau speaks during an announcement on innovation for economic growth in advance of the 2024 federal budget in Montreal on Sunday. Canada's federal government has used artificial intelligence in nearly 300 projects and initiatives, new research has found. (Graham Hughes/The Canadian Press)

Canada's federal government has used artificial intelligence in nearly 300 projects and initiatives, new research has found, including to help predict the outcome of tax cases, sort temporary visa applications and promote diversity in hiring.

Joanna Redden, an associate professor at Western University in London, Ont., pieced together the database using news reports, documents tabled in Parliament and access-to-information requests.

Of the 303 automated tools in the register as of Wednesday, 95 per cent were used by federal government agencies.

"There needs to be far more public debate about what kinds of systems should be in use, and there needs to be more public information available about how these systems are being used," Redden said in an interview.

She argued the data exposes a problem with the Liberal government's proposed Artificial Intelligence and Data Act, the first federal bill specifically aimed at AI.

WATCH | Professor flags major flaws in AI regarding racial bias in response to Bill C-27 draft:

Racial bias safeguards missing from Bill C-27's Artificial Intelligence Data Act draft, says U of C professor

University of Calgary assistant professor of AI and law, Gideon Christian, has sent a letter to the House of Commons committee reviewing Bill C-27 to flag major flaws in AI regarding racial bias, especially affecting people of colour.

"That piece of legislation is not going to apply to, for the most part, government uses of AI. So the sheer number of applications that we've identified demonstrates what a problem that is."

Bill C-27 would introduce new obligations for "high-impact" systems, such as the use of AI in employment. That's something the Department of National Defence experimented with when it used AI to reduce bias in hiring decisions, in a program that ended in March 2021.

A spokesperson said the department used one platform to shortlist candidates to interview, and another to assess an "individual's personality, cognitive ability and social acumen" and to match them to profiles. The candidates provided explicit consent, and the data informed human decision-making.

Pilot projects become permanent

Immigration, Refugees and Citizenship Canada said two pilot projects from 2018 to help officers triage temporary resident visa applications have become permanent. The department uses "artificial intelligence tools to sort applications and determine positive eligibility."

The register also says the department employs AI to review study permit applications by people from other countries, though a spokesperson said it does not use AI for "final decision-making."

The department's automated systems can't reject an application or recommend a rejection, the spokesperson said.

Not all experiments become permanent initiatives.

The Public Health Agency of Canada said it discontinued a project analyzing publicly available social media information to look for warning signs of suicide, due to factors including cost and "methodologies."

Siri for warships

Health Canada, on the other hand, continues to use a social listening tool with a "rudimentary AI component" to search online news for mentions of incidents related to a consumer product, a spokesperson said.

Some of the experiments would be familiar to Canadians. The Royal Canadian Navy, for example, tried out a system similar to Apple's Siri or Amazon's Alexa to verbally relay commands to ships.

A spokesperson said efforts to integrate voice-activated technology in warships continue, but "information security concerns" have to be "considered before such technology could be used."

AI is also put to work for legal research and predictions.

Visitors check their phones behind the screen advertising facial recognition software during the Global Mobile Internet Conference in Beijing in 2018. Facial recognition is also used by the Canada Border Services Agency on a voluntary basis to 'help authenticate the identities of incoming travellers' through kiosks at some airports. (Damir Sagolj/Reuters)

The Canada Revenue Agency said it uses a system that allows users to input variables related to a case that will "provide an anticipated outcome by using analytics to predict how a court would likely rule in a specific scenario, based on relevance and historical court decisions."

And the Canadian Institutes of Health Research uses labour relations decision software. It compares a specific situation to previous cases and simulates how different facts might affect the outcome, the register outlines.

At the Office of the Superintendent of Bankruptcy, AI flags anomalies in estate filings.

A spokesperson said the system detects "potential debtor non-compliance based on key attributes found in insolvency filings." Cases flagged by the system are evaluated by analysts.

The register also includes examples of AI being employed by the RCMP. A spokesperson confirmed the RCMP has used AI to identify child sexual assault material and to help in rescuing victims.

A "type of facial recognition technology called face matching" has been used on lawfully obtained internal data, the spokesperson said.

CBSA and facial recognition

Facial recognition is also used by the Canada Border Services Agency (CBSA). A spokesperson said the agency uses the technology on a voluntary basis to "help authenticate the identities of incoming travellers" through kiosks at some airports.

Redden said there are a lot of reasons to ask questions about facial recognition, including examples in the United States where it has led to wrongful arrests.

More broadly, she argued that the government should be keeping better track of its own uses of AI.

The federal government said that in cases where AI use "can have significant impacts," such as in helping make administrative decisions, its directive on automated decision-making requires an algorithmic impact assessment.

Those assessments are then published in a public register, the Treasury Board outlined in an email.

The register currently only has 18 entries.

Asked why the number is so much smaller than Redden's total, a spokesperson said the directive and the register are "specifically focused on uses of AI with direct impact on individuals or businesses. Many AI applications in the federal government do not fall under this category."

One such example: the tech that is used to keep tabs on nature.

The Canadian Food Inspection Agency employs machine learning to track invasive plants, insects and molluscs, the registry outlines.

WATCH | How AI is changing modern warfare:

How is artificial intelligence (AI) changing the face of modern warfare?


A spokesperson said the agency uses an AI tool to scan a social network crowdsourcing observations of plants and animals. Fisheries and Oceans Canada says it uses AI to "detect marine mammals from aerial, drone and satellite imagery."

It took Redden two years, with some assistance, to compile the data based on limited information from a variety of sources.

The information available often doesn't indicate when an AI system was introduced or why, whether it is still in place, what data is being used or if there have been any issues with the system, she said.

"It's very difficult for those on the outside to do this kind of work."

It's unclear what happened to some of the pilot projects Redden documented.

A January 2023 document tabled in Parliament shows the CBSA said it was developing an algorithm for postal X-rays to automatically detect guns and gun parts, while Global Affairs Canada was experimenting with AI-generated briefing notes.

Global Affairs didn't respond to a request for more information, and CBSA declined to provide an update on those efforts.

"While we can tell you that the CBSA is currently closely following the development of machine learning algorithms for X-rays to automatically detect items of interest, we do not disclose details of specific targeting, enforcement or intelligence as it may render them ineffective," the agency said.

What the register demonstrates, Redden said, is "how widespread use of AI is across government bodies in Canada" and how little we know about that use.