
Ethics and artificial intelligence: These researchers say tech has to have a moral backbone

When it comes to artificial intelligence technology, 'there's no putting the genie back in the bottle,' says a MUN scientist.

Ethical AI movement is growing but outpaced by growth of technology, says Microsoft engineer

Members of the Uighur Muslim ethnic group held demonstrations in cities around the world last year to protest a sweeping Chinese surveillance and security campaign that has sent thousands of their people into detention and political indoctrination centers. (Seth Wenig/Associated Press)

In the wake of reports that the Chinese government is using artificial intelligence-based technology to track and detain some of its citizens, a Newfoundland and Labrador scientist is questioning how AI is being used and who should answer for its misuse, and he's not alone.

"It seems to me that there's not enough people actively fighting against what's happening,"saidDavid Churchill, an associate computer science professor at Memorial University (MUN) in St. John's.

David Churchill is an assistant professor in the computer science department at Memorial University. (Submitted)

Last month, Human Rights Watch published an investigation alleging officials in China's Xinjiang region were using a mobile app to aggregate personal data and flag suspicious individuals, mostly Uighur Muslims, to authorities.

Other reports have shown the government uses a system of surveillance cameras backed up by facial recognition software to spot and track Uighurs. The UN estimates a million Uighurs are now being held by Chinese authorities in massive "re-education" camps.

People crack jokes about the sci-fi flick The Terminator, in which a cyborg assassin is sent from an army of machines to terrorize humans. But in reality, it's not the technology that does harm, he said.

People protest at a Uyghur rally on Feb. 5 in front of the U.S. Mission to the United Nations in New York. (Timothy A. Clary/AFP/Getty Images)

"These sort of tracking systems are the exact same technologies that are able to detect brain tumours in MRI images or to help doctors diagnose patients with certain diseases at a better rate than human doctors are able to," he said.

"The real existential threat are the people who are willing to use AI, which was invented with the best intentions in mind, for their intentions which may not be the best."

Canadian connections

With China one of the world's AI juggernauts (SenseTime, a company identified by the New York Times and BuzzFeed as being tied to the software used by the Chinese government, is now the highest-valued AI company in the world), Churchill is worried his colleagues are quiet because they're afraid of killing opportunities.

There is no putting the genie back in the bottle. - Jana Rosales

CBC News has found researchers at at least one Canadian university have published papers on object-recognition AI with scientists from both the Chinese National University of Defense Technology and SenseTime. The researchers did not respond to a request for comment.

In early June, the Alberta Machine Intelligence Institute (AMII) launched a partnership with the Hong Kong AI Lab, a non-profit funded in part by SenseTime.

An AMII spokesperson told CBC News the partnership is not about sharing research, but about sharing business knowledge and developing an AI ecosystem.

AI unlike any tech in history

The whiplash rate at which artificial intelligence technologies are developed sets AI apart from anything in history, says Jana Rosales, a professor in MUN's engineering department who helps scientists think about the social consequences of their work.

That makes it prime territory for design regret, the remorse someone feels when their work is used for harm, she said.

"Our institutions have to find ways to keep up with the pace of change and be nimble enough to make decisions about what responsible AI looks like."

Jana Rosales is an assistant professor in MUN's faculty of engineering and applied sciences. (Sarah Smellie/CBC)

And they need to find ways to support researchers who want to slow down and be more thoughtful about their work, and researchers like Churchill, who speak up about its unintended consequences.

Abhishek Gupta agrees. He's the founder of the Montreal AI Ethics Institute, a driving force behind the growing Canadian movement toward ethical AI.

"It's been very recent that this [ethical AI] work has started to become mainstream," said the Microsoft engineer. Right now, he saidthe technology is still outpacing the movement.

Abhishek Gupta is a founder of the Montreal AI Ethics Institute and a machine learning engineer at Microsoft. (Facebook)

Nothing will change without awareness, he said, and the situation in China is a major flashing light for scientists, institutions and governments, all of which need to commit to better practices and policies.

The public, too, has a responsibility to learn about the technology they're using and its potential to cause harm, he said.

Will declarations have teeth?

Both Rosales and Gupta have hope.

"I try to take comfort in the fact that people are actually saying, 'Hang on, wait a minute AI is actually is qualitatively different from anything we've been working on,' or at least they recognize how complex it is and that there is no putting the genie back in the bottle," Rosales said.

She points to initiatives like the Montreal Declaration for Responsible Development of Artificial Intelligence, to which more than 2,000 scientists and institutions have signed their names.

"Are they going to have teeth? Who really knows?" she said.

Visitors experience facial recognition technology at Face++ booth during the China Public Security Expo in Shenzhen, China, in October 2017. (Bobby Yip/Reuters)

Chinese scientists on board

Gupta said he is pleased with the Canadian government's efforts, pointing to its release of guiding principles for ethical AI use.

And there are groups and individuals in China who are fighting for ethical AI practices, he said, pointing to the Beijing Academy of Artificial Intelligence's release of the Beijing AI Principles in May.

The move was criticized as a smokescreen, but Gupta said after hosting a session with Chinese scientists, he sees their situation with more nuance.

"My biggest takeaway is that we need to have these open dialogues and that we need to have people who have these different perspectives share their opinions and insights and really use that in making decisions rather than having a unilateral view on how someone is using certain technology."
