
AI-powered hate content is on the rise, experts say

Experts say that artificial intelligence technology is allowing for a rapid increase in the amount of hateful content and misinformation online.

Deepfakes have caused the spread of false information around the Israel-Hamas war, researcher says

Richard Robertson, B'nai Brith Canada's director of research and advocacy, holds up a document outlining antisemitic incidents in Canada, on May 6. (Sean Kilpatrick/The Canadian Press)

The clip is of a real historical event: a speech given by Nazi dictator Adolf Hitler in 1939, at the beginning of the Second World War.

But there is one major difference. This viral video was altered by artificial intelligence, and in it, Hitler delivers antisemitic remarks in English.

A far-right conspiracy influencer shared the content on X, formerly known as Twitter, earlier this year, and it quickly racked up more than 15 million views, Wired magazine reported in March.

It's just one example of what researchers and organizations that monitor hateful content are calling a worrying trend.

They say AI-generated hate is on the rise.

"I think everybody who researches hate content or hate media is seeing more and more AI-generated content," said Peter Smith, a journalist who works with the Canadian Anti-Hate Network.

Chris Tenove, assistant director at the University of British Columbia's Centre for the Study of Democratic Institutions, said hate groups, such as white supremacist groups, "have been historically early adopters of new Internet technologies and techniques."

It's a concern a UN advisory body flagged in December. It said it was "deeply concerned" about the possibility that antisemitic, Islamophobic, racist and xenophobic content "could be supercharged by generative AI."

WATCH | The threat of AI deepfakes:

AI experts urge governments to take action against deepfakes

Hundreds of technology and artificial intelligence experts are urging governments globally to take immediate action against deepfakes: AI-generated voices, images and videos of people that they say pose an ongoing threat to society through the spread of mis- and disinformation, and could affect the outcome of elections.

Sometimes that content can bleed into real life.

After AI was used to generate what Smith described as "extremely racist Pixar-style movie posters," some individuals printed the signs and posted them on the side of movie theatres, he said.

"Anything that is available to the public, that is popular or is emerging, especially when it comes to technology, is very quickly adapted to produce hate propaganda."

Greater ease of creation and spread

Generative AI systems can create images and videos almost instantly with just a simple prompt.

Instead of an individual devoting hours to making a single image, they can make dozens "in the same amount of time just with a few keystrokes," Smith said.
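To make that scale shift concrete: with an open-source toolkit such as Hugging Face's diffusers library, a single short script turns a text prompt into a finished image. This is a minimal illustrative sketch, not code from any group described in this story; the model checkpoint and prompt are assumptions.

import torch
from diffusers import StableDiffusionPipeline

# Load a publicly hosted text-to-image model (the checkpoint name is an
# illustrative assumption; any Stable Diffusion checkpoint works alike).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs on a single consumer GPU

# One prompt, one image, a few seconds; looping over prompts yields
# dozens of images "with a few keystrokes," as Smith puts it.
image = pipe("a movie poster in a cartoon animation style").images[0]
image.save("poster.png")

Each call completes in seconds on consumer hardware, which is the gap Smith describes between hours of manual work and near-instant output.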

B'nai Brith Canada flagged the issue of AI-generated hate content in a recent report on antisemitism.

The report says last year saw an "unprecedented rise in antisemitic images and videos which have been created or doctored and falsified using AI."

Director of research and advocacy Richard Robertson said the group has observed that "really horrible and graphic images, generally relating to Holocaust denialism, diminishment or distortion, were being produced using AI."

He cited the example of a doctored image depicting a concentration camp with an amusement park inside it.

"Victims of the Holocaust are riding on the rides, seemingly enjoying themselves at a Nazi concentration camp, and arguably that's something that could only be produced using AI," he said.

WATCH | Risk of misinformation online:

Defence minister testifies on evolving threat of false info online

Defence Minister Bill Blair, the former minister of public safety and emergency preparedness, spoke Wednesday at the ongoing inquiry into foreign interference in Canada. He said that with Canadians now receiving much of their information through social media, there is a 'legitimate concern' about misinformation and disinformation that creates 'a public perception not based in fact.'

The organization's report also says AI has "greatly impacted" the spread of propaganda in the wake of the Israel-Hamas war.

AI can be used to make deepfakes, or videos that feature remarkably realistic simulations of celebrities, politicians or other public figures.

Tenove said deepfakes circulating in the context of the Israel-Hamas war have spread false information about events and attributed fabricated statements to both the Israeli military and Hamas officials.

"So there's been that kind of stuff, that's trying to stoke people's anger or fear regarding the other side and using deception to do that."

Jimmy Lin, a professor at the University of Waterloo's school of computer science, agrees there has been "an uptick in terms of fake content ... that's specifically designed to rile people up on both sides."

Amira Elghawaby, Canada's special representative on combating Islamophobia, says there has been an increase in both antisemitic and Islamophobic narratives since the beginning of the conflict.

WATCH | Is artificial intelligence too great a risk?

Artificial intelligence could pose extinction-level threat to humans, expert warns

A new report is warning the U.S. government that if artificial intelligence laboratories lose control of superhuman AI systems, it could pose an extinction-level threat to the human species. Gladstone AI CEO Jeremie Harris, who co-authored the report, joined Power & Politics to discuss the perils of rapidly advancing AI systems.

She says the issue of AI-generated hate content calls for both more study and more discussion.

There's no disagreement that AI-generated hate content is an emerging issue, but experts have yet to reach a consensus on the scope of the problem.

Tenove said there is "a fair amount of guesswork out there right now," similar to broader societal questions about "harmful or problematic content that spreads on social-media platforms."

Liberals say new bill will address some concerns

Systems like ChatGPT have safeguards built in, Lin said. An OpenAI spokesperson confirmed that before the company releases any new system, it teaches the model to refuse to generate hate speech.

But Lin said there are ways of jailbreaking AI systems, noting certain prompts can "trick the model" into producing what he described as nasty content.
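For context, this is roughly what such a safeguard looks like from the developer side: text is screened by a moderation model, and flagged requests are refused. The sketch below uses OpenAI's public moderation endpoint; the model name and the decision rule are assumptions based on the published SDK, not details confirmed by the company for this story.

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def flagged_for_hate(text: str) -> bool:
    """Return True if the moderation endpoint flags the text for hate."""
    resp = client.moderations.create(
        model="omni-moderation-latest",  # assumed current model name
        input=text,
    )
    result = resp.results[0]
    # `flagged` is the overall verdict; `categories.hate` is the
    # category-level signal relevant to this article.
    return result.flagged and result.categories.hate

Jailbreak prompts work by wording a request so that filters like this one, and the refusal behaviour trained into the model itself, fail to recognize what is actually being asked.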

David Evan Harris, a chancellor's public scholar at the University of California, Berkeley, said it's hard to know where AI content is coming from unless the companies behind these models ensure it is watermarked.
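Watermarking schemes differ by vendor and most details are proprietary, but one published approach for text biases generation toward a pseudorandom "green list" of tokens and later tests for that bias statistically. The detector sketch below is a simplified illustration of that idea; the vocabulary size, green fraction and seeding rule are assumed values, not any company's actual scheme.

import hashlib
import math

VOCAB_SIZE = 50_000   # illustrative vocabulary size
GREEN_FRACTION = 0.5  # share of tokens the generator favoured

def is_green(prev_token: int, token: int) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by its predecessor."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % VOCAB_SIZE < GREEN_FRACTION * VOCAB_SIZE

def watermark_z_score(tokens: list[int]) -> float:
    """Z-score of observed green-token hits against the unwatermarked expectation."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std

A z-score far above chance on a long passage suggests a cooperating generator produced the text, which is Harris's point: detection of this kind only works if the companies behind the models embed the signal in the first place.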

He said some AI models, like those made by OpenAI or Google, are closed-source models. Others, like Meta's Llama, are made more openly available.

Once a system is opened up to all, he said, bad actors can strip out its safety features and produce hate speech, scams and phishing messages in ways that are very difficult to detect.

A statement from Meta said the company builds safeguards into its systems and doesn't open source "everything."

"Open-source software is typically safer and more secure due to ongoing feedback, scrutiny, development and mitigations from the community," it said.

In Canada, the Liberal government says federal legislation it has introduced will help address the issue. That includes Bill C-63, its proposed online harms bill.

Chantalle Aubertin, a spokesperson for Justice Minister Arif Virani, said the bill's definition of content that foments hatred includes "any type of content, such as images and videos, and any artificially generated content, such as deepfakes."

Innovation Canada said its proposed artificial intelligence regulation legislation, Bill C-27, would require AI content to be identifiable, for example through watermarking.

A spokesperson said that bill would also "require that companies responsible for high-impact and general-purpose AI systems assess risks and test and monitor their systems to ensure that they are working as intended, and put in place appropriate mitigation measures to address any risks of harm."