
Elon Musk shared an AI video of Kamala Harris. Here's why it matters

Elon Musk recently amplified a deepfake video that uses AI to mimic Kamala Harris's voice. It's raising concern about the power of artificial intelligence ahead of the U.S. presidential election in November.

Post has been viewed more than 130 million times, appears to violate X's own policies

Elon Musk on Friday posted a video that combines footage from a Kamala Harris campaign video with an AI-generated imitation of her voice. (elonmusk/X)

Kamala Harris smiles as crowds of supporters cheer in a video that's making the rounds on X, the social media platform formerly known as Twitter. But there's an issue: the person speaking in the video isn't really Harris. It's artificial intelligence mimicking her voice.

The manipulated video gained widespread attention after tech billionaire and X owner Elon Musk shared it on the social media platform on Friday without noting it was parody. Experts say it's the latest example of the influential role AI could play in the leadup to the U.S. presidential election in November.

What's in the video?

The video features many visuals from a real campaign video Harris recently released. But the voiceover makes it sound like the presidential candidate is saying things she didn't.

The voice can be heard describing Harris as "the ultimate diversity hire," calling U.S. President Joe Biden a "deep state puppet" and claiming that Harris doesn't "know the first thing about running the country."

CBC News is not linking to the digitally altered video.

Musk's post has since been viewed more than 130 million times and appears to violate X's policies, which prohibit sharing "synthetic, manipulated or out-of-context media that may deceive or confuse people and lead to harm."

The video does not contain a parody disclaimer; however, the account that first uploaded it, @MrReaganUSA, described it as "ad parody" in accompanying text.

Musk faced widespread criticism for posting the video, responding on Monday that 'parody is legal in America.' (Kevork Djansezian/Getty Images)

Some X users have suggested Musk's post should be labelled with a "community note," a feature that adds context to inaccurate posts. No label had been added at the time of this article's publication.

Others have gone as far as suggesting that Musk's post violates the Federal Election Campaign Act, which prohibits fraudulent misrepresentation of federal candidates or political parties. The law, which was introduced in 1971, doesn't have any clear rules around technology like artificial intelligence or social media.

Following widespread criticism over the weekend, Musk said on Monday "parody is legal in America," replying to a post by California Democratic Gov. Gavin Newsom.

When asked for comment via its press relations email, X replied: "Busy now, please check back later."

The value of 'transparency'

The altered video confirms something Henry Ajder, a researcher and expert adviser to organizations like Meta, Adobe and the U.K. government, says he's felt for a long time.

"Satire," he said, "is an incredibly murky topic."

Ajder co-authored a 2020 report from the human rights organization Witness and the Co-creation Studio at MIT Open Documentary Lab that examined the political and policy implications of AI media and deepfakes. Ajder and his colleagues examined 70 cases from a wide range of deepfake videos to understand the growing relationship between satire and deepfakes.

He says deepfakes should be clearly labelled, and points to the Content Authenticity Initiative, where he is an adviser, as an example.

He describes it as a "nutrition label for media."

Labelling a deepfake is "not about saying, 'This is bad or this is good,'" he said. "It's about providing transparency about how a piece of media is being created."

Many popular social media companies have rules in place to try to manage AI-generated content. Meta, the company that owns Facebook and Instagram, requires that "manipulated media" be labelled as such and that context be appended to the post. In March, Google, which owns YouTube, announced a policy requiring users posting videos to disclose when content has been made with AI.

Growing trend in politics

This isn't the first time AI has been used in relation to the upcoming U.S. presidential election.

In January, ahead of the New Hampshire Democratic primary, a robocall using AI technology mimicked Biden's voice in an attempt to discourage people from voting. Following that, the U.S. Federal Communications Commission ruled that robocalls using AI-generated voices were illegal and proposed a $6 million US fine.

During this year's Republican primary, deepfake videos depicting former U.S. secretary of state Hillary Clinton endorsing Republican Florida Gov. Ron DeSantis began popping up on social media.

Ajder, who also points to similar instances in Slovakia and the U.K., says there is a place for satire in politics, citing publications like the Babylon Bee and the Onion, but that such content needs to be clearly identified as satire.

"There is, in my view, a space for AI-generated satire and deepfake satire, but it has to be created and shared in a responsible manner."

Clarifications

  • A previous version of the story stated that Henry Ajder helped develop the Content Authenticity Initiative. In fact, he is an adviser to it.
    Jul 30, 2024 9:54 AM ET