
Meta says users must label AI-generated audio and video or they could be penalized


Comments made by top policy executive who didn't elaborate on nature of penalties

The Meta logo is seen at the Vivatech show in Paris on June 14, 2023. Meta Platforms will begin detecting and labelling images generated by other companies' artificial intelligence services in the coming months, using a set of invisible markers built into the files, its top policy executive said on Tuesday. (Thibault Camus/The Associated Press)

Meta Platforms could penalize users who fail to label AI-generated audio and visual content posted on its platforms, its top policy executive said on Tuesday.

The comments were made by Nick Clegg, the company's president of global affairs, during an interview with Reuters.

Clegg said he felt confident that technology companies could label AI-generated images reliably at this point, but said tools to mark audio and video content were more complicated and still being developed.

"Even though the technology is not yet fully mature, particularly when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow," Clegg said.

In the interim, Meta would start requiring people to label their own altered audio and video content, and could apply penalties if they failed to do so, Clegg added. He did not describe the penalties.

CBC News has reached out to Meta for more information.

Meta will label AI-generated images on its platforms

This photo, taken in New York, Thursday, July 6, 2023, shows Meta's new app Threads. Meta will apply labels to AI-generated images posted to its Facebook, Instagram and Threads services, in an effort to signal to users that the images, which in many cases resemble real photos, are actually digital creations. (Richard Drew/The Associated Press)

The comments came following Clegg's announcement in a blog post that Meta would begin detecting and labelling images generated by other companies' artificial intelligence services in the coming months, using a set of invisible markers built into the files.

Meta will apply the labels to any content carrying the markers posted to its Facebook, Instagram and Threads services, in an effort to signal to users that the images, which in many cases resemble real photos, are actually digital creations.

"This is going to be a work in progress that's always going to be a game of essentially cat and mouse, but it's a start," said RiteshKotak, a Toronto-basedcybersecurity and technology analyst.

He cautioned that as Meta's own technology for detecting and labelling AI-generated imagery improves, so too will AI tools' ability to evade detection.

As for what kind of penalty users might be subject to, Kotak said it might entail being suspended or removed from the platform. That could lead to broader repercussions, he added.

"If you're unable to access your account, if you're unable to leverage the platform in itself, [that]might have additional repercussions such as economic loss if you're using your account to generate money," he said.

The company already labels any content that was generated using its own AI tools. Once the new system is up and running, Meta will do the same for images created on services run by OpenAI, Microsoft, Adobe, Midjourney, Shutterstock and Alphabet's Google, Clegg said.
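Meta has not published its detection code, but it has pointed to invisible markers and metadata embedded in image files by participating generators, such as the IPTC "DigitalSourceType" value for AI-generated media. As a rough illustration of the general idea only, and with the caveat that the field name and byte-scan approach below are assumptions based on the public IPTC standard rather than Meta's actual pipeline, a basic check might look like this:

# Minimal sketch: look for the IPTC "DigitalSourceType" marker that
# signals AI-generated content inside an image file's embedded metadata.
# Illustration only -- Meta's real detection also uses invisible
# watermarks and has not been made public.

# IPTC's controlled-vocabulary term for media produced by a generative model.
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's raw bytes contain the AI-generation marker.

    Real pipelines parse XMP/C2PA metadata properly; scanning raw bytes is a
    shortcut that works because XMP is stored as plain text inside the file.
    """
    with open(path, "rb") as f:
        data = f.read()
    return AI_MARKER in data

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        result = "AI marker found" if looks_ai_generated(image_path) else "no marker"
        print(f"{image_path}: {result}")

A check like this only catches content whose metadata was written (and preserved) by a co-operating service, which is why Clegg framed the system as an industry-wide effort rather than a complete solution.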

He added during the interview that there was currently no viable mechanism to label written text generated by AI tools like ChatGPT, saying, "that ship has sailed."

A Meta spokesperson declined to tell Reuters whether the company would apply labels to generative AI content shared on its encrypted messaging service WhatsApp.

WATCH | How AI-generated videos can be weaponized in elections:

Can you spot the deepfake? How AI is threatening elections (Duration 7:08)

AI-generated fake videos are being used for scams and internet gags, but what happens when they're created to interfere in elections? CBC's Catharine Tunney breaks down how the technology can be weaponized and looks at whether Canada is ready for a deepfake election.

The announcement provides an early glimpse into an emerging system of standards technology companies are developing to mitigate the potential harms associated with generative AI technologies, which can spit out fake but realistic-seeming content in response to simple prompts.

The approach builds off a template established over the past decade by some of the same companies to co-ordinate the removal of banned content across platforms, including depictions of mass violence and child exploitation.

Meta's independent oversight board on Monday rebuked the company's policy on misleadingly doctored videos, saying it was too narrow and that the content should be labelled rather than removed. Clegg said he broadly agreed with those critiques.

The board was right, he said, that Meta's existing policy "is just simply not fit for purpose in an environment where you're going to have way more synthetic content and hybrid content than before."

He cited the new labelling partnership as evidence that Meta was already moving in the direction the board had proposed.

With files from CBC's Meegan Read