Google research shows the fast rise of AI-generated misinformation

Artificial intelligence has become a source of misinformation with lightning speed

Singer Katy Perry posted this AI-generated image to her Instagram account on May 6, explaining that she couldn't attend the 2024 Met Gala because she had to work. The photo went viral. (katyperry/Instagram)

From fake images of war to celebrity hoaxes, artificial intelligence technology has spawned new forms of reality-warping misinformation online. New analysis co-authored by Google researchers shows just how quickly the problem has grown.

The research, co-authored by researchers from Google, Duke University and several fact-checking and media organizations, was published in a preprint last week. The paper introduces a massive new dataset of misinformation going back to 1995 that was fact-checked by websites like Snopes.

According to the researchers, the data reveals that AI-generated images have quickly risen in prominence, becoming nearly as popular as more traditional forms of manipulation.

The work was first reported by 404 Media after being spotted by the Faked Up newsletter. "AI-generated images made up a minute proportion of content manipulations overall until early last year," the researchers wrote.

Last year saw the release of new AI image-generation tools by major players in tech, including OpenAI, Microsoft and Google itself. Now, AI-generated misinformation is "nearly as common as text and general content manipulations," the paper said.

The researchers note that the uptick in fact-checking AI images coincided with a general wave of AI hype, which may have led websites to focus on the technology. The dataset shows that fact-checking of AI-generated images has slowed down in recent months, while traditional text and image manipulations have increased.

This chart shows the increase in AI-generated image misinformation in early 2023. (Dufour, Pathak, et al., 2024)

The study looked at other forms of media, too, and found that video hoaxes now make up roughly 60 per cent of all fact-checked claims that include media.

That doesn't mean AI-generated misinformation has slowed down, said Sasha Luccioni, a leading AI ethics researcher at machine learning platform Hugging Face.

"Personally, I feel like this is because there are so many [examples of AI misinformation] that it's hard to keep track!" Luccioni said in an email. "I see them regularly myself, even outside of social media, in advertising, for instance."

AI has been used to generate fake images of real people, with concerning effects. For example, fake nude images of Taylor Swift circulated earlier this year. 404 Media reported that the tool used to create the images was Microsoft's AI image-generation software, which it licenses from ChatGPT maker OpenAI, prompting the tech giant to close a loophole that allowed the images to be generated.

The technology has also fooled people in more innocuous ways. Recent fake photos showing Katy Perry attending the Met Gala in New York (in reality, she never did) fooled observers on social media and even the star's own parents.

The rise of AI has caused headaches for social media companies and Google itself. Fake celebrity images have been featured prominently in Google image search results in the past, thanks to SEO-driven content farms. Using AI to manipulate search results is against Google's policies.

WATCH | Taylor Swift deepfakes taken offline. It's not so easy for regular people (duration 1:47):
Fake, AI-generated sexually explicit images of Taylor Swift were feverishly shared on social media until X took them down after 17 hours. But many victims of the growing trend lack the means, clout and laws to accomplish the same thing.

Google spokespeople were not immediately available for comment. Previously, a spokesperson told technology news outlet Motherboard that "when we find instances where low-quality content is ranking highly, we build scalable solutions that improve the results not for just one search, but for a range of queries."

To deal with the problem of AI fakes, Google has launched such initiatives as digital watermarking, which flags AI-generated images as fake with a mark that is invisible to the human eye. The company, along with Microsoft, Intel and Adobe, is also exploring giving creators the option to add a visible watermark to AI-generated images.
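For readers curious how an invisible mark can sit inside an image at all, the toy sketch below shows the general idea using simple least-significant-bit embedding in Python. This is an illustration only, not Google's actual watermarking system (which relies on learned, tamper-resistant marks); the pattern name and functions here are hypothetical.

    # Toy illustration of invisible watermarking: hide a short bit pattern in
    # the least significant bits of an image's pixel values. Changing the lowest
    # bit shifts a pixel's brightness by at most 1 out of 255, which the human
    # eye cannot see. Not Google's production method; names are hypothetical.
    import numpy as np

    WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

    def embed(pixels: np.ndarray) -> np.ndarray:
        """Write the watermark into the lowest bit of the first few pixels."""
        marked = pixels.copy()
        flat = marked.reshape(-1)
        n = WATERMARK_BITS.size
        flat[:n] = (flat[:n] & 0xFE) | WATERMARK_BITS  # clear the LSB, then set it
        return marked

    def detect(pixels: np.ndarray) -> bool:
        """Check whether the lowest bits of the first few pixels match the mark."""
        flat = pixels.reshape(-1)
        n = WATERMARK_BITS.size
        return bool(np.array_equal(flat[:n] & 1, WATERMARK_BITS))

    # Usage: an 8-bit grayscale "image" as a NumPy array.
    image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    marked = embed(image)
    print(detect(marked))  # True: the invisible mark is present
    print(detect(image))   # Almost certainly False: the original carries no mark

Real-world schemes are far more robust than this, since a watermark must survive cropping, compression and re-uploads, but the principle is the same: a detector looks for a hidden signal that ordinary viewing never reveals.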

"I think if Big Tech companies collaborated on a standard of AI watermarks, that would definitely help the field as a whole at this point," Luccioni said.