
The real 'fake news': how to spot misinformation and disinformation online

So you think a story or photo you've seen online might be fake - or exaggerated. Here's what you need to know about fake news online. First tip - stop calling it fake news.

For starters, let's stop calling it fake news

Disinformation proliferates online and, like the mythical unicorn, often blends plausible elements into its untruths. This CBC News guide will help you identify disinformation and misinformation online. (Illustration by David Morgan/CBC News)

So you think a story, photo or video you've seen online might be fake, or at least exaggerated. Maybe you spotted a photo that's generating outrage or ridicule, or a headline that seems too bizarre to be accurate.

But you're not sure.

How do you know if what you're seeing is real? How can you find out where it's coming from?

This guide will give you some tips on how to evaluate what you're reading and seeing, so you'll be better equipped to decide whether to trust it.

First off, we're going to avoid using the term "fake news." The U.K. Parliament's Digital, Culture, Media and Sport Committee recommended against using "fake news" in favour of more specific terms:

"The term 'fake news' is bandied around with no clear idea of what it means, or agreed definition. The term has taken on a variety of meanings, including a description of any statement that is not liked or agreed with by the reader. We recommend that the Government rejects the term 'fake news,' and instead puts forward an agreed definition of the words 'misinformation' and 'disinformation.'"

This U.K. committee conducted an 18-month investigation of the influence of social media and the Cambridge Analytica scandal, which saw the U.K.-based company collect the personal data of an estimated 87 million Facebook users without their consent through an app on Facebook. That information was used to target voters through advertising in both the 2016 U.S. presidential election and the referendum on the U.K. leaving the European Union.

Cambridge Analytica collected the personal data of an estimated 87 million Facebook users without their consent through an app on the platform. The information was used to target voters through ads in both the 2016 U.S. presidential election and the referendum on the U.K. leaving the European Union. (Dado Ruvic/Reuters)

For this guide, we'll use the terms "misinformation" and "disinformation" instead.

The worst kind of disinformation might be incredibly hard to spot, but much of it isn't, and you can easily equip yourself to be a more critical news consumer.

What is the difference between disinformation and misinformation?

The U.K. government has very useful definitions of both terms. Here, we've simplified those definitions to make them easier to understand.

Disinformation is the deliberate creation and/or sharing of false information in order to mislead.

Misinformation is the act of sharing information without realizing it's wrong.

What does disinformation look like?

Kaleigh Rogers, CBC's senior reporter covering disinformation, investigated claims made in a blog post about Justin Trudeau that circulated widely on social media. The post claimed Justin Trudeau's government sent $465 million in foreign aid to Afghanistan, only to see it "disappear."

Rogers found that the figure of $465 million is partly correct: the federal government announced that funding in 2016. But that amount is actually just part of the total sum of foreign aid Canada has sent to Afghanistan.

A blog post falsely claiming the Trudeau government gave $465 million to Afghanistan, only for it to disappear, has been circulating on social media. (Screengrab: Cultural Action Party, filter by Radio-Canada)

Other claims in the article were either misleading or wrong. A report cited to bolster the "disappearance" claim is actually a report about U.S. aid to Afghanistan that doesn't mention Canada at all. Rogers also detailed the amount of funding former prime minister Stephen Harper set aside for Afghanistan and cited international criticism directed at Trudeau for not providing enough foreign aid.

The Canadian Anti-Hate Network also told Rogers that the group behind the post regularly circulates false or misleading stories online to spread anti-immigrant and anti-Muslim sentiment.

What does misinformation look like?

Radio-Canada's disinformation reporter Jeff Yates was struck by how popular a story from CBC P.E.I. was on Facebook. The story was about a new law in the province that punishes drivers who illegally pass school buses by suspending their drivers' licences for a period of time.

The story was picked up on social media and posted to many pages in the United States because people thought the law applied to their own communities. It was the most popular CBC News story on the social media platform in the past year (June 2018 to June 2019) and generated 5.8 million Facebook interactions, 37 times more interactions than there are people living in P.E.I.
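For readers who want to check that comparison, here's a rough calculation in Python. The 5.8 million figure comes from the story above; the P.E.I. population number below is an approximate 2019 estimate, not a figure from the article.

```python
# Rough sanity check of the "37 times P.E.I.'s population" comparison.
interactions = 5_800_000          # Facebook interactions reported for the story
pei_population = 156_000          # approximate 2019 population of P.E.I. (assumption)

ratio = interactions / pei_population
print(f"Interactions per P.E.I. resident: {ratio:.1f}")  # roughly 37
```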

Some people who posted the story outside of P.E.I. knew it didn't apply to their region; others thought it applied to them when it did not. So this was a case of misinformation, because some people spread the information under the mistaken belief that the law applied to them.

What kinds of misinformation and disinformation are out there?

The U.K. Parliament's Digital, Culture, Media and Sport Committee suggested some useful definitions for the kinds of fake content you're likely to see online:

Fabricated content: completely false content.

Manipulated content: content that includes distortions of genuine information or imagery, such as a headline that is made more sensationalist to serve as "clickbait."

Imposter content: material involving impersonation of genuine sources by using the branding of an established news agency, for instance.

Misleading content: information presented in a misleading way by, for example, presenting comment as fact.

False context of connection: factually accurate content that is shared with false contextual information; for example, a headline that does not reflect the content of an article.

Satire and parody: humorous but false stories presented as if they are true. Although this isn't usually categorized as fake news, it may unintentionally fool readers.

Let's look at those categories in more detail:

Fabricated content

These are the stories, images or websites that are totally fake. These stories may come from unknown outlets or social media accounts that aren't well-known, or don't have a lot of followers. The websites themselves may try to appear as if they're legitimate.

A half-dozen websites, all claiming to be English-language newspapers based in Quebec, are in fact part of a network of fake newspaper sites run from Ukraine. (Graphic illustration: Sophie Leclerc/Radio-Canada)

Radio-Canada's Jeff Yates discovered a group of websites that looked like English-language, Quebec-based local newspaper sites. One of them, The Sherbrooke Times, looks like a local news site, but no such newspaper exists: the site's office was listed in Toronto, and its articles were poor translations of articles taken from French-language Quebec media. The network of fake sites is actually based in Ukraine, and Yates discovered its goal was to generate ad revenue.

Manipulated content

Recently, The Tyee debunked an ad circulating online that appeared to show NDP leader Jagmeet Singh standing in front of a $5.5 million mansion. The headline on the ad said, "Jagmeet Singh Shows off His New Mansion." The photo of Singh was a real photo taken by a Reuters photographer. The house shown in the photo is a real mansion available for rent called the Villa Fiona; Jagmeet Singh doesn't own it and it's located in Los Angeles, not B.C.

Imposter content

An example of this type of content would be a story that appears to come from a reputable online news source. The story might have the correct branding and colours but still seem slightly 'off', or the headline might be something that the real outlet would never publish. One giveaway might be the URL: if it's incorrect, has extra letters or numbers, or doesn't end in .com or .ca.

A screenshot from an impostor website displaying a fake version of The Washington Post, with a fake story about U.S. President Donald Trump leaving the White House. A group called 'The Yes Men', which says it uses "trickstery fun to further campaigns," said it was behind the paper and the website. (Screenshot/The Yes Men)

A recent example is this fake version of The Washington Post, distributed both in print and as a website, with false stories about U.S. President Donald Trump departing the White House. (The URL of the website, my-washingtonpost.com, gave the game away.)
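If you want to check a suspicious link yourself, one simple habit is to compare its registered domain with the outlet's real domain. The short Python sketch below illustrates the idea using the my-washingtonpost.com address mentioned above; the two-label domain heuristic is a simplification and won't handle suffixes such as .co.uk.

```python
from urllib.parse import urlparse

def registered_domain(url: str) -> str:
    """Return the last two labels of a URL's host, e.g. 'washingtonpost.com'.
    Simplified: does not handle multi-part suffixes like .co.uk."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return ".".join(host.split(".")[-2:])

expected = "washingtonpost.com"
suspicious = "https://my-washingtonpost.com/some-story"

if registered_domain(suspicious) != expected:
    print("Domain does not match the real outlet:", registered_domain(suspicious))
```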

Misleading content

Online content can become misleading when an opinion piece is circulated online as objective reporting, when one element of a story is blown out of proportion to attract clicks, or when an entire story is presented by a special interest group as proving or disproving something when it actually might not do anything of the sort.

Jeff Yates investigated a story that was, for a time, the most popular piece of misleading news in Quebec and turned out to have originated in Antibes Juan-les-Pins, France. In Antibes, the mayor's assistant sent a letter to parents of schoolchildren saying that any requests to remove or increase the number of pork dishes in school cafeterias for religious or personal reasons would be denied, due to the principle of secularism.

The mayor's assistant never mentioned the religion of those making the requests, but a far-right blog drew a link to Muslims. One of the blog's writers wrote an open letter supporting the mayor of Antibes, saying he was right to refuse any "concession to Islam."

This is an example of a fact (no change to pork dishes on school menus) that was altered to fit an agenda: in this case, anti-Muslim attitudes.

A similar open letter subsequently circulated on social media in Quebec, congratulating the mayor of Dorval. It even included a note the mayor's secretary supposedly sent to parents, which was actually just the open letter from the right-wing French blog with the locations changed.

This Facebook post circulating a fake story about the mayor of Dorval, Que., was published in September of 2015 and shared 124,000 times. (Screenshot/Facebook)

Here we see misleading content sliding into fabrication: the mayor of Dorval never said any such thing and none of the content reflects anything that actually happened in Dorval.

False context of connection

This commonly happens during a natural disaster when, for example, photos circulate purporting to show a terrible flood, but the images themselves might not be from the actual event mentioned, the same location, the same year or even the same continent. The origins of these images are often easy to track through reverse-image searches, but they can fool a lot of people in the meantime.
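Reverse-image search services do the heavy lifting for most people, but the underlying idea can be illustrated with a "perceptual hash," a fingerprint that stays similar even when an image is resized or recompressed. The sketch below assumes the third-party Pillow and imagehash Python packages, and the file names are placeholders rather than images from any of the stories above.

```python
# Illustration of how recirculated images can be matched: a perceptual hash
# changes very little when a photo is resized, cropped slightly or recompressed.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original_flood_photo.jpg"))
reposted = imagehash.phash(Image.open("viral_repost.jpg"))

# Subtracting two hashes gives the Hamming distance; smaller means more similar.
distance = original - reposted
print("Hash distance:", distance)
if distance <= 8:
    print("Likely the same underlying image, possibly from an older or unrelated event.")
```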

Kaleigh Rogers found that some CBC stories were being shared online as if they were new, such as a story from 2014 about an RCMP study that found hundreds of cases of police corruption. Some of the comments on the story posts mentioned Prime Minister Justin Trudeau, even though the study period was from 1995 to 2005 and Trudeau wasn't elected prime minister until 2015.

Often people may share stories online as if the information is current, when in fact it's several years old. (Adam Killick/CBC)

Satire and parody

Stories written as satire or parody are sometimes passed around as if they're true. The American satirical news outlet The Onion fools people regularly.

Jack Warner, the former vice president of FIFA, famously cited an Onion story in his defence when he was indicted on corruption charges in the U.S. in 2015. In a video posted to his Facebook page, Warner said "all this thing has stemmed from the failed U.S. bid to host a World Cup," a rumour the L.A. Times noted had been stoked by Russian President Vladimir Putin.

In the video, Warner holds up a copy of an Onion article titled, "FIFA Frantically Announces 2015 Summer World Cup In United States."

"If FIFA is so bad, why is it the USA wants to keep the FIFA World Cup?" Warner asked.

(The U.S. is hosting the World Cup in 2026, along with Canada and Mexico.)

Deepfakes

A deepfake is video, audio or images that have been altered with artificial intelligence software to make it seem as if a real person said or did something they didn't actually say or do. The term "deepfake" is a combination of the words "deep learning" (the type of artificial intelligence used) and "fake."

One good example is a video of actor Bill Hader from an appearance on 'Late Night with Conan O'Brien' in 2005. During his conversation with O'Brien, Hader imitated actor Al Pacino. In a deepfake released this year, Pacino's face appears on Hader's body during the imitation.

Actor Jordan Peele and Buzzfeed famously made a deepfake video to demonstrate the dangers implicit in the technology. In it, former U.S. President Barack Obama appears to be speaking about the dangers of deepfakes; Peele provided Obama's "voice" and video of Obama was matched to Peele's "performance."

There are several different methods for making deepfakes, some more complicated than others.

Some deepfakes are easy to spot because the people in the videos don't look quite real (a phenomenon known as "the uncanny valley"), or look like they're wearing masks that "slip" as they move around. The Daily Dot also notes that skin tones might change near the edge of a person's face, or the person might have double chins or double eyebrows.

Another way to spot a deepfake, according to an American professor who makes them, is to watch the eyes; performers in deepfake videos sometimes don't blink as often as real people.

"When a deepfake algorithm is trained on face images of a person, it's dependent on the photos that are available on the internet that can be used as training data. Even for people who are photographed often, few images are available online showing their eyes closed," wrote Siwei Lyu, director of the Computer Vision and Machine Learning Lab at the University at Albany, State University of New York.

Lyu also said that, since he published his post about blinking, he's seen videos that have fixed the blinking problem.
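To make the blinking idea concrete, here is a rough Python sketch of the approach researchers commonly use: measure the "eye aspect ratio" from facial landmarks in each frame and count how often the eyes appear closed. It relies on the OpenCV, dlib and SciPy libraries and dlib's 68-point landmark model (a separate download), and it illustrates the general technique rather than Lyu's actual detector.

```python
import cv2
import dlib
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    # eye is six (x, y) landmark points; low values mean the eye looks closed
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # separate download

cap = cv2.VideoCapture("suspect_video.mp4")  # placeholder file name
closed_frames, total_frames = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    total_frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        pts = predictor(gray, face)
        left = [(pts.part(i).x, pts.part(i).y) for i in range(36, 42)]
        right = [(pts.part(i).x, pts.part(i).y) for i in range(42, 48)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < 0.2:  # commonly used closed-eye threshold
            closed_frames += 1

cap.release()
print(f"Eyes closed in {closed_frames} of {total_frames} frames")
# A subject who almost never blinks over a long clip is one possible red flag.
```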

Because the technology is always improving, it will get harder to detect deepfakes. That's why it's so important to find out where a video came from, who made it and whether there are other versions of the video showing the same person doing the same thing. Context can help determine whether the video is real.

That notorious video of U.S. House Speaker Nancy Pelosi that made her appear drunk is not a deepfake. The video was slowed down, a standard editing technique, and Pelosi did give the speech shown in the video. So this was an altered video circulated with misleading information to make it appear to be something it was not. (Some people have dubbed such videos "shallowfakes.")

In Part 2 of this series, we tell you about tools you can use to spot disinformation and to stop spreading it.

Andrea Bellemare is part of a CBC team investigating online misinformation and attempts to disrupt the upcoming Canadian election.