Posted: Oct. 13, 2023

Kate Starbird had been studying online conspiracy theories for years when she realized last year that she was at the center of one.

"I can recognize a good conspiracy theory," she recalled to HuffPost. "I've been studying them a long time."

Right-wing journalists and politicians had begun falsely characterizing Starbird's work, which focused on viral disinformation about the 2020 election, as the beating heart of a government censorship operation. The theory was that researchers working to investigate and flag viral rumors and conspiracy theories had acted as pass-throughs for overzealous bureaucrats, pressuring social media platforms to silence supporters of former President Donald Trump.

The year that followed has changed the field of disinformation research entirely.

Republicans gained control of the House of Representatives last fall, and Rep. Jim Jordan (R-Ohio), a key player in Trump's attempt to overturn the 2020 election results, began leading a "Weaponization of the Federal Government" committee shortly thereafter. Among other things, the group zeroed in on researchers who rang alarm bells about Trump's "Big Lie" that the election had been stolen.

Around the same time, conservatives pointed to billionaire Elon Musk's release of the so-called Twitter Files to select journalists as evidence of government censorship. The files consisted of internal discussions of moderation decisions made before Musk owned the company. Despite legitimate concerns about the nature of the federal government's relationship with social media platforms, the documents never bore out accusations that government officials demanded Twitter take down certain posts or ideologies.

The fight spilled into Congress and the courts: Disinformation researchers became the targets of Republican records requests, subpoenas and lawsuits, and communication between some researchers and government officials was briefly restricted, the result of a federal judge's order that was later narrowed. Conservative litigators accused Starbird and others in the field of being part of a mass censorship operation, and the political attacks moved some researchers to avoid the public spotlight altogether.

"It is a really tenuous moment in terms of countering disinformation because the platforms see a lot of downside and not so much upside."

- Samir Jain, co-author of a Center for Democracy and Technology report on counter-election-disinformation initiatives

Disinformation researchers who spoke to HuffPost summarized the past year of legal and political attacks in two words: "chilling effect." And it's ongoing. Amid widespread layoffs at social media platforms, new limitations on data access, the antagonism of Musk's regime at X (formerly Twitter), and complacency from some who think the dangers of Trump's election denialism have passed, the field, and the concept of content moderation more generally, is in a period of upheaval that may impact the 2024 presidential election and beyond.

Even close partnerships between researchers and social media platforms are fraying, with experts more frequently opting to address the public directly with their work. The platforms, in turn, are heading into the 2024 presidential cycle and scores of other elections around the globe without the labor base they've used to address false and misleading content in the past.

Many in the field are coming to terms with a hard truth: The web will likely be inundated with lies about the political process yet again, and fact-checkers and content moderators risk being outgunned.

"Right now, it is a really tenuous moment in terms of countering disinformation because the platforms see a lot of downside and not so much upside," said Samir Jain, co-author of a recent Center for Democracy and Technology report on counter-election-disinformation initiatives. "Next year might be one of the worst times we have seen, and maybe the worst time we have seen, for the spread of election-related mis- and disinformation."

Ending The Golden Age Of Access

After Trump was elected in 2016, social media companies invested in content moderation, fact-checking and partnerships with third-party groups meant to keep election disinformation off of their platforms.

In the years that followed, using spare cash from sky-high revenues, the platforms invested further in combating that disinformation. With the help of civil society groups, journalists, academics and researchers, tech companies introduced a mix of fact-checking initiatives and formalized internal content moderation processes.

"I unfortunately think we'll look back on the last five years as a Golden Age of Tech Company access and cooperation," Kate Klonick, a law professor specializing in online speech, wrote earlier this year.

The investment from platforms hasn't lasted. Academics and nonprofit researchers gradually realized their contacts at tech companies weren't responding to their alerts about harmful disinformation, a result of this year's historic Big Tech layoffs, which hit content moderation teams especially hard.

"We've been able to rely even less on tech platforms enforcing their civic integrity policies," said Emma Steiner, the information accountability project manager at Common Cause, a left-leaning watchdog group. "Recent layoffs have shown that there's fewer and fewer staff for us to interact with, or even get in touch with, about things we find that are in violation of their previously stated policies."

"We've been able to rely even less on tech platforms enforcing their civic integrity policies."

- Emma Steiner, information accountability project manager at Common Cause

Some companies, like Meta, which owns Instagram and Facebook, insist that trust-and-safety cuts don't reflect a philosophical change. "We are laser-focused on tackling industrywide challenges," a spokesperson for the company told The New York Times last month.

Others, like Musk's X, are less diplomatic.

"Oh you mean the Election Integrity Team that was undermining election integrity? Yeah, they're gone," Musk wrote last month, confirming cuts to the team that had been tasked with combating disinformation concerning elections. Four members were let go, including the group's leader.

More broadly, Musk has rolled back much of what made X a source for reliable breaking news, including by introducing verified badges for nearly anyone willing to pay for one, incentivizing viral and often untrustworthy accounts with a new monetization option, and urging right-wing figures who'd previously been banned from the platform to return.

In August, X filed a lawsuit against an anti-hate-speech group, the Center for Countering Digital Hate, accusing the organization of falsely depicting the platform as overwhelmed with harmful content as part of an effort to censor viewpoints CCDH disagrees with. Musk also pulled X out of the European Union's voluntary anti-disinformation code. That same month, the EU's Digital Services Act, which includes anti-disinformation provisions, went into effect, but Musk has responded to warnings from the EU about X's moderation practices by bickering with an EU official online.

Earlier this year, Musk also started charging thousands of dollars for access to X's API, or application programming interface, a behind-the-scenes stream of the data that flows through the site. That data used to be free for academics, providing a valuable look at real-time information. Now, Starbird said, monitoring X is like looking through a tiny window.