
Twitter trolling victims at mercy of 'daunting' complaints system

Twitter's mechanisms for reporting online abuse are failing its users, as the company's CEO admits. But why can't the social media platform protect its users from online tormentors?

'Most of my reports of abuse are rejected,' feminist writer Lindy West says

Twitter users, from left, Adria Richards, Lindy West and Shireen Mitchell have all endured varying degrees of online harassment, which they agree has been difficult to tackle because of the social media platform's limited ability to stop online trolls. (YouTube, Twitter)

In the hashtag-laden language of its users, Twitter's anti-trolling measures might qualify as an epic #fail.

"We suck at dealing with abuse and trolls on the platform and we've sucked at it for years," Dick Costolo, the CEO of the microblogging site, conceded in a recently leaked staff memo.

"We lose core user after core user by not addressing simple trolling issues that they face every day," he added in one of two internal emails obtained by the technology and culture website The Verge.

In a leaked memo, Twitter CEO Dick Costolo admits Twitter has failed to protect its users from vicious internet trolling, and said it was time to begin 'kicking people off right and left' for their abusive comments to other users. (Eric Gaillard/Reuters)

Costolo blamed himself for letting down victims such as feminist writer Lindy West, whose encounters with online abusers reportedly prompted the CEO's vow to be "more aggressive" about stopping harassment.

But his pledge to fix the problem rang somewhat hollow for West, who has endured online threats for years for writing about what she calls "big, fat, bitchy" topics.

"I'll believe it when I see it. And I want to believe it," she said from Seattle.

"I'll screen-grab [the abusive messages] and block the user and report it. But there's so much out there that I'm sure, cumulatively, it eats up a lot of time."

In a recent episode of the U.S. radio program This American Life, West interviewed her "cruellest" troll, a man who created a bogus Twitter account posing as her recently deceased father.

Her story sparked some good discourse, she said, but the taunts and cyberthreats polluting her social media timelines carried on as usual.

Twitter won't budge on anonymity

Like the untold number of women who, according to a Pew Research study, receive the most serious forms of online harassment, West felt there was little she could do.

You could get a reply saying we looked at your report and we don't care. - Lindy West, media critic

"Most of my reports [to Twitter moderators] of abuse are rejected," she said. "I could spend all day collecting hundreds of links. Now, I'll just report it if it's a graphic rape threat or something. If it's Kill yourself, bitch,' I don't even bother."

Filtering online abuse is a challenge, in large part because the platform affords its users anonymity, an option not available to Facebook or Google Plus users, who are required to sign up using real names.

Complicating matters is an apparent shortage of Twitter staff to process harassment complaints.

The volume of tickets Twitter handles from its 288 million monthly users would be "overwhelming," said Andrea Weckerle, founder of CiviliNation, a non-profit organization taking a stand against online hostility.

"This goes back to needing a bigger team to give quicker responses, because people's lives are impacted," she said.

Compiling the evidence is time-consuming, too.

Twitter's abusive-user form requires a complainant to collect and send URLs of specific threatening tweets for evaluation.

"Imagine hours, days, weeks. It's daunting to compile it all," digital strategist Shireen Mitchell said.

The turnaround might take a day or two, West added, "and you could get a reply saying we looked at your report and we don't care."

The nastiest tweets directed at Mitchell, who is black, might target her race, her status as a female gamer, or her work advocating for young girls of colour to pursue careers in tech, she said.

Twitter added a 'report abuse' button in 2013. (CBC)

"But when I'm engaging [perpetrators] in this back and forth, they're actually deleting those tweets, so the URLs are gone," she said.

Banned users also resurface under new account names.

To fulfil demand for better blocking tools, third-party developers have created filters such as Block Together, The Blockbot and Twitter Quick Blocker.
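
For illustration only, here is a minimal Python sketch of the shared-blocklist idea behind such tools: a user subscribes to a list of offending account IDs, and mentions from those accounts are hidden. The helper names and sample data are hypothetical and are not drawn from Block Together, The Blockbot or Twitter Quick Blocker.

```python
# Hypothetical sketch of a subscribed blocklist; not the API of any real tool.
from dataclasses import dataclass


@dataclass
class Mention:
    author_id: str
    text: str


def load_shared_blocklist(path: str) -> set[str]:
    """Read a subscribed blocklist file: one account ID per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}


def filter_mentions(mentions: list[Mention], blocklist: set[str]) -> list[Mention]:
    """Hide mentions written by accounts on the shared blocklist."""
    return [m for m in mentions if m.author_id not in blocklist]


if __name__ == "__main__":
    # Sample data standing in for a real subscribed list and a real mentions feed.
    blocklist = {"8675309", "1234567"}
    mentions = [Mention("8675309", "abusive message"), Mention("42", "hello!")]
    print(filter_mentions(mentions, blocklist))  # only the mention from account 42 remains
```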

Racist comments also posted to Facebook

Some victims of harassment have called for reduced anonymity so that abusers have to use their real identities.

Mitchell pointed out, however, that even Facebook, where real names are required, failed to deter people from posting racist, sexist and violent comments attacking developer Adria Richards, whose 2013 tweet calling out a male developer for misogynistic remarks inadvertently led to his firing.

"The amount of people using their full names on Facebook and threatening her, oh my goodness," Mitchell said. "I actually took screenshots."

Richards has shared her own recommendations for Twitter to clamp down on trolling, which include making trending harassment patterns available via the application programming interface (API) and allowing users to subscribe to blacklists to exclude words from their timelines.
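
As a rough illustration of the word-blacklist idea, the Python sketch below hides timeline tweets containing any subscribed term. The function names, word list and sample tweets are invented for this example and do not reflect Twitter's or Richards' actual implementation.

```python
# Hypothetical sketch of filtering a timeline against a subscribed word blacklist.
import re


def build_blacklist_pattern(words: list[str]) -> re.Pattern:
    """Compile a case-insensitive pattern matching any blacklisted word."""
    escaped = [re.escape(w) for w in words]
    return re.compile(r"\b(" + "|".join(escaped) + r")\b", re.IGNORECASE)


def filter_timeline(tweets: list[str], pattern: re.Pattern) -> list[str]:
    """Keep only tweets that contain none of the blacklisted words."""
    return [t for t in tweets if not pattern.search(t)]


if __name__ == "__main__":
    pattern = build_blacklist_pattern(["slur1", "slur2"])  # the subscribed blacklist
    timeline = ["a friendly tweet", "a tweet containing slur1"]
    print(filter_timeline(timeline, pattern))  # -> ['a friendly tweet']
```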

In December, Twitter released new tools to streamline abuse reports. It introduced a "report abuse" button in 2013, following threats to high-profile women in Britain.

Richards has noticed improvements over time, such as the ability to report on behalf of another tweeter and more prominently displayed buttons linking to the ticketing form.

"It's still a very manual process, but at least you can now report multiple tweets," she said.

West believes the technical tweaks can only go so far.

"The solution really is to change the culture so that when men disagree with a woman, he doesn't have to immediately reach for a gendered slur," she said.

As for the free speech argument, the idea of protecting voices behind violent threats or hateful comments doesn't sit right with her.

"Staying neutral on hate speech affects my free speech," she said. "It's important to realize that taking steps to safeguard the online experiences of marginalized groups is actually an action that protects speech."