
Online harassment: 'Dedicated' trolls will find way around anti-abuse rules

Twitter's move to bolster its anti-abuse policy is a step in the right direction, say experts, but it also demonstrates the ongoing difficulty of quashing online harassment.

Efforts to clean up the tone of discourse on social media such as Facebook and Twitter rely as much on users as the platforms themselves, says author Jeff Jarvis. (Shutterstock)

Twitter's recent move to bolster its anti-abuse policy is a positive step in fostering more respect on the web, say observers, but it also demonstrates the ongoing difficulty of quashing online harassment.

Earlier this week, Twitter announced that in an attempt to identify and mitigate online harassment, it was improving its abuse-reporting system.



The company says the changes make it easier to report harassing tweets and more effectively block abusive accounts.

"The days of the Wild West are over, in terms of the internet it's starting to grow up, as are the social media platforms," says Kirsten Thompson, a counsel in the National Technology Group at Canadian law firm McCarthy Tetrault.

While she applauds Twitter's new safeguards, she is dubious that social media sites can totally silence the haters.

"Any dedicated troll out there will find a way around it."

Twitter's terms of use say that users "may not engage in targeted abuse or harassment," which can include "sending messages to a user from multiple accounts" or if "the sole purpose of your account is to send abusive messages to others."

In a post on the site's blog on Dec. 2, Twitter's director of product management and user safety acknowledged that the millions of daily tweets "can sometimes include content that violates our rules around harassment and abuse and we want to make it easier to report such content."

More effective reporting

Twitter seems to be responding to past criticism that its anti-abuse mechanisms were inadequate, says Jeff Jarvis, a social media commentator and author of Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live.

Jarvis says he has had run-ins with trolls and imposters on Twitter, and found that the site's existing forms made reporting these incidents "rather difficult."

"It was almost as if they were trying to discourage it, frankly."

He says one of the reasons there's so much trolling on Twitter is its open nature. While it is possible to make Twitter accounts private, most users choose to keep theirs public because the platform is often used as a self-promotion tool.

Users on Facebook and Google Plus can choose who reads their posts, but that doesn't necessarily make them impervious to abuse.

According to a recent survey in the U.S. by Rad Campaign, Lincoln Park Strategies and Craig Newmark, the founder of Craigslist, more than 60 per cent of respondents said they had been harassed on Facebook.

The survey also found that 42 per cent said they "were unsure how to respond effectively."



Jarvis says Twitter is taking a positive step in neutralizing insulting posts, but says "bad actors" can always sign up for a new account under a different name to continue their harassment.

He says that establishing civility online is the shared responsibility of social media sites and their users.

"If you look at what motivates trolls, it's attention, it's disruption, it's causing other people pain. So the first rule of internet activity is, don't feed the trolls," says Jarvis.

"The art of this is finding ways of giving trolls less attention."

'Don't feed the trolls'

When nude images of Jennifer Lawrence leaked online in the summer, many social media users took it upon themselves to shame others who shared them. (Getty Images)
One way of doing that, he says, is making it easier to block user accounts on a site like Twitter; another is taking more of a community-policing approach.

Jarvis says one example of this arose after nude photos of celebrities such as Jennifer Lawrence were leaked online this past summer. Many users on sites like Twitter and Reddit not only refused to share links to the photos, but also shamed people who did.

"That was the first hopeful sign in the development of a new norm," says Jarvis.

One of the reasons online threats are so prevalent is because they're easy to make and because the perpetrator is often anonymous, says David Fraser, a technology and privacy lawyer at the Halifax firm McInnes Cooper.

Some people have suggested banning online anonymity as a way of fostering greater respect on social media. But that argument ignores the benefits of a hidden identity for whistleblowers and people protesting oppressive governments, says Fraser.

If you insist on real names on Twitter, "you also have the possible effect of ruining what Twitter is," says Fraser.

Establishing a 'reasonable threat'

Cases of online harassment are increasingly showing up in courtrooms, but there's still a lot of debate about what constitutes a reasonable threat, says Fraser.

The U.S. Supreme Court is currently hearing a case involving a Pennsylvania man who made rape and death threats, in the form of rap lyrics, against his estranged wife on Facebook in 2010.

Anthony Elonis maintains his posts are free speech and that he never intended to follow through with his boasts; the prosecution argues that the question is not his actual intent, but whether his ex-wife genuinely feared for her life.

Recent changes to Canadian laws regarding online harassment have been driven by "a victim's rights agenda," but that doesn't mean it's easier to prosecute people who utter threats, says Thompson of McCarthy Tetrault.

Any court case revolving around online harassment needs to determine whether the victim believed it to be a reasonable threat, and that hinges largely on perception.

"The difficulty with the perception of the victim is that people perceive things differently. A rape threat against LeBron James will be very different from a rape threat against a 13-year-old high school student," says Thompson.

She says that when it comes to an online harassment case, a court will also weigh evidence that would confirm or discredit the severity of a given threat.

"For instance, if I say, 'I didn't mean it,' but I also happen to have made threats of shooting somebody before and have a loaded gun and a prior conviction, then saying I didn't mean it is probably not going to withstand evidentiary scrutiny.

"Similarly with a victim, somebody who is a target of these things can't simply retaliate by saying, 'He threatened to shoot me' if it's in the context of an ongoing relationship where there's constant joking about [shooting guns]."