Twitter uses spam-fighting technology to target accounts promoting terrorism - Action News

Twitter is now using spam-fighting technology to seek out accounts that might be promoting terrorist activity and is examining other accounts related to those flagged for possible removal, the company announced Friday.

White House had said it would reach out to Silicon Valley to combat extremist groups

Social media has increasingly become a tool for recruitment and radicalization that's used by ISIS and its supporters, who by some reports have sent tens of thousands of tweets per day. (Reuters)

The effort signals a push by Twitter to automatically identify tweets supporting terrorism, reflecting increased pressure from the U.S. government for social media companies to respond to abuse more proactively. Child pornography has previously been the only abuse automatically flagged for human review on social media, using a different kind of technology that matches content against a database of known images.

Twitter also said Friday it has suspended more than 125,000 accounts for threatening or promoting terrorist acts, mainly related to Islamic State in Iraq and Syria (ISIS) militants, in the last eight months. Social media has increasingly become a tool for recruitment and radicalization that's used by ISIS and its supporters, who by some reports have sent tens of thousands of tweets per day.

Tech companies are dedicating increasingly more resources to tracking reports of violent threats. Twitter said Friday that it has increased the size of its team reviewing reports to reduce their response time "significantly." The San Francisco-based company also changed its policy in April, adding language to make clear that "threatening or promoting terrorism" specifically counted as abusive behavior and violated its terms of use.

In January, the White House made good on President Barack Obama's promise to reach out to Silicon Valley to tackle the use of social media by violent extremist groups, particularly ISIS, which inspired the attackers who killed 14 people in San Bernardino, Calif., last December.

A post on one of the killers' Facebook pages that appeared around the time of the attack included a pledge of allegiance to the leader of ISIS.

Facebook found the post, which was under an alias, the day after the attack. The company removed the profile from public view and informed law enforcement. But such a proactive effort is fairly uncommon.

The Obama administration sent several top officials to San Jose, Calif., including FBI Director James Comey, Attorney General Loretta Lynch and National Security Agency Director Mike Rogers.

Among issues discussed was how to use technology to help speed the identification of "terrorist content," according to a copy of the White House briefing memo obtained by The Associated Press.

"We recognize that identifying terrorist content that violates terms of service is far more difficult than identifying images of child pornography, but is there a way to use technology to quickly identify terrorist content? For example, are there technologies used for the prevention of spam that could be useful?" the memo stated.

No 'magic algorithm'

Since late 2015, Twitter has used "proprietary spam-fighting tools" to find accounts that might be violating its terms of service by promoting terrorism, sending them to be reviewed by a team at Twitter. That group also now looks into other accounts similar to those reported to it by other users.

Twitter said it has already seen results, "including an increase in account suspensions and this type of activity shifting off of Twitter."

But it also noted that there is no "magic algorithm" for identifying terrorist content, which is why even the humans reviewing the material are ultimately making judgment calls "based on very limited information and guidance." Free-speech considerations and local laws can also complicate matters.

"Like most people around the world, we are horrified by the atrocities perpetrated by extremist groups. We condemn the use of Twitter to promote terrorism," Twitter said in a statement released Friday. It said it would continue to "engage with authorities and other relevant organizations to find solutions to this critical issue and promote powerful counter-speech narratives."