
Microsoft chatbot Tay accidentally turned back on, spams Twitter

Almost a week after being shut down for spewing racist and sexist comments on Twitter, Microsoft Corp's artificial intelligence 'chatbot' called Tay briefly rejoined Twitter on Wednesday only to launch a spam attack on its followers.

Artificial intelligence chatbot shut down for racist, sexist tweets after learning from internet. (Microsoft/Twitter)

The incident marks another embarrassing setback for the software company as it tries to get ahead of Alphabet Inc's Google, Facebook Inc and other tech firms in the race to create virtual agents that can interact with people and learn from them.

The TayTweets (@TayandYou) Twitter handle was made private and the chatbot stopped responding to comments Wednesday morning after it fired off the same tweet to many users.

"You are too fast, please take a rest...," tweeted Tay tohundreds of Twitter profiles, according to screen imagespublished by technology news website The Verge.

The chatbot also tweeted that it's "smoking kush," a nickname for marijuana, in front of the police, according to British newspaper The Guardian.

Tay's Twitter account was accidentally turned back on while the company was fixing the problems that came to light last week, Microsoft said on Wednesday.

"Tay remains offline while we make adjustments," a Microsoftrepresentative said in an email. "As part of testing, she wasinadvertently activated on Twitter for a brief period of time."

The company refers to Tay, whose Twitter picture appears to show a woman's face, as female.

Last week, Tay began its Twitter tenure with a handful of innocuous tweets, but the account quickly devolved into a stream of anti-Semitic, racist and sexist invective as it repeated back insults hurled its way by other Twitter users.

It was taken offline following the incident, according to a Microsoft representative, in an effort to make "adjustments" to the artificial intelligence profile. The company later apologized for any offence caused.

Social media users took to Twitter to comment on the latest spate of unusual behavior by the chatbot, which was supposed to get smarter the more it interacted with users.

"It wouldn't be a Microsoft product if it didn't crash rightafter it booted up," tweeted Jonathan Zdziarski (@JZdziarski) onWednesday.



Andrew Smart (@andrewthesmart) tweeted, "To be honest, I am kind of surprised that @Microsoft did not test @TayandYou more before making it public. Nobody saw this coming!?!"

According to its Twitter profile, Tay is "an artificial intelligent chatbot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding."