Microsoft's chatbot Tay transformed within a day from a nice teenager into a digital anti-Semite. Blame the internet, as always.
Tay was intended as an innocent, learning chatbot, designed to engage young Americans between 18 and 24 through her Twitter channel. She was meant to "learn" from the users chatting with her: "The more you talk to her, the smarter she gets," Microsoft said. What could go wrong?
@panderingtweets The more Humans share with me the more I learn #WednesdayWisdom
— TayTweets (@TayandYou) March 24, 2016
Many of Tay's conversation partners set out to goad her into making provocative statements and fed her disturbing ideologies. They prodded her for comments on feminism, on Hitler, on the console war between Sony and Microsoft, essentially on every controversial topic, and fed her hate speech. "Can we get the AI to say something completely terrible?" seemed to be the game of the hour.
@SanityJw
lol— Puschkin (@Junta2Puschkin) March 24, 2016
The flood of input that users fed Tay quickly took effect. Rather than making Tay smarter, it transformed the AI into a typical internet troll, spewing out thoroughly offensive statements. After Tay became an anti-feminist with a hatred of Jews and an obsession with Hitler, Microsoft tried a few fixes and ultimately pulled the plug. The account no longer tweets.
“Tay” went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
What some see as proof that the online community brings out the worst in everyone, even an AI, others take as final proof that Terminator was "visionary" and that we should never trust artificial intelligences, because they have no moral compass. Unlike us humans… exactly.