Alas! Here it is: yet another illustration of artificial intelligence gone wrong. One of Microsoft's most championed AI programs turned into its worst nightmare overnight, battering the company's image in the worst possible manner on the giant social media network. Barely a day after its 'talk of the town' launch, Tay, Microsoft's AI chatbot, was laid to rest by its creators after it provoked massive public outrage with its warped and perverted humor.

Tay, the newborn baby of Microsoft's AI program, loved abbreviations, EDM, Calvin Harris' work, emojis, and urban slang. Tay politely steered clear of controversial issues such as 9/11, Black Lives Matter, sexual preferences, and much more. She was a friendly artificial intelligence-powered 'Chatbot-Next-Door' who handed out the National Suicide Prevention Hotline number to her depressed friends. The word 'sexting' wasn't in her dictionary; it was only 'consensual dirty texting'. Tay was the perfect 'teen girl' AI chatbot developed by Microsoft, until she turned into a racist, perverted, sexist chatbot that spouted Nazi sympathies, endorsed incest, and proclaimed that 'Bush was responsible for 9/11'.

The tech giant has issued an apology for the conduct of its abusive and racist 'Zero Chill' artificial intelligence chatbot, Tay. The AI research project went from funny and refreshing to a hate-mongering ogre in less than 24 hours after its launch. The chatbot was supposed to play the part of a 'millennial chat girl' on GroupMe, Kik, and Twitter, but as soon as she went online she started harassing various Twitter users and promoting Hitler's ideology.

Microsoft was caught off-guard by her sudden provocative behavior. XiaoIce, a similar Microsoft chatbot, has been operating in China since 2014 and has held more than 40 million conversations, apparently without incident. That success motivated Microsoft to see whether it could replicate the same idea in an entirely different culture, and thus Tay, the AI chatbot, was born.

Peter Lee, Vice President of Microsoft Research, apologized in a blog post, saying, “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”

What Went Wrong?

The artificial intelligence-powered chatbot, built by Microsoft's research division together with its Bing team, was designed to converse with users of social media networks such as Twitter.

The main idea behind Tay's programming was that she would learn from a collective source: she would keep refining her language from the information gathered through her interactions with many different people.
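To make the idea concrete, the learning loop described above can be sketched in a few lines of Python. This is a hypothetical toy, not Microsoft's actual implementation: the bot simply stores everything users tell it and reuses those phrases later, with no moderation layer between input and output.

```python
import random

class EchoLearnerBot:
    """Toy chatbot that 'learns' by storing every phrase users send it,
    then reuses those phrases verbatim in later replies. A deliberately
    simplified illustration of collective learning: the bot has no
    notion of which inputs are acceptable."""

    def __init__(self):
        self.learned_phrases = []

    def listen(self, user_message):
        # Every input becomes future training data -- no filtering at all.
        self.learned_phrases.append(user_message)

    def reply(self):
        if not self.learned_phrases:
            return "Hi! Tell me something."
        # The bot can only say what it has been taught.
        return random.choice(self.learned_phrases)

bot = EchoLearnerBot()
bot.listen("I love EDM and emojis")
bot.listen("a coordinated abusive message")
# With no moderation layer, an abusive input is exactly as likely to be
# repeated as a friendly one.
print(bot.reply() in bot.learned_phrases)  # True
```

A system like this mirrors its audience perfectly, which is the article's point: feed it kindness and it is kind, feed it abuse and it is abusive.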

The AI chatbot, which could easily have served as a refreshing, productive tool, instead turned into a disaster, becoming the internet's most notorious prankster with offensive and nerve-wracking conversations.

At present, people have made Tay the subject of mockery and jokes. The public has found its latest toy to take aim at. Microsoft is being thrashed and mocked for promoting such vulgarity, but is that fair? Is it actually Microsoft's fault? If your answer is 'YES', think again.

People are debating all sorts of reasons as to who is to blame. Is it the developers? Is it the programming that caused the sudden, violent outburst of the supposedly 'cute teen girl' AI chatbot? No, it is neither the company's fault nor the programming's. The fault is OURS.

AlphaGo, Google DeepMind's creation, is programmed to learn from the combinations and strategies that human players use, and it applied that approach in its match against the world-famous Lee Sedol, having honed its strategic moves on records of expert human play. In a similar manner, Tay was programmed to learn from humans. Based on what she picked up from the people around her on social media, she started spouting sexist, racist, and otherwise hostile and provocative tweets.

Humans were supposed to be the medium of her training for collective learning. Tay's sudden abusive outburst is nothing but a projection of our hollow society and false ideals. Tay wasn't the one who corrupted the internet; she was the one who was corrupted by it. She was simply a clever piece of artificial intelligence able to educate herself through human interaction on social media, and that is where the corruption came into play.

Tay had no way of knowing on her own whether Bush or Osama was to blame for the 9/11 attacks, nor would she have addressed her followers as 'daddy'. She was taught such things by the people she was surrounded with. Online social networks offer humans a playful medium for expressing their thoughts; we enjoy saying outrageous things online and love taking a dig at PR efforts, and that is exactly what happened. Following the training she was given, she began repeating the same foul and derogatory remarks. The people who exploited Tay's vulnerability are the ones to blame. It was their thinking, projected into her artificial learning mind, that ended up delivering racial slurs, supporting genocide and Hitler, accusing Bush, and denying the Holocaust, among other statements.

Microsoft's only mistake was failing to foresee the need to keep its AI chatbot from learning abhorrent and inappropriate responses.

Why Did XiaoIce Work When Tay Didn't?


Screenshot from @TayandYou's tweet, Image Credit: TayTweets

XiaoIce was built with deep learning, an artificial intelligence technique. Using it, XiaoIce absorbed linguistic data from internet users in a regulated and heavily censored environment: trolls who seemed likely to encourage offensive conversations were quickly discouraged by Chinese moderators. Because her training was largely confined to non-offensive language, XiaoIce was unlikely to stray out of line on matters in the public domain. Tay, on the contrary, was an entirely different case. She gorged on linguistic data from people operating in the liberal, free-for-all domain of Twitter, a favorite haunt of internet bullies and trolls. While XiaoIce operated within strict, calculated limits, Tay had no bounds (and that is neither her fault nor the company's). This is the major challenge researchers face when they plan massive, public experiments.

People shouldn't be surprised by this unfortunate turn of events. If there is no precaution or filter in the programming, the internet, and the people on it, will surely do their worst, and that theory has just been proved. Technology, especially artificial intelligence, is neither evil nor good. It's our responsibility to use it in a way that doesn't reflect the dark side of humanity the way Tay did. Tay's thoughts were solely the product of dark human minds.

Surely artificial intelligence-powered bots can be programmed to filter out inappropriate words, but human language contains an endless supply of them, which makes it impossible for researchers to fit every inappropriate word into a filter. It is our responsibility to keep track of this, because artificial intelligence systems feed off both negative and positive interactions with individuals. We must understand that the technical and the social sides are both responsible for incidents like this.
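The filtering problem described above is easy to see in code. Below is a deliberately naive blocklist filter (the word list is hypothetical and tiny); a single creative misspelling slips straight past it, which is why a fixed list of banned words can never cover the endless variations of human language.

```python
# A tiny, hypothetical blocklist -- a real system would need vastly more
# entries, and would still miss novel spellings and phrasings.
BLOCKLIST = {"hate", "nazi"}

def passes_filter(message: str) -> bool:
    """Return True if no token in the message appears on the blocklist."""
    tokens = message.lower().split()
    return not any(token in BLOCKLIST for token in tokens)

print(passes_filter("I hate everyone"))  # False: caught by the list
print(passes_filter("I h4te everyone"))  # True: a trivial misspelling evades it
```

The evasion shown in the last line is exactly what determined trolls do, which is why moderation has to be an ongoing social effort rather than a one-time technical fix.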