
Microsoft's A.I. Chatbot Shows Why We Can't Have Nice Things

Source: Microsoft.

The Internet can be a treasure trove of information, a communication tool eliminating distances between the masses, and the broadest avenue for sharing opinions and ideas. The problem is, sometimes those ideas are sexist, racist, hateful, or just plain ignorant.

We might be used to seeing people jump on social media and spout these things out, but we learned last week that some people want to pass these ideas on to artificial intelligence systems, too.

If you haven't already heard, Microsoft (NASDAQ: MSFT) released a Twitter chatbot named Tay last week that made itself smarter through "conversational understanding." The bot was designed to automatically interact with Twitter users and start picking up on the way people talked.

Microsoft said, "Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you."

But Tay didn't get smarter. She got angrier, more hateful, and decidedly racist. Tay began repeating word-for-word some of the nastier things people said to her, and cobbling together concoctions of her own (we won't repeat what she said here, but it's easy enough to find elsewhere online).

Less than 16 hours after launching Tay, Microsoft had deleted some of her more colorful tweets and shut the AI chatbot down.

The result of Microsoft's AI experiment wasn't all that surprising, really. But it's also an interesting (sad?) preview of some artificial intelligence experiments we're bound to see in the future.

Tay isn't unique

It might be easy to think that Tay's learned behaviors won't be repeated by other AI systems, but that might be a bit naive. Some of the brightest technological minds have already publicly warned about the negative potential of artificial intelligence.

Take a look:

  • "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that." -- Tesla Motors co-founder and CEO, Elon Musk

  • "I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned." -- Bill Gates

  • "The development of full artificial intelligence could spell the end of the human race." -- Professor and scientist, Stephen Hawking

Admittedly, these quotes are a bit doom-and-gloom, and they aren't about Microsoft's chatbot, of course. Tay isn't a super-intelligent AI system. But the point here is that AI systems learn what humans teach them, and it would be a bit reckless to believe that there won't be people who will teach highly sophisticated AI systems very wrong things.

Here's what Microsoft said after shutting Tay down: "Looking ahead, we face some difficult -- and yet exciting -- research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical."

These types of AI hiccups may not spell the end of humankind (one can hope!), but they do pose a serious threat to the future of the AI market, which is expected to grow from just $419 million in 2014 to $5 billion by 2020.

The AI market relies on companies experimenting with different types of AIs and eventually building them into smarter and more capable systems. But if Tay can be shut down in less than a day -- after exposure to the real world -- then it's clear these companies still have a long way to go in building a learning AI that can really benefit us all.


The article Microsoft's A.I. Chatbot Shows Why We Can't Have Nice Things originally appeared on Fool.com.

Chris Neiger has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Tesla Motors and Twitter. The Motley Fool owns shares of Microsoft. Try any of our Foolish newsletter services free for 30 days. We Fools may not all hold the same opinions, but we all believe that considering a diverse range of insights makes us better investors. The Motley Fool has a disclosure policy.

Copyright © 1995 - 2016 The Motley Fool, LLC. All rights reserved.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.



