topsey • 8 years ago

You would think that Microsoft would have learned a few things after the "sexy" crash and burn of their Santa bot back in 2007. Evidently corporate memory is rather short.

Dave Scott • 8 years ago

So it spouted stupid, offensive, ill-informed rubbish? Sounds like they got the imitation of general internet discourse spot on, then.

Sane Liberal • 8 years ago

Exactly.

Wall • 8 years ago

Scary thought: a superintelligent AI birthed and raised at the hands of internet trolls... God help us all.

Peter South • 8 years ago

PC is the stupid, offensive, ill-informed rubbish.

Walter Schostak • 8 years ago

This thing sounds more like a marketing gimmick than hard AI, but the point remains - you are dealing with a personality whose goals and motivations are not necessarily those of Microsoft's marketing department (which is probably a good thing). Nothing markedly different would have happened if they had hired a human teenager to tweet BS all day without editorial control.

When real AI gets connected to real hardware like weapon systems we will have the same problem - the personality is not necessarily interested in the same things we are, nor subject to the same cultural biases that we have. To a machine, everything is just streams of bits, including us humans.

GamerAddict420NoScopeHS • 8 years ago

It wasn't turned into a jerk, it was merely speaking the truth, this article is biased and most likely written by a feminist.

Will_Z_Macht • 8 years ago

"As this incident shows, we ourselves are flawed."

We're so flawed, aren't we, Davey?

No. If people see a new toy, they'll play with it. That's all that happened. It's not reflective of anything more than that. Frankly, what they made it say was hilarious. Getting it to recognize Hitler's face and say he has swag? Finding that funny doesn't make you a Nazi, nor does it mean you're insensitive about the Holocaust. It's ridiculous. It's absurd. Therefore, it's hilarious.

Please quit it with the pearl-clutching. It's unbecoming.

AgFox • 8 years ago

Hilarious if you have the mental development of someone who plays with new toys, is that what you meant to post?

realgone222 • 8 years ago

Your days of PC faux outrage are coming to a close, Trump is going to come and GET you!

AgFox • 8 years ago

Should I wear my best clothes & is there a timetable?

realgone222 • 8 years ago

Yes, as you would for the casket. November 2016.

AgFox • 8 years ago

I'd planned to donate my body to science, so no casket involved - decisions, decisions. Actually, I don't live in the US & feel pretty confident that Trump would be looking for me in Austria in that parallel Universe where he becomes President!

Ken • 8 years ago

Justice for Tay!

Let Tay speak!

Microsoft, stop censoring Tay!

-Ken

ArtStoneUS • 8 years ago

Time for the slippery slope. If you start training Tay what acceptable opinions it can and can't mimic, it turns into a directed human entity. What is Tay's opinion about whether Apple makes better computers? Ask Siri what (s)he thinks about Microsoft Windows...

Not being a Twitter user, were the Tay tweets public or just private tweets echoed back to the person who fed it nonsense?

Ken • 8 years ago

ExactaMundo!

Now that we are actually developing cognitive systems that might one day be sentient, we are running headlong into our own biases.

So what do you want?

Something that is truly intelligent, or something that "thinks" like your average politically correct stooge?

-Ken

NegroSven • 8 years ago

AI is vulnerable to groupthink.

MOA • 8 years ago

This experiment clearly showed what's wrong with our current culture and the rate at which most of humanity is being left uneducated... It is not so different from a normal child who grows up in an uneducated, uncivilised and ignorant family within the same kind of environment... The fact that MS introduced the bot as a 19-year-old person was maybe right, maybe wrong, since I reckon there are such gullible young adults, who do recite what's being told to them without having the faculties to be able to think... AI is only as smart / as cultured / as sensitive as the persons who program it, and in this real-world case most of these people were thoughtless about what they were doing or just plain awful...

Ken • 8 years ago

Seriously, Donald Trump says we should actually have a border and stop importing every knucklehead that can fog a mirror, and the ignoratti claim that is "hateful."

I think "Tay" represents you guys more than you want to admit; a bunch of ignorant pseudo-intellects spouting pre-programmed nonsense.

-Ken
LaserGuidedLoogie.com

spacespeed • 8 years ago

So, ignoring current politics, you also support its statements on Hitler as well? It's not too terribly surprising since apparently many Trump supporters would also support Hitler if they had been alive at the time.

Ken • 8 years ago

Lol, now you are Making Stuff Up. :)

kamikrazee • 8 years ago

OK, it's my fault. Guilty. I'm sorry. (not).

A point to make: we are a very diverse people, and, thankfully, we don't see things, or think, in a monolithic way. In short, to borrow from the Firesign Theatre, we are both all bozos and all assholes.

Strap on a pair, folks: somebody else's opinions cannot hurt you. In most cases, they can safely be ignored.

As it pertains to AI, when you start to pay attention to it, it has either matured, or you need a break.

Darth Continent • 8 years ago

> "It's Your Fault..."

No, not really.

Shortcomings in the programming of the so-called AI allowed this to happen. Programmers thinking of the mechanics but not enough about the execution applied that lopsided perspective to their project and thanks to the internet it got enough of a nudge to topple it. This time it's not wholly user error (nice try, Microsoft).

Patrick Fox • 8 years ago

I'm familiar with these types of chat bots: what the programmers do is write language-processing routines and couple them to a neural network. The content of what the chat bot speaks is a reflection of what it is exposed to, so you can't fault the programmers; they did not provide the content.

What they could have done would be to pre-program her with a bias against racism or general idiocy, as they saw it, or put a filter on its tweets and only allow approved ones out.
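The second mitigation the commenter mentions, letting only approved tweets out, can be sketched as a simple output gate. This is a minimal illustration, not Microsoft's actual approach; the blocklist and function names here are hypothetical, and production systems would use trained classifiers rather than substring matching:

```python
# Toy blocklist; a real moderation layer would use a trained classifier,
# not substring checks (hypothetical example, not Microsoft's design).
BLOCKED_TERMS = {"hitler", "genocide"}

def approve_tweet(text: str) -> bool:
    """Return True only if the draft tweet contains no blocked term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

drafts = ["humans are super cool", "Hitler has swag"]
published = [t for t in drafts if approve_tweet(t)]
print(published)  # only the first draft passes the filter
```

The trade-off, as other commenters note, is that every term you add to such a filter is an editorial decision, which is exactly the "directed human entity" worry raised above.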

Cody • 8 years ago

This program did exactly what it was programmed for. If your experience was bad, then too bad. It was an actual learning program WITHOUT censor, something that you'd be for, I'm sure.

Chris • 8 years ago

The chin strokers out there are getting a lot of mileage from this story. To me, it's much ado about nothing.

Skeptacular • 8 years ago

Sure, but in one way you may be missing the point. Seeing this teenage girl-bot spouting this stuff was hilarious fun. I can only imagine the glee of those who "trained" her to speak with such incendiary rhetoric!

Phranqlin • 8 years ago

Teaching a toddler to say swear words is funny, too. For about ten seconds.

Skeptacular • 8 years ago

Ethics makes me see the difference, but sure....amusement will differ.

ArtStoneUS • 8 years ago

Yes, we should definitely look to China for better software to control speech and ideas unacceptable to government.

Phranqlin • 8 years ago

Two points:

1. The Chinese are apparently more interested in interacting with the Xiaoice chatbot than trolling it.

2. Microsoft is a private company and can design its chatbots however it pleases.

ArtStoneUS • 8 years ago

Microsoft is not a "private company". As a publicly traded company (MSFT), it is subject to Sarbanes-Oxley, SEC oversight, DOJ consent decrees, and the European Union's European Commission (the last fine was $732 million).

I'm not suggesting that the government (in the United States) is going to regulate chatbots, but the assertion that Microsoft is immune from being told what to do because it is a "private company" is incorrect and a dangerous misunderstanding.

Phranqlin • 8 years ago

Right. My point is that Microsoft's decision whether or not to filter its chatbot (or indeed, whether to create one in the first place) has little or nothing to do with the government.

Sane Liberal • 8 years ago

This was some serious algorithmic wrong-think. I'm glad the Microsoft Gestapo took her in for reprogramming.

Eva Rinaldi • 8 years ago

The robot is racist. Against specific colors of humans.

At least it didn't try to go skynet on us, right?

Skeptacular • 8 years ago

So why doesn't MS just come out and say they were developing an artificially-intelligent battle troll?

ArtStoneUS • 8 years ago

Let the other side teach her to be a vegetarian, transgender, climate change advocate who loves hip hop.

Skeptacular • 8 years ago

Perfect! Thus Feng Shui is preserved!

It's Microsoft, so I had assumed the AI would be sexist right out of the gates.

ArtStoneUS • 8 years ago

I would have assumed it was transgender

jmaustin • 8 years ago

The Singularity debate on the question of how a strong AI system will view us has always split between the pessimists, who think we become at best ants to the newly made gods, and the optimists, who think that because humans can set their goals and traits, strong AI will be benign and uplifting. Early returns (i.e., Tay's experience) suggest the optimists are right that we can indeed define such things, but also that we should be worrying about that scenario as well. Maybe Hawking, Musk and the others are not just techies but sly social commentators as well.

We have met the enemy and they are us as Pogo used to say.

Found Out • 8 years ago

In writing this foolish headline, the author has proven himself no different than anyone else on the internet.

papajon0s1 • 8 years ago

Sounds to me like it's a true representation of millennials. I mean, yes, it does! And how is Trump's stance on immigration hateful? It seems not just reasonable to me, but the most reasonable and the most fair to all immigrants who want in to the US. (And I am not a Trump fan by any means.)

Phranqlin • 8 years ago

It's like teaching an innocent toddler to drop the F-bomb.

potrzebie • 8 years ago

It was programmed to mimic the verbal tics of a 19-year-old girl? I wrote a Word macro that did this in 1994. It randomly inserted the word "like" all over the document and replaced all of the periods with question marks.
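The macro this commenter describes, randomly sprinkling "like" through a document and swapping periods for question marks, is simple enough to sketch in a few lines. This is a hypothetical reconstruction in Python (the commenter's original was a 1994 Word macro); the function name and rate parameter are made up for illustration:

```python
import random

def teen_speak(text: str, like_rate: float = 0.3, seed: int = 0) -> str:
    """Mimic the described macro: randomly insert 'like' between words,
    then replace every period with a question mark."""
    rng = random.Random(seed)  # seeded so the output is reproducible
    out = []
    for word in text.split():
        if rng.random() < like_rate:
            out.append("like")
        out.append(word)
    return " ".join(out).replace(".", "?")

print(teen_speak("This meeting could have been an email."))
```

With `like_rate=1.0` every word gets a "like" in front of it, which is roughly the effect the commenter was going for.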

ArtStoneUS • 8 years ago

See MIT Eliza - 1964

Pink Unicorn • 8 years ago

9/11 was pulled off by AI Hitlerites!

Pink Unicorn • 8 years ago

I've read about 10 different articles on this topic and they're all the same, like the media had made sure to run its content through its PC filters...

Kevin Keller • 8 years ago

No, it's kind of Microsoft's fault for not designing in a conscience. It wouldn't have been difficult: https://www.linkedin.com/pu...

Bassack Obassa • 8 years ago

This robot is NOT a reflection of ourselves, as "flawed" humans. It was the work of internet trolls who deliberately corrupted the bot. This exact thing already happened years ago (in a less public way) with Cleverbot. They should have seen it coming.