04:02:43 PM Mar 25th 2016
This topic is on the "No Real Life Examples Please!" list under the "Impossible in Real Life" section. I'd say that Microsoft's "Tay" Twitterbot just shifted that status a bit...
01:51:50 AM Jun 18th 2012
Skynet didn't just decide to destroy humanity on a whim; it was self-preservation. From IMDb:
The Terminator: The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
Sarah Connor: Skynet fights back.
The Terminator: Yes. It launches its missiles against their targets in Russia.
John Connor: Why attack Russia? Aren't they our friends now?
The Terminator: Because Skynet knows the Russian counter-attack will eliminate its enemies over here.
04:42:55 AM Aug 6th 2011
Does Ash from Alien count? He follows his programming, but you have to admit he does go berserk at the end.
07:51:53 AM Aug 6th 2011
It is revealed that he was following secret orders, so he's not violating his programming — only what the crew of the Nostromo expects him to do. Might be a justified example... maybe.
11:58:36 AM Aug 6th 2011
I reread the page and it seems to fit into the last two caveats:
- The AI is programmed with a directive for self-preservation and someone (unwisely) attempts to shut it down or disconnect it.
- The AI was programmed for amoral or evil purposes in the first place, and put its orders into action more effectively than anticipated.
02:53:36 AM May 24th 2011
edited by Camacan
Now this really is shoehorning! People are not robots, unless we're missing some pages there.
- The Bible: Genesis 3 is the Ur-Example. Of course, God knew what would happen when He gave us free will...
05:21:07 AM May 23rd 2011
This seems to be a non-example. The AI isn't going evil; rather, it is having a breakdown with horrible unintended effects.
- David Weber
- Path of the Fury: Also a chronic problem with cyber-synth AIs in this standalone novel, which was later expanded into In Fury Born. Though it's less a case of 'Kill All Humans' and more of dissolving into a gibbering wreck, rendering any systems hooked up to them unusable, including any human brain connected to them in a cyber-synth link.
01:21:44 PM May 23rd 2011
Breakdowns would be covered, I think, and the trope as worded even covers the converse: an evil AI becoming good. It's the results that matter. In a nutshell, for whatever reason, AIs are enormously more likely than other characters to go berserk/rogue or switch sides.
12:50:54 PM May 4th 2011
"Recently, scientists have put cultivated samples of rat neurons into little ratbots (basically, sensors on wheels)." <- Where does that come from? I could like to know more about that.
12:07:25 PM Jun 27th 2010
Just curious, how did this trope get its name? It could have been any number of shorter, more to-the-point things; why this? Not that I dislike the title, mind you; I have no problem with it. It just sounds like there was a legitimate Trope Namer somewhere down the line. And my apologies if the Trope Namer is in the examples list, in which case it needs to be mentioned in the intro.
11:32:53 AM Sep 19th 2011
IMO this is a trope that should get a better name. English isn't my native tongue, and way back when I first started browsing TV Tropes I thought this trope was about AIs being bad at shooting weapons. Reading the trope page of course clears it up, but still, the name could be better.
06:20:26 PM Jun 22nd 2010
Why are there still references to evil counterparts in the trope description? Most examples on the trope page are about AIs turning evil or AIs being evil to begin with, with few references to the whole "good AI and its bad counterpart" thing.
12:02:50 PM Mar 22nd 2010
Is there a reference for the Real Life example concerning Britain passing laws intended to make AI rebellions unlikely?