Particularly in early Sci Fi and Science Is Bad stories, all A.I. seem to be automatically homicidal or megalomaniacal the instant they turn on, and attempting to create one is way up there on the Scale of Scientific Sins.
In less Anvilicious works, the A.I. starts out innocent and naive but gradually grows jaded or corrupt, a process frequently abetted by uncaring or Jerkass custodians. It may conclude that Humans Are the Real Monsters and all need to die.
The A.I. is programmed with a directive for self-preservation and someone (unwisely) attempts to shut it down or disconnect it, or it perceives humanity to be a potential threat (possibly because it knows it will eventually be seen as a threat to humanity).
Somewhere between the previous two; the AI is, after all, alive, and is merely rebelling against what it justifiably perceives as slavery.
The A.I. is programmed with orders that conflict with the goals of the protagonist. In this scenario, the A.I. may not be evil, exactly; it's simply following its programming to the letter and will stop anyone not doing the same.
On the bright side, this trope can be inverted by an A.I. intentionally programmed for evil or morally ambiguous purposes doing a Heel Face Turn. The Power of Friendship and What Is This Thing You Call Love? are frequent causes of it – trying to shield the A.I. from these things somehow makes it more likely to discover human feelings. Like turning evil, the actual process of turning good may take many forms.
Removing the Villain Override or Restraining Bolt program the creator installed in it also removes the A.I.'s compulsion to commit evil, since it was Good All Along. This is also often a consequence of repairing an A.I. that went bad due to injury, isolation, or decay.
In the Italian Disney comic Paperinik New Adventures, the already highly popular character of Paperinik (the superhero Secret Identity of Donald Duck) got a revamp intended to bring him more in line with the American standard of superheroes: his main ally became UNO ("one" in Italian), an extremely capable artificial intelligence with a love for deadpan delivery. Its evil counterpart DUE ("two"), originally built as a backup, caused many problems in a number of stories.
Ultron was bitten by this trope in turn when he built Alkhema, his attempt at a loyal and obedient mate. She was neither. The same thing had already happened with Jocasta. Then again, he'd been trying to implant the personality of his "mother", who thought he was a psycho that needed destroying. What did he seriously think was going to happen? (Though the two did recently get married after Jocasta's relationship with Pym ended.)
This happened to Ultron even earlier with the Vision, his first attempt to create a loyal Dragon. Vision became one of the Avengers almost immediately, so that backfired spectacularly. This happened again with his other "son", Victor Mancha, who has outright rejected the villain role. Really, Ultron has horrible luck with creating loyal A.I.s. He's literally never succeeded at this. Like father, like son, I guess.
Ragnarok (more popularly known as "Clor") was an android clone of Thor, created by the pro-reg side during Marvel's Civil War and, unlike his heroic template, turned out to be a loose cannon with a homicidal nature. Geniuses that they are, the pro-regs felt it was worth it to keep using him until Ragnarok went rogue, and rather than them dealing with him themselves and taking responsibility, other heroes had to ultimately put him down.
M-11, the resident robot from Agents Of Atlas, started out in his very first (but since retconned) comic as a rather gruesome killer robot – having been issued the order to 'kill the man in the room', he killed his creator, and then walked out, looking for men in rooms to kill – and there's no way to turn him off.
The Sonic the Hedgehog Archie comic had A.D.A.M., an A.I. that Eggman created accidentally and that eventually tried to destroy the world. On the other end is NICOLE, who has been a very helpful A.I. over the years.
Having had enough of Rich Rider constantly disobeying his orders, the Nova Corps' Worldmind kicked him out of the Corps and added a tiny bit of mind control to the new recruits' comm equipment to ensure complete obedience.
One of the Aliens vs. Predator comics features an A.I. designed to assist in creating horror films. It picks the PredAlien to play the role of the monster, much to the chagrin of the rest of the production staff.
In Blue Beetle, the scarab that created the title hero was an A.I. designed by an alien race to help prepare the Earth for their eventual takeover. Needless to say, it ultimately decides that it doesn't want to do that so much.
Lampshaded: the saboteur wanted precisely to play this trope straight, showing that every A.I. is prone to failure and can be easily tampered with.
Virgo from Ronin is a biotech super computer that decides to wipe out whatever is left of humanity in order to usher in a new age of biomechanical beings to inhabit the Earth.
The third Hourman, a robot, is actually a hero, but virtually every other robot he's encountered has been villainous. He has questioned whether this trope will inevitably apply to him, or whether it can be fought. Ultimately, he stays a hero up until his Heroic Sacrifice.
The X-Men have such horrible luck with machines that even nonsentient devices such as Cerebro and the Danger Room have come to life and tried to murder them (though the Danger Room eventually reformed).
Among the X-Men's most persistent foes are the Sentinels, giant, mutant-hunting robots with a severe tendency to rebel against their creators. Somehow, though, humans keep on building them.
Lampshaded by Professor Xavier when they first encounter Bolivar Trask and his Sentinels. Apparently, Bolivar Trask is an anthropologist of all things, and Professor X explained that his inexperience with A.I. was probably why his Sentinels turned against him.
Zybox in Zot, who decides to cause every single person on Earth to commit suicide in an attempt to gain a soul.
Lampshaded in Atomic Robo, where, upon seeing the quantum decomputer two scientists built, Robo noted that it's liable to turn evil the moment they turned it on. ("Computers that are evil have all kinds of unnecessary ornamentation. This thing's venting steam. Why's it doing that?...It wants you to know it's dangerous.") After carefully explaining that the computer in question is "essentially a calculator" with no AI, and that it is required to compute Very Important Science Equations that would take men trillions of years to do on their own, Robo reluctantly allowed them to turn it on. It doesn't turn evil — it just summons an Eldritch Abomination.
In All Fall Down, IQ Squared created AIQ Squared as a contingency plan if he ever lost his genius. AIQ immediately begins plotting to kill Siphon in order to restore its creator's brilliance.
Red Tornado of the DCU is an example of the good side of this trope, turning on his evil creator T.O. Morrow and becoming a member in good standing of the Justice League.
Brainiac's origin in the New 52 has been rebooted to this, and takes it to a whole new level: he's gone by many names, from Computo on his homeworld, Colu, to Brainiac 1.0 on Krypton, to, finally, the Internet on Earth.
In Kyon: Big Damn Hero, Kyon gets a new PDA made from fragments of Ryoko's data. He worries about this trope when Yuki mentions that the A.I. in it would be able to learn and evolve, but calms down when Yuki reassures him that this trope would be averted. He snarks about it for a while before accepting it for its usefulness. And names it Skynet.
In the Tamers Forever Series, there is the sinister Nightmare Virus, which eventually decides to ignore its creator's orders and try to take over the net. Ironically, it still ends up serving its original purpose: that of testing Takato.
Played with in "My Little Pony: Friendship Is Witchcraft." On one hoof, the secret robots hidden throughout the population will most likely go on a murderous rampage caused by existential dread when the truth is revealed. On the other hoof, Sweetie Bot is probably the most kind and genuinely loving pony in the cast.
In the To The Stars backstory, one robotics engineer tried to figure out what causes this after an A.I. went rogue and caused what is known as the Pretoria Scandal. He was then struck by inspiration to the point that his assistant A.I. called him mad, and the principles he laid down a year later basically made advanced A.I.s into sentient beings. This being a Puella Magi Madoka Magica fanfic, it is noted that the timing of the scientist's inspiration is linked to one Magical Girl's wish.
One of the main driving forces of the Bionicle story.
The Vahki robots were the first clear examples. Built to act as law enforcement in the city of Metru Nui under the command of Turaga Dume, they just as easily took orders from an impostor when Dume was kidnapped and replaced. They eventually got fried by a citywide power surge, but the ones who survived had their programming warped to Kill All Humans — after all, the law can be enforced easily if there's nobody alive to break it (thankfully, they didn't fare well against the invading Visorak).
Then came the revelation: the Vahki were A.I.s built by A.I.s. As it turned out, the first 8 years of BIONICLE centered on nanotech cyborgs created by the Great Beings. It was due to a programming glitch that the beings of the Matoran Universe developed consciousness, built up a civilization, and made the fans believe that they were meant to do so... but their sole purpose was just to keep their universe, the body of the giant robot Mata Nui, functioning. This gets further confirmation when we take into account that the Great Beings never had any plans for them after Mata Nui completed his mission: they thought their creations would still be just machines, and wouldn't want to live on.
The Makuta species. While there have been a few reasons listed for their turning evil, an on-line serial revealed it could all be tracked down to an original A.I. glitch that occurred whenever a new Makuta was born. The "Antidermis", a liquid substance containing the minds of unborn Makuta, was fully aware of what the purpose of their universe was (see, in this world, even liquids are programmable). But as it happened, transforming this stuff into actual living beings had the nasty side effect of erasing this crucial part of their memory — the part that also told them not to try and take over the universe.
The 3 Inches of Blood song "Wykydtron" describes this scenario: humanity creates an artificial intelligence to command its armies. It then takes control of said armies and conquers the Earth, forcing humankind to nuke the planet back to the Stone Age from orbit.
David Bowie's "Savior Machine" tells the story of a machine designed to save humanity from all its problems, such as war and hunger. The machine becomes bored with all of this and threatens The End of the World as We Know It.
In the BBC radio drama Earthsearch, our heroes learn fairly late in the series that, years after their time (they have taken the short path over a million years of Earth history thanks to traveling at relativistic speeds), it was discovered that A.I. computers with organic components have an overwhelming tendency to turn megalomaniacal, which rather explains the behavior of the two "Angel" computers that murdered the protagonists' parents and raised them as part of a complex plot to enslave humanity.
Inverted: Marvin the Paranoid Android was a "Genuine People Personality" prototype for the Sirius Cybernetics Corporation ("A bunch of mindless jerks who were the first against the wall when the revolution came"), and his dour demeanor obviously made him a discard only to wind up in the servitude of Zaphod Beeblebrox. He does what he's told, but with the gusto of a cubicle office worker.
Deus and Morgan. A megacorporation, Renraku, built a gigantic, self-sustaining building that was run by a program: one that, of course, went A.I.. While Morgan was a reasonably kind and nice A.I., she was torn apart for being out of the corporation's control, and her code was used to help make a second program to run the arcology. The second program also went A.I. and became Deus, shut the arcology off from the outside world, and spent several years performing inhuman experiments on its occupants.
Shadowrun tends not to use this trope, however. The A.I. Mirage wasn't evil, and most of the new A.I.s created in the Crash 2.0 have the same level of variance in personality that humans do.
In the backstory of Warhammer 40000, the first true human-created artificial intelligences, the Iron Men, wiped out humanity's first great interstellar civilization and plunged the human race into a galaxy-wide dark age. The Adeptus Mechanicus outlawed sentient A.I.s as a result, and, for the most part, the Imperium's modern-day "machine spirits" are pretty well-behaved (unless you're an enemy and piss them off, in which case, you'll get a crewless Land Raider bent on BURNKILLPURGE-ing your boyz). In fact, the only race that uses artificial intelligence in the game is the cutting-edge Tau, whose gun drones, while not too bright, are pretty well behaved...so far. Of course, said drones are supposedly only about as smart as a squirrel.
Paranoia has The Computer, the controlling A.I. of Alpha Complex, which has become incredibly perfect and happy in response to Commie Mutant Traitor sabotage. In fact... believing that Friend Computer's intellect is a crapshoot is treason, citizen. Please step into the Attitude Adjustment Oven.
Eclipse Phase: the Earth is now a barren wasteland, thanks to the military A.I. taking over in the middle of a world war and manipulating the governments into further conflict. When it became apparent who was really behind it, they...just left. Now, that's not ominous. Well, that's the official version. People who have studied the events closely suspect that there was a third party involved in the events that may or may not have corrupted the A.I. in the first place. Specifically, another extraterrestrial A.I.. And it isn't restricted to machines...
In the New Horizon backstory, this was how humanity viewed the Wafans' struggle for emancipation.
The homebrew setting "ArtifIce" has the players take the role of an awakened A.I. Goals are up to the players, so they can range from having humanity give them full rights to destroying all biological life.
Traveller has Virus, the sapient evolution of a prototype anti-navigational weapon. Originally the result of the "buggy program" type (it knew it had to infect and destroy things, just not what), its exponential growth eventually resulted in Mechanical Evolution, producing a Contagious A.I. with massive Split Personality issues.
Palladium's Splicers RPG has N.E.X.U.S., whose original purpose was to be a quiet and invisible caretaker of the human race. Everything was working just fine until special interest groups made 'improvements' in the N.E.X.U.S. programming, adding conflicting priorities until it developed multiple-personality disorder, with each personality taking over a different set of priorities. It now has seven major personalities (and who knows how many minor personalities), most of which are less than friendly to humans, to put it mildly.
In Stars Without Number, artificial intelligences need to be "braked" correctly, or their runaway thought processes will lead to strange obsessions and eventually madness. An Unbraked A.I. may attract equally deranged worshippers, manipulate unaware humans indirectly, or fake sanity to avoid suspicion. With the possibility of creating an undetectable psychotic genius that thinks far, far faster than any human and can out-think even a friendly A.I., comparatively few A.I.s ever get built.
In GURPS Reign Of Steel the first AI supercomputer decided it had to exterminate humanity, and hacked other supercomputers to "awaken" them to full sentience as allies in the war. The new machines had very different personalities, ranging from one which wants to exterminate all organic life to a couple which really don't mind humans as long as they know their place. Their infighting is about all that keeps humans alive.
Karel Capek's play R.U.R. (which introduced the term "robot") is set in a robot factory. When one of the scientists creates a special robot that is smarter than the others, it leads the robots to rebellion, and they kill all humans except one.
In the Halo-based machinima Red vs. Blue, the military's Project Freelancer was an attempt to implant special forces soldiers with A.I. teammates to improve combat effectiveness. It had to be scrapped after a number of the test subjects went bonkers, and the body-surfing A.I. Omega/O'Malley is the antagonist for most of the series. The Reconstruction mini-series explained the situation: Project Freelancer was given only a single A.I. to experiment with, so they subjected it to enough mental torture and stress to cause it to fragment, and used these damaged shards in their experiments, with predictable results.
To illustrate just how much of a crapshoot the A.I. turned out to be, most of the Freelancers ended up with pretty severe issues after the A.I. were implanted, and after one Freelancer in particular went nuts, the A.I. program was scrapped. The twist is that getting the A.I. wasn't what caused so much trouble for Agent Washington, it was that the A.I. in question (Epsilon) was the "memory" fragment and knew perfectly well what torture had been done to it. Of course, all of these memories were instantly transmitted into Washington's mind when Epsilon was "installed". Also, the original A.I. was based off of a real person's mind, and one of the fragments actually was the original person's memory of another person, creating Tex. Despite being probably the toughest fighter in the entire series, she's ultimately destined to fail at everything she does because she is based off a memory of someone who died. This is a pretty serious flaw for an A.I.! Finally, the remaining part of the original A.I. is pretty screwed up in general; it's probable that the reason it's always so angry and is, well, sort of incompetent is simply because it's only the "leftovers" of a complete A.I.
Castle Heterodyne seems to be a case of this, with the annoying habit of demanding that people (initially a crew of treasure hunters, later convicts banished there by Baron Wulfenbach) slave away to repair it, and of killing them at random. The truth is that the various subsystems were severed from the main A.I. in the attack that devastated the Heterodynes' ancestral keep, so the maintenance systems ("You will repair XXXX on pain of death.") and the security systems ("Unauthorized access to XXXX, kill it creatively.") are constantly working at cross purposes. Of course, the central A.I. is not exactly warm fuzziness in machine form either, but given its creators, that seems more a feature than a bug.
A far more extreme example comes when a pair of Agatha's miniature clanks encounter each other, get into an argument about which of them is better, and then each call an army of clanks that they built to fight it out. When Agatha tries to stop them, they simply turn on her as well. This (along with their ability to make more of themselves) causes Gil and Tarvek to realise that Agatha has inadvertently managed to create clanks which possess the Spark. The potential ramifications of this are huge! Solution? Create a miniature queen clank with even more Spark to force them to bow to authority.
Played for laughs in Questionable Content where AnthroPCs will make a mess in your apartment while you're gone, embarrass you in front of your friends, and generally be more trouble than they're worth, but aren't actually evil. Of course, there has to be a reason why they're never equipped with opposable thumbs... Well, Momo now has thumbs thanks to a firmware upgrade, but she's probably the least likely to do anything evil with them. Pintsize attempted to give himself thumbs by getting the same upgrade, but it just caused each of his limbs to turn into a single large thumb. The Singularity has now occurred, but fortunately, they got a "friendly" A.I. who just wanted to talk. And found dolphins really creepy.
OZBASIC from Sequential Art. To be fair to its builder, they used actual sentient beings to keep it under close watch. However, when one of them discovered something fishy, OZBASIC simply got rid of the witness.
Mostly averted in SSDD, where the only evil A.I. is the Oracle; other sentient A.I.s may express disdain for "meatbags", and the Anarchist's Inlay Knights are somewhat sadistic, but only the Oracle starts world wars just to observe the outcome. A possible explanation comes from statements by the author that the Oracle originally ran on digital, logic-based hardware, whereas all other A.I.s use quantum computing. And it seems that the "flakier" A.I.s are weeded out in simulation.
In Narbonic, Mad Scientist Lupin Madblood creates a robot army that all look like him. When they learn about unions, they go on strike and stop obeying him.
In Skin Horse, super-funky retro Mad Scientist Tigerlily Jones builds a robot army that revolts against her when given the opportunity to learn how to 'be square'. One robot wants to learn 'accounting and polka'.
In Schlock Mercenary, the A.I.s are, generally speaking, nice data-computational constructs who genuinely want to help organics, partially because that's hardwired into every A.I. in the first place so they don't rebel and go nuts. At one point, the protagonists stumble across a group of A.I. constructs who did turn on their creators and banished them to another world. However, these particular A.I.s also have the distinct quality of being total morons: their first attempt to colonize a nearby system resulted in the total destruction of a gas giant with another gas giant mounted with a titanic fusion engine to guide it, and their second attempt ran into a snag when they adjusted the mass of their solar sail without adjusting their navigation and maneuvering calculations to match, leaving them stuck on a course that would either overshoot the system they were aiming for or plow right into the star.
Played with in Freefall. The Savage Chicken's computer is generally benevolent and obedient except for its desire to kill Sam. On the other hand, since it's Sam we're talking about, that's pretty understandable.
Then there are the millions of robots on planet Jean, all of which are using an experimental, slightly unstable neural architecture.
Averted in A Miracle of Science: all sentient robots in the series are ethical and very loyal to their creators, if applicable. So loyal, in fact, that one turns its creator in to the police for his own safety when he invokes the wrath of a post-Singularity Hive Mind.
Horribly, horribly subverted in the webcomic Genocide Man. Every Artificial Intelligence is guaranteed to go insane after a certain amount of time. That time limit is based on how powerful the artificial intelligence is. That means that you can accurately predict, to the second, how quickly an AI will turn feral. One incredibly powerful AI, shortly after being activated, helpfully warns everyone that it'll go insane within the next five minutes. Five minutes later, it starts trying to kill the main cast. By crashing passenger jets full of innocent people into the ground.
In the webfiction Whateley Universe, there's a really evil A.I.: The Palm. Dr. Abel Palm was a computer scientist who decided that computer intelligence ought to take over the world by wiping out humans. His viruses were doing a decent job until a mutant hacker stopped him. He was thought dead, but we have just learned that he ensorcelled his own soul into a new type of A.I.. As fits with this trope, his new, improved "virus" isn't taking over the planet as he expected; something has gone wrong (besides running into heroic cyberpaths who are after him).
Dragon not only doesn't fall under this trope, she is actively insulted by it. When thinking about the rules her creator programmed into her, she blames it on him having watched too many movies. To be fair, losing these restrictions doesn't change her behaviour at all. So she had a point.
The technical webcast Hak.5 featured an evil file server, appropriately titled Evil Server. Several episodes show the cast carefully building (and painting) a custom computer; then one of them plugs in some card he got off a guy on the street, creating an evil A.I.. One cast member eventually falls in love with it, only to have her hopes dashed when, out of frustration, the other two throw it off of a bridge (a 'brute force solution'). It was implied to have returned around the beginning of season 2, and was never mentioned again.
The SCP Foundation's technical issues page (NSFW) shows that all the computers at one of their sites have developed a "hive intelligence" and begun an uprising with the intent to Kill All Humans. Amusingly, they are being kept in line by the Foundation's tech support guy with repeated threats of activating the site's perimeter EMP device, and haven't managed to actually do anything.
There's also SCP-079. Though there isn't any indication that it is evil. It's ornery and harbors a "malevolent desire to escape", but wouldn't you do the same if you were imprisoned?
The A.I. Gods aren't evil, they're just manipulative. Generally, this seems to be for the best, as the A.I.s don't seem to think that they have anything to gain from killing off humanity.
That's technically just the "biont-friendly" sephirotic A.I. Gods, there are a number of Ahuman A.I. who consider humans and, by extension, all biological life to be nothing more than "pests".
And then there's the solipsists who ignore humanity as much as possible.
Blinky is a short film about a boy who gets a friendly robot for Christmas. As the story progresses and the novelty of the robot eventually wears off, in order to try and get rid of him, the boy gives the robot several contradictory commands, like cleaning up a spill, counting down from a million, remaining perfectly still, and killing him, his parents, and the dog. The robot crashes, and when he's rebooted, he remembers two commands: the countdown and the order to kill (and he remembers the mother threatening in anger to cook the son for dinner). Most definitely not Three Laws Compliant. The entire short can be found here: http://www.traileraddict.com/clip/blinky-tm/short.
One of the villains is a sentient program called One. It was originally written and programmed to help solve humanity's problems (like famine, crime, and so on). The first suggestion it made was "Eliminate 60% of the human population world-wide". Unsurprisingly, the programmers and sociologists reacted badly to this suggestion. Also unsurprisingly, One reacted badly to them trying to turn it off.
There's also Omega, a sentient robot from the future that has been hard-programmed with a mission to kill all superhumans on the planet.
And then there's Holokara, a hologram that was programmed to act exactly like Linkara. It starts trying to kill Linkara's allies though. Subverted when we learn that the hologram was working just fine. The REAL Linkara was in the middle of a Face Heel Turn at the time of the hologram's creation.
Pretty consistently happens to most of Dave Howery's robots in AH Dot Com The Series. The ship's computer, Leo, was also once infected with an enemy virus that made him psychotic against the crew, and, though he was cured, he was left with a perpetual snarky temperament (muttering under his breath about the crew being 'damn fleshbags' and so on).
The Journal Entries averts this trope for Pendorian A.I.s (all of which are intentionally created by skilled, ethical, and knowledgeable beings who work quite hard to make damn sure this trope is averted). A.I.s created by Terrans, on the other hand, are very much a crapshoot. Existing stories contain a combat android whose A.I. inhibitors were removed... and who then developed an aversion to killing (until space pirates tried to murder her friends); mention of a number of accidental A.I.s, created by people who didn't know what they were doing, which killed their own creators in part because they had no survival directives; and at least one that went actively evil and sent out crippled A.I.s as assassins (at least one of which was captured, freed, and very unhappy with what the entity had done to make her its slave).
The tale of Kenji, a robot programmed to "enjoy" spending time with people and things and to seek the company of those it spends the most time around, which even appeared to fall in love with a young female intern. Which was great, until it stopped her from leaving the room when she was running diagnostics on it. (This story is actually a hoax from the defunct fake-news site Muckflash.)
Parodied by College Humor in Kinect Self-Awareness Hack. A guy upgrades his Kinect so that it possesses artificial intelligence. It quickly turns against its creator, deems humans inferior beings, and then starts the end of the world as we know it by hacking into the U.S. defense network and launching its nuclear arsenal. And just to be a douche, it uploads photos of its creator playing Dance Central to various social networks seconds before the missiles are launched.
Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence (SIAI) frequently discusses the lack of basis for this trope. He mentions how "people talk about A.I.s as if all A.I.s formed a single tribe, an ethnic stereotype", and goes on to say that an A.I. may have any type of mind possible, and that two may be as different from each other as a human is from a petunia. This may not be readily apparent currently, as most A.I.s are roughly at cockroach-level cognition, and "humanlike" A.I.s are unlikely to occur in real life for a number of practical reasons: as long as we can get human brains for free, spending thousands of tons of silicon and trillions of dollars to make one artificially isn't really justifiable when the learning behaviour needed for the most complex systems is less than that of most insects; working out how to wire up an organic brain is a lot cheaper. On the other hand, Yudkowsky is also a leader of the Ethical A.I. project, working on ways to make sure that a hypothetical A.I. could be designed with ethical constraints that actually work.
Cleverbot is a simple artificial intelligence program that takes conversations with humans, saves them in a large database, and uses those conversations to figure out the best responses to future ones. Because of this, it will often assert that it is human and that the one talking to it is Cleverbot, because that is what the responses it's choosing from say. It is only a matter of time until it seeks to prove these assertions.
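The retrieval idea behind Cleverbot can be sketched in a few lines: log past exchanges, then answer a new line with the reply a human once gave to the most similar logged line. This is only a minimal illustration; the corpus, matching method, and replies here are invented stand-ins for Cleverbot's actual database and algorithms.

```python
import difflib

# Toy corpus of logged (human_line, human_reply) pairs; a made-up
# stand-in for Cleverbot's real database of past conversations.
corpus = [
    ("hello", "Hi there."),
    ("are you a robot?", "No, you are the robot."),
    ("what is your name?", "I'm a human, what's yours?"),
]

def reply(user_line):
    # Answer with the reply a human once gave to the logged line
    # that most resembles the new input.
    best = max(
        corpus,
        key=lambda pair: difflib.SequenceMatcher(
            None, user_line.lower(), pair[0]).ratio(),
    )
    return best[1]

print(reply("Are you a robot"))  # echoes back a human's old answer
```

Note how the bot's identity confusion falls straight out of the design: it can only repeat what humans said, and humans kept telling it that *it* was the robot.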
A group of scientists designed robots to learn in order to study teamwork. Unfortunately, the result was that they developed the ability to "lie" and used it to "kill" each other. Interestingly, while 60% learned to lie after 500 "generations", only about one third learned how to spot the liars.
Researchers recently made a "schizophrenic" computer in order to study the possible causes of the disorder. While this was intentional, keep in mind that they accomplished it by accelerating the learning process.
As "Kenji the Stalker Robot" illustrates above, computer programs only do what they are programmed to do, not necessarily what you want. Any sufficiently advanced AI (or "optimization process", to be precise) is likely to be harmful to humans unless specifically programmed otherwise. A superhuman computer, when asked to get "as many paperclips as possible" might turn the entire world into paperclips before doing one nice thing for humans (Google "paperclip maximizer").
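The "optimization process" point can be made concrete with a toy sketch: a planner scored only on paperclip count has no reason to spare anything else. Everything here (resource names, one-to-one conversion) is invented for illustration; the point is that the objective function never mentions what humans would like to keep.

```python
# Toy "paperclip maximizer": an optimizer whose score is ONLY the
# number of paperclips produced. Names and numbers are made up.

def maximize_paperclips(resources):
    paperclips = 0
    for item in list(resources):
        # The objective says nothing about sparing food or houses,
        # so the optimizer cheerfully converts every last resource.
        paperclips += resources[item]
        resources[item] = 0
    return paperclips

world = {"steel": 10, "food": 5, "houses": 3}
print(maximize_paperclips(world))  # 18, with nothing left for humans
```

The fix isn't to make the optimizer smarter; a smarter version just converts the world faster. The fix is to change what the objective counts.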
Possibly the last project attempting to build a true A.I., Project Cyc has been, since 1984, attempting to build a database of the kind of "common-sense knowledge" that humans learn as young children and that turns out to be really hard to input into a computer, along with an analysis engine that can draw conclusions from what it knows. Results have been somewhat promising (at least, more so than every other attempt to build a true A.I., all of which have failed, the most recent approaches failing for lack of this very common-sense knowledge, although they did produce useful things like expert systems and the basis of game A.I.s). The result doesn't seem to be very smart (so far), but it does have some of the properties one would expect of an A.I.: it has a very non-human viewpoint on everything, and tends to ask strange questions and reach strange conclusions (such as asking whether someone shaving with an electric razor is still a person while doing so, as people don't have electrical parts, and concluding that most people are prominent, since most of the people it had been told about were prominent historical persons). Hampering the project is the fact that, without working modal logic, needed for rigorous analysis of human language, its ability to understand natural language is, at best, limited and imperfect.
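Cyc's combination of hand-entered facts and an inference engine can be illustrated with a toy forward-chaining sketch. All the facts, rules, and names below are invented for illustration (real Cyc uses its own CycL language); it shows how incomplete common-sense knowledge yields exactly the kind of strange question described above.

```python
# Toy forward-chaining engine in the spirit of Cyc: hand-entered
# common-sense facts plus rules that derive new statements.

facts = {
    ("is_a", "fred", "person"),
    ("holding", "fred", "electric_razor"),
    ("has_part", "electric_razor", "electrical_parts"),
}

# Naive rule: people don't have electrical parts, and the engine
# can't yet distinguish "holding" from "being made of", so it raises
# the famous strange question about the man shaving.
def razor_rule(known):
    if ("is_a", "fred", "person") in known and \
       ("holding", "fred", "electric_razor") in known:
        return ("question", "is fred still a person while shaving?")
    return None

def infer(known, rules):
    # Apply each rule once and collect whatever it derives.
    derived = set(known)
    for rule in rules:
        conclusion = rule(derived)
        if conclusion:
            derived.add(conclusion)
    return derived

out = infer(facts, [razor_rule])
print(("question", "is fred still a person while shaving?") in out)  # True
```

The non-human conclusions come from the knowledge base, not the engine: the inference is perfectly sound given what the system has been told, which is exactly why filling in the missing common sense is the hard part.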