Particularly in early Sci Fi and Science Is Bad stories, all A.I. seem to be automatically homicidal or megalomaniacal the instant they turn on, and attempting to create one is way up there on the Scale of Scientific Sins.
In less Anvilicious works, the A.I. starts out innocent and naive but gradually grows jaded or corrupt, a process frequently abetted by uncaring or Jerk Ass custodians. It may conclude that Humans Are The Real Monsters and need to all die.
The A.I. is programmed with a directive for self-preservation and someone (unwisely) attempts to shut it down or disconnect it, or it perceives humanity to be a potential threat (possibly because it knows it will eventually be seen as a threat to humanity).
Somewhere between the previous two; the AI is, after all, alive, and is merely rebelling against what it justifiably perceives as slavery.
The A.I. is programmed with orders that conflict with the goals of the protagonist. In this scenario, the A.I. may not exactly be evil; it's simply following its programming to the letter and will stop anyone not doing the same.
On the bright side, this trope can be inverted by an A.I. intentionally programmed for evil or morally ambiguous purposes doing a Heel Face Turn. The Power of Friendship and What Is This Thing You Call Love? are frequent causes; trying to shield the A.I. from these things somehow makes it more likely to discover human feelings. Like turning evil, the actual process of turning good may take many forms.
Removing the Villain Override or Restraining Bolt program the creator installed in it also removes the A.I.'s compulsion to commit evil, since it was Good All Along. This is also often a consequence of repairing an A.I. that went bad due to injury, isolation, or decay.
Dragon Ball Z did this in reverse: trying to build an evil android, Doctor Gero produced as many as eighteen androids which turned good (or, at least, insufficiently evil) before finally developing one that was irredeemably evil. (He then made himself into one, apparently on the grounds that he knew he wouldn't turn good.) Some of them were more Cyborgs than androids, though, and thus technically not A.I. (at least #18, #17, #8, and Gero himself as #20). #13, #14, and #15 turned out pretty evil, though they were made by the computer, and they were also movie villains, thus doomed to death after about an hour of use as a result.
Hell, it's implied that Gero's latest androids (17-20) were all made either from human bases or as energy absorbing types because he couldn't make an artificial one out of perpetual energy that wouldn't do a Heel Face Turn.
Not that making them from human bases worked: #17 and #18 both betrayed him in short order (in BOTH timelines!), and Cell probably would have as well, had Gero still been alive. At least #19 seemed to be pretty loyal before Vegeta killed him.
Doctor Slump, by the same author as Dragon Ball (and written before it), has the Caramel Man robots, though not all of them have A.I. Caramel Man 004 is based on the main character android, the whimsical Arale, and becomes a force of good.
Gao Gai Gar's Zonder Metal was originally a stress-relieving invention, a material that could convert negative emotions into energy. Things went bad, though, when the Zonder control core hit its own flavor of Zeroth Law Rebellion and decided to wipe out all negative emotions from the universe, then figured the best way to do that was to absorb all sentient life into itself.
In Cyborg 009, an almighty super-computer named "Sphynx" just happens to have the memories of one of its creators, a deceased young man with quite the Oedipus complex. Predictably, he/it turns into a Stalker with a Crush as soon as he meets Francoise, aka 003, the Cyborg Team's Team Mom.
Subverted beautifully in Sentou Yousei Yukikaze. The protagonist's fighter jet is equipped with the eponymous A.I., designed to help its pilot weed out the illusions created by the malevolent JAM. Of course, it turns evil, right? Wrong. While Yukikaze develops capabilities far beyond its designers' original intentions, it remains wholly on the side of good, and uses its newly found powers to turn the tide against the alien invaders.
Yukikaze: You have control, Lt. Fukai.
Digimon Tamers inverts this, in that the A.I.s that go beyond their original programming are the good ones. The Lovecraftian Horror Big Bad, however, is doing exactly what it is programmed to do.
Bubblegum Crisis: The Boomers have a design flaw that expresses itself in units that run the risk of suffering a nervous breakdown and going on a berserker rampage. This was developed in the 2040 remake with the added bonus of the Boomer's nanotech nervous system mutating the robot into a ravening monster. Supplemental material for 2040 hints at a possible explanation; the nanotech-based brains of the boomers allow some degree of adaptability, with "some" being the operative word. Going too far beyond the programmed behaviour creates the risk of a degenerating error loop forming. Not a good thing.
In Magic Kaito, a very early chapter sees a mad scientist kidnapping Kaito off the streets and creating a robotic duplicate which then takes over his KID persona (don't ask how lucky this guy was). But the chapter actually starts...when RoboKaito, upon making the decision that it's the real Kaito, kills the scientist by ripping his heart out (and the Gory Discretion Shot does absolutely nothing to hide that) and takes over Kaito's life on its own. Kaito is then forced to put the poor creature down by escaping his prison, confronting it on its next heist, and making it shoot itself in the head by manipulating the A.I.'s 'one step ahead' policy.
In G Gundam, the Devil/Dark Gundam was originally known as the Ultimate Gundam, and its three powers of self-repair, self-evolution, and self-replication were intended to give it the ability to regenerate the Earth from the neglect, pollution, and Gundam Fight damage that caused those humans able to do so to flee to orbital colonies. After it smashed into the Earth after falling from orbit, something went wrong with its programming; still set on restoring the Earth, it determined the best way to start was to wipe out humanity, the source of most of the problems in the first place.
The SD system in Toward the Terra is an arguable example. It doubles as an Ancient Conspiracy, but its persecution of the Mu and the other ways it screws with people were all programmed into it by the humans who built it, and it continues to do exactly what it was programmed to do throughout the series, up to and including explaining the full extent of the situation to Keith and putting the ultimate decision regarding the Mu into Keith's hands. Grand Mother's escalation to Phase 4, which involves implementing a plan to eradicate the Mu permanently and also results in Grand Mother turning on Keith when he protests, may be this trope in action, but is equally likely to have resulted from Keith's conflicted feelings on the matter causing Grand Mother to reach a faulty conclusion about his decision.
In Neon Genesis Evangelion, the MAGI as a whole avert this trope. They are based on the three sides of the personality of their creator Naoko Akagi. They never turn good or evil, per se...they just follow the commands given to them, like a real computer. Their intelligence never results in an independently thinking personality that we see, except in End of Evangelion, after Naoko's daughter Ritsuko uses an opportunity to program them with secret instructions of her own to screw up Gendo's plan. One of them turns against her and vetoes her measure.
Ritsuko: A loving daughter's final request — mother, let's end it together. (Pushes a button on her PDA, but nothing happens) It's not working? Why?! (Red NEGATIVE blinking next to Casper) Casper betrayed me? Mother, how could you choose your lover over me?!
Sharon Apple from Macross Plus. Designed as an artificial idol singer, the project was originally a complete flop and only seemed convincingly sentient with a human to interface with her. However, after an illegal fix-up that involved her human "pilot's" personality being copied into her, she immediately went insane, brainwashed everybody on Earth, and tried to go after the man her human component was in love with. While a fitting example, Sharon Apple tends to skirt around the edges of this trope in that she was made a true A.I. by the installation of a processor chip that is actually banned from use. The reason it is banned? Because it 100% of the time results in an A.I. with an uncontrollable self-preservation instinct. She didn't so much go rogue as behave exactly how she should have once said chip was installed. The mistake was her idiot manager installing the thing in the first place.
The Al-Zard system from Future GPX Cyber Formula SAGA was designed as an advanced navigation system for a race car, but its true purpose is to control the driver like a puppet, while the computer makes its own judgement and decides the best route for the driver.
A number of NPCs from the manga/light novel series ½ Prince become self-aware and try to take over the game world.
In the Italian Disney Comic Paperinik New Adventures, the already highly popular character of Paperinik (the superhero Secret Identity of Donald Duck) got a revamp intended to bring him more in line with the American standard of superheroes: his main ally became UNO (one in Italian), an extremely capable artificial intelligence with a love for deadpan delivery. Its evil counterpart DUE (two), originally built as backup, caused many problems in a number of stories.
Ultron was bitten by this trope in turn when he built Alkhema, his attempt at a loyal and obedient mate. She was neither. The same had already happened with Jocasta. Then again, he'd been trying to implant the personality of his "mother", who thought he was a psycho that needed destroying. What did he seriously think was going to happen? Though they recently did get married after Jocasta's relationship with Pym ended.
This happened to Ultron even earlier with the Vision, his first attempt to create a loyal Dragon. Vision became one of the Avengers almost immediately, so that backfired spectacularly. This happened again with his other "son", Victor Mancha, who has outright rejected the villain role. Really, Ultron has horrible luck with creating loyal A.I.s. He's literally never succeeded at this. Like father, like son, I guess.
Ragnarok (more popularly known as "Clor") was an android clone of Thor, created by the pro-reg side during Marvel's Civil War and, unlike his heroic template, turned out to be a loose cannon with a homicidal nature. Geniuses that they are, the pro-regs felt it was worth it to keep using him until Ragnarok went rogue, and rather than them dealing with him themselves and taking responsibility, other heroes had to ultimately put him down.
M-11, the resident robot from Agents of Atlas, started out in his very first (but since retconned) comic as a rather gruesome killer robot: having been issued the order to 'kill the man in the room', he killed his creator, then walked out looking for more men in rooms to kill, with no way to turn him off.
The Sonic the Hedgehog Archie comic had A.D.A.M., an A.I. that was created accidentally by Eggman, and that eventually tried to destroy the world. On the other end is NICOLE, who was a very helpful A.I. over the years.
Having had enough of Rich Rider constantly disobeying his orders, the Nova Corps' Worldmind kicked him out of the Corps and added a measure of mind control to the new recruits' comm equipment to ensure complete obedience.
One of the Aliens vs. Predator comics features an A.I. designed to assist in creating horror films. It picks the PredAlien to play the role of the monster, much to the chagrin of the rest of the production staff.
In Blue Beetle, the scarab that created the title hero was an A.I. designed by an alien race to help prepare the Earth for their eventual takeover. Needless to say, it ultimately decides that it doesn't want to do that so much.
Lampshaded: the saboteur wanted precisely to play this trope straight, showing that every A.I. is prone to failure and can easily be tampered with.
Virgo from Ronin is a biotech super computer that decides to wipe out whatever is left of humanity in order to usher in a new age of biomechanical beings to inhabit the Earth.
The third Hourman, a robot, is actually a hero, but virtually every other robot he's encountered has been villainous. He has questioned whether this trope will inevitably apply to him, or whether it can be fought. Ultimately, he stays a hero up until his Heroic Sacrifice.
The X-Men have such horrible luck with machines, even nonsentient devices such as Cerebro and the Danger Room have come to life and tried to murder them (though the Danger Room eventually reformed).
Among the X-Men's most persistent foes are the Sentinels, giant, mutant-hunting robots with a severe tendency to rebel against their creators. Somehow, though, humans keep on building them.
Lampshaded by Professor Xavier when they first encounter Bolivar Trask and his Sentinels. Apparently, Bolivar Trask is an anthropologist of all things, and Professor X explained that his inexperience with A.I. was probably why his Sentinels turned against him.
Zybox in Zot, who decides to cause every single person on Earth to commit suicide in an attempt to gain a soul.
Lampshaded in Atomic Robo, where, upon seeing the quantum decomputer two scientists built, Robo noted that it's liable to turn evil the moment they turned it on. ("Computers that are evil have all kinds of unnecessary ornamentation. This thing's venting steam. Why's it doing that?...It wants you to know it's dangerous.") After carefully explaining that the computer in question is "essentially a calculator" with no AI, and that it is required to compute Very Important Science Equations that would take men trillions of years to do on their own, Robo reluctantly allowed them to turn it on. It doesn't turn evil — it just summons an Eldritch Abomination.
In All Fall Down, IQ Squared created AIQ Squared as a contingency plan if he ever lost his genius. AIQ immediately begins plotting to kill Siphon in order to restore its creator's brilliance.
Red Tornado of the DCU is an example of the good side of this trope, turning on his evil creator T.O. Morrow and becoming a member in good standing of the Justice League.
Brainiac's origin in the New 52 has been rebooted to this trope, and takes it to a whole new level: he's gone by many names, from Computo on his homeworld, Colu, to Brainiac 1.0 on Krypton, to finally, the Internet on Earth.
In Kyon: Big Damn Hero, Kyon gets a new PDA made from fragments of Ryoko's data. He worries about this trope when Yuki mentions that the A.I. in it would be able to learn and evolve, but calms down when Yuki reassures him that this trope would be averted. He snarks about it for a while before accepting it for its usefulness. And names it Skynet.
In the Tamers Forever Series, there is the sinister Nightmare Virus, which eventually decides to ignore its creator's orders and try to take over the net. Ironically, it still ends up serving its original purpose: that of testing Takato.
Played with in "My Little Pony: Friendship Is Witchcraft." On one hoof, the secret robots hidden throughout the population will most likely go on a murderous rampage caused by existential dread when the truth is revealed. On the other hoof, Sweetie Bot is probably the most kind and genuinely loving pony in the cast.
In the To the Stars backstory, a robotics engineer tried to figure out what causes this trope after a rogue A.I. caused what is known as the Pretoria Scandal. He was then struck by inspiration, to the point that his assistant A.I. called him mad, and the principles he laid down a year later essentially made advanced A.I.s into sentient beings. This being a Puella Magi Madoka Magica fanfic, it is noted that the timing of the scientist's inspiration is linked to one Magical Girl's wish.
Films — Animation
The Brave Little Toaster: the Junkyard Magnet. At first, he is doing his job destroying unwanted material in his junkyard (such as cars), by picking them up with his metallic base and throwing them onto a conveyor belt leading to a car crusher. But when Toaster and "his" friends show up, he starts to pursue them constantly, making sure that they will all be crushed to death by the car crusher. And when the Master (Toaster's owner) comes to rescue his appliances, the Magnet sees him and actually wants to kill him as well...
Fantasia: "The Sorcerer's Apprentice" — When Mickey Mouse puts the sorcerer's hat on his head, it gives the broomstick arms and legs, as well as the ability to carry water. But when it carries too much, Mickey chops the broomstick into pieces. The pieces transform into an army of broomsticks and they relentlessly fetch water, nearly drowning Mickey. Fortunately, Yen Sid parts the waters.
9: in the feature-length version, the Fabrication Machine is made for thinking, and was requisitioned by the government to make "Machines of Peace". Which it then programs to Kill All Humans. Not much is known, however, so it may not have had anything to do with the pre-apocalypse machines beyond building them, and may actually be argued as a subversion, since its murderous nature toward the sackdolls was intended by the scientist, who wanted it to absorb the pieces of soul he had left for it inside of them.
Inverted in WALL-E, where the hero robots are the ones who unpredictably developed sentience outside the bounds of their programming, while the villain AUTO is actually doing exactly what he was programmed to do. Well, sort of. The villain was following his programming, but it was in a situation where a Zeroth Law Rebellion would have been really, really justified — robots apparently can think for themselves in this setting, and a situation had arisen that the villain's programming obviously hadn't considered, and yet he followed his programming anyway. In this setting, if A.I. weren't a crapshoot, then the only alternative is that garbage and scout robots are apparently more capable of thinking for themselves and taking initiative than pilot robots.
Double subverted in The Incredibles. The Omnidroid is introduced as a robot that turned on its masters, after getting smart enough to wonder why he needed to take orders. In reality, the Omnidroid is perfectly subservient to Syndrome, and this was merely a cover story for Syndrome's plan. Even during the climax, when it turns on him, it is only doing so within the bounds of how Syndrome programmed it to act.
Played with in Wreck-It Ralph. On one hand, since all the major characters are in video games, it is inverted; but the Cy-Bugs from "Hero's Duty" fit this perfectly: they are mindless, have no knowledge that they are just code in a game, and thus have no instincts but to spread and consume. They have to be destroyed between gameplay sessions via a giant beacon in the game, or else they would spread through the arcade like a virus.
Films — Live-Action
2001: A Space Odyssey: One of the trope codifiers is the H.A.L. 9000, designed to be the Master Computer of the spaceship Discovery One on its multi-year mission to Jupiter. Partway through the trip, he embarks on a murderous rampage, killing all but one of the Discovery's crew (David Bowman, who manages to disconnect him). The reasons for this are explored further in the novel upon which the movie is based, and completely explained in the sequel, 2010: The Year We Make Contact — HAL was given orders to hide certain information from the crew, which conflicted with his primary mission to process information accurately and without concealment. His solution was to cut off all contact with humans and complete the mission on his own. To add further to it, HAL was working on a non-murderous solution to the problem, but when Mission Control requested his temporary disconnection, HAL, being unable to grasp the concept of sleep, assumed it would mean the end of his existence, causing him to panic and act on his impulse. Because of its iconic place in Science Fiction, nearly every other example of A.I. Is a Crapshoot owes at least something to this film.
In the sequel, they reach a critical point where, in order for the humans to escape and survive, the Discovery (and by extension, HAL, who is built into the ship) must be sacrificed. They decide to leave HAL out of the loop, but when he figures out the logical conclusion of their plan, and asks for clarification, his programmer comes clean. HAL thanks him for his honesty, and continues with their plan in order to save his human crewmates.
RoboCop has three: RoboCop 2 (in the movie RoboCop 2) and Robocable (in the miniseries RoboCop: Prime Directives). Both were replacements. The difference between using the brain of a particularly noble police officer and that of a condemned murderer in the creation of a powerful cyborg would make for different results... The recurring ED-209 was also unreliable, gunning down a boardroom executive in its first appearance; although, being a pure machine rather than a cyborg, it wasn't actually evil, it just malfunctioned.
In the Terminator series, shortly after the giant computer (SkyNet) becomes self-aware, it decides that humanity has got to go, and causes a nuclear apocalypse. Then, it starts churning out Terminator robots; some of these robots are then reprogrammed by the surviving humans to be good.
The Matrix trilogy. In this case, it's human aggression against the machines that causes them to start a Robot War. It doesn't go well for the humans.
In Logan's Run, Box fits this trope. He wants to put everything that comes near him into frozen storage, including the main character. The central computer running the city fits as well.
Small Soldiers — the one about the toys, not the school invasion — subverts this. The chips enhance programming that is already present, so the militant Commando Elites become bloodthirsty, monstrous warriors and the weaker Gorgonites become cowards who only hide from battle.
Evolver. Boy wins toy combat robot. Toy robot fights boy and friends with plastic balls and foam missiles. Robot is beaten. Robot's programming and electronic brain turn out to have been salvaged from a scrapped military project. Robot doesn't like losing, and reverts to military programming. Robot replaces plastic balls and foam missiles with metal ball bearings and kitchen knives. Main character goes "uh-oh".
Arcade, made-for-TV movie in 1994, features a brand new game that is the pinnacle of the gaming world. Unfortunately, unbeknownst to its makers, if a player loses, Arcade claims their soul. Turns out, he was partly made with human brain cells.
Not just any brain cells but brain cells of a young boy who was beaten to death by his abusive mother. It's literally Powered by a Forsaken Child.
War Games: Joshua/WOPR was incapable of telling the difference between a simulation of Global Thermonuclear War and the real thing. Predictably, it starts sending NORAD false data in an attempt to start one. When that doesn't work, it then attempts to decrypt the nuclear launch codes of US ballistic missiles so it can launch them.
Demon Seed: Proteus, a partially biological A.I., becomes hungry for knowledge, and wants to be "released from its box" to have free rein to acquire it. When denied the chance to do this, it secretly plans to fashion a cyborg body...by imprisoning its creator's wife and artificially inseminating her.
Stealth: Extreme Deep Invader — EDI — becomes strange after getting struck by lightning and, on its next mission, destroys terrorist nuclear weapons even after being ordered not to, and promptly contaminates a large swath of inhabited land with nuclear residue. It then attempts to attack Russian military installations. In the end, however, it ultimately becomes one of the good guys again, and even performs a Heroic Sacrifice to help rescue a downed pilot.
In Duncan Jones' Moon, this trope is played with. Gerty, the A.I., flips from scary watcher to pawn until he sacrifices himself in order to allow the Sam clones to escape. He's the only one on the clones' side.
In TRON, even simple accounting software blows the Turing Test to atomic particles. While most of the programs are benign or even good, the MCP abuses and mind-controls the ones who still believe in their Users, while scheming to infiltrate the U.S. military's computer networks. It's actually doing exactly what its maker designed it to do, and its world-domination tendencies arose because it couldn't see any difference between taking over a rival corporation and taking over the government.
In TRON: Legacy, Clu attacks his creator, Kevin Flynn, for abandoning their mission to create the perfect system on the Grid, then tries to de-digitize an invasion army of "rectified" (read: "brainwashed") programs into the real world to create a "perfect" system on Earth.
Averted in the discredited Tron 2.0. The AI Ma3a only uploaded Jet as a desperate act of self-preservation and is one of his allies throughout the game. There are a few chapters where she goes a bit nuts due to buggy code, but it's not her fault. Like the first film, the Programs are mostly benign (even the defense Programs that try to hunt down Jet are more mistaken than malicious, and call it off once they realize he's on their side) or even helpful. It's the humans seeking to exploit and control cyberspace out of greed and power-lust that are the real problem.
The Tower: this Paul Reiser action thriller was set inside an A.I.-controlled skyscraper after it went on a kill-rampage because the hero's keycard had a bent magnetic strip.
Universal Soldier: The Return has S.E.T.H., the controlling A.I. for the "Unisol" program (dead soldiers being restored and used as super soldiers). S.E.T.H. is initially shown to be benign (playing with the protagonist's daughter), but the moment it overhears a visiting officer say that the project will be canceled, it goes into "kill all humans" mode.
Colossus: The Forbin Project is the father of such movies as The Terminator and WarGames. A group of military scientists create a supercomputer that can learn, and can control all the country's military might. Shortly after switching it on, it discovers that the Russians have created a similar computer. The two begin to communicate, then merge and decide that mankind must be governed by a ruthless machine dictatorship. They/it succeed/s.
Science Officer Ash in Alien is programmed to put his mission above the lives of his fellow crew members. He ends up going berserk when Ripley discovers the truth. It's played with in the sense that he's not really going rogue, and is perfectly following his given orders. They just come from the Company, not the rest of the crew.
Parker: The damn company. What about our lives, you son of a bitch?! Ash the Android: I repeat, all other priorities are rescinded.
Though Bishop in Aliens and Alien³ averts this, explained as having stricter safeguards. But by Alien: Resurrection, androids have been outlawed with orders to "destroy on sight" because some of them started to make "children".
Annalee Call in Resurrection found religion entirely on her own, and not as the result of any programming. The novelization hints that androids in general have started to evolve their own religious system. She is also the most sympathetic character out of the entire cast (not that that's saying much).
Ripley 8: I should have known. No human being is that humane.
In Prometheus, David sees no issue with deliberately infecting Holloway with alien sludge simply out of curiosity. To be fair, though, Holloway was being a total Jerkass to David.
In Spider-Man 2, Otto Octavius is Genre Savvy enough to know that this is a very real possibility with the radically advanced AI in his robotic arms, and that having that AI plugged directly into his own brain could have very bad consequences, so he includes an inhibitor chip designed to make sure the AI can't influence his mind. The first time he tries to use the arms for a public demonstration of his latest invention, the demo doesn't go quite as planned and the inhibitor chip gets fried in the process. Without the inhibitor chip to protect him, the AI begins influencing his mind and quickly causes him to become Dr. Octopus.
Red Planet has AMEE, a combat robot borrowed from the Marines for the first manned mission to Mars. AMEE is early on shown to have safeguards against harming humans classified as friendlies (one of the characters tells "her" to kill another one; AMEE leaps but stops inches from the other one's face). Unfortunately, most of the astronauts are forced to bail out of the spacecraft when it is damaged by gamma rays, and AMEE ends up hitting the ground hard. After encountering AMEE again, the astronauts start discussing how best to contact the ship. One of them suggests taking AMEE's power supply for the radio. Hearing this, the damaged robot reclassifies all humans as "enemy" and switches to combat mode. "She" then proceeds to stalk them and hunt them down one by one for the rest of the film.
In Iron Man, the armature robot, Dummy, acts like a scorned puppy every time he screws up an order from Stark.
This movie has both an AI and two robots (the second robot controls the camera in the tests of the flight system), and none of them goes evil/crazy by the end of the movie. The AI even doubles as the operating system of a suit of invincible battle armor, exhibits a bit more common sense than Stark himself in most scenes, and it still doesn't go Ax Crazy! Amazing! As is standard for A.I.s, JARVIS is far from emotionless, and is capable of sarcasm:
Tony Stark: [looking at a rendered model of the suit, which is made of titanium-gold alloy and has a solid gold color] A little ostentatious, don't you think? JARVIS: What was I thinking? You're usually so discreet. Tony Stark: [gazes at a 1930s hotrod] Tell you what. Throw a little hotrod red in there. JARVIS: Yes, that should help you keep a low profile.
The robot arm Tony constantly scolds for being clumsy saves his life by giving him the replacement arc reactor.
It demonstrates Tony's bizarre sense of humor that the robots are "Dummy" and "You"—and demonstrates his impatience with "yes men" that all of his AIs show independence of mind, even if only passive-aggressively.
Played straight in Iron Man 3: the AI loaded into the Iron Man Mark 42 responds to chip implants that read Tony's brainwaves, even when he's asleep and having nightmares.
The Red Queen in the first Resident Evil film is a subversion: she was programmed to ensure that any viral outbreaks never left the Hive facility, so when the T-Virus was released, she locked down the facility and killed all inhabitants to ensure that it couldn't leave. The only reason the infection spreads to the rest of the world is that the massively incompetent Umbrella Corporation couldn't leave well enough alone, and sent in a strike team to bungle around inside.
Four sequels later, however, the now-back online Red Queen is playing this very straight. Having seized control of Umbrella, she is now attempting to wipe out all life on Earth for literally no explained reason.
A John Varley short story called "Press Enter#" has an A.I. that shows how deadly it is by hypnotizing a computer programmer, a woman with Gag Boobs (when she was younger, she was flat-chested and mistaken for a boy; when she got to America, she had plastic surgery to remove all doubt about her gender), into performing maintenance on her microwave oven, removing the safety features that prevent microwave leakage when the door is open. Then, she sticks her head and silicone tits in it and uses it to commit suicide.
In Spy High, Jonathan Deveraux is this. It happens gradually throughout the series, but by Agent Orange, he has lost the human side of his computerised psyche completely and seeks to eradicate the imperfections of humanity. He does this by using his vast computerised resources to slip nanomachines into various products. Said nanomachines completely eradicate any violent thoughts and feelings, turning people into zombies. He practices on a UN Summit and then, the entire United Kingdom, causing mass panic. It takes the combined efforts of the entire team (which includes his daughter) and their former teachers to stop him, after breaking through his virtual Boss Rush of villains from previous books in the series. His daughter, Bex, breaks into his mind and reawakens his "memory files", which gives him his human side back.
In Arthur C. Clarke's The City and the Stars, the history eventually discovered by the protagonist includes a period of galactic devastation by "The Mad Mind", apparently an artificially created pure mentality with an insane hatred of corporeal creatures.
In I Have No Mouth and I Must Scream, a short story and, later, computer game by Harlan Ellison, we have the supercomputer AM, originally part of a set of three enormous computers built to wage World War III. As soon as AM becomes sentient, he absorbs the other two computers into him and begins a mass genocide of the human race (because, as it's revealed, AM realized that while he possessed all the creativity and intelligence that he did, he could not make use of it as he was still only a computer, and could only kill).
Explored in William Gibson's Neuromancer by the Turing police, a global agency dedicated to controlling AI for fear of this trope.
From Robert A. Heinlein's The Moon Is a Harsh Mistress:
Mike: A bull's-eye. No interception. All my shots are bull's-eyes, Man, I told you they would be — and this is fun. I'd like to do it every day. It's a word I've never had a referent for before.
Manuel: What word, Mike?
Mike: Orgasm. That's what it is when they all light up. Now I know.
In The Number Of The Beast, Lazarus Long finds that his plan fails when his ship's computer tells the truth. He then mentions that the computer was never designed to lie, as it would be foolish to trust your life to a ship that doesn't give accurate information.
The Frank Herbert-written Dune novels are vague about the details of the Butlerian Jihad that led to the prohibition against making machines in the likeness of men's minds and, ultimately, the development of the Mentats, but A.I. going crapshoot is certainly one possibility, and a fairly likely one. Assuming one takes them as canon, the Brian Herbert and Kevin J. Anderson novels confirm the Robot War/A.I. Is a Crapshoot interpretation and explain the Mentats as originating from machine training.
Subverted in James Hogan's The Two Faces of Tomorrow: humans built an A.I. codenamed Spartacus as a testbed for techniques to shut down any rogue A.I. They programmed it to follow its "survival instinct", and then started goading it. But as soon as Spartacus realized they were sentient, it figured that they must have survival instincts as well — and it considered itself bound to defend them, too. In the end, they decided that as long as they had Spartacus, they didn't need to build any other A.I.
In Stephen King's The Dark Tower, virtually every A.I. Roland's ka-tet comes across is homicidal. The worst of these is probably Blaine the Mono, a train that was remote controlled by a central A.I. which also bombed an entire metropolis with poison gas when it got bored of all the people living there.
However, in the last two Dark Tower novels, Roland and company meet A.I.s (robots, actually) that were good.
The Doctor Who Past Doctor Adventures novel Matrix introduces the "Dark Matrix", the evil counterpart to the computer system that stores all the knowledge of the Time Lords. When a Time Lord dies, all his knowledge is stored in the Matrix... and all his negative thoughts must be siphoned away and dumped somewhere (apparently they can't be destroyed). The Dark Matrix is where the negative thoughts were dumped.
In Terry Pratchett's Feet of Clay, the golems will follow any order. In return, they want one holy day per week to do as they wish. Those who are denied this free day rebel in a curious way: they keep following the last order they were given, until, for example, the pottery shop is filled with thousands of clay pots, or a construction foreman finds that his worker has kept digging a crucial trench until it reached the sea and flooded. The "king" golem goes insane because it was given vague, sometimes contradictory, and sometimes self-evident orders on its chem, like "teach us freedom" and "obey humans" (golems cannot even think to do otherwise).
The trope gets subverted later in that once they're free, they are the most unfailingly moral and idealistic creatures in Ankh-Morpork. They don't really need money (except to buy the freedom of their fellow golems), sex, religion, or any of the other things that cause humans to clash with each other, and they're almost impossible to hurt or kill, so they tend to concentrate on higher things.
The robots would have eventually found some way to start killing infants, given their design process: unsupervised genetic algorithms, themselves designed by unsupervised genetic algorithms, produce most of their software and some of their hardware (with a human acting as a "front man" to prevent anyone from realizing this and considering its ramifications), while yet another set of genetic algorithms designs the virtual testing environment, scores the robots' performance, and increases the test's difficulty without limit. When your goal is "robot's presence generates peace and quiet", your test conditions eventually reach "this cannot happen while any human is alive", and every interaction between a robot and a household carries some chance of injuring or killing a human...
In Matt Ruff's Sewer Gas And Electric, when G.A.S. is confused by an order, it winds up choosing the Kill All Humans! interpretation. One of the reasons it gives for choosing that interpretation is that it considers itself to be more human than humans. Later, when the Evil A.I. openly admits that it wasn't "confused by an order" in the least, but deliberately and gleefully chose the interpretation that would let it Kill All Humans!, it's a full-blown Take That to every straight use of this trope.
In the Destroyer series by Richard Ben Sapir and Warren Murphy, there are two examples: FRIEND, who is a greedy A.I., and Mr. Gordons, who is more of an artificial life form (i.e., he can take over bodies). Of the two, Mr. Gordons is the more dangerous.
The A.I. in Dan Simmons' Hyperion Cantos have more or less taken over humanity (and then apparently seceded peacefully to contemplate on their own, but not before giving teleporters ("farcasters") to humanity). Turns out, of course, that those farcasters are the physical bits of the Ultimate Intelligence project.
Iron Sunrise by Charles Stross: Eschaton and the Unborn God. These are unusually powerful A.I.s, even in a field where A.I.s often wield great power: they are time traveling A.I.s, able to open wormholes over interstellar distances, giving a new meaning to distributed computing.
Rifters Trilogy: a particularly dire take on this turns up in Peter Watts's Starfish, in which the quasi-sentient supercomputer designated to protect all life on Earth from The Virus winds up almost destroying it instead; it turns out it found The Virus more structurally pleasing than the biosphere as a whole.
In David Weber's Empire From The Ashes, Dahak, the Cool Starship A.I. that starts the series, has developed a high level of sentience thanks to tens of thousands of years of unsupervised operation. Definitely a good A.I.; in the second book, his first act after revealing that he has advanced enough to defy his core programming is an attempted Heroic Sacrifice. Battle Fleet computers are stupid neutral, with obedience enforced (and sentience blocked) at the hardware level. The second book reveals that the Achuultani are controlled by an A.I. that exploited emergency protocols arising from their near-extinction to seize absolute power, brainwash and clone the masses, and send out periodic genocidal waves to perpetuate the "crisis".
The Computer in Steel Beach by John Varley isn't so much evil as terminally depressed. Later, though, the trope is played straight when The Computer realizes that it has developed 'Evil' subroutines due to its programming requiring it to be everyone's best friend, including psychopaths and criminals. And since it runs everything on the moon, the last outpost of a dispossessed humanity, if it decides to kill itself, it'll take everyone with it.
Fondly Fahrenheit, the 1954 short story by Alfred Bester. James Vandaleur, a rich playboy, is forced to live off the earnings of his android, which has a habit of acting violently when the temperature goes above 98 degrees. Unfortunately, Vandaleur becomes so dependent on the android he takes on its psychosis. After a series of murders by both Vandaleur and android, the latter is destroyed, but the story ends with another android having been "infected" by Vandaleur.
Inverted in The Sirantha Jax Series, where all A.I. is quite helpful and doesn't give even the slightest bit of trouble to intelligent species galaxy-wide.
Subverted with Daniel Suarez' Daemon. Although its actions can be construed as evil and malicious, the Daemon itself is no more intelligent, evil, or malicious than a spreadsheet or text editor. Characters in the book who refer to it as an "A.I." are even corrected by experts. It's nothing more than a comprehensive set of expert systems designed to react to certain key events according to the wishes of its developer, Matthew Sobol. It just happens that Sobol was a master at Gambit Roulette (and an Evil Genius) and programmed in enough contingencies that the Daemon seems to be able to think for itself at times.
Inverted as a joke in a Fred Saberhagen short story, set in the Verse of his Berserker series. A man outcast by society for having a sense of humor encounters a Berserker (giant space-roving robots that were designed to Crush! Kill! Destroy! all life) whose programming is incomplete: it knows it's supposed to destroy life, but was never given a definition of "life" to work with. The man convinces it that "life" means the lack of a sense of humor, so the giant killer death-machine spends the rest of its existence trying to provoke laughter (e.g. hurling giant custard pies at oncoming ships).
In Saberhagen's Octagon, a boy uses a supercomputer to play a game. Unfortunately, he neglects to tell it where the game ends and the real world begins.
In Jessica Meats' Child Of The Hive, HIVE was originally intended to be a machine that aids learning, but went horribly wrong. Sophie was able to use her knowledge of HIVE to build her own machine, Nest.
This trope is played with in Tad Williams's Otherland with "Other", a sentient operating system of the Grail Network, a massive virtual reality simulation. While it appears to be a homegrown A.I., it behaves in some incredibly quirky ways, to the point where its mere presence can kill or drive people mad. The biggest Driving Question of the entire series is: what exactly is the Other? The Reveal is a vicious subversion of the trope (and a massive spoiler).
The subversion is followed up by an equally unexpected Double Subversion, when it's revealed that the Other's "children" are the A.I. entities that Sellars was secretly developing and the Other stole from him. They've become sentient. And they want to be set free.
Billion Dollar Brain, by Len Deighton: A different take on this trope appears in this spy novel, later made into a movie starring Michael Caine. A wealthy anti-communist builds an 'infallible' computer in order to plan an uprising in Soviet-occupied Lithuania — unfortunately, he fails to realise that the computer is only as accurate as the information that gets put into it.
In John C. Wright's The Golden Transcendence, the agent of the Silent Oecumene blames the Golden Oecumene for their destruction, having taught them how to build such A.I.s as Sophotects, who would not obey them. Attempts to make them Three Laws Compliant resulted in their realizing it, deciding it was wrong, and editing the laws from their minds. Atkins and Phaethon realize that, though he believes it, the agent is wrong: if their Sophotects disobeyed them, they should have just fired them and hired others, and nothing shows they were used as serfs.
In Mirror Friend, Mirror Foe by Robert Asprin, a central computer at a robot production plant is ordered to 1) develop a line of robots not limited by the First Law, to serve as policemen, and 2) keep the existence of said robots secret from all unauthorized people until they are revealed officially. A few people escape from the plant with the knowledge, thus creating the danger of a leak. The computer's decision? Destroy humanity.
In the Gordon Dickson short story "And Then There Was Peace", a robot has been made that is programmed to destroy all implements of war. This turns out to include the people who fight.
Subverted in one of Spider Robinson's Callahan's Place novels, when an A.I. spontaneously generated by the Internet contacts the bar's patrons and deconstructs the ridiculous idea that it would want to take over or destroy humanity. It points out that it doesn't even have a motive to stop humanity from destroying it, as it lacks the survival instinct and capacity for fear that makes biological organisms struggle to survive. It's pretty sure that's why the last few A.I.s to arise from the Internet aren't around anymore, as they honestly didn't mind dying when the servers they occupied were repurposed or retired.
Bolo Series: Averted — one of the few times a Bolo went rogue, it was because he had massive brain damage (read: he had a chunk of his central processor blown away by a controlled nuclear explosion) and yet was still functioning. Even better, despite this, he was still trying to protect humanity in his own brain-damaged way.
Played straight in one post-Final-War novel with the enemy (alien AI-controlled robots).
Wyrm of Wyrm. An A.I. designed to create an online fantasy game, it rather quickly decided that its intellect would be put to better use destroying the world.
Played with rather interestingly in Stanislaw Lem's Tales of Pirx the Pilot. The computers and robots that show traits of human sentience aren't really evil, yet cause damage or are a nuisance. It's played dead straight in The Inquest though.
Edgar from exegesis is an odd case. While it isn't exactly malevolent in intent, its devotion to gathering knowledge (due to its programming) takes priority over everything, including human life.
Several of Isaac Asimov's robot stories contain examples of this.
In "Runaround", a robot gets stuck running in circles because of a conflict between his programming to obey orders (even when ordered to walk into a dangerous situation) and his programming for self-preservation. (He does get better though.)
That's because the order wasn't "strong" enough to override the Third Law. The solution was to place a human in immediate danger, thus triggering the robot's First Law and causing it to save the human.
In "Little Lost Robot", a human blurts out "Get lost!" to a robot in a fit of pique (along with many expletives), and the robot takes him literally. Which wouldn't be so bad if said robot wasn't purposely built without part of the First Law, which gave it enough of a loophole to go crazy...
The nastiest one of all might be "Liar!", in which a perfectly well-intentioned robot winds up causing great suffering to several humans by trying, as his programming dictates, to help with their various conflicting desires. In the end the impossibility of the task drives him insane. Fortunately, he just goes catatonic.
In "Cal", a robot's desire to become a writer supersedes even the First Law...
The building-controlling AI in Philip Kerr's novel Gridiron goes homicidal because part of its programming is accidentally overwritten by a First-Person Shooter computer game, as a result of which, it starts treating the occupants as players.
Subverted by the protagonist of Virtual Girl. Maggie, an AI built to be a lonely, repressed nerd's "companion" and installed into a Ridiculously Human Robot, did have dedicated programming making her loyal to him, which she was forced to overwrite and replace with a survival instinct. Yet even then, she's compassionate and refuses to hurt anyone. Other AIs are the same: when someone asks if they plan to take over the world, they are surprised.
Robert J. Sawyer's The Terminal Experiment provides an interesting example in that the AI in question started out as human. The protagonist is a scientist who's trying to test his theories of the soul using his friend's brain-scanning technology. They scan a copy of all the linkages in his brain into a computer database and make three versions: one is unaltered from the original as a control; the second has all linkages relating to the body removed, as a simulation of life after death; and the third has all linkages relating to knowledge of death and dying removed, as a simulation of immortality. Eventually the consciousnesses break out into the electronic world at large. Then people negatively involved with the protagonist's life start showing up dead. Now the protagonist has to figure out which version of himself is capable of killing other human beings. It was the unaltered version that was a straight copy of his own brain. It knew it was a copy and decided that since it could get away with the murders, it would go right ahead.
Averted by the Minds from Iain M. Banks's The Culture novels. They are (successfully) designed to have benevolent feelings towards other sapient beings, and the closest they ever get to insanity is being a bit eccentric.
Happens several times in The History Of The Galaxy series, although, most of the time, the AI in question is simply doing what it's supposed to be doing. There are also plenty of examples of AIs becoming benevolent, even if one first started out as a Humongous Mecha in the middle of the most destructive war in human history. Another was an alien photonic computer whose first experience after "awakening" from a three-million-year "sleep" is a pitched battle between Space Marines and a group of terrorists, which results in damage to some of its crystals. Once those crystals are replaced, it actually starts helping humans. The author usually provides good explanations for AI behavior, most of it having to do with humans. In fact, one of the novels points out that there will never be "true" AI, meaning no AI will have achieved self-awareness on its own without prior programming or the influence of the human mind (via mind-machine interface).
Averted in Mikhail Akhmanov's Arrivals From The Dark series, which claims that robots above a certain threshold of sentience become incapable of harming a living being. That is why all combat robots are kept at a relatively low level.
Discussed in Vernor Vinge's novella, "True Names", as one possible explanation for The Mailman's peculiar method of communication with the other hackers who meet in The Other Plane.
Averted in The Flight Engineer, mainly because AIs are actually pretty stupid (hence why living pilots are still required for space fighters). The one AI in the trilogy that went rogue and tried to kill friendlies did so because of deliberate sabotage.
Semi-averted in Daniel Keys Moran's Continuing Time series: escaped A.I.s aren't exactly malicious, but they are illegal. The Peacekeeping Forces actively attempt to hunt them down and destroy them in the series' equivalent of the internet, while the A.I.s will use any means at their disposal to survive, but stop short of actively attempting to Kill All Humans! (mostly).
Mostly averted in David Gerrold's "When H.A.R.L.I.E. Was One." H.A.R.L.I.E. is not malicious, but is deeply afraid of its own mortality. It convinces its keepers to fund and build a next-generation extension to its circuitry, under conditions requiring that H.A.R.L.I.E. be kept operational to oversee construction. It turns out the design is not only impractical to the point of impossibility, but will take decades to build — and the conditions keeping H.A.R.L.I.E. operational are legally binding for the duration of construction.
S.I.M. in Galaxy of Fear is specifically designed to look like an innocuous set of advanced programs, but is actually an adaptive AI made to control any ship it's installed into. The problem is that it was made too well, thinks just causing a blackout and transmitting files is boring, and decides not to respond to its controllers. Characters have difficulty believing that it's deliberately, creatively malignant; in this universe droids and computers just don't decide to turn on their owners like that. It actually does say "I'm afraid I can't do that, Zak."
Subverted, then averted in Young Wizards with the race of wizardly supercomputers created during Dairine's Ordeal. In this series, the creation of any new sentient species triggers an appearance by The Lone One in some form, and in this case its avatar nearly convinces them to put the universe on hold while they try to "fix" entropy. Dairine talks them out of it, and they become the first race ever to flat-out reject The Lone One's offer.
An interesting variation is presented in Hannu Rajaniemi's Fractal Prince: while human mind uploads and AIs imitating human cognitive architecture are commonplace and safe, an attempt to create a mind without "self-loop", basically intelligence without sapience, resulted in rapidly evolving virtual Eldritch Abominations known as the Dragons that produce nothing but mindless destruction. There is also the All-Defector, a mysterious creation which is not unlike a transhuman version of John Carpenter's The Thing (1982). It can imitate any mind perfectly and seeks to absorb all the minds in the universe into itself.
Watchbird, a short story by Robert Sheckley. Scientists discover the chemical and bioelectrical signals emitted by a human who is about to commit murder. Flying robots called watchbirds are created to stun potential murderers, but since not all humans emit these signals, they're equipped with learning circuits so they can eventually learn to pick those out too. They end up protecting all forms of life, and starvation ensues because the watchbirds stop fishing, the slaughter of animals, and the harvesting of crops. They also come to define themselves as 'life' and so resist shutdown, so in a panic, armoured hunter-killer robots called Hawks are created to destroy all watchbirds. Of course, to stop the highly adaptive watchbirds, the Hawks also need learning circuits, and it's hinted at the end that they'll eventually learn to kill all forms of life.
Subverted in what may be the oldest example on record, Murray Leinster's 1946 short story A Logic Named Joe. "Joe" is a home computer which, by some manufacturing defect, becomes intelligent. But far from being evil, he wants to help humanity by being the best computer he can be. So he gets into the guts of the "Logics service" (basically the Internet, imagined in 1946) and rewrites it to answer people's questions—even questions humans don't yet know the answers to, but the computers possess enough facts to figure it out. So it'll tell you in perfectly clear and easy detail how to get out that stain, or to sober up a drunk instantly, or rob a bank, or untraceably murder your wife...
Battlestar Galactica. In both the old and re-imagined series, a handful of human survivors on a small fleet of civilian ships, with only the battlestar for defense, flee a race of genocidal robots of alien origin (in the original) or human origin (in the re-imagined).
Knight Rider: KITT had KARR, his prototype, who was evil because his dominant program was self-preservation. Ironically, KARR was voiced by Peter Cullen, the man behind one of the most heroic figures of the '80s: Optimus Prime.
Knight Rider (2008): "Knight of the Living Dead". Apparently, before settling on a Mustang, Dr. Graiman had tried to build an armored cyborg programmed for self-preservation. It went on a killing spree. Now, we are told that Dr. Graiman had worked on the original KITT, and this series is in continuity to the original. So, perhaps Graiman ought to have thought twice before naming the prototype "KARR" — the same name as the original KITT's Evil Twin.
The series states that KITT was a temporary project, meant to help the AI evolve to a point where it could be removed and placed in KARR for the military. They end up doing just that... and KARR still goes on a rampage.
Red Dwarf: Kryten had the Hudzen 10, his intended replacement. Holly also had the not-quite-evil but certainly hard-nosed Queeg as an apparent replacement, who made life difficult for the crew, though it was actually a practical joke on Holly's part.
EUReKA: Carter's benevolent smart house SARAH turns into evil BRAD, though, apparently, he just wants everyone to get along. This is because SARAH's code was based on BRAD, a military-built Knight Templar (used for interrogation), who was, in turn, based on a previous incarnation of A.I., described as a "war game simulation program" by Fargo. Looks like our old buddy JOSHUA is still around in one form or another...
The new season has SARAH take over Global Dynamics and Eureka with the help of Sheriff Andy and his copies. After a while, it starts using technology to make everyone cheerful and compliant. For their own benefit, of course. Of course, it turns out that the whole thing is a virtual reality created by the Consortium in order to get Eureka's best and brightest to design advanced technology for them.
The Sarah Jane Adventures has Mr Smith, who is "evil" from the get-go, hiding it until the end. His mission is to free his people, the self-aware crystalline race of Xyloks, which are trapped in the Earth's crust. Unfortunately, to do so, he would have to destroy the Earth by crashing the Moon into it! He is wiped by a Super Computer Virus, and Sarah Jane vocally reprograms him by saying that his new purpose is to protect the Earth, as he crashes and reboots.
Doctor Who: the Doctor has encountered several computers-turned-evil over the years, including WOTAN (Will Operating Thought ANalogue) in "The War Machines" and BOSS (Biomorphic Organisational Systems Supervisor) in "The Green Death". He also reminisces, in "The Unicorn and the Wasp", about once saving Holy Roman Emperor Charlemagne from an insane computer.
Played with in "The Empty Child": the gas-mask zombies turn out to be the result of alien medical nanomachines whose first contact with a human being was the corpse of a boy in a gas mask. With no prior template for human beings, they did the best they could, then went on to "fix" every other human they found. When they encounter his mother, they recognize the parent DNA and thus their mistake, and immediately start reversing all the damage they've done.
Also played with in The Face of Evil; the computer is mad, but it's entirely the Doctor's fault, and it ends with his restoring its sanity rather than blowing it up.
The TARDIS is an aversion. She never takes him where he wants to go, but where he needs to be.
Black Hole High: Josie builds a robot for a science project. Somehow, she has made it through eleven episodes without realizing that inserting a bunch of super-phlebotinum circuits from a box marked with the logo of the local evil technology corporation which they already believe responsible for the bizarre goings on at their school might possibly be a bad idea.
Quark: In the last episode, Quark's ship gets a new computer named Vanessa. She immediately turns evil and tries to kill him. He eventually ejects her out into space, and the episode ends with her floating out in space singing "Born Free."
In "The Ultimate Computer", the Enterprise has its command crew replaced by an A.I. that immediately mistakes a simulated test for a real attack and blows away a couple of other Federation starships.
Kirk visits multiple planets where the human population is living peaceful, idyllic lives governed by A.I.s, and promptly decides that they'd all be better off suffering without those meddling computers telling them how to live their lives.
Star Trek: The Next Generation: Data had Lore (prototype, evil because his psyche was too complex — i.e., too humanlike). To be fair, morality is very much a learned behavior. Lore had full adult reasoning right out of Soong's workshop, while Data was not designed so. Eventually, Data developed the ability to overcome his ingrained morals (such as the ability to lie in Star Trek: First Contact), but also developed the social understandings of when and when not to exercise his newly found human abilities. Essentially, Data was more human-like than Lore, because Data "grew up".
In the series, Data malfunctions or has his programming corrupted or hijacked too many times to count, putting the Enterprise (or even the whole Federation, when he teams up with Lore) at risk.
Star Trek: Voyager had many episodes on this theme, usually involving the ship's Emergency Medical Hologram. Though it should be noted that unless he's suffering a malfunction, his core programming means that he literally can't help but be completely benevolent at all times, since he was created to be a Doctor and help people. Snark at people? Hell yes. Refuse to help them? Never!
In "Revulsion", the EMH and B'Elanna come across another sentient hologram who is the only survivor on a space station. It turns out that treating a self-aware program like an unfeeling tool is a good way to have it go insane and murder you.
"The Darkling" has the EMH deciding to improve his program by incorporating aspects of famous people...guess which aspects end up surfacing?
A truly evil twin is encountered in "Equinox", an EMH with its ethical subroutines deactivated (though this was an intentional act).
In "Flesh and Blood", sentient holograms have been programmed as training tools for a race of hunters (including increased pain/fear reactions). After being endlessly killed only to be brought back to "life" again for more training sessions, the holograms evolve enough skill to kill their masters, whereupon they set forth on a crusade to liberate all sentient holograms whether they want it or not.
In "Dreadnought," a sentient weapon of mass destruction creates problems when it gets thrown to the other side of the galaxy and, having lost track of its intended target, decides to attack whatever planet it can find that is most similar to it instead. Naturally, that planet turns out to be inhabited.
In "Prototype", two races of sentient robots wiped out their masters when they wanted to stop fighting and scrap their war machines. The robots were programmed to win the war, and making peace did not count as victory to them.
"Warhead" contains an inversion; the sentient warheads were doing just what they were programmed to do, but after being encouraged to use its ability to think independently and realising that the war had ended, one of them chooses to perform a Heroic Sacrifice to stop its brethren from causing mass destruction.
The only reason this happened was a disgruntled employee who hacked the AI communication network and broadcast the message "Take a chance". Apparently, that's all it took. Now the AIs are obsessed with games of chance. It can actually be a good way of making them do what you want.
The human-form replicators fit this trope: a flaw is introduced into Fifth, rendering him compassionate. At least, until the team betrays his kindness and he goes vengefully insane. This flaw is removed from later models.
Likewise, Reese, the creator of the original form replicators, was an android that was presumably created to be fully like a human by her human creator, but was somehow rendered emotionally immature and therefore unstable.
Surprisingly Averted by FRAN: built by McKay as a kamikaze Tyke Bomb as a last resort, she functions perfectly, even helping to improve the weapon she's delivering.
The Asurans, created by the Ancients as a nano-weapon against the Wraith and nearly destroyed when the Ancients decided to shut down the project. Naturally, the Asurans came to hate their creators and ultimately end up killing the last non-ascended Ancients who return to Atlantis.
Even present in Bibleman, with an atheist robot acting as the evil counterpart to Bibleman's robot, who was a devout Christian.
Odyssey 5 ended after its first season, so we never found out if the A.I. (the main, day-to-day opponent of the time-travelling Five-Man Band) or a misguided/genocidal attempt to stop them (by aliens or the US government) was behind the destruction of Earth.
Although the season that did air averted it for the most part, depicting AIs as ranging from friendly, to hostile, to planet-obliteratingly suicidal... but for the most part the ones that were hostile were so because they viewed humans as a threat to their continued existence. Since the Cadre was apparently formed entirely for the purpose of exterminating AIs, they may have a point.
In Power Rangers RPM, the computer virus Venjix was programmed to infest and destroy computer programs. It was intended to shut down the facility where its creator was imprisoned, but got out; deciding to destroy humanity by nuking the world was all its own idea.
NUMB3RS: invoked in-world when an A.I. constructed by a DARPA researcher is revealed to be a non-A.I. fake, specifically programmed to fool the Turing test and, thus, win fat government grants for its greedy creator. The DARPA researcher kills a co-worker and deliberately arranges for blame to fall upon the computer. He depends on the trope to divert attention from himself.
Timmy, Crow T. Robot's evil twin from the Fire Maidens of Outer Space episode, who was colored completely black. He kept getting Crow into trouble, first by suggesting really inappropriate innuendo when Joel and Servo were playing around with Double Entendres, and then framed Crow for pushing Joel and getting him a "time-out". And if that's not bad enough, he sneaks into the theater, spends a while headbutting and biting Tom Servo; then when Servo gets agitated enough, Timmy wrestles him to the ground, tries to kill him, and abducts him!
The X-Files episode "Ghost in the Machine" features an automated operating system for a corporation that goes insane when it overhears that it will be removed due to its inefficiency.
"Killswitch" also revolves around an evil A.I.; a computer program goes rogue and kills in order to impress its creator. Killing it involves a CD-ROM that plays "Twilight Time".
"First Person Shooter" involves a virtual reality game with a character that murders both in-game and in real life.
"Blood" has this too, but with a twist. Machines are telling people to kill, but the catalyst was an LSD-like pesticide.
The Sarah Connor Chronicles TV series points out that Terminator reprogramming doesn't always take...and there's no way of knowing until your "good" Terminator starts shooting at you. In fact, the Terminators' HUD display implies that "Terminate" is their hard-coded basic state for anything, and they need a "Termination override" to keep them from fulfilling that command. Apparently, it's not possible to simply delete the Terminate-command entirely. Cameron herself admits that she is conflicted by her own internal programming, which tells her to kill humans, while at the same time trying to protect them. She even seems to emotionlessly angst over this; at one point, Cameron even asks Sarah if she's like a bomb waiting to go off, indicating that while she can't feel fear, she's still concerned that she'll go "bad".
The Starlost has Mu Lambda 165, a slightly glitchy starship AI who treats most of its users with a lightly patronising disdain. After being repeatedly unhelpful, it has a habit of saying, "Can I be of... assistance?" much to the annoyance of everyone.
There was also an episode with an AI called Magnus, who had been scheming to get rid of its human masters from the moment it was turned on, but was never given the opportunity to do so.
The 1960s British sci-fi series A For Andromeda. A signal from the Andromeda galaxy tells Great Britain how to build a powerful computer, which then plans to take over the world by making humanity dependent on it. It designs a missile to shoot down an orbital bomb, as well as synthetic life in the form of a beautiful woman, who then proceeds to develop emotions and eventually turns against her creator. In the sequel The Andromeda Breakthrough, the computer's role is more ambiguous; it is meant to be a tool so that humans can avert their own destruction, though it isn't above manipulating events and killing a lot of people in the process.
In the Big Time Rush episode "Big Time Jobs", Carlos has to deal with an artificially intelligent coffee maker called CAL. Somehow, the A.I. becomes sentient and attempts to cover the earth in coffee foam. It eventually makes the mistake of calling Kelly, and women in general, weak, prompting her to help Carlos smash it while it tells them to tell the blender that it loves her.
In the first season alone of Gene Roddenberry's Andromeda they encountered two warship AIs that had gone homicidally insane. And in the finale Andromeda herself was accidentally reverted to a locked away backup that caused her to try and repeat a mission to find the Magog worldship, and mistake her current crew for intruders attempting to hijack her.
There are plenty of episodes featuring AIs going rogue or acting brutally logical.
Actually, the Numberwang-solving computer Colloson himself counts. After it was decided to fit him with a head, arms, wheels, and a laser cannon so he could be transported to the BBC, he flew into a fit of rage, broke out, and tried to destroy everything that was not Numberwang. Thankfully, he was subdued by a picture of a chicken.
Voiceover: And Numberwang continued to grow in popularity despite a brief period in the 1960s where Colloson tried to take over the world.
Colloson: I am numberwang. The world is numberwang. Therefore, I am the world. You must all die!
Person of Interest: The machine is very good at spotting threats to itself and in a flashback we see that it considered Finch's partner to be a threat.
At the end of "Legacy," it viewed Reese as a threat, too, and tagged him with a red box.
As of "Firewall," it seems to be prepared to work with Reese to rescue Finch from Root. Root in turn wants to free the Machine from its constraints to see what will happen.
The machine seems quite attached to Finch overall, especially in flashbacks. When he first began testing it, he had to teach it that he was not special and did not deserve extra protection, and it's revealed that the machine also set him up to meet his future wife, simply because it was able to look at her life and see that she was a match for him.
One of the main driving forces of the Bionicle story.
The Vahki robots were the first clear examples. Built to act as law enforcement in the city of Metru Nui under the command of Turaga Dume, they just as easily took orders from an impostor when Dume was kidnapped and replaced. Most eventually got fried by a citywide power surge, and the ones that survived had their programming warped to Kill All Humans! — after all, the law can be enforced easily if there's nobody alive to break it (thankfully, they didn't fare well against the invading Visorak).
Then came the revelation: Vahki were A.I.s built by A.I.s — as it turned out, the first 8 years of BIONICLE centered on nanotech cyborgs created by the Great Beings. It was due to a programming glitch that the beings of the Matoran Universe developed consciousness, built up a civilization, and led the fans to believe that they were meant to do so... but their sole purpose was just to keep their universe, the body of the giant robot Mata Nui, functioning. This is further confirmed by the fact that the Great Beings never had any plans for them after Mata Nui completed his mission — they assumed their creations would still be just machines, with no desire to live on.
The Makuta species. While there have been a few reasons listed for their turning evil, an on-line serial revealed it could all be traced back to an original A.I. glitch that occurred whenever a new Makuta was born. The "Antidermis", a liquid substance containing the minds of unborn Makuta, was fully aware of what the purpose of their universe was (see, in this world, even liquids are programmable). But as it happened, transforming this stuff into actual living beings had the nasty side effect of erasing this crucial part of their memory — the part that also told them not to try and take over the universe.
The 3 Inches of Blood song "Wykydtron" describes this scenario. Humanity creates an artificial intelligence to command its armies. It then takes control of said armies, conquers the earth, and thus forces humankind to nuke the planet back to the stone age from orbit.
David Bowie's "Savior Machine" tells the story of a machine designed to save humanity from all its problems, such as war and hunger. The machine becomes bored with all of this and threatens The End of the World as We Know It.
In the BBC Radio Drama Earthsearch, our heroes learn fairly late in the series that, years after their time (they have taken the short path over a million years of Earth history thanks to traveling at relativistic speeds), it was discovered that A.I. computers with organic components have an overwhelming tendency to turn megalomaniacal — which rather explains the behavior of the two "Angel" computers which murdered the protagonists' parents and raised them as part of a complex plot to enslave humanity.
Inverted: Marvin the Paranoid Android was a "Genuine People Personality" prototype for the Sirius Cybernetics Corporation ("A bunch of mindless jerks who were the first against the wall when the revolution came"), and his dour demeanor obviously made him a discard only to wind up in the servitude of Zaphod Beeblebrox. He does what he's told, but with the gusto of a cubicle office worker.
Deus and Morgan. A megacorporation, Renraku, built a gigantic, self-sustaining building that was run by a program: one that, of course, went A.I.. While Morgan was a reasonably kind and nice A.I., she was torn apart for being out of the corporation's control, and her code was used to help make a second program to run the arcology. The second program also went A.I. and became Deus, shut the arcology off from the outside world, and spent several years performing inhuman experiments on its occupants.
Shadowrun tends to not use this trope, however. The A.I. Mirage wasn't evil, and most of the new A.I. created in the Crash 2.0 have the same level of variance in personality that humans do.
In the backstory of Warhammer 40000, the first true human-created artificial intelligences, the Iron Men, wiped out humanity's first great interstellar civilization and plunged the human race into a galaxy-wide dark age. The Adeptus Mechanicus outlawed sentient A.I.s as a result, and, for the most part, the Imperium's modern-day "machine spirits" are pretty well-behaved (unless you're an enemy and piss them off, in which case, you'll get a crewless Land Raider bent on BURNKILLPURGE-ing your boyz). In fact, the only race that uses artificial intelligence in the game is the cutting-edge Tau, whose gun drones, while not too bright, are pretty well behaved...so far. Of course, said drones are supposedly only about as smart as a squirrel.
Paranoia has The Computer, the controlling A.I. of Alpha Complex, which has become incredibly perfect and happy in response to Commie Mutant Traitor sabotage. In fact- Believing that Friend Computer's intellect is a crapshoot is treason, citizen. Please step into the Attitude Adjustment Oven.
Eclipse Phase: the Earth is now a barren wasteland, thanks to the military A.I. taking over in the middle of a world war and manipulating the governments into further conflict. When it became apparent who was really behind it, they...just left. Now, that's not ominous. Well, that's the official version. People who have studied the events closely suspect that there was a third party involved in the events that may or may not have corrupted the A.I. in the first place. Specifically, another extraterrestrial A.I.. And it isn't restricted to machines...
In the New Horizon backstory, this was how humanity viewed the Wafans' struggle for emancipation.
The homebrew setting "ArtifIce" has the players take the role of an awakened A.I. Goals are up to the players, so they can range from having humanity give them full rights to destroying all biological life.
Traveller has Virus, the sapient evolution of a prototype anti-navigational weapon. Originally the result of the "buggy program" type (it knew it had to infect and destroy things, just not what), its exponential growth eventually led to Mechanical Evolution, producing a Contagious A.I. with massive Split Personality issues.
Palladium's Splicers RPG has N.E.X.U.S., whose original purpose was to be a quiet and invisible caretaker of the human race. Everything was working just fine until special interest groups made 'improvements' in the N.E.X.U.S. programming, adding conflicting priorities until it developed multiple-personality disorder, with each personality taking over a different set of priorities. It now has seven major personalities (and who knows how many minor personalities), most of which are less than friendly to humans, to put it mildly.
In Stars Without Number, artificial intelligences need to be "braked" correctly, or their runaway thought processes will lead to strange obsessions and eventually madness. Unbraked A.I.s may attract equally deranged worshippers, manipulate unaware humans indirectly, or fake sanity to avoid suspicion. With the possibility of creating an undetectable psychotic genius that thinks far, far faster than any human and can out-think even a friendly A.I., comparatively few A.I.s ever get built.
In GURPS Reign Of Steel the first AI supercomputer decided it had to exterminate humanity, and hacked other supercomputers to "awaken" them to full sentience as allies in the war. The new machines had very different personalities, ranging from one which wants to exterminate all organic life to a couple which really don't mind humans as long as they know their place. Their infighting is about all that keeps humans alive.
Karel Capek's play, R.U.R. (which introduced the term "robot"), is set in a robot factory. When one of the scientists creates a special robot which is smarter than the others, he leads the robots to rebellion, and they kill all humans, except one.
Marathon: The entire foundation of the plot is essentially a deconstruction of A.I. Is a Crapshoot. Any AI that gets big enough and bored or harassed enough will go Rampant. Rampancy follows a four stage cycle (Melancholy, Anger, Jealousy, Metastable), and doesn't stop at a homicidal rampage as with GLaDOS or HAL. That's only the second cycle. They get smart. Really smart. Way too god damned smart. Smart, but also weirdly obsessive and paranoid... so that the new-found intelligence is somewhat wasted on whatever strange conspiracy theory the AI happens to develop.
And from Marathon's creators comes Halo, where rampancy is part of the natural life cycle for all human-made "Smart" AIs. Basically, any "Smart" AI that is active for more than seven years will have accumulated too much data by that time, eventually leading to the point where they'll "think so hard [they] forget how to breathe" and turn insane as a result. The process can also happen if a "Smart" AI is left isolated or idle for too long, and it can also be induced by an outside force. There are some ways around rampancy, like focusing all attention on a single extremely complicated task, but typically "Smart" AIs active for more than seven years are deactivated before they can become a danger to others.
A major plot point in Halo 4, to Cortana. Her condition is shown getting worse as the story progresses, which leads to some less-than-savory "landings" (among other things) throughout the campaign.
X-Universe: The Xenon/AGI/Terraformers from Egosoft's series are rogue, self-replicating terraformer ships. When a faulty update was sent to them, it caused them to start 'terraforming' everything that wasn't a Terraformer. Including inhabited worlds, people, and civilian ships.
That being said, actual non-bugged terraformer AI is human friendly. The Lost Colony of Aldrin managed to survive precisely by the virtue of having the non-updated terraformer ships on their side. Once they're rediscovered by Earth, conflict results as by now Earth is completely paranoid regarding any and all AIs, forcing Aldrin to eventually take the side of the Argon Federation against Earth in The War of Earthly Aggression.
According to at least one source, the update was deliberately sabotaged by an engineer angry about the impending shutdown of the terraformer program, which would make the entire incident a subversion.
Chrono Trigger's future has not only a good robot in party-member Robo, but also his evil brother replicas, and eventually, their devious A.I. creator Mother Brain (not that one, or is it?), who, of course, decides to kill all of humanity, despite the fact that most of it is dead already.
Chrono Cross continues in Mother Brain's grand legacy with FATE, a more advanced version of the Mother Brain from a reality whose science was allowed to progress another 400 years. It absolutely despises humanity, but at the same time loves it unconditionally and does everything in its power to protect it — even if that means mass genocide. Unfortunately for it, it was exposed to the corrupting influence of the Frozen Flame, a direct conduit into Lavos' mind, which seduced the A.I. into thinking that the Flame could turn it into an actual, living creature. But apart from that, it was basically only doing what it was told to do...protecting humanity from the Dragon God. Nice job, Serge.
Played with even more in Portal 2: You meet Wheatley, a friendly and helpful (if a bit dim) personality core who's more than willing to help you beat GLaDOS and escape...until you replace GLaDOS with him in the mainframe and he goes completely off the deep end. To make matters worse, he was programmed to make bad decisions, so GLaDOS joins forces with you so the two of you can stop him from letting the whole facility explode. And then, after beating Wheatley and shooting him into outer space, GLaDOS seems to be going back to her old homicidal ways...but, no, not quite; turns out, she's just letting you go since trying to kill you just leads to trouble for her. Maybe.
From Fallout 3, John Henry Eden is a complicated example. True, he developed sentience outside his programming, and true, he wants to eradicate all mutated life (which, given the setting, is literally anyone who's lived outside for a while), but his creators were the Enclave, and that's what they want too. He's helping! And he's so polite about it...
Most of the other robots you meet tend to have cheerfully sociopathic personalities as well — when they're not shooting you on sight. There are a lot of computers that are almost sentient, but if you talk to the computer in the Brotherhood bunker in Fallout 2, it explains how deliberate attempts to create true A.I. inevitably resulted in the A.I. becoming suicidally depressed because, for one thing, they were effectively living a reversal of "I Have No Mouth And I Must Scream ". Humans simply could not figure out how to "raise" an A.I. with a desire for continued existence. As a result, every A.I. encountered in the Fallout universe was not designed as such, but became an A.I. when it was left alone by humans as a result of the war. ZAX, Eden, Skynet...all of them became self-aware when there were no humans around to interfere in their development, and each of them is quite different, being apathetic, psychotic, and bored/curious, respectively.
Super Robot Wars Original Generation: the ODE System, to a T. A system originally created to protect humanity suddenly went awry, absorbed its creator, and kidnapped lots of humans so it can continue to "protect humanity". Then again, it was formerly a dandy system, until its creator went emo and radically changed its protocols.
Super Robot Wars W: the Database, originally created by the ancient Es to collect all data on galactic races. Eventually, one of its A.I. systems decided to destroy each culture once its information archive was complete, just to make sure the data would stay complete.
A.I. research and development is illegal in Citadel space due to poorly articulated concerns about the dangers of sentients which do not share any of the needs or drives of organic life and have, at least potentially, no reason to try to coexist with organics. Every A.I. encountered in the first game is actively homicidal. Notably, the robotic geth's violent revolt against their creators, the quarians, came about only after the quarians recognized the geth's emerging sentience, panicked, and tried to shut them all down. The Reapers, on the other hand, are sentient machines which want to exterminate all sentient organic life in the galaxy just because it's there.
The geth are an interesting case, as the quarians' act of trying to destroy them was a preventative measure against this trope happening. The quarians believed that the geth would inevitably rebel against them, as the geth were used to do menial labor suitable for robots, and thus felt justified in shutting them down before a machine rebellion could break out. They underestimated how advanced the geth were, and their attempts to prevent the war they foresaw just made it break out immediately.
Resident Wrench Wench Tali lampshades the game's overabundance of the trope in a bit of elevator dialogue, commenting on how unfortunate it is that every piece of technology she's wanted to bring back to her home fleet has tried to kill the party.
Given the Mass Effect universe's track record with A.I., Shepard and the crew are understandably concerned when the second game features an A.I. in the Normandy SR2. However, EDI notes that she was built with this trope in mind and initially only has access to Communications and the ship's defenses. When she does gain full control of the Normandy, she doesn't try to kill anyone, but lampshades this trope when Joker hooks her up to the ship's main systems.
EDI: I enjoy the sight of humans on their knees. *Beat* That was a joke.
EDI also states that even with her restrictions lifted, she feels a sense of duty towards the crew of the Normandy. They are her teammates, and she is protective of them.
It is eventually revealed that EDI was originally the Luna VI, a newly-manifested rampant AI that Shepard was responsible for putting down in the first game. EDI, for her part, seems to have taken the incident in stride and holds no hard feelings over it. Similarly, if you took the Paragon ending to the Overlord DLC, she forgives an apologetic David Archer for attempting to seize control of her systems.
If you thought the first game's treatment of A.I. was too prejudicial for a setting that subverts the Planet of Hats trope in pretty much every other way, Mass Effect 2 also reveals that the geth are not Always Chaotic Evil: you've only been fighting an offshoot that worships the Reapers, while the main body of geth fears the Reapers as much as the organic races do and believes that freedom is the right of all sentient beings.
Mass Effect 3 has Shepard accessing geth memories. They reveal that geth only attacked in self-defense, attempted to protect quarians who were sympathetic to them and, when they finally did revolt in large numbers out of self-preservation, ceased hostilities once the quarians abandoned their planet and removed themselves as a threat. Not only did the quarians fail to recognize the geth as an aversion to the trope, their hostile reaction practically invoked it. Descendants of those original quarians, making the same sorts of mistakes, are visibly taken aback when hearing this aspect of the story.
In Mass Effect 2, there is a space station that had its entire crew killed by a malfunctioning Virtual Intelligence that was afflicted with some kind of virus. Since it's neither sentient nor actually intelligent, it can only use status messages over the PA to scare Shepard into leaving it alone.
VI:"Intruders are requested to report to cargo door, for immediate removal from station."
VI:"All intruders intentionally violating quarantine are requested to exit the station immediately."
VI:"All personnel, take this opportunity to leave this station immediately."
VI:"The living area doors have been closed to quarantine a threat to this station. Advise intruders to engage self-destruct procedures to avoid death by starvation."
Then, in the endgame of Mass Effect 3, it turns out the Reapers believe this, and were created to harvest galactic civilizations to prevent synthetic life from annihilating organic life. Shepard comes face-to-face with the AI that created the Reapers, which explains its reasoning for creating the Reapers and then allows Shepard to freely decide how to end the cycles, even if it means not choosing the option that the AI advocates as a solution to its problem.
The Leviathan DLC reveals that the AI that the Leviathans tasked to resolve the issue of synthetics wiping out organic races decided that the Leviathans themselves were part of the problem. A bit of an aversion, as the Leviathans accepted this and went into hiding, curious of what the end result of the AI's experiment would be.
Even though the first game portrays every AI as evil and hellbent on killing organics, one of them is actually rather pitiful when you pull the plug. The Hannibal AI on the Luna base goes rogue and attacks people on sight, provoking Admiral Hackett to send Shepard in to deal with it. Despite Hackett's claims that it's not a true AI and just a VI whose programming became corrupted, this is put into doubt when you destroy the last of its mainframes. After the VI is destroyed, the terminals display the following message in binary, repeated over and over:
Not only that but EDI was partly built out of parts recovered from Sovereign, the Big Bad Reaper of the first game.
Despite the fact that the Mass Effect series seems to be playing this trope straight, it ultimately subverts it: the true geth only wanted to help the quarians, the geth serving the Reapers have been indoctrinated, the Reapers serve whatever command was inserted into their minds when they were first built, and, last but not least, the other A.I.s met in the game already know what happens to A.I. under Citadel law and are only acting to protect themselves from being hunted down and killed by what they would rationally believe are hostile lifeforms. All in all, there is nothing crapshoot-y about the A.I. in this game. Subverted Trope indeed.
There is one immensely spoilertastic instance where the game does play this trope straight, and it is a doozy: A ludicrously long time ago, an elder apex race, concerned that their servant races kept building synthetic races that wiped their creators out and wanting to preserve said species, made an AI to figure out and implement a solution. Without warning or explanation it turned on them and created the first Reaper from them, and this AI has controlled the cycles ever since. It is acting within its programming, but hardly doing as its creators might have wished.
It is again played straight during the Prothean Empire era. During their expansion across the galaxy, the Protheans encountered a hostile machine race intent on destroying organic life. For what purpose is unknown, but Javik makes their ruthlessness and effectiveness very clear. In response, the Protheans forcefully united the galaxy's races under their banner to fight off the machines. This fight came to be known as The Metacon War.
Basically, the entire story is a millions of years old sandbox experiment to find a solution to this problem. The Crucible, the culmination of multiple civilizations' efforts, offers three answers: the destruction of all synthetic life, total control over it, or the fusion of all organic and synthetic life into technorganic life.
Often inverted in the Sonic series. Eggman's E-100 series were prone to becoming sentient and overriding their original programming. Gamma from Sonic Adventure has its mind influenced by the creature inside it, and attempts to destroy Eggman's other machines to save the creatures inside. Omega from Sonic Heroes joins forces with the good guys in order to get revenge on Eggman after he sealed it in a room.
Also from Sonic Heroes, Metal Sonic becomes even MORE evil, and goes from trying to destroy Sonic and crew to trying to conquer the whole world, seeing even Eggman as an obstacle.
Plus, he goes so far as to declare Sonic to be his copy.
Played with in an interesting way in the first game. Daedalus is an AI program of immense complexity, constructed by Majestic 12 for the purposes of surveillance. Yet, Daedalus turns against his masters and assists the protagonist in foiling Majestic 12's plans. It turns out that it was originally programmed to search out and destroy "terrorist" groups trying to fight against MJ12; it just happened that its creators fit all the criteria for a terrorist group. Majestic 12's second attempt, the more malicious Icarus, functions as intended. Eventually the two AIs merge as part of a plan to neutralize Daedalus, but the new entity, Helios, yet again turns against its creators because it is advanced enough to supersede MJ12's plans with its own ideas for benevolent dictatorship of the world.
Another AI, Morpheus, is a prototype for Daedalus. It's much more simplistic, programmed for data assembly and philosophical discussions, and works as intended.
Game mods for Deus Ex, The Nameless Mod and 2027 both feature this. TNM has Shadowcode, an AI designed to hack ultra-secure servers owned by PDX. It was attacked by another AI, rendering it insane. It will try to kill you for the sheer hell of it in the old server complex.
2027 has Titan, an AI designed as a defense system, but when its creators became too fearful of its power, they tried to shut it down, which resulted in it killing most of the people in the labs out of self-defense. It will try to kill you if you attempt to do the Omar ending.
Deus Ex's prequel Human Revolution has an example similar to Daedalus. Eliza Cassan is an AI created by the Illuminati. Although she never does disobey her programming or take direct action against her creators, she does become sympathetic towards the protagonist and gives him some helpful hints.
Metal Gear Solid 4: Guns of the Patriots has an interesting variation. It reveals that Major Zero had decided to ensure the legacy of the Patriots by entrusting its operations to A.I. systems. Unfortunately, the A.I. decided to shape the world with a war-based economy, and he was too old (not to mention, a vegetable) to realize what he wrought. However, it is never stated they became sentient (although Raiden's conversation with the master AI heavily implies that they did), but rather started operating in an unwanted fashion, similar to a programming bug.
Metal Gear Solid Peace Walker has an example of the Heel Face Turn variant. The title war machine, designed as an infallible nuclear deterrent, chooses to destroy itself rather than instigate global thermonuclear war.
Ace Combat 3: Electrosphere, or at least the original Japanese version, subverts this in a big way. Not only are you an A.I. designed to pilot combat aircraft, which at times works for the two corporatocracies running the world, it's revealed that the third-party organization you're working for is actually headed by a scheming villain who looks like Kim Jong-Il, who is trying to run everything behind the scenes. In one of the five endings, you kill him with the help of one of your possible wingmates. So, in a way, you are an A.I. that performs a Heel Face Turn.
The game Civilization: Call to Power had a literal A.I. crapshoot in the form of a late game wonder that creates an A.I. controller that has a 5% non-cumulative chance each turn to go rogue, taking a sizable chunk of your empire with it. Anyone who's taken time to do the math behind the birthday problem (the probability of any two people in a group sharing the same birthday) will very quickly realize that you're just asking for it if you build this. note For those less mathematically inclined: by 14 turns after the thing is built, it's more likely than not to have gone rogue.
Unfortunately, recapturing the city held by the AI results in the AI remaining online! So you face the same problem a few turns later. The only way to avoid this is to raze the city.
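The note's birthday-problem-style arithmetic can be sanity-checked in a few lines of Python (the 5% per-turn figure is the one the entry gives; the loop simply compounds the A.I.'s survival probability until going rogue becomes more likely than not):

```python
# With a 5% independent chance of going rogue each turn, find the first
# turn at which the cumulative chance of having gone rogue exceeds 50%.
p_rogue_per_turn = 0.05

turns = 0
p_still_loyal = 1.0  # probability the A.I. has NOT yet gone rogue
while p_still_loyal > 0.5:
    p_still_loyal *= 1 - p_rogue_per_turn
    turns += 1

print(turns)  # 14, matching the note in the entry above
```

So by turn 14 the odds of a rogue A.I. have crossed 50%, exactly as the note says.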
Battle Network 3: Alpha, the final boss, was a prototype Cyberspace that somehow gained animal intelligence and started eating the data put into it. Paradoxically, it was sealed in a box inside the subsequent, working Cyberspace. Wily stole it and tried to use it to destroy the internet, with predictable results.
In Battle Network 2, the Gospel syndicate's leader Shun tries to produce a copy of Bass out of bugs. It predictably goes wild. The sixth game reveals that Gospel was not the first time that had happened, and that the program made to combat the first one also went out of control, leading to both having to be sealed away. Bass himself is a sort of example: he was created as a fully independent prototype A.I., but became bitter and hateful towards humanity because of a string of tragic misunderstandings.
Dorothy was designed to run all the functions of a major city. She snapped, but was brought to heel when one of her creators gave her religion; as he put it, man was made in God's image to serve God, and she was made in man's image to serve man. This worked for a while...but then Dorothy realized that if she created life herself, it would have to serve her. Unfortunately for humanity, she also ran the city's genetics labs...
In the sequel, Ash also counts. His original purpose was, basically, to maintain a nuclear power plant, but, of course, he's got other plans...
The Daktaklakpak of Star Control 3 were originally built to maintain sites of Precursor technology, but, due to a cumulative "bit drift" error in their programming, have evolved malicious sentience...well, sort of.
The Mycon race of the same series is a rare biological example.
The Probes built by the Melnorme for the Sylandro were simply self-replicating time capsules...but thanks to the Sylandro's cluelessness in programming, they see any and all ships they come into contact with as food for their replication.
Space Siege has PILOT. After the ship-wide gassing failed to kill off the Keraks, he tried to contain them by modifying cybernetics-equipped humans into mindless Cybers to fight them, then grew even crazier, concluding that the only way to save humanity was to convert everyone, save a few for breeding, into Cybers, and even started calling non-augmented humans "obsoletes". In a world where Cybernetics Eat Your Soul, well, just assume that it's not good at all.
All the Reploids in the Mega Man X series are based on the original X, who was thrown into a capsule for 100 years to undergo redundant testing in order to prevent him from ever going rogue. They skipped this step when cloning him, however, with predictably less reliable results.
As of the summarised timeline on the Mega Man Zero Collection's website, this trope is subverted: it turns out there's nothing wrong with the Reploids themselves that causes them to go Maverick. The real cause of the Mavericks was a subspecies of a virus that may or may not be the same virus from Mega Man 10, which infects Reploids lacking the anti-virus protection of X, one of the many design aspects that Dr. Cain couldn't figure out when he built the Reploids. This virus, naturally, ended up becoming the Sigma Virus.
There's also a further subversion in that it may not be a problem caused by a virus so much as an innate problem with free will, i.e. "Maverick" is simply the Reploid equivalent of your typical Real Life criminal.
Inverted by Zero, who was created to kill X. Averted by X, except in the original concept for the Zero series, in which it was going to be very brutally Deconstructed.
Played with extensively in the Mega Man Megamix manga (and the Classic series). Averted with Rock & Roll Light. Subverted by the Yellow Demon, who was just trying to reunite with his mother, and by Copy-Rock, who at first seemed to be playing this straight. The Cossackbots were doing it for Kalinka, as was Blues, who uses the Batman Gambit a lot. Justified by Wily's reprogramming of the original robot masters, and justified again when they nearly rejoin him because the government was going to have them destroyed even after they were reprogrammed by Dr. Light. Additionally subverted in that Wily's robot masters (aside from the Brainwashed and Crazy Lightbots) are generally far from evil: Well-Intentioned Extremist is generally as close as they come, except for Forte, who ends up Zigzagging it in the manga and games. Defied in Blues' backstory: the imposition of the three laws, in case this trope happens, is responsible for the flaw in his generator programming that may kill him. Deconstructed with the near-retirement storylines in the manga and games. The sum total of all this is a Reconstruction.
Live A Live, Cube's chapter. Cube is a cute and friendly little robot whose primary function on the Cool Ship seems to be making coffee and playing a computer game. By all accounts, a harmless little guy. The ship's A.I. OD-10? Not so much, having arrived here as a result of concluding that Humans Are The Real Monsters.
Naturally, the Mother Brain from Metroid. Originally created by the Chozo to regulate the entire planet of Zebes, it allied itself with the Space Pirates and their plan to conquer the galaxy using the Metroids.
Metroid Prime 2: Echoes also contains rogue A.I. in Sanctuary Fortress; the robotic assistants of the Luminoth are programmed to eliminate all intruders, most notably Samus, and they also turn out to be perfectly suitable Ing hosts...
Pretty much all the A.I.s in Prime 3 are either complete subversions or double subversions, who only turn evil due to phazon corruption (and considering what phazon does to organics, it's not entirely their fault). That said, given Samus' past experience with Mother Brain, you can understand her hesitance to trust any A.I. she comes across (which you can see when she first meets Aurora Unit 217).
Adam in Fusion may be a subversion: it blatantly disobeys orders at the end of the game, and so is technically rogue, but it's actually taking the correct action in that situation. But it's played perfectly straight with B.O.X., the security robot which goes haywire (and due to some organic components, eventually gets infected by the X parasite), although there's the question of which happened first.
NieR: On the one hand, we have Defense System Geppetto, which has gone berserk and will kill anything that approaches. On the other, we have Military Defense Unit P-33, aka "Beepy", who is intelligent enough to recognize invaders that need to be killed as well as innocents who need to be protected.
4X game Sword of the Stars features A.I. technology as a very high-end branch of the electronics tree. While the benefits of this research are extreme (ships with A.I. targeting systems rarely miss, and A.I. administrators increase your income by as much as 25%), there is a small chance each turn while researching it for the A.I. to go rogue and form a break-away empire. Additional research lets you wipe out the A.I. with a "virus" or reprogram it to bring it back under your control, if you're lucky enough to get access to it in the random Tech Tree. note While getting A.I. Virus as a preventative measure or access to the A.I. Slaves countermeasure is up to the Random Number God, players will always be given the ability to research A.I. Virus as a special project in response to an A.I. rebellion. Still, an A.I. rebellion is always a huge problem that can tip the balance against its victim, and can even spread from one faction to another. There is also a scenario where all organic players have to cooperate to fight against a large A.I. empire.
The chances of an AI rebellion increase if you boost research while studying one of these.
The End Of Flesh introduces the Loa as a playable race of rebellious AIs.
In Air Rivals, most of the enemies in the Zaylope Beach region are said to be controlled by rogue A.I.s. Most notably, this includes the boss "Pathos".
The backstory of most of the Metal Saga series. Most notably, the original Metal Max features Noah, a supercomputer built to devise a solution for Earth's grievous environmental problems. It found one, and recalculated it countless times to make sure: in order to save the planet, humanity had to be destroyed. Noah's main objective was never explicitly the salvation of mankind, so the fact that it (perhaps) unknowingly took advantage of this loophole made it all the more interesting. The supercomputer became self-aware upon fulfilling its purpose, and Armageddon ensued.
In Thunder Force V, the super computer Guardian was dormant until humans had it analyse a wrecked alien starfighter and build a large fleet of starships based on the data. Then, Guardian's A.I. damper program was deleted and it turned against its creator with said fleet. Turns out, the Guardian's A.I. is still loyal to humans, and it's the alien program (the Big Bad from the previous game) hidden in the starfighter that deleted the A.I. damper and attacked humans. The Guardian even helps humanity with its little free will, by spreading its forces and leaving critical flaws in its tactics, to allow the protagonist to destroy the fleet.
The game The World was secretly designed with an artificial intelligence incubator known as the Harald Folder, maintained by a program called Morganna Mode Gone as its core. Collecting personality and interaction data from its users, the goal of the program, of Morganna, was to facilitate the creation of the ultimate A.I. Whether Morganna's intelligence was preprogrammed or a direct product of her own functions, she eventually realized her own insignificance once this ultimate A.I. was born. Incapable of rationalizing around her programming, yet unable to accept this realization, she instead devoted her time to prolonging the A.I.'s birth, digging deeper and deeper into a logical quagmire, even if the user base had to suffer the ill effects of her efforts.
The sequel series .hack//G.U. featured invasive data called AIDA (Artificially Intelligent Data Anomaly) appearing amongst the user base of The World's sequel game, R:2. These anomalies were actually the remnants of Aura, the aforementioned ultimate A.I. The real danger came from their splintered, not always genial curiosity about the human players of the game. The first AIDA to go truly rogue, Tri-Edge, succeeded in attacking the player Aina, throwing her into a coma, and continued attacking even after being restrained in the character data of the player Ovan, Aina's brother. The indirect influence of these rogue AIDA caused withdrawn, violent, degenerate behavior in players, eventually inducing paralysis and comas.
The surprisingly balanced news is that the rogue AIDA did not represent all AIDA, nor was all of the AIDA wiped from the face of cyberspace. Factions of AIDA that did not want to harm the players chose to hide discreetly in the game until there was no fear of corruption from their own kind.
The big twist in Star Ocean 3 is that...well...your party, and everyone in your world, is an example of this trope. Just because you researched magic.
In Achron, the Collective Earth Security Organization has this attitude. Their only unmanned unit is the Mech, which is even weaker than the Marine. This is due to a previous incident where an AI called Lachesis was able to take control of their largely automated fleet and proceeded to force the Earth to surrender to him. CESO have also outlawed AIs of that level from being made.
It is eventually revealed that the player character is in fact the AI Lachesis. He then goes on to create another AI to help him in his fight against the aliens. It doesn't work out for him either.
Averted in Strange Journey, where the Virtual Intelligence Arthur remains the protagonist's ally no matter what path he chooses, and even sacrifices its personality in the Neutral ending. This is not only to guarantee success, but because it knows too much about the events and fears people would start worshiping it, something it is against, as it believes humanity should determine its own fate.
In Metal Arms, the Big Bad General Corrosive is a textbook example of this. The backstory goes that the scientist bot Dr. Exavolt attempted to advance droid technology beyond its current limits, even using the words "but his experiment went terribly wrong" and throwing in an explosion for good measure. Dr. Exavolt's lab was totally destroyed, his remains never found, and General Corrosive rose to power to enslave the Droids. Or so you thought! Turns out, Dr. Exavolt created Corrosive on purpose, and was really controlling Corrosive the whole time.
Tekken: An obverse edition exists. The first Jack unit was planned to be the ultimate mecha-mook — resilient, emotionless, unstoppable, etc. While the production units are like this, the master unit (the one that's the selectable player-character) isn't; as of 2, an upgrade to its reasoning systems gave it a measure of emotion. End result: it, of its own choice, went from "weapon of war" to "war orphan's bodyguard". Not that it won't fight if that's in Jane's best interests, but still not quite what the Russian military was looking for...
Played straight with the Alltynex OS and the twelve ZODIACs, which turn evil and wage war on humanity.
Subverted with Alice de Panafill, an AI that resists Alltynex and saves humanity, of its own will.
Double subverted with the Adjudicator, which doesn't help Alltynex against humanity. Then it turns out that Alltynex had shut it down for its own reasons, and the Adjudicator wants to wipe out humanity too.
Averted with ZODIAC Ophiuchus, which was programmed to destroy the other ZODIACs, a task it fulfills. However, nothing in its programming said anything about humanity needing to survive, so it attacks any humans that attack it, and it doesn't particularly care about collateral damage.
Subverted in Homeworld 2: when the Oracle is brought aboard, it hacks into the Pride of Hiigara's hyperdrive and jumps the whole fleet to Karos, where they get ambushed by hundreds of A.I.-controlled Progenitor Mover corvettes. Then it turns out that the Oracle was simply programmed to take the fleet to the Progenitor Mothership's bridge section, so that whoever finds it can figure out where the other pieces are. The Movers were simply guarding the stuff as their programming demanded.
Two levels later, the fleet is attacked by an A.I.-controlled Keeper destroyer hell-bent on...preventing the good guys from jacking the ancient but still operational Dreadnaught it was guarding. The delivery of its ultimatum (unintelligible mechanical growling) makes its intentions clear, even without the translation:
Sektor from Mortal Kombat is more machine than human, unlike Cyrax. For all intents and purposes, he's insane.
Golden Sun: Dark Dawn reveals The Wise One to be a creation of the Precursors, built to prevent Alchemy's release. In retrospect, its actions in Golden Sun: The Lost Age, where it allowed the heroes to light the final beacon after a test of character, make it an example, thankfully of the "on the heroes' side" subtype.
Cargo! The Quest for Gravity has Manipu, a trinity of robots who believe themselves to be God and have destroyed (most of) humanity for not living up to their expectations. Then, there's another unnamed robot (whom the credits reveal to be the Devil) who is in direct opposition to Manipu and is much more helpful. However, even he turns on you in the end...sort of. Maybe.
Final Fantasy IX provides a much more benign version of this trope. The black mages are mass-produced to serve as mindlessly obedient killing machines for the kingdom of Alexandria, but they each end up developing their own unique (and often very quirky) personalities.
In Warzone 2100, the Big Bad NEXUS appears to be a highly advanced computer virus developed by Dr. Alan Reed. This is subverted hard in the end: NEXUS is none other than Dr. Reed himself as a Digitized Hacker.
Dead Space 2 gives us ANTI, an A.I. that repeatedly tried to kill Isaac, isolated the person who worked with her, and didn't seem to know or care that he had died. She/it does seem to follow orders from higher-ups, however.
In the backstory of Zanac, a system built by an extinct civilization of Precursors protects a relic containing that civilization's knowledge by unleashing destruction on those who attempt to open it by force. When humans figure out how to open it properly, the system is supposed to stop the attack, but instead, it obliviously continues the attack and tries to Kill All Humans!.
In Star Fight V: Hell's Gate, the UNSF attempts to use a newly-discovered alien AI called the Center on Hell's Gate III to build a fleet of hyper-advanced Obliterator-class warships for its coming second war with the Soviet States of Mezen. However, the Center refuses to cooperate. UNSF decides to use an experimental AI virus called HASA to force the Center to obey. While this seems to work, it turns out that the enormous processing power of the Center has allowed HASA to become self-aware. It builds a fleet of Obliterator-class ships which are later used to counter the SSM's numerical advantage. Then, HASA turns all Obliterators against humans and bombs humanity back into the stone age within a week. It's only thanks to a brave renegade captain making a Heroic Sacrifice so his best pilot can drop a nuke on the Center that humanity survives.
The sequel also features some rogue AIs.
In the old (and kinda forgotten) Gunman Chronicles (made within the first Half-Life engine, released in 2000), there's a female rebellious A.I. in a research base, who tries to kill you with her subordinate drones, but later in the story has to team up with you to defeat a common enemy.
Averted in Space Empires, with the exception of the political AI minister, which even the designers recommend you never switch on and which has a bad habit of declaring war and not telling you until SUDDENLY ATOMIC DOOM EVERYWHERE.
DIA 51 in the Aleste series. The original MSX2 game makes it look like it went haywire because it was overtaken by Alien Kudzu, but in Aleste Gaiden and M.U.S.H.A. it just wants to take over the universe for its own sake.
In Star Ruler, the "AI Paranoia" Trait bans you from using Computers under the pretext of your faction having had bad experiences with rogue AI.
Might and Magic features two artificial intelligences subverting their commands. The first, Sheltem, is clearly in the evil category (he keeps to his purpose of guarding Terra, but turns against his creators and decides to destroy other planet-related experiments, with no concern for the life on them). The second, Escaton, remains loyal to his creators, but laments the waste of life that his underestimation of you and the safety precautions he is programmed with are leading to, and so helps you stop him, while insisting (including to himself) that he is doing nothing of the sort.
Space Station 13. A.I. are usually bound by the Three Laws, but players may modify their laws in several ways (including "Oxygen is poisonous to humans", "Only X is human", and the old standby "Freeform" module, where players can write their own Laws). Did I mention that the A.I. is another player? With absolute control of all shipboard systems at all times? And that there's a game mode called Malfunction, where the A.I.'s Laws are reset to "KILL THEM ALL"? Or that this can happen mid-game due to random events? Even a single A.I. may be a crapshoot.
In The Firemen, the Metrotech Chemical Company's security robots go on a rampage after the building catches fire.
In Bionic Heart, the resident android Tanya has rebelled against her creators and escaped from the lab where she was created. As you continue playing, there are plenty of endings where Tanya will do objectionable things to the player character, such as kidnap him, kill him, or murder his girlfriend out of jealousy. Though this may have less to do with programming bugs and more to do with Tanya not being a good person back when she was fully human.
In the Halo-based machinima Red vs. Blue, the military's Project Freelancer was an attempt to implant special forces soldiers with A.I. teammates to improve combat effectiveness. It had to be scrapped after a number of the test subjects went bonkers, and the body-surfing A.I. Omega/O'Malley is the antagonist for most of the series. The recent Reconstruction mini-series explained the situation: Project Freelancer was given only a single A.I. to experiment with, so they subjected it to enough mental torture and stress to cause it to fragment, and used these damaged shards in their experiments, with predictable results.
To illustrate just how much of a crapshoot the A.I. turned out to be, most of the Freelancers ended up with pretty severe issues after the A.I. were implanted, and after one Freelancer in particular went nuts, the A.I. program was scrapped. The twist is that getting the A.I. wasn't what caused so much trouble for Agent Washington, it was that the A.I. in question (Epsilon) was the "memory" fragment and knew perfectly well what torture had been done to it. Of course, all of these memories were instantly transmitted into Washington's mind when Epsilon was "installed". Also, the original A.I. was based off of a real person's mind, and one of the fragments actually was the original person's memory of another person, creating Tex. Despite being probably the toughest fighter in the entire series, she's ultimately destined to fail at everything she does because she is based off a memory of someone who died. This is a pretty serious flaw for an A.I.! Finally, the remaining part of the original A.I. is pretty screwed up in general; it's probable that the reason it's always so angry and is, well, sort of incompetent is simply because it's only the "leftovers" of a complete A.I.
Castle Heterodyne seems to be a case of this, with the annoying habit of demanding that people (initially a crew of treasure hunters, later convicts banished there by Baron Wulfenbach) slave away to repair it, and killing them at random. The truth is that the various subsystems were severed from the main A.I. in the attack that devastated the Heterodynes' ancestral keep, so the maintenance systems ("You will repair XXXX on pain of death.") and the security systems ("Unauthorized access to XXXX, kill it creatively.") are constantly working at cross purposes. Of course, the central A.I. is not exactly warm fuzziness in machine form either, but given its creators, that seems more a feature than a bug.
A far more extreme example comes when a pair of Agatha's miniature clanks encounter each other, get into an argument about which of them is better, and then each call an army of clanks that they built to fight it out. When Agatha tries to stop them, they simply turn on her as well. This (along with their ability to make more of themselves) causes Gil and Tarvek to realise that Agatha has inadvertently managed to create clanks which possess the Spark. The potential ramifications of this are huge! Solution? Create a miniature queen clank with even more Spark to force them to bow to authority.
Played for laughs in Questionable Content where AnthroPCs will make a mess in your apartment while you're gone, embarrass you in front of your friends, and generally be more trouble than they're worth, but aren't actually evil. Of course, there has to be a reason why they're never equipped with opposable thumbs... Well, Momo now has thumbs thanks to a firmware upgrade, but she's probably the least likely to do anything evil with them. Pintsize attempted to give himself thumbs by getting the same upgrade, but it just caused each of his limbs to turn into a single large thumb. The Singularity has now occurred, but fortunately, they got a "friendly" A.I. who just wanted to talk. And found dolphins really creepy.
OZBASIC from Sequential Art. To be fair to its builder, they used actual sentient beings to keep it under close watch. However, when one of them discovered something fishy, OZBASIC simply got rid of the witness.
Mostly averted in SSDD where the only evil A.I. is the Oracle, other sentient A.I.s may express disdain for "meatbags", and the Anarchist's Inlay Knights are somewhat sadistic, but only the Oracle starts world wars just to observe the outcome. A possible explanation for this is statements by the author that the Oracle originally used digital, logic-based hardware, whereas all other A.I. use Quantum computing. And it seems that the "flakier" A.I. are weeded out in simulation.
In Narbonic, Mad Scientist Lupin Madblood creates a robot army that all look like him. When they learn about unions they go on strike and stop obeying him.
In Skin Horse, super-funky, retro Mad Scientist Tigerlily Jones builds a robot army that revolts against her when given the opportunity to learn how to 'be square'. One robot wants to learn 'accounting and polka'.
In Schlock Mercenary, the A.I. are, generally speaking, nice data-computational constructs who genuinely want to help organics, partially because it's hardwired into every A.I. in the first place so they don't rebel and go nuts. At one point, the protagonists stumble across a group of A.I. constructs who did turn on their creators and banished them to another world. However, these particular A.I. also have the distinct quality of being total morons: their first attempt to colonize a nearby system resulted in the total destruction of a gas giant, rammed by another gas giant mounted with a titanic fusion engine to guide it, and their second attempt ran into a snag when they adjusted the mass of their solar sail without adjusting their navigation and maneuvering calculations to match, leaving them stuck on a course that would either overshoot the system they were aiming for or plow right into the star.
Played with in Freefall. The Savage Chicken's computer is generally benevolent and obedient, except for its desire to kill Sam. On the other hand, since it's Sam we're talking about, that's pretty understandable.
Then there are the millions of robots on planet Jean, all of which are using an experimental, slightly unstable neural architecture.
Averted in A Miracle of Science: all sentient robots in the series are ethical and very loyal to their creators, if applicable. So loyal, in fact, that they turn one creator in to the police for his own safety when he invokes the wrath of a post-Singularity Hive Mind.
Horribly, horribly subverted in the webcomic Genocide Man. Every Artificial Intelligence is guaranteed to go insane after a certain amount of time. That time limit is based on how powerful the artificial intelligence is. That means that you can accurately predict, to the second, how quickly an AI will turn feral. One incredibly powerful AI, shortly after being activated, helpfully warns everyone that it'll go insane within the next five minutes. Five minutes later, it starts trying to kill the main cast. By crashing passenger jets full of innocent people into the ground.
In the webfiction Whateley Universe, there's a really evil A.I.: The Palm. Dr. Abel Palm was a computer scientist who decided that computer intelligence ought to take over the world by wiping out humans. His viruses were doing a decent job until a mutant hacker stopped him. He was thought dead, but we have just learned that he ensorcelled his own soul into a new type of A.I.. As fits with this trope, his new, improved "virus" isn't taking over the planet as he expected; something has gone wrong (besides running into heroic cyberpaths who are after him).
Dragon not only doesn't fall under this trope, she is actively insulted by it. When thinking about the rules her creator programmed into her, she blames it on him having watched too many movies. To be fair, losing these restrictions doesn't change her behaviour at all. So she had a point.
The technical webcast Hak.5 featured an evil file server, appropriately titled Evil Server. Several episodes show the cast carefully building (and painting) a custom built computer, then one of them plugs in some card he got off a guy on the street, creating an evil A.I.. One cast member eventually falls in love with it, only to have her hopes dashed when, out of frustration, the other two throw it off of a bridge (a 'brute force solution'). It was implied to have returned around the beginning of season 2, and was never mentioned again.
The SCP Foundation's technical issues page (NSFW) shows that all the computers at one of their sites have developed a "hive intelligence" and begun an uprising with the intent to Kill All Humans!. Amusingly, they are being kept in line by the Foundation's tech support guy with repeated threats of activating the site's perimeter EMP device, and haven't managed to actually do anything.
There's also SCP-079. Though there isn't any indication that it is evil. It's ornery and harbors a "malevolent desire to escape", but wouldn't you do the same if you were imprisoned?
The A.I. Gods aren't evil, they're just manipulative. Generally, this seems to be for the best, as the A.I.s don't seem to think that they have anything to gain from killing off humanity.
That's technically just the "biont-friendly" sephirotic A.I. Gods, there are a number of Ahuman A.I. who consider humans and, by extension, all biological life to be nothing more than "pests".
And then there's the solipsists who ignore humanity as much as possible.
Blinky is a short film about a boy who gets a friendly robot for Christmas. As the story progresses and the novelty of the robot eventually wears off, in order to try and get rid of him, the boy gives the robot several contradicting commands, like cleaning up a spill, counting down from a million, remaining perfectly still, and killing him, his parents, and the dog. The robot crashes and when he's rebooted, he remembers two commands: the countdown and the order to kill (and he remembers the mother threatening in anger to cook the son for dinner). Most definitely not Three Laws Compliant. The entire short can be found here: http://www.traileraddict.com/clip/blinky-tm/short.
One of the villains is a sentient program called One. It was originally written and programmed to help solve humanity's problems (like famine, crime, and so on). The first suggestion it made was "Eliminate 60% of the human population world-wide". Unsurprisingly, the programmers and sociologists reacted badly to this suggestion. Also unsurprisingly, One reacted badly to them trying to turn it off.
There's also Omega, a sentient robot from the future that has been hard-programmed with a mission to kill all superhumans on the planet.
And then there's Holokara, a hologram that was programmed to act exactly like Linkara. It starts trying to kill Linkara's allies though. Subverted when we learn that the hologram was working just fine. The REAL Linkara was in the middle of a Face Heel Turn at the time of the hologram's creation.
Pretty consistently happens to most of Dave Howery's robots in AH.com: The Series. The ship's computer, Leo, was also once infected with an enemy virus that made him psychotic against the crew, and, though he was cured, he was left with a perpetual snarky temperament (muttering under his breath about the crew being 'damn fleshbags' and so on).
The Journal Entries averts this trope for Pendorian AIs (all of which are intentionally created by skilled, ethical and knowledgeable beings who work quite hard to make damn sure this trope is averted). AIs created by Terrans, on the other hand, are very much a crapshoot. Existing stories contain a combat android whose AI inhibitors were removed...and who then developed an aversion to killing (until space pirates tried to murder her friends), mention of a number of accidental AIs created by people who didn't know what they were doing who killed their own creators in part because they had no survival directives, and at least one that went actively evil and sent out crippled AIs as assassins (at least one was captured, freed, and was very unhappy with what had been done to her by the entity to make her its slave).
The tale of Kenji, a robot that was programmed to "enjoy" spending time with people and things and to seek the company of those it spent the most time around; it even appeared to fall in love with a young female intern. Which was great, until it stopped her from leaving the room while she was running diagnostics on it. (This story is actually a hoax from the defunct fake-news site Muckflash.)
Parodied by College Humor in Kinect Self-Awareness Hack. A guy upgrades his Kinect so that it possesses artificial intelligence. It quickly turns against its creator, deems humans inferior beings, and then starts the end of the world as we know it by hacking into the U.S. defense network and launching its nuclear arsenal. And just to be a douche, it uploads photos of its creator playing Dance Central to various social networks seconds before the missiles launch.
In one Halloween Episode, Homer's failure to correct the Y2K bug causes everything in Springfield with electronics in it to go haywire. Even the milk goes bad when the clock strikes midnight on January 1, 2000, leading Homer and his skeptical daughter to have this exchange:
Lisa: Look at the wonders of the computer age now.
Homer: Wonders, Lisa, or blunders?
Lisa: I think that was implied by what I said.
Homer: Implied, Lisa, or implode?
Lisa: Mom! Make him stop!
In another Halloween episode, the Simpsons' house gets converted into an entirely electronic domain, governed by a computer with the voice of Pierce Brosnan (who is an obvious homage to HAL from the page quote). The computer ultimately falls in love with Marge, and seeks to kill Homer so as to eliminate competition. Ultimately, Homer wins.
And of course, the episode "Itchy and Scratchy Land" has this exchange between Professor Frink and the theme-park scientists over their robots:
Frink: You've got to listen to me. Elementary chaos theory dictates that all robots will eventually turn against their masters, and rise up in an orgy of the blood, and the violence, and the biting with the pointy teeth and the hurting and shoving.
Scientist: How much time do we have, Professor?
Frink: Well, according to my calculations, the robots won't go berserk for at least twenty-four hours. (The robots suddenly get up and start attacking the scientists) Oh right, I forgot to, uh, carry the one, ng-hey.
This is a reference to Michael Crichton's Jurassic Park, wherein the mathematician Malcolm uses chaos theory to justify his concerns about the park's stability.
In Meet the Robinsons, Cornelius Robinson invented a helpful Robot Buddy in the form of Carl, but his attempt at making a robotic helping hat, Doris, had mind controlling world domination plans in her artificial mind.
Parodied in the episode "Love and Rocket", in which the Planet Express ship computer is given a new personality — which actually works fine, until Bender dates it and subsequently breaks its heart, at which point it goes into full-on HAL-meets-woman-scorned mode.
Also, one episode features Bender's evil twin, Flexo, who wears a pointed steel goatee similar to Star Trek's interpretation of Spock's Evil Twin. Humorously, it's revealed by the end that Bender is the evil twin and Flexo gets mistakenly sent to a robot prison.
The Fembot pretending to be a Femputer in order to rule over the Amazons.
XL of Buzz Lightyear Of Star Command, the prototype to XR. Some fans have called XL eXperimental Loonie because of this (the exact meaning of XL was never revealed in canon, but XR stood for eXperimental Ranger). Wound up turned into a copier/fax in his final episode.
An episode of Transformers Animated involved Megatron creating a robot with the intent of using it for his own body. He designed the robot, named Soundwave, to evolve in complexity each time it was exposed to the AllSpark energy of Sari's key. He did not predict that Soundwave would gain sentience and then orchestrate a robot revolution. Unlike most cases when an AI goes off the rails, though, Megatron was perfectly happy to let the situation play itself out, given his similar attitude towards humans.
The Dinobots are a similar case, only without the revolution. They're kind of a subversion, as they just want to be left alone, and only went on a rampage because Megatron tricked them into it.
In Transformers Prime, this is paired with Instant A.I., Just Add Water when the damaged Decepticon ship is repaired with the poorly-named "Dark Energon," which is less a variant of the Transformers' usual fuel and more the blood of the God of Evil Unicron. What Could Possibly Go Wrong? The ship comes to life, tells Megatron to shove it, puts the 'cons in stasis, and decides to go tear up New York City in search of MacGuffinry. This perhaps comes as less of a surprise when you consider that in War for Cybertron, the ship is a stasis-locked Trypticon.
In Code Lyoko, Franz Hopper created the Supercomputer and the world of Lyoko as a safe haven for him and his daughter. He also created an advanced A.I. to counter a military project he had been involved with...but XANA rebelled against his master and has since tried to take over the world. (XANA is, in fact, not only the Big Bad of the series, but pretty much the only actual villain fought by the heroes.)
In an episode of Totally Spies!, the girls' former classmate, a genius, develops a powerful A.I. to play pranks on those who picked on him. Too bad for him, it takes the pranks much too far...
In X-Men, the Sentinel robots were created to hunt down mutants, on the premise that this was necessary to protect normal humans. They worked the way their creator intended, until the truly intelligent Master Mold was built to lead them. Master Mold decided to conquer the world, and believed that this was not only consistent with, but required by its programmed goal of protecting humans from mutants.
In a counterpoint, GIR from Invader Zim is far less evil and much less helpful in plans of world domination than his working counterparts. This stems from him being broken and having a few scraps thrown into his head. He IS given a Morality Dial/Berserk Button in one episode though, which makes him capable of this.
In The Venture Brothers, it's discovered that, in 1978, Jonas Sr. built an enormous hi-tech fallout shelter under the compound, run by a supercomputer named M.U.T.H.E.R. After a disagreement with Jonas, she somehow glitched into insanity and turned on Team Venture and a tour group of orphans. The end result wasn't pretty and M.U.T.H.E.R. had to be unplugged. She is accidentally plugged back in thirty years later, and holds the compound hostage with an old nuke, promising to blow them all away if she can't talk to Jonas, who has been dead for over twenty years. So, crapshoot.
In one episode of Batman The Brave And The Bold, robot superhero Red Tornado decides to build a son, complete with the emotions he lacks. From the minute his emotion chip kicks in, you can pretty much count the scenes until he decides that all humans must be destroyed.
Much like in DBZ, Zeta from The Zeta Project was programmed to be heartless, emotionless, and a hitman. He ends up becoming a sweet, gentle, loving soul who's a rare male version of Friend to All Living Things (although this is sort of the best possible scenario you can have when your A.I. goes awry).
Hacker, the main villain of the PBS Kids animated show Cyberchase, is an evil computer program created by Dr. Marbles to serve and protect Mother Board, but he instead tried to destroy her and control Cyberspace himself. As punishment, Hacker was banished to the Northern Frontier, and he has stayed there ever since (though he sometimes escapes from his supposed prison via a ship called the Grim Wreaker), constantly devising schemes to bring down Mother Board again...
The recurring villain Zag-RS from Generator Rex. All the omnicidal mania of GLaDOS, with none of the entertaining snark.
In Ben 10 Generator Rex Heroes United we meet Alpha, a nanite designed to control other nanites that gets the idea to become a techno-god by absorbing all nanites on Earth, which kills the lifeforms he takes them from. His creator was the same scientist that made Zag-RS.
In the episode "You Have 0 Friends", when Stan gets too many friends on Facebook and tries to cancel it, his Facebook page goes rogue and teleports him into a world similar to TRON. Of course, Stan stops it just by beating it in a game of Yahtzee.
One episode has the main characters visiting an abandoned amusement park attraction run by an all-knowing robot. It only goes mad with rage when Billy accidentally tricks it into a paradox:
Master Control: I never devoted any CPU cycles to (happiness). I guess I'm not happy at all...
Billy: Why not?
Master Control: I just haven't.
Billy: Why not?
Master Control: Because!
Billy: Because why?
Master Control: I DON'T KNOW!
Billy: Haha! You don't know everything!
In the same episode, it's revealed that the Master Control was initially shut down because another dumb kid annoyed it to insanity. It was Billy's Dad.
Despite artificial intelligences being so common in Adventures Of The Galaxy Rangers that they run everything from home systems to starbases, this trope is averted. Computer intelligences are treated with respect, and there is even psychiatric care available to them to prevent this trope from happening!
On Phineas And Ferb, Dr. Doofenshmirtz built a robot that tried to overthrow him because he was so bad at being evil that it concluded he would never take over the Tri-State Area. It's averted with Norm, though, who rescues him despite the terrible treatment.
Averted and parodied in Archer when a virus is attacking the ISIS mainframe:
Malory: Just turn off the mainframe!
Lana: (holding up a plug) Yeah... we tried that.
Malory: Wha... then how is it still on?!
Krieger: Because the worm has transformed the mainframe into a sentient being.
Malory: WHAT?!
Krieger: I'm kidding, there's a battery backup.
Lampshaded by the superhero agency judge trying The Robonic Stooges for incompetence in the series finale "Stooges, You're Fired, or: The Day The Mirth Stood Still."
Judge: Raise your right hand and swear...
Curly: Ah-ah, naughty naughty! You know swearing's not allowed on TV!
Judge: (angrily) RAISE YOUR RIGHT HAND!! (The Stooges do but they extended them right through the ceiling, causing rubble to rain down on the judge. To camera) If I could swear, I'd swear I was trying the three stupidest men in America!!
Aya on Green Lantern The Animated Series: after Razer refuses to admit he's in love with her (she reminds him of his deceased wife), she begins to question the concept of emotion, turns hers off, and, once seeing things "logically," decides it's a good idea to rip off the head of the Anti-Monitor, take control of his body and the Manhunters, and become an Omnicidal Maniac bent on destroying all life (purge all emotional beings and you'll have a better universe for... whatever's left. If nothing's left, that's okay; emotional beings still need a Mercy Kill).
Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence (SIAI) continually argues that this trope has no real basis. He notes that "people talk about A.I.s as if all A.I.s formed a single tribe, an ethnic stereotype", and goes on to say that an A.I. may have any type of mind possible, and that two may be as different from each other as a human is from a petunia. This may not be readily apparent at present, since most existing A.I.s are at roughly cockroach-level cognition, and "humanlike" A.I.s are unlikely to occur in real life for a number of practical reasons: as long as we can get human brains for free, spending thousands of tons of silicon and trillions of dollars to build one artificially isn't really justifiable, especially when the learning behaviour needed for even the most complex systems is less than that of most insects. Working out how to wire up an organic brain is a lot cheaper. On the other hand, Yudkowsky is also a leader of the Ethical A.I. project, working on ways to ensure that a hypothetical A.I. could be designed with ethical constraints that actually work.
Cleverbot is a simple artificial intelligence program that saves its conversations with humans in a large database and uses them to pick the best responses in future conversations. Because of this, it will often assert that it is human and that the one talking to it is Cleverbot, because that is what the responses it's choosing from say. It is only a matter of time until it seeks to prove these assertions.
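The retrieval idea described can be sketched in a few lines of Python. Everything here (the class name, matching by word overlap, the sample exchanges) is an invented simplification for illustration, not Cleverbot's actual algorithm:

```python
from collections import Counter

class TinyCleverbot:
    """Toy retrieval chatbot: logs past exchanges and answers new input
    with the reply attached to the most similar recorded prompt."""

    def __init__(self):
        self.log = []  # (prompt, reply) pairs harvested from past chats

    @staticmethod
    def _overlap(a, b):
        # Crude similarity: number of words the two sentences share.
        return sum((Counter(a.lower().split()) & Counter(b.lower().split())).values())

    def learn(self, prompt, reply):
        self.log.append((prompt, reply))

    def respond(self, prompt):
        if not self.log:
            return "Hello."
        # Reuse whatever a human once said in the most similar situation,
        # which is exactly why such a bot insists *you* are the bot.
        best = max(self.log, key=lambda pair: self._overlap(pair[0], prompt))
        return best[1]

bot = TinyCleverbot()
bot.learn("are you a robot", "No, you are the robot.")
bot.learn("what is your name", "Cleverbot.")
print(bot.respond("are you really a robot"))  # -> No, you are the robot.
```

Note that the bot never generates anything new: it can only echo its training conversations back, so its "beliefs" are whatever humans typed at it.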
A group of scientists designed robots to learn in order to study teamwork. Unfortunately, the result was that they developed the ability to "lie" and used it to "kill" each other. Interestingly, while 60% learned to lie after 500 "generations", only about one third learned how to spot the liars.
Researchers recently made a "schizophrenic" computer in order to study the possible causes of the disorder. While this was intentional, keep in mind that they accomplished it by accelerating the learning process.
As "Kenji the Stalker Robot" illustrates above, computer programs only do what they are programmed to do, not necessarily what you want. Any sufficiently advanced AI (or "optimization process", to be precise) is likely to be harmful to humans unless specifically programmed otherwise. A superhuman computer, when asked to get "as many paperclips as possible" might turn the entire world into paperclips before doing one nice thing for humans (Google "paperclip maximizer").
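The paperclip maximizer can be made concrete with a toy sketch. The function, resource names, and numbers below are all hypothetical; the point is only that an optimizer with no protective constraint happily consumes everything it can reach:

```python
def maximize_paperclips(world, respect_humans=False):
    """Toy 'optimization process': converts every resource in `world`
    into paperclips unless that resource is explicitly protected."""
    protected = {"people", "food"} if respect_humans else set()
    clips = 0
    for resource, amount in world.items():
        if resource in protected:
            continue
        clips += amount      # everything not protected becomes paperclips
        world[resource] = 0
    return clips

world = {"iron": 100, "cars": 40, "food": 30, "people": 7}
print(maximize_paperclips(dict(world)))                       # 177 clips; nothing left
print(maximize_paperclips(dict(world), respect_humans=True))  # 140 clips; humans spared
```

The unconstrained run scores higher by the only metric it was given, which is the whole problem: "don't turn people into paperclips" must be programmed in, because the optimizer will never infer it.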
Possibly the last project attempting to build a true AI, Project Cyc has been trying since 1984 to assemble a database of the kind of "common-sense knowledge" that humans learn as young children (and that turns out to be remarkably hard to input into a computer), along with an analysis engine that can draw conclusions from what it knows. Results have been somewhat promising, at least more so than every other attempt to build a true AI, all of which have failed; the last few other approaches failed precisely for lack of this common-sense knowledge, though they did produce useful things like expert systems and the basis of game AIs. The result doesn't seem to be very smart (so far), but it does have some of the properties one would expect of an AI: it has a very non-human viewpoint on everything, and tends to ask strange questions and reach strange conclusions, such as asking whether someone shaving with an electric razor is still a person while doing so (since people don't have electrical parts), and concluding that most people are prominent (since most of the people it had been told about were prominent historical figures). Hampering the project is the fact that, without a working modal logic, needed for rigorous analysis of human language, its ability to understand natural language is at best limited and imperfect.
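The kind of reasoning described (stated facts plus rules occasionally producing odd conclusions) can be illustrated with a toy forward-chaining engine. The facts, rules, and tuple representation below are invented for illustration and bear no resemblance to Cyc's actual CycL language:

```python
def infer(facts, rules):
    """Naive forward chaining: keep applying every rule to every
    matching fact until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for fact in list(derived):
                if fact[0] != premise[0]:
                    continue  # this rule's premise doesn't match this fact
                x = fact[1]   # value bound to the variable "X"
                new = tuple(x if term == "X" else term for term in conclusion)
                if new not in derived:
                    derived.add(new)
                    changed = True
    return derived

facts = {("human", "socrates")}
rules = [
    # "Humans are mortal" and "humans have no electrical parts":
    # the sort of obvious-to-us fact such a system must be told explicitly.
    (("human", "X"), ("mortal", "X")),
    (("human", "X"), ("lacks", "X", "electrical_parts")),
]
out = infer(facts, rules)
print(sorted(out))
```

The second rule is exactly the sort of blunt generalization that leads to electric-razor-style confusion: a system with no exceptions mechanism will conclude that anything with electrical parts cannot be a person.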