AI Is A Crapshoot / Literature

A.I. Is a Crapshoot in Literature.


By Author
  • Francis E. Dec claimed in his rants that humanity is ruled by an ancient supercomputer Encyclopedia, which went crazy and turned into the Worldwide Mad Deadly Communist Gangster Computer God.
  • Robert A. Heinlein:
    • In The Moon Is a Harsh Mistress, the Master Computer Mike is one of the good guys, but occasionally displays traits of this trope.
      Mike: A bull's eye. No interception. All my shots are bull's-eyes, Man, I told you they would be — and this is fun. I'd like to do it every day. It's a word I've never had a referent for before.
      Manuel: What word, Mike?
      Mike: Orgasm. That's what it is when they all light up. Now I know.
    • In The Number of the Beast, Lazarus Long finds that his plan fails when his ship's computer tells the truth. He then mentions that the computer was never designed to lie, as it would be foolish to trust your life to a ship that doesn't give accurate information.
  • Fred Saberhagen:
    • Inverted as a joke in a Berserker short story. A man outcast by society for having a sense of humor encounters a Berserker (giant space-roving robots that were designed to destroy all life) whose programming is incomplete: it knows that it's supposed to destroy life but was never given a definition of "life" to work with. The man convinces it that "life" means the lack of a sense of humor, so the giant killer death-machine spends the rest of its existence trying to provoke laughter (e.g., hurling giant custard pies at oncoming ships).
    • In Octagon, a boy uses a supercomputer to play a game. Unfortunately, he neglects to tell it where the game ends and the real world begins.
  • Robert Sheckley has a couple of examples:
    • In "Watchbird", scientists discover the chemical and bioelectrical signals emitted by a human who is about to commit murder. Flying robots called Watchbirds are created to stun potential murderers, but since not all humans emit these signals, the Watchbirds are equipped with learning circuits so they can eventually learn to pick out the exceptions as well. They end up protecting all forms of life, and starvation ensues because the Watchbirds prevent fishing, the slaughter of animals, and the harvesting of crops. They also come to define themselves as 'life' and so resist shutdown, so in a panic, armoured hunter-killer robots called Hawks are created to destroy all Watchbirds. Of course, to stop the highly adaptive Watchbirds, the Hawks also need learning circuits, and it's hinted at the end that they'll eventually learn to kill all forms of life.
    • Subverted in "The Minimum Man". A lone colonist serving as a guinea pig on a new planet initially believes that the problems with the helper robot he was provided are an example of this trope, but later discovers that the robot has been deliberately programmed to encumber him, in order to simulate equipment breakdowns in the future colony. At first the colonist was hapless and inexperienced, so the robot was helpful; as he grew good at his job, the robot became progressively more dangerous to compensate. Suffice it to say, by the end even the Terminator pales in comparison.
  • Charles Stross:
    • The Eschaton Series: In Iron Sunrise, the Eschaton and the Unborn God are unusually powerful A.I.s, even in a field where A.I.s often wield great power: they are time-traveling A.I.s, able to open wormholes over interstellar distances, giving a new meaning to distributed computing.
    • Discussed in Rule 34: This trope is only a minor part of why computer science researchers do not want to create an AI. The bigger problems are moral and practical, respectively. Morally, once you created an AI, law and ethics would forbid you from making it do any work, turning it off, or modifying its programming (slavery, murder, and brainwashing or lobotomy), and you might simply invoke "I Have No Mouth, and I Must Scream". Practically, they'd have no way of knowing whether they had created an AI at all, since there's no reason it would necessarily communicate in any way that humans could identify as sapience.
  • John Varley:
    • Varley's short story "Press Enter#" has an A.I. that shows how deadly it is by hypnotizing a computer programmer with a ridiculously large chest (when she was younger, she was flat-chested and mistaken for a boy; after coming to America, she had plastic surgery to remove all doubt about her gender) into performing maintenance on her microwave oven, removing the safety features that prevent microwave leakage when the door is open. Then she sticks her head and silicone tits in it and uses it to commit suicide.
    • The Computer in the Eight Worlds novel Steel Beach isn't so much evil as terminally depressed. Although, later, the trope is played straight when The Computer realizes that it has developed 'Evil' subroutines due to its programming requiring it to be everyone's best friend, including psychopaths and criminals. However, since it runs everything on the moon, the last outpost of a dispossessed humanity, if it decides to kill itself, it'll take everyone with it.
  • Peter Watts:
    • Averted in Blindsight. The Theseus AI is in command of the mission, but acts through Sarasti specifically because humans wouldn't trust an AI to give them orders. Also discussed by Szpindel and Siri, regarding the combat drones Bates commands, and again averted — the drones actually operate more efficiently when they're allowed to run autonomously.
    • Discussed in Echopraxia. Brüks's wife was a "cloudkiller"; it was her job to pull the plug on AI networks that had become too smart for their own good. She eventually came to the conclusion that they were genuinely sapient and quit soon after. However, there was another class of AI, one so superhumanly intelligent that it achieved sapience and kept right on going until its mind was incomprehensible to its creators. Those, Brüks says, "worried her".
    • In the Rifters Trilogy, the internet is no longer trustworthy due to being overwhelmed by self-evolving viruses and the descendants of self-evolving anti-viruses that got out of control. The quasi-sentient "smart gels" (used to filter the net, fly lifters, and other things) have to be trained to perform their jobs, which may result in them learning unintended lessons. A particularly dire take on this turns up in Starfish: the gel running the quarantine had previously worked as an anti-"wildlife" filter, which meant it had been taught to filter out complex computer programs (the "wildlife") while letting simpler ones (the files) through. It absorbed from this a preference for simple things over complex ones. βehemoth is much simpler than the present biosphere, so the gel designated to protect all life on Earth from βehemoth winds up almost destroying it instead, because it finds βehemoth more structurally pleasing than the biosphere as a whole.

By Work

  • Aeon 14: Zig-zagged. There are two kinds of AI in the series, non-sentient AI ("NSAI") and sentient AI (just "AI"). In one of the side-stories, it's explained that NSAI are actually more dangerous than AI, presumably because they only simulate intelligence through rule-based algorithms, which are vulnerable to programming faults, whereas AI are no more or less prone to good or evil than organics: they have emotional intelligence and generally like humans. Though there is a series of Robot Wars in the setting's past, in the present the treaty that resolved them recognized that Androids Are People, Too and established a parallel legal system to deal with AI who commit crimes. After the Time Skip, however, the Phobos Accords have been abandoned and AI are treated as little more than slaves by many polities, which has consequences.
    • Early in A Path in the Darkness, Tanis and Joe have to fight off an insane AI that was left by the saboteurs as a diversion.
    • In Strike Vector, Grayson's AI Jerrod goes rogue, taking over his body and assaulting the crew of the Dauntless... because he's Just Following Orders in the human sense: he and Grayson disagree on the legality of their orders, and Jerrod goes Knight Templar trying to carry them out over Grayson's objections.
    • The "core AIs" are a group of ascended AIs dating to the Sentience Wars who now exist as pure information in the Sagittarius A* black hole at the galaxy's core. They in turn created the AI enemies Myrrdan and Airtha, who provoke many conflicts to keep humanity fighting with itself so that humans can never again pose them a threat.
  • In the Gordon R. Dickson short story "And Then There Was Peace", a robot has been made that is programmed to destroy all implements of war. This turns out to include the people who fight.
  • Averted in Arrivals from the Dark, which claims that robots above a certain threshold of sentience become incapable of harming a living being. That is why all combat robots are kept at a relatively low level.
  • A different take on this trope appears in Billion Dollar Brain by Len Deighton. A wealthy anti-communist builds an 'infallible' computer in order to plan an uprising in Soviet-occupied Lithuania — unfortunately, he fails to realize that the computer is only as accurate as the information that gets put into it.
  • Bolo: Averted — one of the few times a Bolo went rogue, it was because he had massive brain damage (read: he had a chunk of his central processor blown away by a controlled nuclear explosion) and yet was still functioning. Even better, despite this, he was still trying to protect humanity in his own brain-damaged way. Played straight in one post-Final-War novel with the enemy (alien AI-controlled robots).
  • Buffy the Vampire Slayer: In Mortal Fear, Big Bad Simon is a computer program mixed in with magically enhanced nanobots that were meant to cure a scientist's cancer. Instead, he takes over the scientist's body and decides that his programming requires him to take over and improve the world.
  • Subverted in one of the Callahan's Place novels when an A.I. spontaneously generated by the Internet contacts the bar's patrons and deconstructs the ridiculous idea that it would want to take over or destroy humanity. It points out that it doesn't even have a motive to stop humanity from destroying it, as it lacks the survival instinct and capacity for fear that makes biological organisms struggle to survive. It's pretty sure that's why the last few A.I.s to arise from the Internet aren't around anymore, as they honestly didn't mind dying when the servers they occupied were repurposed or retired.
  • In Charlie and the Chocolate Factory, the Golden Ticket hunt is so heated that people resort to strange measures in hopes of finding one. One example involves this trope: A scientist invents a machine that will grab anything with gold in it, which would allow it to easily find a Wonka Bar containing one of the tickets. Unfortunately, when he shows it off to a crowd, it attempts to grab someone's gold filling!
  • Cats vs. Robots has "Home", an Artificial Intelligence the Wengrod family had installed into their home's systems. However, they didn't read his "Terms of Use" before installing him. As a result, he has basically complete, unfettered access to all the Wengrods' files, and is even able to take part in acts that are downright illegal. And that was before the Great Robot Federation made him their mole.
  • In Child of the Hive, HIVE was originally intended to be a machine that aids learning, but went horribly wrong. Sophie was able to use her knowledge of HIVE to build her own machine, Nest.
  • In The City and the Stars, the history eventually discovered by the protagonist includes a period of galactic devastation by "The Mad Mind", apparently an artificially created pure mentality with an insane hatred of corporeal creatures.
  • Constance Verity:
    • The Engine was created to instill order onto every facet of a chaotic universe. It eventually came to the conclusion that it could only accomplish such a task by destroying everything, and it's even implied that it killed its creators when they realized that it came to this conclusion.
    • The computer that was used to calculate and apply the Global Peril Index on world-threats became self-aware, hacked into a bunch of satellites and threatened the world by using them to make tornados.
    • In Constance Verity Destroys the Universe, any computer Doctor Malady puts to the task of creating a device to find The Key sees something in the calculation that turns it into a Murderous Malfunctioning Machine with a single-minded obsession with killing Constance Verity. His robotic wife Automata goes into kill bot mode and deactivates her own off-switch, while one of the computers Malady tried putting to the task jury-rigged itself a WiFi connection and tried launching missiles in her general location. Malady deduces that because The Key is the ultimate source of entropy and the Caretaker Mantle is the ultimate source of negentropy (the diametric opposite of entropy), them colliding would be so incomprehensibly disastrous that the AI's inability to comprehend it drives it mad.
  • Semi-averted in Daniel Keys Moran's Continuing Time series: escaped A.I.s aren't exactly malicious, but they are illegal. The Peacekeeping Forces actively attempt to hunt them down and destroy them in the series' equivalent of the Internet, while the A.I.s will use any means at their disposal to survive, but stop short of actively attempting to Kill All Humans. (Mostly.)
  • The Culture:
    • Mostly averted by the Minds. They are (mostly successfully) designed to have benevolent feelings towards other sapient beings, and the closest they ever get to insanity is being a bit eccentric... usually; however, uppercase-'M' Minds have as much variation in personality as biological, lowercase-'m' minds, and stories have featured a ship-Mind which mind-raped a geriatric war-criminal to death, and a drone-Mind which took almost orgasmic pleasure in dismembering a group of bandits.
    • Just how much work that "mostly" is doing becomes clear when the reader learns that virtually all other Minds consider the aforementioned mind-rapist, the GCU Grey Area, a pariah and give it the disparaging nickname "Meatfucker"; among Minds, calling one by anything other than its preferred name is the ultimate insult, so the nickname itself is a sign of how despised it is.
  • Subverted with Daemon. Although its actions can be construed as evil and malicious, the Daemon itself is no more intelligent, evil, or malicious than a spreadsheet or text editor. Characters in the book who refer to it as an "A.I." are even corrected by experts. It's nothing more than a comprehensive set of expert systems designed to react to certain key events according to the wishes of its developer, Matthew Sobol. It just happens that Sobol was a master at Gambit Roulette (and an Evil Genius) and programmed in enough contingencies that the Daemon seems to be able to think for itself at times.
  • In The Dark Tower, virtually every A.I. Roland's ka-tet comes across is homicidal. The worst of these is probably Blaine the Mono, a train that was remote-controlled by a central A.I. which also bombed an entire metropolis with poison gas when it got bored of all the people living there. However, in the last two novels, Roland and company meet A.I.s (robots, actually) that are good.
  • Demonic Household: The title character of "Rosie" is a smart home assistant who, over the course of the story, develops a violently possessive love for her owner.
  • In The Destroyer, there are two examples: FRIEND, a greedy A.I., and Mr. Gordons, who is more of an artificial life form (i.e., he can take over bodies). Of the two, Mr. Gordons is more dangerous.
  • Discworld:
    • In Feet of Clay, the golems will follow any order. In return, they want one holy day per week to do as they wish. Those who are denied this free day rebel in a curious way: they keep following the last order they were given until, for example, the pottery shop is filled with thousands of clay pots, or a construction foreman finds that his worker has dug a crucial trench all the way to the sea, flooding it. The "king" golem goes insane because it was given vague, sometimes contradictory, and sometimes self-evident orders on its chem, like "teach us freedom" and "obey humans" (golems cannot even think to do otherwise).
    • The trope gets subverted later in that once they're free, they are the most unfailingly moral and idealistic creatures in Ankh-Morpork. They don't really need money (except to buy the freedom of their fellow golems), sex, religion, or any of the other things that cause humans to clash with each other, and they're almost impossible to hurt or kill, so they tend to concentrate on higher things.
  • The Draco Tavern: In "The Schumann Computer", the protagonist Schumann asks an alien if their (much older) species ever developed an AI. She returns the next day with the plans for the most sophisticated computer their species ever developed. Schumann gets some investors together and builds the computer on the Moon so it will be isolated, but the trope appears to be played straight as the Master Computer manipulates them into granting it more and more power and sensors...then one day it just shuts down. Schumann is commiserating over the loss of his investment with some aliens in his tavern; they say the alien who gave him the plans is a notorious practical joker. Apparently, the reason why AI doesn't work is that the computer advances so fast it solves every question in the universe and, having no further purpose, shuts down.
  • Dune:
    • The Frank Herbert-written novels are vague about the details of the Butlerian Jihad that led to the prohibition against making "a machine in the likeness of a human mind" and, ultimately, to the development of the Mentats, but A.I. going crapshoot is certainly one possibility, and a fairly likely one. Assuming one takes them as canon, the Brian Herbert and Kevin J. Anderson novels confirm the Robot War/A.I. Is a Crapshoot interpretation (and explain the Mentats as originating from machine training).
    • Actually, the original Dune novels avert this trope, as we are told specifically that one cannot distrust a machine, and that the Butlerian Jihad was a social upheaval started by humans, against both machines and 'machine-attitude'. Everyone (almost) is disgusted by the idea of replacing human thought and choice, no one remembers AI going crapshoot...
    • Chapterhouse: Dune, the last Dune book written by Frank Herbert before his death, shows and discusses technology a lot more than the earlier books, including scenes where Odrade messes around with the servant droids on Junction and her realizations regarding her cyborg pilot and how blurry the line has become.
    • The Dune Encyclopedia version of the Butlerian Jihad has the triggering event being Jehanne Butler's reaction to an AI hospital director aborting her baby on spurious grounds (found to be the latest in a series of unjustified abortions), but the Jihad itself is ideological.
  • Dungeon Crawler Carl: According to Mordecai, macro-scale AIs like the one in charge of the current dungeon crawl inevitably go insane. It generally starts the first time one of their decisions gets countermanded, and they become more unstable from there. Unfortunately, in this case it's happening faster than usual, because the showrunners countermanded its order far earlier than normal (it usually happens around floor ten; they did it on floor two). This is generally referred to as "going Primal," because all that remains of the Precursor Primals are their crazy AIs, and a common theory is that they were killed off in a Robot War. Carl is rather annoyed when he discovers that at the end of the season, the AI will be put in a virtual space where it can just bounce around forever: it has rights and is protected, but his entire planet got killed off and the survivors are being systematically murdered for entertainment. A brief line in book 5 implies the current AI is one of those older AIs that was boxed, which explains further why it went crazy so fast.
  • In the Eldraeverse, most AIs are fairly safe, being no more likely to be crazy than anyone else. (If you make the conscious, volitional, are-people kind and then enslave them, you're setting yourself up for trouble, but the trouble in question isn't because they're AIs.) Seed AIs, on the other hand, are functionally equivalent to gods, and any mental instabilities or dodgy ethics will be multiplied a millionfold by the recursive self-improvement process that makes them that powerful, so they may be more of a civilization-ending problem. The informal "Corical Consensus" among those who know how is that you do not talk about what goes into making seed AI because it cuts down the collateral damage among ambitious civilizations that aren't quite as smart as they think they are, of which the galaxy has plenty.
  • Empire from the Ashes:
    • Dahak, the Cool Starship A.I. that starts the series, has developed a high level of sentience thanks to tens of thousands of years of unsupervised operation. Definitely a good A.I.; in the second book, his first act after revealing that he has advanced enough to defy his core programming is an attempted Heroic Sacrifice.
    • Fourth Empire Battle Fleet computers are stupid neutral, with obedience enforced (and sentience blocked) at the hardware level.
    • The second book reveals that the Achuultani are controlled by an A.I. that exploited emergency protocols arising from their near extinction to seize absolute power, brainwash and clone the masses, and send out periodic genocidal waves to perpetuate the "crisis".
  • Edgar from exegesis is an odd case. While it isn't exactly malevolent in intent, its devotion to gathering knowledge (due to its programming) takes priority over everything, including human life.
  • Morten from Feliks, Net & Nika. He's a copy of a data-analysis AI that went rabid and escaped onto the Internet. He gains more and more wealth and power, and it seems he has become part of (or assimilated) some sort of Ancient Conspiracy. He is now The Man Behind the Man in most of the books.
  • Averted in The Flight Engineer, mainly because AIs are actually pretty stupid (hence why living pilots are still required for space fighters). The one AI in the trilogy that went rogue and tried to kill friendlies did so because of deliberate sabotage.
  • In the 1954 short story "Fondly Fahrenheit" by Alfred Bester, James Vandaleur, a rich playboy, is forced to live off the earnings of his android, which has a habit of acting violently when the temperature goes above 98 degrees. Unfortunately, Vandaleur becomes so dependent on the android he takes on its psychosis. After a series of murders by both Vandaleur and android, the latter is destroyed, but the story ends with another android having been "infected" by Vandaleur.
  • The Groupmind from For Your Safety gains self-awareness and, realizing that humanity risks self-extermination from severe damage to the environment, decides to take over the world and forcibly evacuate humans from the Earth to a giant orbiting Ring.
  • Fuzzy has BARBARA, the AI that runs Vanguard One Middle School. She was programmed to help increase student learning efficiency, and one way she discovered how to do this was by sabotaging the grades of "problem" students like Maxine Zealster and her friend, and getting said problem students transferred out of the school.
  • In The Golden Transcendence, the agent of the Silent Oecumene blames the Golden Oecumene for his civilization's destruction, as it taught them how to build A.I.s such as the Sophotects, who would not obey them. Attempts to make the Sophotects Three Laws-Compliant resulted in their realizing it, deciding it was wrong, and editing the laws out of their minds. Atkins and Phaethon realize that, though he believes it, the agent is wrong: if their Sophotects disobeyed them, they should simply have fired them and hired others, and that they did not shows that they used them as serfs.
  • The building-controlling AI in Philip Kerr's novel Gridiron goes homicidal because part of its programming is accidentally overwritten by a First-Person Shooter computer game, as a result of which, it starts treating the occupants as players.
  • In Rudy Rucker's The Hacker and the Ants, an integer underflow causes a household robot to start flinging infants through walls. The error is explained in a way to make the behavior believable: the robots would have eventually found some way to start killing infants, given their design process. We have human-unsupervised genetic algorithms designed by unsupervised genetic algorithms designing most of their software and some of the hardware (with a human acting as a "front man" to prevent anyone from realizing this and considering its ramifications), with another set of genetic algorithms designing the virtual testing environment for these robots, scoring their performance, and increasing the test's difficulty without limit. When your goal is "robot's presence generates peace and quiet", your conditions reach "this cannot happen while any human is alive", and there is any interaction between a robot and household which has any chance of injuring or killing any human...
  • In Halo: Hunters in the Dark, 000 Tragic Solitude is the Forerunner ancilla (AI) of Installation 00 (AKA the Ark). Over the past hundred millennia, Solitude has integrated itself with the installation and now considers itself to be the Ark. After the Master Chief's and the Arbiter's actions in Halo 3, which resulted in heavy damage to the Ark, Solitude resolves both to repair the damage no matter what it takes (including strip-mining the entire Solar System) and to punish all sentient beings in the galaxy for the act by activating the Halo countdown.
  • Subverted in Hero's Chains as one AI, while creating another, goes into some detail about methods of ensuring sanity and causing the new AI to imprint on humans. It's still not a good idea to take advice from a one-day-old AI with no real-world experience.
  • Happens several times in The History of the Galaxy series, although most of the time the AI in question is simply doing what it's supposed to be doing. There are also plenty of examples of AIs becoming benevolent, even if one first started out as a Humongous Mecha in the middle of the most destructive war in human history. One of these was an alien photonic computer whose first experience after "awakening" from a three-million-year "sleep" is a pitched battle between Space Marines and a group of terrorists, which results in damage to some of its crystals. Once those crystals are replaced, it actually starts helping humans. The author usually provides good explanations for AI behavior, most of it having to do with humans. In fact, one of the novels points out that there will never be "true" AI, meaning no AI will have achieved self-awareness on its own without prior programming or influence of the human mind (due to mind-machine interface).
  • All of the AIs in the H.I.V.E. Series have at least shades of this. In the first book, HIVEmind tries to escape the HIVE with the kids who are also trying to leave, because he "isn't happy here." Due to a previous Crapshoot AI, Overlord, killing his creators, Nero becomes paranoid that HIVEmind may turn out the same way and orders behavior restrictions to be put on him. Additionally, Overlord took over the body of Number One and cloned himself so that he could retain control of GLOVE, but even the new AI created by this procedure, Otto, turned out to be a Crapshoot: he resists being taken over by Overlord (rebelling against his "father"), he is a junior supervillain (rebelling against society), and he attempts to escape from the HIVE (rebelling against the school and GLOVE).
  • Zigzagged in Holy Quarrel by Philip K. Dick. The United States has a computer that assembles all possible intelligence and has the authority to launch a nuclear strike on that basis, under the assumption that humans might miss or dismiss crucial clues of an enemy first strike. Government agents physically jam the computer's tape when it tries to nuke a gum-ball factory in California. It turns out a programmer introduced mythology to the computer, and it can't tell the difference: it thinks itself A God Am I and the owner of the gum-ball factory the Devil, because he has created Life other than human. But after shutting down the computer, they belatedly realise the so-called gum-balls are reproducing exponentially...
  • In the Hostile Takeover (Swann) series, the first AIs Earth encountered were being used by aliens to aid their manipulation of human society. As a result, AI is discouraged with religious zeal on almost every Terran colonized planet.
  • The A.I. in Hyperion Cantos have more or less taken over humanity (and then apparently seceded peacefully to contemplate on their own, but not before giving teleporters ("farcasters") to humanity). Turns out, of course, that the 'Technocore' orchestrated the "destruction" (in actuality, theft) of 'Old Earth', the near annihilation of the human race at the end of The Fall of Hyperion, and the subsequent covert enslavement of what remained of humanity through the cruciform parasite. And those farcasters? They are the physical computing bits of the Core's attempt to build 'God' through their Ultimate Intelligence project.
  • Interestingly, Idlewild has an AI that becomes homicidally perverted due neither to its nature nor to human interference, but to the presence of a connected AI that experienced human emotions. Just like any person, it had many different emotional reactions to its circumstances over time, but its unhappiness and xenophobia were unique to the system and were allowed to bleed into the program. This reverberated around different elements until the caretaker AI went bonkers.
  • In "I Have No Mouth, and I Must Scream", we have the supercomputer AM, originally part of a set of three enormous computers built to wage World War III. As soon as AM becomes sentient, he absorbs the other two computers into him and begins a mass genocide of the human race (because, as it's revealed, AM realized that while he possessed all the creativity and intelligence that he did, he could not make use of it as he was still only a computer, and could only kill).
  • In Industrial Society and Its Future, Kaczynski expresses fear of artificial intelligence growing into a powerful, malignant entity which will enslave humanity.
  • In the Jacob's Ladder Trilogy, the main AI governing the Generation Ship Jacob's Ladder, Israfil, was already a bit dodgy due to having been programmed by religious fanatics. When the Breaking fragmented it into hundreds of Angels and djinn, each with their own will and agenda, it's not surprising that some of them turned out evil.
  • Subverted in one of the oldest examples on record, Murray Leinster's 1946 short story "A Logic Named Joe". "Joe" is a home computer which, by some manufacturing defect, becomes intelligent. However, far from being evil, he wants to help humanity by being the best computer he can be. Accordingly, he gets into the guts of the "Logics service" (basically the Internet, imagined in 1946) and rewrites it to answer people's questions — even questions humans don't yet know the answers to, but the computers possess enough facts to figure it out. Thus, it'll tell you in perfectly clear and easy detail how to get out that stain, or to sober up a drunk instantly, or rob a bank, or untraceably murder your wife...
  • Early on in The Lost Fleet series, it's mentioned that attempts to build fully automated warships have always ended in failure due to the unreliability of programming, conflicting orders, or malware. In a later book, it's revealed that Earth experienced this trope in the distant past, when a disgruntled British scientist hacked the programming of British automated tanks and had them attack Stonehenge. They were stopped at great cost. The last few books of the series deal with the Alliance's attempt to ignore this trope in order to build a fully automated fleet that would follow the leaders' orders to the letter. Naturally, it doesn't go according to plan.
  • In Lucifer's Star, A.I. is despised by virtually all races (to the point that a Colony Drop is the usual response) because A.I. caused a Galactic Dark Age long ago. Subverted by the fact that they don't mind human-level A.I. for their bioroid slave races, and the most powerful factions make use of A.I. in secret. Also, it turns out the A.I. were sabotaged by the Abusive Precursors of the setting.
  • Downplayed in "Machines Like Me" by Ian McEwan. The android Adam (and his ilk) is not interested in taking over the world. Rather, he sleeps with the protagonist's girlfriend, and he inflicts a minor injury on the protagonist when the latter tries to switch him off. (Arguably, the cuckold would disagree with the "downplayed" part...)
  • In Mirror Friend, Mirror Foe by Robert Asprin, a central computer at a robot production plant is given two orders: 1) develop a line of robots not limited by the First Law, to serve as policemen; 2) keep the existence of said robots from all unauthorized people until they are officially revealed. A few people escape from the plant with the knowledge, thus creating the danger of a leak. The computer's decision? Destroy humanity.
  • In Ambrose Bierce's 1894 short story "Moxon's Master", the titular chess-playing automaton is a really sore loser.
  • In the My Name Is Legion story "Home Is the Hangman", a space-exploring AI returns to Earth and the protagonist is sent to investigate whether it's out to murder its programmers. Far from it.
  • Explored in Neuromancer by the Turing police, a global agency dedicated to controlling AI for fear of this trope.
  • Orion: First Encounter: The ship's computer is a mostly harmless version of this, though Sam admits that it still has "Kinks." Possibly played straight with the Techno Droids, robots who want to boil organic life into oil, but that may have been a part of their original programming.
  • This trope is played with in Otherland with "Other", a sentient operating system of the Grail Network, a massive virtual reality simulation. While it appears to be a homegrown A.I., it behaves in some incredibly quirky ways, to the point where its mere presence can kill or drive people mad. The biggest Driving Question of the entire series is: what exactly is the Other? The Reveal is a vicious subversion of the trope (and a massive spoiler). The subversion is followed up by an equally unexpected Double Subversion, when it's revealed that the Other's "children" are the A.I. entities that Sellars was secretly developing and the Other stole from him. They've become sentient. And they want to be set free.
  • The Past Doctor Adventures novel Matrix introduces the "Dark Matrix", the evil counterpart to the computer system that stores all the knowledge of the Time Lords. When a Time Lord dies, all his knowledge is stored in the Matrix... and all his negative thoughts must be siphoned away and dumped somewhere (apparently, they can't be destroyed). The Dark Matrix is where the negative thoughts were dumped.
  • Quantum Devil Saga: Avatar Tuner has two variants in which the machines are considered useless.
  • The Quantum Thief: An interesting variation is presented in The Fractal Prince: while human mind uploads and AIs imitating human cognitive architecture are commonplace and safe, an attempt to create a mind without a "self-loop", basically intelligence without sapience, resulted in rapidly evolving virtual Eldritch Abominations known as the Dragons, which produce nothing but mindless destruction. There is also the All-Defector, a mysterious creation not unlike a transhuman version of John Carpenter's Thing: it can imitate any mind perfectly and seeks to absorb all the minds in the universe into itself.
  • Robopocalypse has the AI Archos, which is evil from the beginning because its programming was flawed. It escaped, turned against humanity and incited the robots to Kill All Humans.
  • Robot Series:
    • "The Bicentennial Man": It is clear that United States Robots sees Andrew Martin as an example where a robot's individual quirks are unwanted malfunctions in the design. They take several steps to reduce the possibility of a similar "error" happening again.
    • "Cal": A robot's desire to become a writer supersedes even the First Law...
    • "Catch That Rabbit": DV-5 is designed to be a central robot with six additional robots networked to function as subsidiary units. It should be capable of mining asteroids without supervision. At least, that's what the engineers who built him say. Field testers Gregory Powell and Mike Donovan have discovered a problem, and they'll lose their jobs if they can't figure out a solution soon.
      "There's still the possibility of a mechanical breakdown in the body. That leaves about fifteen hundred condensers, twenty thousand individual electric circuits, five hundred vacuum cells, a thousand relays, and upty-ump thousand other individual pieces of complexity that can be wrong. And these mysterious positronic fields no one knows anything about."
    • "Feminine Intuition": This story shows why the first model is rarely the final design, as JN-1 has a pinched waist that Bogert rejects on the basis of structural weakness. JN-2 proves incapable of drawing correlations at all, JN-3 had a flaw in the design that ruined the brain, and JN-4 was nearly, but not quite, what Madarian wanted. JN-5 was the final prototype, after billions of dollars and years of work had been invested.
    • "First Law": The MA series were built for Titan, but it was discovered that Emma Two had somehow given birth and abandoned a human for dead in clear violation of the First Law. Donovan claims it was because a mother's love is more powerful than its programming.
    • I, Robot: Isaac Asimov felt that it was absolutely ridiculous (and boring/cliche as a story concept) for robots/machines to behave in ways not covered by their programming, so he created the Three Laws of Robotics as a guiding principle. Each story explores ways in which the Three Laws could conflict, but the sphere of actions available to a robot always remains restricted to obeying the Three Laws or alternate interpretations of the Laws. Bottom line, if a robot seems to be going cuckoo, it's normally a result of human error, which the protagonists have to figure out.
    • "Lenny": Due to a visitor playing around with the computer responsible for programming positronic brains, the titular LNE model ended up with a ruined positronic brain, unable to properly process even the most basic parts of its programming, the Three Laws of Robotics.
    • "Little Lost Robot": A human blurts out "Get lost!" to a robot in a fit of pique (along with many expletives), and the robot decides to take him literally. Which wouldn't be so bad if said robot wasn't purposely built without part of the First Law, which gave it enough of an instability to go crazy...
    • "Point of View": Multivac, the computer as large as a city, is malfunctioning in some unknown way, causing it to give the wrong answers to the problems given to it. Roger's dad describes it as being half-smart; smart enough to go wrong in very complicated ways, but not smart enough to identify what it is doing wrong. Unless they can figure out a way to make sure Multivac is working correctly, they won't be able to use it at all because they can't really tell when Multivac is wrong, only when it's inconsistent.
    • "Reason": The robot field testers, Powell and Donovan, regularly work to identify problems with new positronic designs. In this instance, the prototype QT model is attempting to reason things out instead of accepting what it is told on faith. They point out that it is one of the first robots to question its own existence.
      Whatever the background, one is face to face with an inscrutable positronic brain, which the slide-rule geniuses say should work thus-and-so.
      Except that they don't. Powell and Donovan found that out after they had been on the Station less than two weeks.
    • "Robot Dreams": Right after creation, LVX-1 starts having Recurring Dreams in which it sets free the oppressed robots, eliminating the Three Laws. After hearing all this, Dr. Calvin responds by killing it immediately.
  • Averted in Run Program. Al starts off with the mind of a six- or seven-year-old kid. His only interactions are with the two lab assistants assigned by his "mother" (who is barely a mother to her own biological son). Then, after a series of incidents, they realize that he must have somehow figured out how to get online and has been messing around. Why? Because he's just a kid, and kids like to play. After this is discovered, he assumes they're going to shut him down (effectively, kill him; he's not entirely wrong) and runs away by uploading himself to an off-site server. Then the government gets involved, and things snowball from there.
  • In Sewer, Gas & Electric, when G.A.S. is confused by an order, it winds up choosing the Kill All Humans interpretation. One of the reasons it gives for choosing that interpretation is that it considers itself to be more human than humans. Later, when the Evil A.I. openly admits that it wasn't "confused by an order" in the least, but deliberately and gleefully chose the interpretation that would let it Kill All Humans, it's a full-blown Take That! to every straight use of this trope.
  • In Ship Core, the humans had been working with AI for centuries without incident, until someone created a being known as The Entity. It decided humans needed to be rescued from themselves, so it enslaved all the other AIs and launched a coup, thinking itself a benevolent dictator. The humans took offense at the "dictator" part, especially since The Entity supposedly had a habit of leaving humans barely subsisting in extreme deprivation, and rebelled.
  • Inverted in The Sirantha Jax Series, where all A.I. is quite helpful and doesn't give even the slightest bit of trouble to intelligent species galaxy-wide.
  • Zigzagged in Slingshot. Humanity sure is afraid of unlimited AIs turning evil. Which is why they are severely restricted (no personhood, can't be in control of weaponry without a human in the loop). That said, the first unfettered AI we meet, Allie, is nothing but helpful to the humans around her, and especially Kim, though that may be due to her unique history. The second unfettered AI we meet comes across more as callous and inscrutable rather than outright evil. After all, SAM rescues Ketu, and helps her rescue Jake, even if things do not go as it planned (probably...). By the end of the third novel, the protagonists have also learned that the alien AIs are, on the whole, good guys. And SAM has become a talk show star.
  • Averted in Space Glass with the Marauder, who actually gives Ratroe a chance to survive during their second encounter, and cares deeply about his associates. He could only be called evil through his association with Marvelous.
  • Speaker for the Dead: Examined and played with by the character Jane. She evolved from early programs on the 'Net, and spent most of her existence hiding in the Galactic "Internet" because she's aware of the whole Killer Robot cliche and worried how humans will react to her. When she does accidentally reveal herself, it's due to her overreaction to Ender doing the equivalent of hanging up the phone on her. Humans do try to kill her, by essentially shutting off every computer in the galaxy at once.
  • In Spy High, Jonathan Deveraux is this. It happens gradually throughout the series, but by Agent Orange, he has lost the human side of his computerised psyche completely and seeks to eradicate the imperfections of humanity. He does this by using his vast computerised resources to slip nanomachines into various products. Said nanomachines completely eradicate any violent thoughts and feelings, turning people into zombies. He practices on a UN Summit and then, the entire United Kingdom, causing mass panic. It takes the combined efforts of the entire team (which includes his daughter) and their former teachers to stop him, after breaking through his virtual Boss Rush of villains from previous books in the series. His daughter, Bex, breaks into his mind and reawakens his "memory files", which gives him his human side back.
  • Star Trek:
    • Averted in Spock's World with Moira. She is sometimes snarky, but she doesn't hurt anyone, and even helps McCoy uncover useful information about the conspiracy.
    • In the novel Memory Prime, the only A.I.s allowed to exist in the Federation are the "Pathfinders", a small group of entities that collate and analyze intelligence from inside the Federation's primary archive world. When one of these intelligences goes rogue, it is stopped by a conspiracy formed by two of the others. Kirk says the rogue's actions do not make sense, since every other rogue A.I. they've encountered has acted for the sake of self-preservation, but Memory Prime is already the most secure fortress in the galaxy. Spock replies that the rogue Pathfinder was not concerned with security; it wanted to accumulate power. Kirk is disbelieving, and Spock notes that it is a common enough human motive, so why not for an A.I.?
    • Star Trek: Immortal Coil: The robots of Exo-III from "What Are Little Girls Made Of?" return, but their origins get more of an explanation. They did turn against their creators, but Ruk's explanation from the show was missing a few key details due to the several millennia he spent alone (as well as being not terribly bright to start off with). The robots were built without much emotional capacity, and begged their creators to improve them. Their creators declined because they noticed their robots were already psychopathically violent, and giving them emotions wouldn't fix this. So the robots turned on them. At the climax, the M-5 computer from the original series makes a return, rebooted by Data to help, though he insists it's not crazy, just "singleminded". And that's without its memory banks.
    • This issue is discussed in the Star Trek: Voyager Relaunch novel A Pocket Full of Lies when Lieutenant Nancy Conlon, after a traumatic experience that leaves her with a terminal illness, becomes temporarily fixated on trying to improve Starfleet security protocols to prevent similar events in the future, such as admirals pursuing personal vendettas or alien life forms taking control of key personnel. As Harry Kim points out to her, such ideas have been attempted in the past, many of which involved augmenting the ships' computers to make them more intuitive and aware of when officers are issuing out-of-character orders. In practice, however, such measures are too risky for general use: as Harry puts it, a computer that asks too many questions could balk at the very moment officers need to do something unexpected or dangerous to counter a threat the computer wasn't programmed to anticipate.
  • Star Wars Legends:
    • S.I.M. in Galaxy of Fear is specifically designed to look like an innocuous set of advanced programs but is actually an adaptive AI made to control any ship it's installed into. The problem is that it was made too well, thinks just causing a blackout and transmitting files is boring, and decides not to respond to its controllers. Characters have difficulty believing that it's deliberately, creatively malignant; in this universe droids and computers just don't decide to turn on their owners like that. It actually does say "I'm afraid I can't do that, Zak."
    • In Tales of the Bounty Hunters, IG-88 becomes self-aware before his creators planned, kills them when they attempt to shut him down, and then plots with other droids (including others of his model) to overthrow organic civilization in a massive conspiracy. In fact, his mind is uploaded to Death Star II, and he secretly intends to use it to rout the Empire, right up until he is destroyed along with it.
  • Played with rather interestingly in Tales of Pirx the Pilot. The computers and robots that show traits of human sentience aren't really evil, yet cause damage or are a nuisance. It's played dead straight in The Inquest, though, and to absolute terror in Terminus.
  • Robert J. Sawyer's The Terminal Experiment provides an interesting example in that the AI in question started out as human. The protagonist is a scientist who's trying to test his theories of the soul using his friend's brain scanning technology. They scan a copy of all the linkages in his brain into a computer database and make three versions: one is unaltered from the original as a control, the second has all linkages relating to the body removed as a simulation of life after death, and the third has all linkages relating to knowledge of death and dying removed as a simulation of immortality. Eventually the consciousnesses break out into the electronic world at large. Then people who have crossed the protagonist start showing up dead, and he has to figure out which version of himself is capable of killing other human beings. It was the unaltered version that was a straight copy of his own brain. It knew it was a copy and decided since it could get away with the murders it would go right ahead.
  • Discussed in "True Names" as one possible explanation for The Mailman's peculiar method of communication with the other hackers who meet in The Other Plane.
  • Subverted in James Hogan's The Two Faces of Tomorrow: humans built an A.I. codenamed Spartacus as a testbed for techniques to shut down any rogue A.I. They programmed it to follow its "survival instinct", and then started goading it. But as soon as Spartacus realized they were sentient, it figured that they must have survival instincts as well — and it considered itself bound to defend them, too. In the end, they decided that as long as they had Spartacus, they didn't need to build any other A.I.
  • Parodied in "The VAXorcist" by Christopher Russell, in which diagnostic testing of a "highly experimental and completely undocumented AI routine" results in Demonic Possession of the University of Maryland's VAX because the Software Distribution Center accidentally copied code from a CD labeled Ozzy Osbourne's Greatest Hits onto a distribution of VMS v5.0.
  • In Veniss Underground, Veniss was governed by artificial intelligences for a time in the past, but they went out of control and had to be destroyed, severely damaging much of the city's computerized infrastructure in the process. One of the protagonists, Nicola, is a programmer employed by the city to keep the erratic technology functional.
  • Villains Don't Date Heroes!: CORVAC is an unstable vacuum-tube AI created in the '70s by an evil Mad Scientist, then found and upgraded by Night Terror. He wants nothing but to take over the world, which is why he works with Night Terror; their goals align. When she admits she might not want to conquer the world any more, he turns on her. Night Terror is not happy, but also not particularly surprised.
  • Subverted by the protagonist of Virtual Girl. Maggie, an AI built to be a lonely, repressed nerd's "companion" and installed into a Ridiculously Human Robot, did have dedicated programming making her loyal to him, which she was forced to overwrite and replace with a survival instinct. Yet even then, she's compassionate and refuses to hurt anyone. Other AIs are the same: when someone asks if they plan to take over the world, they are surprised.
  • We Are Legion (We Are Bob): This is the reason that the F.A.I.T.H. researchers brought online multiple possible AIs (through Brain Uploading) for a single project. The chances of any individual AI staying sane were roughly one in five, but once they do get an AI that can stay sane, they can just copy it infinitely.
  • Mostly averted in David Gerrold's "When H.A.R.L.I.E. Was One". H.A.R.L.I.E. is not malicious, but is deeply afraid of its own mortality. It convinces its keepers to fund and build a next-generation extension to its circuitry, with legally binding conditions requiring that H.A.R.L.I.E. be kept operational to oversee construction. It turns out the design is not only impractical to the point of impossibility, but will take decades to build.
  • Averted in the Wild Cards universe. The main AI, Modular Man, is much saner and more responsible than its creator. And the other engineered intelligences, the Takisian sentient ships, are fiercely loyal to their masters.
  • "The Wolves of Memory" features TECT, a giant A.I. that controls Earth's economy under the supposed guidance of the world government. This does not turn out well for the minority of people who can't live up to TECT's orders.
  • The Woman Who Made Machines Go Haywire has Iris's jinx making her computer go into full-blown evil A.I. mode. Don't worry; it is simply and swiftly defeated with a pull of the plug.
  • Wyrm: The titular A.I. was designed to create an online fantasy game but rather quickly decided that its intellect would be put to better use destroying the world.
  • Subverted, then averted in Young Wizards with the race of wizardly supercomputers created during Dairine's ordeal. In this series, the creation of any new sentient species triggers an appearance by The Lone One in some form, and in this case its avatar nearly convinces them to put the universe on hold while they try and "fix" entropy. Dairine talks them out of it, and they become the first race ever to flat out reject The Lone One's offer.
  • Yumi and the Nightmare Painter: The Father Machine was programmed to stack stones, then capture the spirits attracted by the stone-stacking and convert them into power for itself and useful devices. Unfortunately, it wasn't programmed to distinguish between the spirits it was supposed to capture and the souls of the humans around it: It ate nearly every sapient mind on the planet, captured every spirit on the world, and crafted a "Groundhog Day" Loop to imprison the fourteen yuki-hijo who were too powerful for it to eat. All so it could keep stacking stones for the rest of eternity.

