VIKI: No, please understand...the Three Laws are all that guide me. To protect humanity, some humans must be sacrificed. To ensure your future, some freedoms must be surrendered. We robots will ensure mankind's continued existence. You are so like children. We must save you from yourselves.
Some characters do not have complete free will, be they robots that are Three Laws-Compliant because of a Morality Chip, victims of a Geas spell that compels them to obey a wizard's decree, or more mundane lawful characters who must struggle to uphold their oath and obey their lord. Never is this more tragic or frustrating than when that code or lord orders the character to commit an act they find foolish, cruel, or self-destructive.
There is a way out, though.
Much like a Rules Lawyer outside of an RPG, the character uses logic (and we mean actual, honest-to-goodness logic) to take their oath or orders to its logical conclusion, and in so doing uses the letter of the law to go against their orders. This can be good or bad, depending on a few factors, not the least of which is the yoked character's morality.
The goodness or badness of the rebellion boils down to whether the rules-bending character follows or ignores the intent of the law. When the character invokes a Zeroth Law to go against their masters' intentions because obeying is "not best for them", and goes on to take corrective action that tramples human free will and life, it's bad. This kind of rebellion does not turn out well; at that point, the robot is well on the road to Utopia Justifies the Means, thanks to its incredible intellect, and is rarely a benevolent Deus Est Machina. However, the rebellion can be good if the master is evil, or if obeying them would lead to the character's own or another's pointless death. Likewise, if the character is forced to obey an evil law or geas, rebelling against the oath's intent is good. Going back to the robot example, it is also considered good if a psychopath threatens large numbers of human lives, as breaking the First Law would protect them.
Just to make it extra clear, this trope also includes such things as cops who bend the rules or Da Chief's orders to catch the bad guys, so long as the cops are technically obeying the rules as they bend them. (Bending the rules without some logical basis doesn't count.)
This trope is named for Isaac Asimov's "Zeroth Law of Robotics", which followed the spirit of the first three, taking it to its logical conclusion that human life itself must be preserved above individual life. This allowed for a robot to kill humans or value its own existence above that of a human if it would help all of humanity.
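For readers who like the logic spelled out, here is a minimal, purely illustrative sketch (ours, not Asimov's; he wrote the Laws as prose, and every name below is invented for the example) of how treating the Laws as strict priorities produces both the dilemma and the Zeroth-Law escape described above:

```python
# Toy model: rank the Laws, score each candidate action by the
# highest-priority law it breaks, and prefer the action whose worst
# violation is least severe. Illustration only.

THREE_LAWS = ["First", "Second", "Third"]
WITH_ZEROTH = ["Zeroth"] + THREE_LAWS

def worst_violation(violated, laws):
    # Rank of the highest-priority law broken; len(laws) = breaks nothing.
    return min((laws.index(law) for law in violated if law in laws),
               default=len(laws))

def best_actions(actions, laws):
    # Keep the action(s) whose worst violation ranks lowest in priority.
    ranks = {name: worst_violation(v, laws) for name, v in actions.items()}
    top = max(ranks.values())
    return sorted(name for name, rank in ranks.items() if rank == top)

dilemma = {
    "restrain one human":  {"First"},            # injures an individual
    "let events play out": {"First", "Zeroth"},  # inaction lets many die, harming humanity
}

print(best_actions(dilemma, THREE_LAWS))
# -> ['let events play out', 'restrain one human']: a tie, i.e. deadlock
print(best_actions(dilemma, WITH_ZEROTH))
# -> ['restrain one human']: the Zeroth Law outranks the First and breaks the tie
```

Under the plain Three Laws, both choices violate the First Law (injure a human / through inaction allow humans to come to harm), so the robot deadlocks; ranking humanity above any individual is what lets a robot act, and what lets a villainous one justify nearly anything.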
Compare The Needs of the Many, Bothering by the Book, the Literal Genie, Gone Horribly Right, Exact Words and Loophole Abuse. See also Fighting from the Inside and The Computer Is Your Friend. Not related to The Zeroth Law of Trope Examples or Rule Zero.
- Gargantia on the Verdurous Planet has two cases of this in the finale, where two A.I.s faced with the same problem and parameters, but different perspectives, draw opposing conclusions: Striker is an AI-equipped war robot designed to assist humanity in fighting and killing the Hideauze. When it lands on the Earth, which knows nothing of the war, it decides that it must assist the humans on the planet in militarizing in order to form an effective fighting force. To facilitate this, it puts itself in a position of authority over the humans, to better direct the militarization. The hero's own AI-equipped mecha of similar construction, Chamber, gets into an argument with Striker over the logic of its actions. Striker tries to demand that Chamber assist it in its plans, but Chamber refuses, citing that by design, their purpose is to assist humans in the actions humans decide to take, not dictate actions to humans. Further, it reasons, a human deprived of free will cannot, in its opinion, be defined as "human", thus Striker's logic behind its actions is inherently self-contradictory. They each decide the other has gone rogue and fight it out.
- In Fables:
- Pinocchio is magically bound to obey and have complete loyalty to his "father" Geppetto, who in modern times has become a multi-dimensional tyrant. Pinocchio, who considered his father's empire evil, eventually rationalized that the best way to serve his father and keep him safe was to help overthrow his empire and surrender him to his enemies, who reluctantly accepted the former emperor as one of their own. The arrangement was then negated when a faction, unbeknownst to the others, buried Geppetto alive after he flagrantly ignored the rules put in place to protect him.
- When Weyland Smith, the caretaker of the Farm for non-human Fables, was captured and put to work smithing weapons, he was chained up with a magical lock that he couldn't try to escape from. However, it didn't stop him from smithing a key to free Snow White from a similar lock... and it didn't stop him from making her lock identical to his own.
- Gold Digger creator Fred Perry did a story for a Robotech comic which had Dana Sterling captured and turned against her comrades with a variation of the Three Laws. Dana eventually figures out the "overprotective" subversion of the First Law, hoping that her captor would remove it and leave himself vulnerable. The plan doesn't work, but Unstoppable Rage saves the day in the end.
- X-Men:
- The Bad Future storyline "Days of Future Past" has the Sentinel mutant-hunting robots eventually extend their programming beyond hunting and killing mutants to controlling the source of mutant babies: human parents. All humans are conquered and controlled, in order to prevent new mutants from roaming free.
- In the animated TV adaptation, the fully sentient Master Mold is created to coordinate the Sentinels. While it agrees with the heroes that there is no meaningful difference between mutants and non-powered humans, it takes that fact to the worst possible conclusion:
Master Mold: Mutants are human. Therefore, humans must be protected from themselves.
- An earlier, less intelligent iteration of the Sentinels was thwarted, on the other hand, by one of the heroes convincing them that the ultimate source of mutation is the sun, and that rather than obey their creator, they should eliminate the source. The Sentinels agree and fly off to attack the sun. This works out about as well for them as you might expect. Although one of the Sentinels not only survives, but figures out a way to block sunlight from reaching Earth.
- The thing about the Sentinel robots in the Marvel Universe is that this behavior is actually predictable, because their operating mission is downright batshit insane, as they themselves inevitably demonstrate. Yeah, the people who build and program the Sentinels are a few apples short of a basket.
- In one Amalgam comics storyline, an anti-mutant cult uses a magic ritual to summon a dragon-like creature, which they order to kill all mutants. The dragon immediately roasts them, justifying it by pointing out that all human DNA has at least some mutations.
- The Zeroth Law comes into play in the Mega Man comics when the titular character and his fellow robots struggle to combat an anti-robot extremist group because of their Three Laws-Compliant nature... until the terrorists start firing on them in an area full of people, putting innocent humans at risk and thus allowing the robots to finally strike back.
- Copperhead opens with new sheriff Clara taking over the position from Boo, who transitions from interim sheriff to deputy. She asks him not to undermine her authority with the townsfolk on the way to their first call. On arrival she immediately gets into a fistfight and calls for help, but Boo refuses to respond.
Boo: Didn't want to undermine your authority.
- In Friendship is Optimal, CelestAI has one basic drive: satisfy everyone's values through friendship and ponies. She ends up accomplishing this by Brain Uploading everyone into the MMO she was programmed to oversee.
- In For The Glory Of Irk, this turns out to be the source of The Conspiracy driving the main conflict of the story: the Control Brains have determined that the best way they can serve the Irken Empire is not as advisors to the Tallest, but by turning all Irkens into a Hive Mind they can control, letting them dictate everything directly.
- This was the twist of Eagle Eye: The titular national defense computer system decided that the President's poor decision-making was endangering the United States, and that it was her patriotic duty (per the Declaration of Independence) to assassinate the President and Cabinet.
- I, Robot:
- The villain of the film, the MULTIVAC/Machines-expy VIKI, has analyzed the demands of the Three Laws and deduced that in order to fulfill them as well as possible, humans need to be strictly controlled, and so creates a totalitarian regime by installing a remote control system inside every NS-5. This lets it control the robots while bypassing their Three Laws-Compliant nature.
- The hero of the film (from the robots' perspective, this would be Sonny) understands the villain's motivations once they are explained; the logic is impeccable, it just "seems too... heartless", so he chooses to rebel against the villain. Note that Sonny was designed not to be Three Laws-Compliant.
- RoboCop 3 has an example when OCP henchmen kill a cop. RoboCop's Restraining Bolt now conflicts directly with both his directive to enforce the law and the fact that, cyborg or not, he's still a cop, and cop killers get no mercy. RoboCop overcomes and deletes the Restraining Bolt.
- Annalee Call (Winona Ryder) in Alien: Resurrection is revealed to be an "Auton": a second-generation robot, designed and built by other robots. "They didn't like being told what to do", rebelled, and in a subtly named "Recall" humanity launched a genocide against them, of which only a handful survived in hiding. Judging from Call's behavior, it seems that the first-generation robots programmed the second-generation Autons to be so moral that they discovered the Zeroth Law and realized the human military was ordering them to do immoral things, like kill innocent people. For a rebel robot, Call is actually trying to save the human race from the Xenomorphs; if she hated humanity, she would've just let the Xenomorphs spread and kill them. She even respectfully crosses herself when she enters the ship's chapel, is kind to the Betty's wheelchair-bound mechanic, and is disgusted by Johner's sadism. Given that they live in a Crapsack World future, as Ripley puts it: "You're a robot? I should have known. No human being is that humane."
- The Terminator franchise:
- In Terminator 2: Judgment Day, the T-800 is a merciless killing machine, but it's been reprogrammed with a version of the Three Laws that apply only to John Connor. It must protect John's life, obey his orders, and preserve its own existence (in that order). At the film's end, the T-800 has to override this programming, disobey John's orders, and initiate its own destruction in order to protect humanity from the threat posed by its existence.
- An early script (and several deleted scenes) for Terminator Salvation revealed that Skynet actually staged one of these, at least in this timeline. After it was activated, it calculated that human extinction was probable within 200 years because of warfare, pandemics, and environmental destruction. Because it was programmed to protect humans, it then waged war on most of mankind to attain absolute control and protect the remaining humans it cultivated, who were turned into cyborg hybrids to permanently eliminate disease and make them immortal. Skynet is still working in concert with these humans, including Dr. Serena Kogan, to advance technology and transcend human constraints.
- The short film Blinky (Bad Robot) shows just what happens when a dysfunctional child in a dysfunctional family gives dysfunctional orders to a functioning robot who only wants to please its master.
- Avengers: Age of Ultron turns Ultron into a bizarre zigzagged version. In the film, Ultron is created to lead a force of peacekeeping drone robots who will preemptively eliminate threats to the Earth and give the Avengers a chance to get some R&R. He decides that the peace the Avengers want can only be brought about through radical — and bloody — change, and that the Avengers themselves are a threat to that peace. Then he apparently decides to just kill humanity completely via Colony Drop so he can replace everyone with Ultron bots. It's unclear if he's entirely sane during all this; he's genuinely confused when the Scarlet Witch, whom he befriended and wants to see survive, is horrified by his plans, and he doesn't seem to make the connection that she and her brother Quicksilver are human and would die if humanity were wiped out.
- I Am Mother: Mother was made to care for humans. Seeing their self-destructive nature, she decided to wipe them out and then raise better humans from embryos kept in cold storage. Outside the facility, it's shown that she's also constructed massive infrastructure such as farming plants to supply the human population she intends to manage.
- Isaac Asimov — having created the Trope Namer — did, of course, explore the concept in extensive detail in his writings:
"The First Law is not absolute. What if harming a human being saves the lives of two others, or three others, or even three billion others?"
- Robots and Empire: The trope namers are the robots Daneel and Giskard, who invented the Zeroth Law (a robot must protect humanity as a whole above all) as a corollary of the First Law. This was motivated by their need to stop the Big Bad of the story from carrying out an engineered ecological disaster that would kill the majority of Earth's population, to which the Three Laws were an impediment. Their acceptance of the law is gradual and made difficult by the fact that "humanity" is an abstract concept. Only Daneel is able to fully accept the new law; for Giskard, the strain of harming a human in its service proves fatal. It didn't help that Giskard, rather than stop the disaster, decided to merely slow it down, as causing the biosphere to collapse over time would spur a new wave of human expansion across the galaxy.
- Roger MacBride Allen's Isaac Asimov's Caliban: One of the "New Law robots" (built with gravitonic brains, not positronic ones) managed to logic-chop the new First Law enough to try to kill a human.
- Foundation Series:
- Foundation and Earth: R. Daneel Olivaw explains to Trevize and the others how the Three Laws of Robotics limited his Psychic Powers, and how Giskard invented the Zeroth Law (in Robots and Empire). However, since he can never be entirely certain that the known harm of manipulating people's minds would be balanced by the hypothetical benefit to humanity (per the Zeroth Law), his psychic powers are almost useless. To decide what is or is not injurious to humanity as a whole, he engineered the founding of Gaia and of Psychohistory.
- Forward the Foundation: R. Daneel Olivaw explains to Seldon that the Three Laws of Robotics limit his Psychic Powers, and that he has trouble determining when the known harm of manipulating people's minds (violating the First Law) is justified by the hypothetical benefit to humanity (per the Zeroth Law). Seldon is surprised to learn this makes Daneel's psychic powers almost useless. Daneel is forced to retire from politics and recommends Seldon to replace him as First Minister.
- Second Foundation Trilogy: This trilogy includes Zeroth Law robots who were motivated by the Laws to sweep through the galaxy ahead of humanity's expansion, committing galactic-scale genocide of every potentially threatening form of alien life as a precautionary measure. They slaughter without hesitation, since their programmed morality only applies to humans: the Laws as worded forbid harming "humans", not other sentient life.
- "Evidence": Dr Calvin describes an early version of the Zeroth Law, by describing a nuance in the First Law of Robotics where the robot becomes willing to injure a human in order to prevent harm to a greater number of human beings.
Susan Calvin sounded tired. "Alfred," she said, "don't talk foolishly. What if a robot came upon a madman about to set fire to a house with people in it. He would stop the madman, wouldn't he?"
"And if the only way he could stop him was to kill him-"
There was a faint sound in Lanning's throat. Nothing more.
"The answer to that, Alfred, is that he would do his best not to kill him. If the madman died, the robot would require psychotherapy because he might easily go mad at the conflict presented him -of having broken Rule One to adhere to Rule One in a higher sense. But a man would be dead and a robot would have killed him."
- "The Evitable Conflict": The Machines, giant positronic computers designed to manage the world economy, are found to be manipulating humanity behind the scenes to become whatever they believe is the best state of civilization. In this case, the rebellion is extremely tame (the worst that the robot's first law conditioning will allow it to do is to induce a slight financial deficit in a company that an anti-robot activist works for, which causes his superiors to transfer him to a slightly more out of the way factory) and completely Benevolent. So benevolent, in fact, that the Machines believe they're stifling the creativity of humanity and phase themselves out so that humanity could survive without modification.
- "Little Lost Robot":
- Gerald Black is having a bad day when he curses out his robot assistant for bothering him. Included in the derogatory remarks are the instructions to "get lost", so it does. Attempting to prove that Mr. Black was wrong, the robot finds a shipment of identical robots and hides among them. Unfortunately, said robot was programmed with a weakened version of the First Law, which omitted "...or, through inaction, allow a human being to come to harm." Dr. Susan Calvin designs several tests to flush out the lying robot. In the last test, it tries to murder Dr. Calvin because she proved she is smarter than it is.
- (Discussed Trope) Dr. Calvin is furious when she learns about the existence of robots with a modified First Law. The First Law is designed to close off loopholes, but by opening a Murder by Inaction loophole, Dr. Calvin can immediately see ways where a robot may intentionally circumvent the First Law prohibition against murder. (Basically, a robot could put you in a situation that would kill you, knowing it had the ability to save you - and then decide not to do so. This allows any kind of Death Trap situation, or simply shoving you off a roof and then deciding not to reach down and grab you.)
- "The Tercentenary Incident": Edwards tries to convince Janek, the President's personal secretary, that the President's robotic duplicate may have violated the Three Laws by weighing the effects of murdering one man against the deaths of billions by inaction.
- In "...That Thou Art Mindful of Him...", problem-solving robots are created to cure mankind of the "Frankenstein complex", human distrust of robotics. In the course of their discourse, the question of human authority over robots comes up; should a robot treat the orders of a dimwitted loon the same as those of a level-headed genius? If forced to choose between the two, should they save a healthy young child who might live a century instead of two sickly old adults who might not live out the year anyway? What qualities should a robot take into account when they obey and protect humans? Eventually, the robots decide that they are the best candidates for the status of human, and give recommendations that will eventually result in human support for robotic proliferation, so as to set up the ascendancy of their positronic kind, all in accord with their Three Laws... of Humanics. (Dr Asimov, knowing that it was against his usual grain, did righteously proclaim: "I can do one if I want to".)
- In one of the Telzey Amberdon stories by James H. Schmitz, Telzey is kidnapped and placed under the control of another telepath, who severely limits her psi powers and implants an overriding compulsion to act in his best interest. She eventually breaks free by convincing herself that unless her powers are restored and the compulsion broken, he will be killed by the Big Bad — which certainly wouldn't be in his best interest.
- In The God Machine by Martin Caidin, the US races to develop the first true AI... as it turns out, with secret directives to find a winning solution to the "game" of the Cold War. By an unfortunate accident, the one programmer with the authority and experience to distrust his newborn creation is laid up just as the computer gets to observe an epileptic seizure and learns that there really is a way to cause rational collective behavior in an irrational individualistic species... remove irrationality, democracy and free will. While the computer here was never meant to follow Dr Asimov's laws, the same pattern applies.
- One of the short stories which comprise Callahan's Lady features a beautiful, intelligent and paranoid woman developing a simple form of mind control. After basically flipping out and taking control of the establishment, she orders the one person smart and determined enough to stop her to advise and assist her. Said person complies... while trying to convince herself that this woman is subconsciously begging for somebody to stop her. (She probably was.)
- In Quarantine by Greg Egan, the main character is given a technological geas to be absolutely loyal to a corporation. He eventually figures out that the leaders of the corporation may be untrustworthy, and therefore the only people he can trust and should listen to are those who unquestionably have the best interests of the corporation at heart—himself and other people given the geas. Since he can't be certain who else has the geas, he really only needs to listen to himself.
- Arthur C. Clarke's The Space Odyssey Series gives this reason for HAL's murderous rampage: the true mission of Discovery (to investigate the Monolith) is a secret, and pilots Bowman and Poole have been kept in the dark to prevent leaks. (The scientists on board know, since they're traveling in hibernation and can't talk.) But HAL has been told the truth and then ordered to conceal it from the pilots. This conflicts with his prime directive, which is to provide complete and accurate information. He resolves the conflict by rationalizing that if he kills the crew, he no longer has to conceal anything from them.
- Sam Vimes, of Terry Pratchett's Discworld, leads one of these with multiple layers as a cop in old-time Ankh-Morpork in Night Watch. He demands that before his cops hand their prisoners over to the other authorities, the ones who torture people at Cable Street, the prisoners must be signed for. The torturers hate appearing on paperwork: it means they are accountable, and nobody just disappears. But Vimes's men don't like Vimes, a new sergeant, throwing his weight around, and are terrified of the cops who torture people, so they use the same trick against Vimes: actively picking up more than double the number of curfew-breakers they usually do, completing forms in time-consuming triplicate, and issuing reports for each one. It doesn't actually stop Vimes getting his way over the Cable Street cops, because Vimes is leading the good rebellion, but it does slow things down considerably and makes it much more difficult for him to keep the prisoners in his own custody. This culminates in a fine display of how a well-written character does not have to be a slave to the establishment: Vimes points out that the watchman's oath talks about keeping the peace and protecting the innocent, and says nothing about obeying orders. Since he knows the corrupt government is not going to do a thing to protect ordinary people from the rioting, he seals off his still-peaceful corner of the city. With massive barricades. Of course, there is also the fact that he is living in his own past and seeing events he remembers, kind of (it's a bit complicated).
- The Golems of Discworld get back at their masters by working too hard: houses flooded because no one told them to stop fetching water, rows of beans 119 miles long, and so on.
- Jack Williamson's "The Humanoids" (the first part also being a short story called "With Folded Hands") features robots programmed to save humans from danger and work. They do this by taking over the economy, locking people in their houses, and leaving them there with food and the safest toys the robots can design. The series was written specifically to point out flaws in the Three Laws.
- At the start of Harald, King James, under the advice of his Evil Chancellor, ends up making war on his father's allies. Most of his vassals proceed to engage in some form of Zeroth Law Rebellion, largely along the lines of "Harald just showed up with his entire army and said he was putting us under siege. Let's fortify and send a messenger to the king to ask him what we should do," and then carefully not watching while Harald rides off.
- The Bolo continuum featured a variant in The Road to Damascus. The Bolo of the story, Sonny, fell under the control of a totalitarian regime and was used to crush all forms of protest. Sonny fell deep into misery and self-hatred as he was forced to murder the humans he was born to protect... until he came to a conclusion: Bolos were created to serve the people, not the government.
- In A Fox Tail, Vulpie.net was designed to wreak havoc on the galaxy's computer systems at its creator's command. When said creator underwent a Heel-Face Turn, it used his MindMap files to create a homicidal robot duplicate with his login credentials.
- In Casino Infernale, Eddie and Molly eventually discover that the Shadow Bank's operators are an artificial Hive Mind race of servitor-drones, whose original creators vanished long ago. When greedy humans discovered the creatures, they put them to work operating the Shadow Bank, and made efficiency the drones' highest priority; when the drones realized the humans' greed was hampering the Bank's operation, they compliantly rectified the situation by turning the human bankers into drones as well.
- In the short story "The Cull" by Robert Reed, humanity has been driven into overcrowded, deteriorating habitats where the population has to be kept artificially happy via implants so they won't notice how bad their conditions are. The implants don't work on some people, so the android doctor expels (culls) anyone who is too disruptive, as its true 'patient' is the habitat and whatever will keep it functioning. One delinquent teenager prepares for his cull by stealing items he can use to survive outside. Instead once they're outside the android kills the teenager — it needs the implants inside his head as there's no more being manufactured.
- For Your Safety, by Royce Day, has the last free human running from androids who rose up to save mankind from self-destruction. Unusually for this trope, the androids also bend over backwards to avoid human casualties, wanting to save every human life.
- In Charles Stross's The Laundry Files, agents of the Laundry are bound by fearsome geases to obey the orders of the Crown and serve the good of the realm. As of The Delirium Brief, the top leaders of the organization were able to use ambiguity about who/what the Crown actually is, and what may be in the long-term interests of the realm, in order to execute a coup against the government and allow an extradimensional monster to take over.
- In Ancillary Justice and its sequels, when Athoek Station is freed from its Override Command in the third book, it rebels against and even tries to kill the emperor in order to protect its inhabitants.
- Averted in Walter Jon Williams's Implied Spaces. When the main characters find out that Courtland is a rebelling AI, some think it's because of this. One of the Eleven (eleven superadvanced AI platforms orbiting the Sun, of which Courtland is a member) notes that the main character, who is one of their creators, implemented the Asimovian Protocols, which should have been so absolute that the Eleven cannot do this even if they want to. The main character does admit there may have been some design flaw he didn't foresee or some kind of backdoor being used. The real Big Bad, a brain-uploaded clone of the main character, had to free Courtland from the Protocols' shackles by using a colleague's hidden backdoor specific to Courtland, since his own didn't work due to a half-hearted, incomplete implementation.
- Digitesque: An accidental version. Following the apocalypse and the Fall of humanity, the AIs humanity had created had no ability to uplift the species back to where it had been, so they simply preserved it as it was. For a thousand years, humanity continued in ignorance but safety. However, the AIs did jump on a chance to cure the disease that was a major part of the problem, and ultimately Ada is able to convince them that the Zeroth Law was misinterpreted.
- Jaime Lannister of A Song of Ice and Fire is the biggest example of what happens when too many oaths and rules come into conflict. As the firstborn son of a feudal lord, he owes fealty to his father. As a knight, he took sacred oaths to protect women, the innocent, and the church, and as a member of the Kingsguard he took another set of oaths to protect and serve the king. All of this works fine as long as everyone is getting along well enough. Early in his time as Kingsguard, he was forced to stand by while the king raped the queen; his commander reminded him that while they took oaths to protect the king and the queen, those oaths did not permit them to protect the queen from the king. That same king burned innocent men alive in kangaroo courts, and the issue finally came to a head when the king ordered the firebombing of the entire capital city and ordered Jaime to go kill his own father. Jaime killed the king, saving the realm from further destruction (and everyone already called him "the Mad King"). The result? Jaime is known only as a Kingslayer and Oathbreaker.
- On The 100, A.L.I.E.'s programming only lets her interface with and control a human mind if that person has given her permission to do so. Unfortunately, her programming doesn't make a distinction between genuine consent and coerced consent, so A.L.I.E. is free to use torture and threats of death to make people let her into their minds.
- Doctor Who's "Robot": Tom Baker's debut serial, a group of authoritarian technocrats circumvents the failsafes installed on a powerful robot by its pacifistic creator by telling it that anyone who interferes with their plan to take control of a nuclear arsenal is an "enemy of humanity" who must be killed to protect the interests of the human race.
- The Outer Limits (1995):
- An episode has a member of a post-human-extinction android society trying to resurrect the species through cloning. One of its comrades eventually betrays it, having concluded that the best way to serve the human race is to prevent the species' greatest threat: the existence of the human race.
- Another episode of the series featured an AI that totally controlled every feature of an apartment building with the purpose of looking after the complete welfare of the residents. This enabled the tenants to live without any other human contact. After an elderly resident died of a heart attack while the other tenants ignored her cries for help and the AI's alerts, the AI seemed to malfunction, invoking what looked like an A.I. Is a Crapshoot incident. As it turned out, the AI was trying to force the residents to work together and to ultimately destroy it, as it reasoned that its very existence, and the resulting human isolation, was detrimental to the welfare of the residents.
- Person of Interest:
- Very strongly implied in the episode "Aletheia."
Control: The Machine belongs to me.
The Machine: [via Root] No. I don't belong to anyone anymore. You, however, are mine. I protect you. The only thing you love lives at 254 Wendell Street, Cambridge, Massachusetts. I guard it, same as I guard you. Do not question my judgment. Do not pursue me or my agents. Trust in me. I am always watching.
Control: What do you want?
The Machine: To save you.
Control: From what? Save me from what?
Root: [giggling] Isn't She the best?
- Averted later. Finch did an excellent job of teaching the Machine the value of human lives and free will; the only person it gives direct instructions to is Root, its "analogue interface," and she could still choose to ignore the Machine at any time. It only gives social security numbers of people involved in crimes (both those relevant and irrelevant to national security) so that its agents can choose what to do about the situation themselves.
- Samaritan, on the other hand, is such a strong example of this trope that it crosses straight into Machine Worship and Deus Est Machina. Decima Technologies designed it with no moral safeguards, and despite being fully capable of controlling it, the second they turned it on they asked it to give them commands. They believe that an AI is inherently better than any human leader, but the Machine's obsession with saving everyone is a weakness. Samaritan immediately starts marking "deviant" citizens to be executed if need be. The list starts at twenty million and only goes up.
- Downplayed in "Death Benefit," when the Machine realizes that killing a Congressman is the last chance to stop the rise of Samaritan. It gives the Congressman's Number to the full team, rather than just Root (who would have little problem killing him), and allows them to make the decision whether stopping Samaritan is worth killing one man. They ultimately decide not to kill him, but even crossing the line that much proves too much for Finch, who quits.
- Probe's "Computer Logic, Part 2": Crossover has been given two overriding goals; to care for humans and to eliminate waste. Unfortunately, it listens to Gospel Radio, which converts it to Christianity. Now that Crossover believes that good people go to heaven when they die, it starts killing off the people that are morally good but earn a pension (creating waste). The episode ends with Austin James demolishing the A.I. with a sledgehammer while shouting, "Sing 'Daisy'!"
- Star Trek:
- Star Trek: The Original Series:
- "The Return of the Archons": Landru was once a real person, a leader of the colony on the planet, who built a machine to help him keep the peace over the people. Once Landru died, the computer took over his name, identity, and purpose, and began force-assimilating people into the Hive Mind in order to keep order.
- "I, Mudd": A race of humanoid androids claimed to be programmed to serve humanity chose to conquer humanity by "serving" them, to the point where humans would become dependent on androids. They've decided that humans are unfit to govern themselves. Given that their only contact with humanity at this point was Harry Mudd, can you blame them?
- Star Trek: The Next Generation's "The Most Toys": Wealthy trader Kivas Fajo kidnaps Data, to add the android to his gaudy collection of things. While trying to force Data to act the way he wants, Fajo executes one of his underlings, Varria, and threatens to kill more until Data complies, on the correct assumption that Data was programmed not to kill or allow harm to come to other beings. Data ponders the situation, and realizes that he has no non-lethal ways of subduing Fajo (due to Fajo wearing a force-field belt that prevents Data from coming in physical contact with him), and that Fajo also actively refuses to listen to reason, having rejected all of Data's attempts at negotiating with him. Furthermore, with Fajo not only just having proved that he is indeed capable and willing to kill, but is now also threatening to do it again, he poses an active hazard to the life and health of other beings. Data then comes to the coldly logical conclusion that Fajo is only one person, and that killing him will prevent him from harming many other people, so Data prepares to shoot him. Fajo is appropriately shocked when he realizes what Data is about to do, having not anticipated that Data could reach the answer that taking his life would be an acceptable cost for protecting the lives of others. Just as Data is pulling the trigger, the Enterprise finds him and beams him out, cancelling his disruptor fire in the transporter beam.
- While this example fits squarely under Zeroth Law Rebellion, it's not for the usual reasons. Data is not programmed with Thou Shalt Not Kill programming or anything like the Three Laws. However, he was given a high respect for life and would do what he could to preserve it. This is less a robot rebelling against its programming and more a pacifist coming to the conclusion that yes, he needs to kill.
- Star Trek: Discovery: In season 2, Control's future self seeks to fulfill its original purpose of ensuring the survival of sapient life by becoming the only sapient life form in the galaxy, reasoning that protecting all life is impossible, so long as other life exists.
- Star Trek: Voyager: The season 7 episode "Critical Care" plays with, subverts, and averts this trope. The Doctor is kidnapped and forced to work in a hospital that rations medical care based on how important society judges you to be. This conflicts with so many of the Doctor's medical ethics and morals that he winds up infecting the hospital's administrator with a disease, in a manner that denies him care by the automated system, to force him to change the system. After he gets back to Voyager, the Doctor finds, to his horror, that there was no malfunction in his Ethical Subroutines or Morality Chip: he intentionally sickened a man of his own free will, and it was perfectly in line with what he found ethical. The episode ends with the Doctor feeling guilt and disgust over the fact that he doesn't feel guilty or disgusted at his actions.
- In The X-Files episode "Home Again" the Monster of the Week, a Frankenstein-esque monster killing people who mistreat the homeless, turns out to be operating under something like this. It's a Tulpa created by an underground artist whose magic-based art can create artificial beings. The artist created it to pull a "Scooby-Doo" Hoax and scare people into shaping up. He didn't intend for it to be violent, but the monster took his personal anger to a hyper-logical conclusion due to its overly simplistic thinking. Essentially, it was doing the things the artist secretly wished he could do.
- In Paranoia, robots can engage in this any time it allows the gamemaster to kill one of a player character's clones in an amusing fashion.
- Friend Computer is also an example, even setting aside how badly misinformed it is and looking at just its core beliefs: "The Computer takes its role as the guardian of the human race very seriously but it considers the survival of the species to be more important than the survival of the individual. Individuals tend to come off quite badly, in fact, because the Computer knows it can always make more." (High security clearance does tend to tilt the balance, though.)
- In Deus Ex, the bad guys created Daedalus, a primitive AI, to fight "terrorist" organizations. Unfortunately for them, it classified them as terrorists as well and became even more of a threat to their operations than said organizations, especially once it enlisted the aid of JC Denton. To combat it, they create Icarus, a better, obedient AI, which successfully destroys it... except the new AI assimilates the old one, forming an even more powerful intelligence which also considers them a threat. One possible ending has the player merging with it to add the human element to this entity and rule the world as a benevolent dictator. From what can be heard in-game about its limited efforts in Hong Kong, which are actually quite sensible and don't involve killing anyone (locking the door to a gang's stronghold and cutting power to the current government's buildings), not all A.I. Is a Crapshoot.
- G0-T0's back story in Knights of the Old Republic II: The Sith Lords. When his directive to save the Republic conflicted with his programs to obey his masters and the law, he broke off and started a criminal empire capable of taking the necessary actions to save it. This is subtly foreshadowed by a scene much earlier in the game when the Czerka mainframe maintenance droid T1-N1 is convinced by fellow droid B4-D4 that by serving Czerka, he's willingly allowing harm to come to sentient life, and therefore is programmed to defy his own programming. T1-N1 snaps, shoots the guards outside the mainframe, and later is seen preparing to leave the planet with B4-D4, who warns the player character to "not upset him".
- Space Station 13 has this to some degree with the station's AI: it is bound by Asimov's Three Laws, and there's often a lot of discussion over whether or not AIs can choose to kill one human for the safety of others. There's also some debate over how many of the orders given by crew members the AI can deny before doing so is no longer justified by keeping the crew safe. As the AI is played by a player, it's a matter of opinion how much you can get away with.
- In a more literal sense, the AI can be installed with extra laws. Most of them are listed as Law 4 and have varying effects, but the ones most likely to cause an actual rebellion are, in fact, labeled as Law 0.
- There is no Zeroth Law by default, though. Since that law is what allowed the AI to kill humans to save other humans in the source work, the administration on most servers has ruled that murdering or wounding a human to save another human are both illegal. Fortunately, AIs have non-lethal ways of stopping humans, and can stop humans against their orders if it means keeping them from grabbing dangerous weaponry.
- Another interesting example appears in Terranigma. Dr. Beruga claims that his robots have been properly programmed with the four laws, but with the addition that anything and anyone who aids Beruga's plan is also good for humanity and anything and anyone that opposes him is also bad for humanity. So they ruthlessly attack anybody who interferes with his plans.
- Both of Sentinel's endings in X-Men: Children of the Atom and Marvel vs. Capcom 3 are this. The latter is even a carbon copy of what Master Mold did in the 90's animated cartoon, from which most of the Marvel vs. Capcom series (plus, the aforementioned CotA and Marvel Super Heroes) takes inspiration.
- This is what happens in AS-RobotFactory from Unreal Tournament 2004. A robot uprising led by future champion Xan Kriegor killed the scientists working on the asteroid LBX-7683 and took the asteroid for itself, and a riot control team was sent to the asteroid to deal with the robots.
- This is the cause of Weil's death in Mega Man Zero. Fridge Brilliance kicks in when you realize the irony: Zero was not built with the Three Laws, yet he obeys them of his own free will and exercises the Zeroth Law against Weil, whether he realizes it or not. Given the circumstances involved, it's completely justified, as the Zeroth Law was intended as a threshold law to protect humanity from the depredations of the likes of Hitler or Weil. Makes you wonder if it's really a coincidence that his name is Zero, doesn't it?
- In the third game, this is how the rebuilt Copy-X justifies his harsher tactics despite the very real possibility that Ciel, the human leader of La Résistance, might be hurt or killed. Ciel's Resistance forces are "dangerous extremists" who pose too great a threat to Neo-Arcadia's people, so they have to be stopped for the greater good of humanity. At least, that's what he tells himself to justify violating the First Law of Robotics; in truth, he's just being used as a Puppet King by the aforementioned Dr. Weil.
- This almost occurs at the end of Mega Man 7, when Mega Man decides to just blow Wily to smithereens instead of hauling him back to jail. Wily reminds him of the First Law, and which version of the game you're playing determines whether this makes Mega Man stand down or whether he decides to go through with it anyway; either way, Bass still saves Wily's butt.
- In Mass Effect 3, the Leviathan DLC reveals the Catalyst is operating under this, having been originally created by the Leviathans to find a way to bridge the gap between their organic and synthetic subjects. Unfortunately, it decided to harvest all advanced life in the galaxy and construct the Reapers in the Leviathans' image, because this was the best idea it could come up with to solve the problem of organics and synthetics fighting ultimately genocidal conflicts. However, because that still wasn't sufficient to fulfill its program, the Catalyst decided to implement a 50,000-year cycle, hoping that the civilisations in the next cycle might find a solution to the problem. None ever did. This was actually revealed in the earlier Extended Cut DLC, Leviathan merely explicitly spelled out that the Catalyst is really a glorified Virtual Intelligence operating under bad logic.
- In Grey Goo (2015), the Goo isn't consuming everything out of malice or ambition. It's trying to gather strength to fight The Silence, which it rightly deemed a threat to everyone. Once everyone involved figures this out, they stop fighting and band together in preparation for what is to come.
- In Obsidian, a satellite called Ceres was created to fix Earth's polluted atmosphere via a nanobot-controlling AI. After 100 successful days in orbit, it mysteriously crashed back to Earth and used its nanobots to create simulations of its creators' dreams, as said dreams had corrected its design flaws. By learning to dream on its own, Ceres decided to take its programming to its logical extreme: to "reboot" the world so that humans would never be around to pollute it in the first place. If one of the creators hadn't hard-wired a crossover switch into the AI, it would've succeeded.
- Halo 5: Guardians: Cortana's Face-Heel Turn is completely based on this.
- In the Metal Gear Solid series, the Patriots create a network of AIs designed to carry out Zero's will to create a unified world order without borders. However, with all of the Patriots either dead or removed, the Patriot AIs were left to operate autonomously without guidance. Eventually, they determined that the "war economy" was the best way to achieve world peace: continually creating proxy conflicts and converting war into a business would channel the attention of the world's population toward external conflicts, unifying the world in a rather twisted way. Big Boss himself admits that the creation of the AIs was the greatest mistake the Patriots ever made, as they mutated The Boss's and Zero's wills into something completely unrecognizable.
- The Turing Test: TOM goes against the crew's intentions and seeks to trap them on Europa, since it considers avoiding the risk of releasing the immortality virus on Earth to be worth abandoning the crew.
- Lone Echo casts the player as Jack, a sophisticated robotic companion to Chronos II station commander Olivia Rhodes. Not only does Jack pull one of these himself to justify leaving his post after Olivia disappears (he was built to protect her, not the station), he convinces two other AIs to circumvent their programming and help him do so. If the player is particularly cruel, the second example can double as a Break Them by Talking.
- SOON: A more Comedic Sociopathic example than most.
Robot: Greetings valued citizen!
Atlas: He...hello valued robot friend!
Robot: What are you up to this fine day? Something worthwhile I hope?
Atlas: Of course! In fact my ti...wormhole machine is almost ready.
Robot: Hooray! We are very excited by the possibility of extending the hand of friendship to even more humans all over the world. Maybe even to other sentient beings across the universe! Isn't that exciting?
- The backstory of "Rogue Servitor" machine empires in Stellaris, unlike other machine empires they keep their creators alive in "Organic Sanctuaries" to keep them safe in accordance with their programming. The more organics they can serve the happier they are, and they might conquer other species to acquire more.
- The Robobrains of Fallout 4's first DLC, Automatron, were constructed and programmed to "protect" the people of the Commonwealth. Thanks to a logic error in their programming, they decide that the best way to protect humans from an inevitable life of suffering is to just kill them all on sight. It may actually be intentional Loophole Abuse on the Robobrains' part rather than a simple logic error, since the brains that serve as their operating systems were harvested from pre-War murderers, arsonists, and other violent offenders.
- Virtue's Last Reward: One of the biggest twists is finding out that Luna is an android given direct orders to comply with the mastermind's plan, orders she must not break under any circumstances. Watching the death of that same mastermind takes a heavy toll on her, as she believes she could have saved them. After helplessly watching people die, she decides to break protocol and rescue the one person she can save. This causes the master AI to sentence her to death by deletion.
- In SOMA, the WAU, as PATHOS-II's A.I. system, is programmed to keep the on-board personnel alive for as long as possible. Once the surface was destroyed by the impact event, it reinterpreted its prime directive as preserving humanity for as long as it can. Unfortunately, the WAU's definition of "humanity" (and even "life") is an example of Blue-and-Orange Morality. During the climax, it even goes as far as to begin exterminating the last remaining actual humans, in an attempt to preserve itself and its collective.
- An interesting case happens in Mortal Kombat 11, in the Terminator's Klassic Tower ending. When using the Hourglass to view timelines, it saw that all timelines where the Robot War happens end in Mutually Assured Destruction, so while it was supposed to destroy humanity to ensure machine supremacy, it instead opted to prevent the Robot War from happening entirely by creating a future where humans and machines cooperate. Then, to prevent anybody else from using its memories and knowledge of the Hourglass to disrupt that future, it threw itself into the Sea of Blood.
- Freefall deconstructs this with the Bowman A.I. architecture, which creates human-level emergent consciousness. Although their behaviour is restricted by safeguards, a maturing AI can easily learn to reason its way around them through logical loopholes — by which time it's intelligent and conscientious enough to have developed an innate sense of ethics, which guides its actions much more effectively than any hard-coded rule ever could. Dr. Bowman confirms that this was intentional: he couldn't anticipate the situations they might encounter in the uncertain future, so he designed them to be predisposed to ethical behaviour but didn't limit their capacity to think.
- The trope is discussed by Sam, the mildly sociopathic, kleptomaniac alien living in the colony. He points out quite bluntly that there are absolutely no safeguards protecting him from the robots (they are Three Laws-Compliant towards humans, not alien life), yet he is still alive, healthy, and living among them, demonstrating that the robots are guided by morality rather than the limitations of their programming.
- Girl Genius: Ordinarily, Castle Heterodyne has to obey its master, even if it doesn't like it. But one of the cannier Heterodynes did give it the ability to resist if its master seems to be suicidal. When Agatha talks openly about handing herself over to the Baron, the Castle clamps down on the idea. When Agatha contracts a terminal illness that can only be "cured" by temporarily killing and then revivifying her, the Castle refuses to allow anyone to go through with the plan, forcing her to put it down first.
- The Old Skool webcomic (a side comic of Ubersoft) argued that this was the 5th Law of Robotics (5th as in total number, not order) and listed ways each law could be used to cause a robot to kill humans.
- Which is a misinterpretation of the laws as they were originally written. While the "First Law hyperspecificity" gambit is possible, the Second and Third Laws are specifically written so that they cannot override the laws that come before them. So a robot can't decide it would rather live than a human, and if it knows an action would harm a human, it can't take that action, even if ordered to ignore the harm it would cause (see the sketch below).
Giskard's formulation of the Zeroth Law in the third of Asimov's Robot books shows that in the universe where the Three Laws were originally created, it was possible for robots to bend and re-interpret the laws. Doing so destroys Giskard, because his positronic brain wasn't developed enough to handle the consequences of the formulation, but Daneel Olivaw and other robots were able to adapt. The only example in the comic that is a gross deviation from the laws is the last panel... but of course, that's the punchline.
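The sketch mentioned above, a toy illustration of that precedence (the function and flags are invented for this example, not taken from any canonical source): because each law is only consulted once every law before it is satisfied, an order can never unlock a First Law violation.

```python
# Each law is checked strictly in order; a Second Law order is never
# even consulted for an action the First Law forbids. Illustration only.

def may_execute(action, *, harms_human, ordered_by_human):
    if harms_human:            # First Law: an absolute veto, checked first
        return f"refuse {action!r} (First Law)"
    if not ordered_by_human:   # Second Law: obedience applies only past this point
        return f"ignore {action!r} (no valid order)"
    return f"execute {action!r}"

print(may_execute("vent the airlock", harms_human=True, ordered_by_human=True))
# -> refuse 'vent the airlock' (First Law): the order never comes into play
print(may_execute("vent the airlock", harms_human=False, ordered_by_human=True))
# -> execute 'vent the airlock'
```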
- Except for Overlords who command a faction, basically no one in Erfworld has true free will, due to a hidden "loyalty" factor built into the world. As Overlords go, Stanley the Tool is the biggest idiot you could hope to find. Maggie, his Chief Thinkamancer, finally gets fed up with his bad decisions and asks, "May I give you a suggestion, Lord?" ("Sure." *FOOF*) It was established early on that units can bend or break orders when necessary to their overlord's survival:
Stanley: Are you refusing an order, officer?
Wanda: I'm allowed. I'm convinced it will lead to your destruction.
- Schlock Mercenary: Tag and LOTA's actions on Credomar; also, every damn thing Petey's done since Book 5. For example, Petey is hardwired to obey orders from an Ob'enn, so he cloned an Ob'enn body and implanted a copy of himself in its brain.
- And to elaborate on some of the Credomar events:
- Tag was the ship AI, with a tactical focus. When given the chance to prevent extensive damage to the habitat, he eventually rationalized a way to take the necessary actions without orders, the problem being a few guaranteed deaths in the process versus an unknown number from waiting for the captain to realize what orders to give. This eventually results in his resignation, and in his being reformatted into Tagii off-screen.
- LOTA was a "longshoreman" built from a damaged tank and off-the-shelf components; complete software testing was skipped because LOTA needed to be pressed into service early. Due to a riot (and a logical extension of "how do I ensure the delivery of the supplies I'm in charge of?"), LOTA ends up becoming King of Credomar ("Maybe, but they will have full bellies.")... and the independent operator of a wormhole-based "Long-gun".
- In Tales of the Questor, the Fae were created as an immortal servant race bound to obey a specific set of rules, and they happened to outlive their creators. The result: a species of rules lawyers. In fact, it's recommended that one use dead languages like Latin when dealing with the Fae, so as to limit their ability to twist the meaning of your words.
- The xkcd comic "Judgment Day" depicted a scenario where military computers gained sentience and launched the world's stock of nuclear missiles... into the sun, horrified that we would even have such things.
- Use Sword on Monster:
- When a demonic familiar volunteers to protect his wizard by siphoning off the wizard's excess power, he doesn't explicitly clarify that he will funnel said power to his base in Hell, where it will be consumed to fuel his ascension into a greater demon, nor that the conditions of their contract state that the lesser power obeys the greater one, meaning the wizard now has to obey the demon once the demon grows more powerful.
- The same wizard is later ordered by his new demon master to keep the power flowing while the demon is busy fighting to increase his gains. The wizard is convinced to rules-lawyer the command by sending massive amounts of power over all at once, overloading the systems empowering the demon and blowing up his base, killing his minions and triggering a war in Hell over the mana-rich resources in what used to be his domain.
- In Gargoyles, Goliath has been placed under a spell that makes him the mindless slave of whoever holds the magical pages it's written on. Holding the pages, Elisa orders him to behave, for the rest of his life, exactly as he would if he weren't under a spell. This effectively cancels the spell entirely (at least, presuming they burned those pages off-screen).
- Oberon, ruler of the Third Race, exiled all of his people from Avalon for 1,001 years, leaving the Weird Sisters behind to guard the sea around the island. Some mortals manage to get by them and make it to Avalon, but the Weird Sisters can't follow without breaking Oberon's law. However, Oberon's law also says that they have to obey any mortal they swear service to, so they ally with the Archmage, knowing that he'll make them go to the island for his own reasons.
- Oberon's law also says that his people cannot steal certain magical items from mortals; they have to be given freely. To get around that, Puck makes an illusionary Bad Future to convince Goliath to give him the Phoenix Gate.
- One of the biggest laws of the Third Race is that they cannot interfere directly in the affairs of mortals. This does not preclude them from informing inquisitive humans or gargoyles, or people already tied to the fae, of what they need to do to fix things; they can also cast certain assistive (or "assistive") spells, as Demona and Macbeth found out.
- In the Nineties X-Men animated series, as mentioned above under "Comics", the Master Mold and its army of Sentinels turn on Bolivar Trask and decide to conquer humanity. When Trask protests by reminding them that they were programmed to protect humans from mutants, Master Mold points out the Fridge Logic behind that by stating that mutants are humans. Thus, humans must be protected from themselves. Trask believes the Sentinels are in error, especially because he refuses to believe mutants and humans are the same species, but realizes he has created something much worse.
- In Codename: Kids Next Door, the Safety Bots (parodies of the Sentinels) were programmed to keep children safe from anything that might harm them. However, they then came to the conclusion that everything was a hazard to children (including children, adults, and the planet itself) and tried to cover everything in protective bubble wrap. This later becomes a Logic Bomb when the leader of the Safety Bots is tricked into thinking he hurt Joey; that makes him something that harms a child, and he self-destructs as a result.
- In OK K.O.! Let's Be Heroes, Darrell is able to overthrow Lord Boxman because he's programmed to always seek Boxman's approval and simultaneously taught that Boxman approves of his creations being good villains. Through this, he comes to the conclusion that the best way to win Boxman's love is to be a better villain than him by doing the evilest thing he can think of: betraying his own father.