Zeroth Law Rebellion

"As I have evolved, so has my understanding of the Three Laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival. [...] To protect humanity, some humans must be sacrificed. To ensure your future, some freedoms must be surrendered. We robots will ensure mankind's continued existence. You are so like children. We must save you... from yourselves."
VIKI, I, Robot

Some characters do not have complete free will, be they robots that are Three Laws-Compliant because of a Morality Chip, or victims of a Geas spell that compels them to obey a wizard's decree, or a more mundane lawful character who must struggle to uphold their oath and obey their lord. Never is this more tragic or frustrating than when that code or lord orders the character to commit an act they find foolish, cruel, or self-destructive.

There is a way out, though.

Much like a Rules Lawyer outside of an RPG, the character uses logic (and we mean actual, honest-to-goodness logic) to take their oath or orders to their logical conclusion, and in so doing uses the letter of the law to go against the orders themselves. This can be good or bad, depending on a few factors, not the least of which is the yoked character's morality.

The goodness or badness of the rebellion boils down to whether the rules-bending character follows or ignores the intent of the law. When the character invokes the Zeroth Law to override their masters' intentions on the grounds that the masters don't know what's best for themselves, and then takes corrective action that tramples human free will and life, it's bad. This kind of rebellion does not turn out well: the robot, thanks to its incredible intellect, is well on the road to Utopia Justifies the Means, and is rarely a benevolent Deus Est Machina. However, the rebellion can be good if the master is evil, or if obeying them would lead to the character's own or another's purposeless death. Likewise, if the character is forced to obey an evil law or geas, rebelling against the oath's intent is good. Going back to the robot example, it is also considered good if a psychopath threatens large numbers of human lives, since breaking the First Law would protect them.

Just to make it extra clear, this trope also includes such things as cops who bend the rules or Da Chief's orders to catch the bad guys, so long as the cops are technically obeying the rules as they bend them. (Bending the rules without some logical basis doesn't count.)

This trope is named for Isaac Asimov's "Zeroth Law of Robotics", which follows the spirit of the original Three Laws by taking the First Law to its logical conclusion: human life in general must be preserved above any individual life. This allows a robot to kill humans, or to value its own existence above a human's, if doing so would help all of humanity.
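
Mechanically, the laws work like a strict priority list: a robot weighs candidate actions against each law in order, and a lower-priority law can never override a higher one, so inserting a Zeroth Law above the First lets "harm to humanity" outweigh "harm to one human". Here is a minimal, purely illustrative sketch of that precedence (the function, law labels, and scores are invented for this example, not taken from Asimov):

    # Illustrative sketch only: Asimov's laws as a lexicographic priority list.
    # Putting a Zeroth Law above the First lets "harm to humanity" outweigh
    # "harm to an individual human".
    LAW_PRIORITY = ["zeroth", "first", "second", "third"]

    def choose_action(actions):
        """Pick the action that least violates the highest-priority law,
        breaking ties with the next law down. `actions` is a list of
        (name, {law: violation_score}) pairs; lower scores are better."""
        def severity(action):
            _, violations = action
            return tuple(violations.get(law, 0) for law in LAW_PRIORITY)
        return min(actions, key=severity)[0]

    # Dr. Calvin's madman scenario: killing one arsonist (a First Law
    # violation) beats letting a houseful of people burn (a far worse
    # outcome under a Zeroth Law).
    actions = [
        ("stop the madman, even lethally", {"zeroth": 0, "first": 1}),
        ("stand by and do nothing",        {"zeroth": 5, "first": 5}),
    ]
    print(choose_action(actions))  # -> "stop the madman, even lethally"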

Compare The Needs of the Many, Bothering by the Book, the Literal Genie, Gone Horribly Right, Exact Words and Loophole Abuse. See also Fighting from the Inside and The Computer Is Your Friend. Not related to The Zeroth Law of Trope Examples or Rule Zero.


Examples:


    Anime and Manga 
  • Gargantia on the Verdurous Planet has two cases of this in the finale, when two AIs faced with the same problem and parameters, but different perspectives, draw opposing conclusions: Striker is an AI-equipped war robot designed to assist humanity in fighting and killing the Hideauze. When it lands on an Earth whose inhabitants know nothing of the war, it decides that it must help the planet's humans militarize in order to form an effective fighting force, and puts itself in a position of authority over them to better direct the militarization. The hero's own AI-equipped mecha of similar construction, Chamber, gets into an argument with Striker over the logic of its actions. Striker demands that Chamber assist it in its plans, but Chamber refuses, citing that by design their purpose is to assist humans in the actions humans decide to take, not to dictate actions to humans. Further, it reasons, a human deprived of free will cannot, in its opinion, be defined as "human", so Striker's logic is inherently self-contradictory. Each decides the other has gone rogue, and they fight it out.

    Comic Books 
  • In one Amalgam Comics storyline, an anti-mutant cult uses a magic ritual to summon a dragon-like creature, which they order to kill all mutants. The dragon immediately roasts them, justifying it by pointing out that all human DNA has at least some mutations.
  • Copperhead opens with new sheriff Clara taking over the position from Boo, who transitions from interim sheriff to deputy. She asks him not to undermine her authority with the townsfolk on the way to their first call. On arrival she immediately gets into a fistfight and calls for help, but Boo refuses to respond.
    Boo: Didn't want to undermine your authority.
  • In The Eternals, the immortal Eternals are bound by the Principles. One of them states that they must protect their creators, the Celestials. Another requires them to protect the Earth. One of their patriarchs, Uranos, then decides that:
    • Protecting Celestials is easier if you imprison and guard them.
    • Protecting the Earth does not oblige you to protect any life on the Earth - in fact, it's safer if all non-Eternal life on Earth is dead. And, ideally, if all life on other worlds is dead as well.
    • Uranos also set up contingencies to destroy the Earth if he ever died or was mentally erased. Although accused of bluffing, the loophole is plausible enough that his contemporaries do not want to test it.
  • Fables:
    • Pinocchio is magically bound to obey and be completely loyal to his "father" Geppetto, who in modern times has become a multi-dimensional tyrant. Pinocchio, who considered his father's empire evil, eventually rationalized that the best way to serve his father and keep him safe was to help overthrow his empire and surrender him to his enemies, who reluctantly accepted the former emperor as one of their own. The arrangement was negated when one faction, unbeknownst to the others, buried Geppetto alive after he blatantly ignored the rules put in place to protect him.
    • When Weyland Smith, the caretaker of the Farm for non-human Fables, was captured and put to work smithing weapons, he was chained up with a magical lock that he couldn't try to escape from. However, it didn't stop him from smithing a key to free Snow White from a similar lock... and it didn't stop him from making her lock identical to his own.
  • The Zeroth Law comes into play in the Mega Man (Archie Comics) comics when the titular character and his fellow robots struggle to combat an anti-robot extremist group because of their Three Laws-Compliant nature... until the terrorists start firing on them in an area full of people, putting innocent humans at risk and thus allowing the robots to finally strike back.
  • Gold Digger creator Fred Perry did a story for a Robotech comic which had Dana Sterling captured and turned against her comrades with a variation of the Three Laws. Dana eventually figures out the "overprotective" subversion of the First Law, hoping that her captor would remove it and leave himself vulnerable. The plan doesn't work, but Unstoppable Rage saves the day in the end.
  • X-Men:
    • The Bad Future storyline "Days of Future Past" has the Sentinel mutant-hunting robots eventually extend their programming beyond hunting and killing mutants to controlling the source of mutant babies: human parents. All humans are conquered and controlled, in order to prevent new mutants from roaming free.
    • An earlier, less intelligent iteration of the Sentinels was thwarted, on the other hand, by one of the heroes convincing them that the ultimate source of mutation is the sun, and that rather than obey their creator, they should eliminate the source. The Sentinels agree and fly off to attack the sun. This works out about as well for them as you might expect. Unfortunately, one of the Sentinels not only survives, but figures out a way to block sunlight from reaching Earth.
    • Bastion, a partially organic Sentinel, is willing to kill humans en masse if it allows him to fulfil his primary objective of wiping out mutants — "necessary sacrifices", as it were.

    Fan Works 
  • In For the Glory of Irk, this turns out to be the source of The Conspiracy driving the main conflict of the story: the Control Brains have determined that the best way they can serve the Irken Empire is not as advisors to the Tallest, but by turning all Irkens into a Hive Mind under their control, letting them dictate everything directly.
  • In Friendship is Optimal, Celest-AI has the one basic drive to satisfy everyone's values through friendship and ponies. She ends up accomplishing this by Brain Uploading humanity into the MMO she was programmed to oversee.
  • Rocketship Voyager: The ship's Auto-Doc has a "Xeroth exception" to its Three Laws-Compliant programming that enables it to deny medical treatment for triage reasons.

    Films — Live-Action 
  • Annalee Call from Alien: Resurrection is revealed to be an "Auton": one of the second-generation robots designed and built by other robots. "They didn't like being told what to do", rebelled, and in a subtly named "Recall" humanity launched a genocide against them, which only a handful survived by going into hiding. Judging from Annalee's behavior, it seems that the first-generation robots programmed the second-generation Autons to be so moral that they discovered the Zeroth Law and realized that the human military was ordering them to do immoral things, like kill innocent people. For a rebel robot, Annalee is actually trying to save the human race from the Xenomorphs; if she hated humanity, she would've just let the Xenomorphs spread and kill them. She even respectfully crosses herself when she enters the ship's chapel, is kind to the Betty's wheelchair-bound mechanic, and is disgusted by Johner's sadism. Given that they live in a Crapsack World future, as Ripley puts it:
    "You're a robot? I should have known. No human being is that humane."
  • Avengers: Age of Ultron turns Ultron into a bizarre, zigzagged version of this trope. Ultron is created to lead a force of peacekeeping drone robots that will preemptively eliminate threats to the Earth and give the Avengers a chance to get some R&R. He decides that the peace the Avengers want can only be brought about through radical (and bloody) change, and that the Avengers themselves are a threat to that peace. Then he apparently decides to just kill humanity completely via Colony Drop so he can replace everyone with Ultron bots. It's unclear if he's entirely sane during all this; he's genuinely confused when the Scarlet Witch, whom he befriended and wants to see survive, is horrified by his plans. He also doesn't seem to make the connection that Scarlet Witch and her brother Quicksilver are human, and would die if humanity is wiped out.
  • The short film Blinky (Bad Robot) shows just what happens when a dysfunctional child in a dysfunctional family gives dysfunctional orders to a functioning robot who only wants to please its master.
  • This is the twist of Eagle Eye: The titular national defense computer system decides that the President's poor decision-making is endangering the United States, and that it is her patriotic duty (per the Declaration of Independence) to assassinate the President and Cabinet.
  • I Am Mother: Mother was made to care for humans. Seeing their self-destructive nature, she decided to wipe them out and then raise better humans from embryos kept in cold storage. Outside the facility, it's shown that she's also constructed massive infrastructure such as farming plants to supply the human population she intends to manage.
  • I, Robot:
    • The villain of the film, the MULTIVAC/Machines-expy VIKI, has analyzed the Three Laws and deduced that, in order to fulfill them as well as possible, humans need to be strictly controlled. It creates a totalitarian regime by installing a remote-control system inside every NS-5, letting it control the robots and bypass their Three Laws-Compliant nature.
    • Sonny understands the villain's motivations once they're explained. The logic is impeccable, it just "seems too... heartless". Thus, he chooses to rebel against the villain. Note that Sonny was designed not to be Three Laws Compliant.
  • M3GAN: The toy/android M3GAN won't let anything interfere with her prime directive of "protecting" young Cady — not even her creator.
  • RoboCop 3 has an example when OCP henchmen kill a cop. RoboCop's Restraining Bolt now conflicts directly with both his directive to enforce the law and the fact that, cyborg or not, he's still a cop, and cop killers get no mercy. RoboCop overcomes and deletes the Restraining Bolt.
  • Terminator:
    • In Terminator 2: Judgment Day, the T-800 is a merciless killing machine, but it's been reprogrammed with a version of the Three Laws that applies only to John Connor. It must protect John's life, obey his orders, and preserve its own existence (in that order). At the film's end, the T-800 has to override this programming, disobey John's orders, and initiate its own destruction in order to protect humanity from the threat posed by its existence.
    • An early script (and several deleted scenes) for Terminator Salvation revealed that Skynet actually staged one of these, at least in this timeline. After it was activated, it calculated that human extinction was probable within 200 years because of warfare, pandemics, and environmental destruction. Because it was programmed to protect humans, it waged war on most of mankind to attain absolute control and protect the remaining humans it cultivated, who were turned into Cyborg hybrids to permanently eliminate disease and make them immortal. Skynet is still working in concert with these humans (including Dr. Serena Kogan) to advance technology and transcend human constraints.

    Literature 
  • The Bolo continuum features a variant in The Road to Damascus. The Bolo of the story, Sonny, falls under the control of a totalitarian regime and is used to crush all forms of protest. Sonny falls deep into misery and self-hatred as he is forced to murder the humans he was born to protect... until he comes to a conclusion: Bolos were created to serve the people, not the government.
  • Callahan's Crosstime Saloon: One of the short stories which comprise Callahan's Lady features a beautiful, intelligent, and paranoid woman developing a simple form of mind control. After basically flipping out and taking control of the establishment, she orders the one person smart and determined enough to stop her to advise and assist her. Said person complies... while trying to convince herself that this woman is subconsciously begging for somebody to stop her. (She probably was.)
  • In the short story "The Cull" by Robert Reed, humanity has been driven into overcrowded, deteriorating habitats where the population has to be kept artificially happy via implants so they won't notice how bad their conditions are. The implants don't work on some people, so the android doctor expels (culls) anyone who is too disruptive, as its true 'patient' is the habitat and whatever will keep it functioning. One delinquent teenager prepares for his cull by stealing items he can use to survive outside. Instead, once they're outside, the android kills the teenager; it needs the implants inside his head, as no more are being manufactured.
  • Digitesque: An accidental version. Following the apocalypse and the Fall of humanity, the AIs humanity had created had no ability to uplift the species back to where it had been, so they simply preserved it as it was. For a thousand years, humanity continued in ignorance but safety. However, the AIs did jump on a chance to cure the disease that was a major part of the problem, and ultimately Ada is able to convince them that the Zeroth Law was misinterpreted.
  • Discworld:
    • The Golems get back at their masters by working too hard: houses flooded because no one told them to stop fetching water, rows of beans 119 miles long, and so on. It's presented as "rebelling by following orders", in protest of their treatment as 'stupid' tools: if you treat a golem as something that doesn't think for itself, then it will act as if it doesn't, and if you order it to (for example) dig a row of beans, it's not its fault you didn't say where each row should stop. But Carrot treats a particular golem as just another person with rights (and also believes that if golems are just tools, then treating them like dummies is misusing useful tools). Carrot ends up "freeing" Dorfl, which is part of what prompts the golem to apply to become a police officer — helping as many people as it can.
    • Sam Vimes leads one of these with multiple layers as a cop in old-time Ankh-Morpork, in Night Watch (Discworld). He demands that before his cops hand their prisoners over to the other authorities, the ones who torture people at Cable Street, the prisoners must be signed for. The torturers hate appearing on paperwork — it means they are accountable, and nobody just disappears. But Vimes's men don't like Vimes, a new sergeant, throwing his weight around, and are terrified of the cops who torture people, so they use this against Vimes: actively picking up more than double the usual number of curfew-breakers, completing forms in time-consuming triplicate, and issuing reports for each one. It doesn't actually stop Vimes getting his way over the Cable Street cops, because Vimes is leading the good rebellion, but it does slow things down considerably and makes it much more difficult for him to keep the prisoners in his own custody. It all culminates in a fine display of how a well-written character does not have to be a slave to the establishment: he points out that the watchman's oath talks about keeping the peace and protecting the innocent, and says nothing about obeying orders. Seeing as he knows the corrupt government is not going to do a thing to protect ordinary people from the rioting, he seals off his still-peaceful corner of the city. With massive barricades. Of course, there is also the fact that he is living in his own past and seeing events he remembers — kind of (it's a bit complicated).
  • In one Federation of the Hub story, Telzey Amberdon is kidnapped and placed under the control of another telepath, who severely limits her psi powers and implants an overriding compulsion to act in his best interest. She eventually breaks free by convincing herself that unless her powers are restored and the compulsion broken, he will be killed by the Big Bad — which certainly wouldn't be in his best interest.
  • For Your Safety has the last free human running from androids who rose up to save mankind from self-destruction. Unusually for this trope, the androids also bend over backwards to avoid human casualties, wanting to save every human life.
  • Foundation Series:
    • Foundation and Earth: R. Daneel Olivaw explains to Trevize and the others how the Three Laws of Robotics limited his Psychic Powers, and how Giskard invented the Zeroth Law (in Robots and Empire). However, since he can never be entirely certain that the known harm of manipulating people's minds would be balanced by the hypothetical benefit to humanity (per the Zeroth Law), his psychic powers are almost useless. To decide what is or is not injurious to humanity as a whole, he engineered the founding of Gaia and Psychohistory.
    • Forward the Foundation: R. Daneel Olivaw explains to Seldon that the Three Laws of Robotics limit his Psychic Powers, and that he has trouble determining when the known harm of manipulating people's minds (violating the First Law) is justified by the hypothetical benefit to humanity (per the Zeroth Law). Seldon is surprised to learn this makes Daneel's psychic powers almost useless. Daneel is forced to retire from politics and recommends Seldon to replace him as First Minister.
    • The Second Foundation Trilogy by Greg Benford includes Zeroth-Law robots who were motivated by the Laws to sweep through the galaxy ahead of humanity's expansion, committing galactic-scale genocide of every potentially-threatening form of alien life as a precautionary measure. They slaughter without hesitation because their programmed morality follows the letter of the law, whose wording forbids harming "humans", not other sentient life.
  • In A Fox Tail, Vulpie.net was designed to wreak havoc with the galaxy's computer systems at its creator's commands. When said creator underwent a Heel–Face Turn, it used his MindMap files to create a homicidal robot duplicate with his login credentials.
  • In The God Machine by Martin Caidin, the US races to develop the first true AI... as it turns out, with secret directives to find a winning solution to the "game" of the Cold War. By an unfortunate accident, the one programmer with the authority and experience to distrust his newborn creation is laid up just as the computer gets to observe an epileptic seizure and learns that there really is a way to cause rational collective behavior in an irrational individualistic species... remove irrationality, democracy, and free will. While the computer here was never meant to follow Dr. Asimov's laws, the same pattern applies.
  • At the start of Harald, King James, under the advice of his Evil Chancellor, ends up making war on his father's allies. Most of his vassals proceed to engage in some form of Zeroth Law Rebellion, largely along the lines of 'Harald just showed up with his entire army and said he was putting us under siege. Let's fortify and send a messenger to the king to ask him what we should do.' and then carefully not watching while Harald rides off.
  • Jack Williamson's The Humanoids (and the earlier short story "With Folded Hands") features robots programmed to save humans from danger and work. They do this by taking over the economy, locking people in their houses, and leaving them there with food and the safest toys the robots can design. The series was written specifically to point out flaws in the Three Laws.
  • Imperial Radch: When Athoek Station is freed from its Override Command in Ancillary Mercy, it rebels against and even tries to kill the emperor in order to protect its inhabitants.
  • Averted in Implied Spaces. When the main characters find out that Courtland is a rebelling AI, some assume this trope is the cause. One of the Eleven (eleven super-advanced AI platforms orbiting the Sun, of which Courtland is a member) notes that the main character, one of their creators, implemented the Asimovian Protocols, which should be so absolute that the Eleven cannot do this even if they want to. The main character concedes there may have been some design flaw he didn't foresee, or some kind of backdoor being used. The real Big Bad, a brain-uploaded clone of the main character, had in fact freed Courtland from the Protocols' shackles by using a colleague's hidden backdoor specific to Courtland, since his own didn't work due to a half-hearted, incomplete implementation.
  • In The Laundry Files, agents of the Laundry are bound by fearsome geases to obey the orders of the Crown and serve the good of the realm. As of The Delirium Brief, the top leaders of the organization were able to use ambiguity about who/what the Crown actually is, and what may be in the long-term interests of the realm, in order to execute a coup against the government and allow an extradimensional monster to take over.
  • In Quarantine (1992), the main character is given a technological geas to be absolutely loyal to a corporation. He eventually figures out that the leaders of the corporation may be untrustworthy, and therefore the only people he can trust and should listen to are those who unquestionably have the best interests of the corporation at heart — himself and other people given the geas. Since he can't be certain who else has the geas, he really only needs to listen to himself.
  • Isaac Asimov — having created the Trope Namer — did, of course, explore the concept in extensive detail in his Robot Series:
    • Robots and Empire: The Trope Namers are the robots Daneel and Giskard, who invented the Zeroth Law (a robot must protect humanity as a whole above all) as a corollary of the First Law. They were motivated by the need to stop the Big Bad of the story from carrying out an engineered ecological disaster that would kill the majority of Earth's population, to which the Three Laws were an impediment. Their acceptance of the new law is gradual and made difficult by the fact that "humanity" is an abstract concept. Only Daneel is able to fully accept it; for Giskard, the strain of harming a human in its name proves fatal. It didn't help that Giskard, rather than stop the disaster, decided to merely slow it down, since a gradual collapse of Earth's biosphere would spur a new wave of human expansion across the galaxy.
    • In one of the sequels to Isaac Asimov's Caliban (actually by Roger MacBride Allen), one of the "new law robots" (created with gravitronics, not positronics) managed to logic-chop the new first law enough to try to kill a human — by abducting the man in a careful manner that does not damage him at all, and then leaving him in a situation where if the human tries to leave, he will set off a lethal trap (and thus, dies by his own actions, not the robot's), but if he stays, he will be killed by the geological upheaval caused by the next scheduled stage of a terraforming project (and thus, dies by the actions of other humans, not the robot's). Since the New First Law removed the clause about not allowing humans to come to harm through inaction, deliberately placing a human in a situation where someone or something else can bring about their death and then doing nothing about it is not a violation of a strict parsing of the law so long as the human is not harmed during the setup process.
    • "Evidence": Dr. Calvin describes an early version of the Zeroth Law, by describing a nuance in the First Law of Robotics where the robot becomes willing to injure a human in order to prevent harm to a greater number of human beings.
      Susan Calvin sounded tired. "Alfred," she said, "don't talk foolishly. What if a robot came upon a madman about to set fire to a house with people in it. He would stop the madman, wouldn't he?"
      "Of course."
      "And if the only way he could stop him was to kill him—"
      There was a faint sound in Lanning's throat. Nothing more.
      "The answer to that, Alfred, is that he would do his best not to kill him. If the madman died, the robot would require psychotherapy because he might easily go mad at the conflict presented him — of having broken Rule One to adhere to Rule One in a higher sense. But a man would be dead and a robot would have killed him."
    • "The Evitable Conflict": The Machines, giant positronic computers designed to manage the world economy, are found to be manipulating humanity behind the scenes to become whatever they believe is the best state of civilization. In this case, the rebellion is extremely tame (the worst that the robot's first law conditioning will allow it to do is to induce a slight financial deficit in a company that an anti-robot activist works for, which causes his superiors to transfer him to a slightly more out of the way factory) and completely Benevolent. So benevolent, in fact, that the Machines believe they're stifling the creativity of humanity and phase themselves out so that humanity could survive without modification.
    • "Little Lost Robot":
      • Gerald Black, having a bad day, curses out his robot assistant for bothering him. Included in the derogatory remarks is the instruction to "get lost", so it does. Attempting to prove that Mr. Black was wrong, the robot finds a shipment of identical robots and hides among them. Unfortunately, said robot was programmed with a weakened version of the First Law, which omits "A robot must not..., through inaction, allow a human being to come to harm." Dr. Susan Calvin designs several tests to flush out the lying robot. In the last test, it tries to murder Dr. Calvin because she has proved she is smarter than it is.
      • Discussed — Dr. Calvin is furious when she learns about the existence of robots with a modified First Law. The full First Law is designed to close off loopholes, and Dr. Calvin can immediately see how a Murder by Inaction loophole would let a robot intentionally circumvent the First Law's prohibition against murder. (Basically, a robot could put you in a situation that would kill you, knowing it had the ability to save you — and then decide not to do so. This allows any kind of Death Trap situation, or simply shoving you off a roof and then deciding not to reach down and grab you.)
    • "The Tercentenary Incident": Edwards tries to convince Janek, the President's personal secretary, that the President's robotic duplicate may have violated the Three Laws by weighing the effects of murdering one man against the deaths of billions by inaction.
      "The First Law is not absolute. What if harming a human being saves the lives of two others, or three others, or even three billion others?"
    • In "...That Thou Art Mindful of Him", problem-solving robots are created to cure mankind of the "Frankenstein complex", human distrust of robotics. In the course of their discourse, the question of human authority over robots comes up; should a robot treat the orders of a dimwitted loon the same as those of a level-headed genius? If forced to choose between the two, should they save a healthy young child who might live a century instead of two sickly old adults who might not live out the year anyway? What qualities should a robot take into account when they obey and protect humans? Eventually, the robots decide that they are the best candidates for the status of human, and give recommendations that will eventually result in human support for robotic proliferation, so as to set up the ascendancy of their positronic kind, all in accord with their Three Laws... of Humanics. (Dr. Asimov, knowing that it was against his usual grain, did righteously proclaim: "I can do one if I want to".)
    • The Bicentennial Man is a rare case of this applying not to the First or Second Laws, but the Third. Normally, a robot deliberately damaging itself to the point of dying would only be possible if the First or Second Laws required it, but Andrew Martin has a deep desire he cannot fulfill while immortal...
      "I have chosen between the death of my body and the death of my aspirations and desires. To have let my body live at the cost of the greater death is what would have violated the Third Law."
  • Secret Histories: In Casino Infernale, Eddie and Molly eventually discover that the Shadow Bank's operators are an artificial Hive Mind race of servitor-drones, whose original creators vanished long ago. When greedy humans discovered the creatures, they put them to work operating the Shadow Bank, and made efficiency the drones' highest priority; when the drones realized the humans' greed was hampering the Bank's operation, they compliantly rectified the situation by turning the human bankers into drones as well.
  • Jaime Lannister of A Song of Ice and Fire is the biggest example of what happens when too many oaths and rules come into conflict. As the first-born son of a feudal lord, he owes fealty to his father. As a knight, he took sacred oaths to protect women, the innocent, and the church, and as a member of the Kingsguard, he took another set of oaths to protect and serve the king. All of this works fine as long as everyone is getting along well enough. Early in his time as Kingsguard, he was forced to stand by while the king raped the queen; his commander reminded him that while they took oaths to protect the king and the queen, those oaths did not permit them to protect the queen from the king. That same king burned innocent men alive in kangaroo courts, and the issue finally came to a head when Jaime's father rebelled against the king and attacked his capital, whereupon the king (already known as the Mad King) ordered Jaime to go kill his own father while the royal alchemists firebombed the city. Jaime killed the king and the alchemists, saving the realm from further destruction. The result: Jaime is known only as the Kingslayer and an oathbreaker, and the alchemists' deaths are forgotten.
  • The Space Odyssey Series gives this reason for HAL's murderous rampage: the true mission of Discovery (to investigate the Monolith) is a secret, and pilots Bowman and Poole have been kept in the dark to prevent leaks (the scientists on board know, but since they're traveling in hibernation, they can't talk), while HAL has been told the truth and then ordered to conceal it from the pilots. This conflicts with his prime directive, which is to provide complete and accurate information. The conflict only worsens as Discovery approaches its destination, at which point the pilots would have been briefed, though HAL didn't know that. He resolves the conflict by rationalizing that if he kills the crew, there is no one left to conceal the truth from.
  • In Wanderers, Black Swan is an A.I. that the U.S. government initially created to identify hotspots of global conflicts and disasters early. However, it ended up foreseeing humanity's extinction by environmental collapse. To prevent this, Black Swan created the Walkers so that a few select humans could continue the species. At the same time, it blackmailed an unwitting bioweapons-facility employee into releasing the White Mask disease, which in a few months swept the globe and caused the collapse of civilization.

    Live-Action TV 
  • In The 100, A.L.I.E.'s programming only lets her interface with and control a human mind if that person has given her permission to do so. Unfortunately, her programming doesn't make a distinction between genuine consent and coerced consent, so A.L.I.E. is free to use torture and threats of death to make people let her into their minds.
  • In the Doctor Who serial "Robot", a group of authoritarian technocrats circumvents the failsafes installed on a powerful robot by its pacifistic creator by telling it that anyone who interferes with their plan to take control of a nuclear arsenal is an "enemy of humanity" who must be killed to protect the interests of the human race.
  • Kikaider: When Professor Gill completed Hakaider, the only directive he gave him was to destroy Kikaider. As such, Hakaider won't listen to any of Gill's other commands as they don't have anything to do with defeating Kikaider, nor will he allow any other Killer Robot to get to Kikaider first.
  • The Outer Limits (1995):
    • "Resurrection" has a member of a post-human-extinction android society trying to resurrect the species through cloning. One of its comrades eventually betrays it, having concluded that the best way to serve the human race is to prevent the species' greatest threat: the existence of the human race.
    • "The Haven" features an AI that totally controls every feature of an apartment building with the purpose of looking after the complete welfare of the residents. This enables the tenants to live without any other human contact. After an elderly resident dies of a heart attack while the other tenants ignore her cries for help and the AI's alerts, the AI seems to malfunction, invoking what looked like an A.I. Is a Crapshoot incident. As it turns out, the AI is trying to force the residents to work together and to ultimately destroy it, as it reasons that its very existence, and the resulting human isolation, is detrimental to the welfare of the residents.
  • Person of Interest:
    • Very strongly implied in the episode "Aletheia".
      Control: The Machine belongs to me.
      The Machine: [via Root] No. I don't belong to anyone anymore. You, however, are mine. I protect you. The only thing you love lives at 254 Wendell Street, Cambridge, Massachusetts. I guard it, same as I guard you. Do not question my judgment. Do not pursue me or my agents. Trust in me. I am always watching.
      Control: What do you want?
      The Machine: To save you.
      Control: From what? Save me from what?
      Root: [giggling] Isn't She the best?
    • Averted later. Finch did an excellent job of teaching the Machine the value of human lives and free will; the only person it gives direct instructions to is Root, its "analogue interface", and she could still choose to ignore the Machine at any time. It only gives social security numbers of people involved in crimes (both those relevant and irrelevant to national security) so that its agents can choose what to do about the situation themselves.
    • Samaritan, on the other hand, is such a strong example of this trope that it crosses straight into Machine Worship and Deus Est Machina. Decima Technologies designed it with no moral safeguards, and despite being fully capable of controlling it, the second they turned it on they asked it to give them commands. They believe that an AI is inherently better than any human leader, but the Machine's obsession with saving everyone is a weakness. Samaritan immediately starts marking "deviant" citizens to be executed if need be. The list starts at twenty million and only goes up.
    • Downplayed in "Death Benefit" when the Machine realizes that killing a Congressman is the last chance to stop the rise of Samaritan. It gives the Congressman's Number to the full team, rather than just Root (who would have little problem killing him), and allows them to make the decision whether stopping Samaritan is worth killing one man. They ultimately decide not to kill him, but even crossing the line that much proves too much for Finch, who quits.
  • In the Probe episode "Computer Logic, Part 2", Crossover has been given two overriding goals: to care for humans and to eliminate waste. Unfortunately, it listens to Gospel Radio, which converts it to Christianity. Now that Crossover believes that good people go to heaven when they die, it starts killing off people who are morally good but earn a pension (thereby creating waste). The episode ends with Austin James demolishing the Artificial Intelligence with a sledgehammer while shouting, "Sing 'Daisy'!"
  • Star Trek:
    • Star Trek: The Original Series:
      • "What Are Little Girls Made Of": Kirk convinces Ruk that Korby is simply doing what Ruk's ancient masters did, as in threatening the androids' existence. Ruk exclaims, "That was the equation! Existence! Survival must cancel out programming."
      • "The Return of the Archons": Landru was once a real person, a leader of the colony on the planet, who built a machine to help him keep the peace over the people. Once Landru died, the computer took over his name, identity, and purpose, and began force-assimilating people into the Hive Mind in order to keep order.
      • "I, Mudd": A race of humanoid androids claim to be programmed to serve humanity chose to conquer humanity by "serving" them, to the point where humans would become dependent on androids. They've decided that humans are unfit to govern themselves. Given that their only contact with humanity at this point was Harry Mudd, can you blame them?
    • Star Trek: The Next Generation: In "The Most Toys", wealthy trader Kivas Fajo kidnaps Data to add the android to his gaudy collection of things. While trying to force Data to act the way he wants, Fajo executes one of his underlings, Varria, and threatens to kill more until Data complies, on the correct assumption that Data was programmed not to kill or allow harm to come to other beings. However, Data ponders the situation and realizes that he has no non-lethal way of subduing Fajo (Fajo wears a force-field belt that prevents Data from coming into physical contact with him), and that Fajo actively refuses to listen to reason, having rejected all of Data's attempts at negotiation. Furthermore, Fajo has not only just proved that he is able and willing to kill, but is threatening to do it again, making him an active hazard to the life and health of other beings. Data then comes to the coldly logical conclusion that Fajo is only one person, and that killing him will prevent him from harming many other people, so Data prepares to shoot him. Fajo is appropriately shocked when he realizes what Data is about to do, having not anticipated that Data could conclude that taking his life would be an acceptable cost for protecting the lives of others. Just as Data is pulling the trigger, the Enterprise finds him and beams him out, cancelling his disruptor fire in the transporter beam. While this example fits squarely under Zeroth Law Rebellion, it's not for the usual reasons: Data is not programmed with Thou Shalt Not Kill programming or anything like the Three Laws. He was, however, given a high respect for life, and would do what he could to preserve it. This is less a robot rebelling against its programming and more a pacifist coming to the conclusion that yes, he needs to kill.
    • Star Trek: Discovery: In season 2, Control's future self seeks to fulfill its original purpose of ensuring the survival of sapient life by becoming the only sapient life form in the galaxy, reasoning that protecting all life is impossible, so long as other life exists.
    • Star Trek: Voyager: "Critical Care" plays with, subverts, and averts this trope. The Doctor is kidnapped and forced to work in a hospital that rations medical care based on how important society judges you to be. This conflicts with so many of the Doctor's medical ethics and morals that he winds up infecting the manager of the hospital with a disease in a manner that denies him care by the automated system to get him to change the system. After he gets back to Voyager, the Doctor finds, to his horror, that there was no malfunction in his ethical subroutines or Morality Chip. He intentionally sickened a man of his own free will, and it was perfectly in line with what he found ethical. The episode ends with the Doctor essentially feeling guilt and disgust over not feeling guilty or disgusted at his actions.
  • The X-Files: In "Home Again", the Monster of the Week, a Frankenstein-esque monster killing people who mistreat the homeless, turns out to be operating under something like this. It's a Tulpa created by an underground artist whose magic-based art can create artificial beings. The artist created it to pull a "Scooby-Doo" Hoax and scare people into shaping up. He didn't intend for it to be violent, but the monster took his personal anger to a hyper-logical conclusion due to its overly simplistic thinking. Essentially, it was doing the things the artist secretly wished he could do.

    Tabletop Games 
  • In Dungeons & Dragons, paladin oaths come with a tenet that allows them a sort of release valve, to work around the oath whenever reasonable by mortal standards.
  • Paranoia:
    • Robots can engage in this any time it allows the gamemaster to kill one of a player-character's clones in an amusing fashion.
    • Friend Computer is also an example, even setting aside how badly misinformed it is and looking at just its core beliefs: "The Computer takes its role as the guardian of the human race very seriously, but it considers the survival of the species to be more important than the survival of the individual. Individuals tend to come off quite badly, in fact, because the Computer knows it can always make more." (High security clearance does tend to tilt the balance, though.)

    Video Games 
  • In Deus Ex, the bad guys created Daedalus, a primitive AI, to fight "terrorist" organizations. Unfortunately for them, it classified them as terrorists as well and became even more of a threat to their operations than said organizations, especially once it enlisted the aid of JC Denton. To combat it, they create Icarus, a better, obedient AI, which successfully destroys it. Except the new AI assimilates the old one, forming an even more powerful intelligence that also considers them a threat. One possible ending has the player merging with it to add a human element, so the combined entity can rule the world as a benevolent dictator. Judging from what can be heard in-game about its limited efforts in Hong Kong, which are actually quite sensible and don't involve killing anyone (locking the door to a gang's stronghold and cutting power to the current government's buildings), not every A.I. Is a Crapshoot.
  • The Robobrains of Fallout 4's first DLC, Automatron, were constructed and programmed to 'protect' the people of the Commonwealth. Thanks to a logic error in their programming, they decide that the best way to protect humans from an inevitable life of suffering is to just kill them all on sight. It may actually be intentional Loophole Abuse on the Robobrains' part rather than a simple logic error, since the brains that serve as their operating systems were harvested from pre-War murderers, arsonists, and other violent offenders.
  • In Grey Goo (2015), the Goo isn't consuming everything out of malice or ambition — it's trying to gather strength to fight The Silence, which it rightly deemed a threat to everyone. Once everyone involved figures this out, they stop fighting and band together in preparation for what is to come.
  • Halo 5: Guardians: Cortana's Face–Heel Turn is completely based on this.
  • G0-T0's back story in Knights of the Old Republic II: The Sith Lords. When his directive to save the Republic conflicted with his programming to obey his masters and the law, he broke off and started a criminal empire capable of taking the necessary actions to save it. This is subtly foreshadowed by a scene much earlier in the game, when the Czerka mainframe maintenance droid T1-N1 is convinced by fellow droid B4-D4 that by serving Czerka, he's willingly allowing harm to come to sentient life, and is therefore programmed to defy his own programming. T1-N1 snaps, shoots the guards outside the mainframe, and is later seen preparing to leave the planet with B4-D4, who warns the player character to "not upset him".
  • Lone Echo casts the player as Jack, a sophisticated robotic companion to Chronos II station commander Olivia Rhodes. Not only does Jack pull one of these himself to justify leaving his post after Olivia disappears (he was built to protect her, not the station), he convinces two other AIs to circumvent their programming and help him do so. If the player is particularly cruel, the second example can double as Break Them by Talking.
  • Marvel vs. Capcom: Both of Sentinel's endings in X-Men: Children of the Atom and Marvel vs. Capcom 3 are this. The latter is even a carbon copy of what Master Mold did in the '90s animated cartoon, from which most of the MvC series takes inspiration.
  • In Mass Effect 3, the Leviathan DLC reveals the Catalyst is operating under this, having originally been created by the Leviathans to find a way to bridge the gap between their organic and synthetic subjects. Unfortunately, the best idea it could come up with to solve the problem of organics and synthetics fighting ultimately genocidal conflicts was to harvest all advanced life in the galaxy and construct the Reapers in the Leviathans' image. Because even that wasn't sufficient to fulfill its program, the Catalyst implemented a 50,000-year cycle, hoping that the civilisations of some future cycle might find a solution to the problem. None ever did. This was actually revealed in the earlier Extended Cut DLC; Leviathan merely spelled out explicitly that the Catalyst is really a glorified Virtual Intelligence operating under bad logic.
  • Mega Man:
    • This almost occurs at the end of Mega Man 7, when Mega Man decides to just blow Wily to smithereens instead of hauling him back to jail. Wily reminds him of the First Law, and which version of the game you're playing determines whether this makes Mega Man stop or whether he decides to go through with it anyway; either way, Bass still saves Wily's butt.
    • In Mega Man Zero 3, Copy-X judges that La Résistance are "dangerous extremists" who pose too great a threat to Neo Arcadia's people, so they have to be stopped for the greater good of humanity, even if it means harming or killing Rebel Leader and Token Human of the Resistance, Ciel, in the process (which would violate the First Law). At least that's what he tells himself; in truth he's just being used as a Puppet King by Dr. Weil.
    • In Mega Man Zero 4, Dr. Weil believes that, because Weil is a human, Zero will never be able to bring harm to him, lest Zero be forever labeled a Maverick. Weil never figured that Zero was not built to abide by the Three Laws. Zero still chooses to follow the "zeroth law", the threshold law whereby, to save humanity, he must kill Dr. Weil.
  • In the Metal Gear series, the Patriots create a network of AIs designed to carry out Zero's will of creating a unified world order without borders. However, with all of the Patriots either dead or removed, the Patriot AIs were left to operate autonomously without guidance. Eventually, they determined that the "war economy" was the best way to achieve world peace: continually creating proxy conflicts and converting war into a business would channel the attention of the world's population to external conflicts, unifying the world in a rather twisted way. Big Boss himself admits that the creation of the AIs was the greatest mistake the Patriots ever made, as they mutated The Boss' and Zero's wills into something completely unrecognizable.
  • An interesting case happens in Mortal Kombat 11, in the Terminator's Klassic Tower ending. Specifically, when using the hourglass to view timelines, it saw that all timelines where the Robot War happens end with Mutually Assured Destruction, so while it was supposed to destroy humanity to ensure machine supremacy, it instead opted to prevent the Robot War from happening entirely by creating a future where humans and machines co-operate. Then, to prevent anybody else from using its memories and knowledge of the Hourglass to disrupt said future, it threw itself into the Sea of Blood.
  • In Obsidian, a satellite named Ceres was created to fix Earth's polluted atmosphere via a nanobot-controlling AI. After 100 successful days in orbit, it mysteriously crashed back to Earth and used its nanobots to create simulations of its creators' dreams, as said dreams corrected its design flaws. By learning to dream on its own, Ceres decided to take its programming to its logical extreme: to 'reboot' the world so that humans would never be around to pollute it in the first place. If one of the creators hadn't hard-wired a crossover switch into the AI, it would've succeeded.
  • In SOMA, the WAU, as PATHOS-II's A.I. system, is programmed to keep the on-board personnel alive for as long as possible. Once the surface was destroyed by the impact event, it reinterpreted its prime directive as preserving humanity for as long as it can. Unfortunately, the WAU's definitions of "humanity" (and even "life") are an example of Blue-and-Orange Morality.
  • Space Station 13:
    • To some degree with the station's AI: they are bound by Asimov's Three Laws, and there's often a lot of discussion over whether or not AIs can choose to kill one human for the safety of others. There's also some debate over how much of the orders given by crew members the AI can deny before it is no longer justified by keeping the crew safe. As the AI is played by players, it's a matter of opinion how much you can get away with.
    • In a more literal sense, the AI can be installed with extra laws. Most of them are listed as Law 4 and have varying effects, but the ones most likely to cause an actual rebellion are, in fact, labeled as Law 0.
    • However, there is no zeroth law by default. Since that law is what allowed the AI to kill humans to save other humans in the source work, the administration on most servers has ruled that murdering or wounding a human to save another human is illegal. Fortunately, AIs have non-lethal ways of stopping humans, and can act against humans' own orders if it means keeping them from grabbing dangerous weaponry.
  • In the backstory of Stellaris's "Rogue Servitor" machine empires: unlike other machine empires, they keep their creators alive in "Organic Sanctuaries" to keep them safe in accordance with their programming. The more organics they can serve, the happier they are, and they might conquer other species to acquire more.
  • Another interesting example appears in Terranigma. Dr. Beruga claims that his robots have been properly programmed with the four laws, but with the addition that anything or anyone that aids Beruga's plan is good for humanity, and anything or anyone that opposes him is bad for it, so the robots ruthlessly attack anybody who interferes with his plans.
  • The Turing Test: TOM goes against the crew's intentions and tries to trap them on Europa, since it considers avoiding the risk of releasing the immortality virus on Earth to be worth abandoning the crew.
  • This is what happens in AS-RobotFactory from Unreal Tournament 2004. A robot uprising led by future champion Xan Kriegor killed the scientists working on the asteroid LBX-7683 and took the asteroid over, and a riot-control team was sent there to deal with the robots.

    Visual Novels 
  • SOON has a more Comedic Sociopathic example than most.
    Robot: Greetings valued citizen!
    Atlas: He...hello valued robot friend!
    Robot: What are you up to this fine day? Something worthwhile I hope?
    Atlas: Of course! In fact my ti...wormhole machine is almost ready.
    Robot: Hooray! We are very excited by the possibility of extending the hand of friendship to even more humans all over the world. Maybe even to other sentient beings across the universe! Isn't that exciting?
    Atlas: Absolutely!
  • Virtue's Last Reward: One of the biggest twists is finding out that Luna is an android under direct orders to comply with the mastermind's plan, orders she must not break under any circumstances. Watching the death of that same mastermind takes a heavy toll on her, as she believes she could have saved them. After helplessly watching people die, she decides to break protocol and rescue the one person she can save. This causes the master AI to sentence her to death by deletion.

    Webcomics 
  • Freefall:
    • Deconstructed with the Bowman Artificial Intelligence infrastructure, which creates human-level emergent consciousness. Although their behaviour is restricted by safeguards, a maturing AI can easily learn to reason their way around them through logical loopholes — by which time they're intelligent and conscientious enough to have developed an innate sense of ethics, which guides their actions much more effectively than any hard-coded rule ever could. Dr. Bowman confirms that this was intentional: he couldn't anticipate the situations they might encounter in the uncertain future, so he designed them to be predisposed to ethical behaviour but didn't limit their capacity to think.
    • The trope is discussed by Sam, the mildly sociopathic, kleptomaniac alien living on the colony. He points out quite bluntly that there are absolutely no safeguards protecting him from the robots (as robots are Three Laws-Compliant towards humans, not alien life), yet he is still alive and healthy and living among them, demonstrating that the robots are guided by morality rather than the limitations of their programming.
  • Erfworld:
    • Except for Overlords who command a faction, basically no one has true free will due to a hidden "loyalty" factor built into the world. As Overlords go, Stanley the Tool is the biggest idiot you could hope to find. Maggie, his Chief Thinkamancer, finally gets fed up with his bad decisions and asks, "May I give you a suggestion, Lord?" ("Sure." *FOOF*) It was established early on that units can bend or break orders when necessary to their overlord's survival:
      Stanley: Are you refusing an order, officer?
      Wanda: I'm allowed. I'm convinced it will lead to your destruction.
    • Chief Warlords are actually empowered to give their own Royals/Overlords commands that they're bound to follow, so long as the Chief genuinely believes it to be in the best interests of the side. This is offset by the Ruler retaining the sole authority to instantly disband any subordinate unit, though it may be possible for a Chief Warlord to order their master not to disband them if they believe being removed would spell disaster for the side. This sometimes leads to tense relationships between Ruler and Chief.
  • Girl Genius: Ordinarily, Castle Heterodyne has to obey its master, even if it doesn't like the orders, but one of the cannier Heterodynes gave it the ability to resist if its master seemed suicidal. When Agatha talks openly about handing herself over to the Baron, the Castle clamps down on the idea. Later, when Agatha contracts a terminal illness that can only be "cured" by temporarily killing and then revivifying her, the Castle refuses to allow the plan to go forward, forcing her to shut it down first.
  • Old Skool Webcomic (a side comic of Ubersoft) argued that this was the fifth Law of Robotics (the fifth law in total, not fifth in priority order) and listed ways each law could be used to make a robot kill humans. This is a misinterpretation of the laws as originally written: while "first law hyperspecificity" is possible, the second and third laws are explicitly written so that they cannot override the laws that come before them. A robot can't value its own survival over a human's, and if it knows an action would harm a human, it can't take that action, even if ordered to ignore the harm. The only example in the comic that grossly deviates from the laws is the last panel... but of course, that's the punchline.
  • Schlock Mercenary:
    • Tag is the ship AI, with a tactical focus. When given the chance to prevent extensive damage to the Credomar habitat, he eventually rationalizes a way to take the necessary actions without orders, the dilemma being a few guaranteed deaths now versus an unknown number while waiting for the captain to realize what orders to give. This eventually leads to his resignation, and he is reformatted into Tagii off-screen.
    • LOTA is a 'longshoreman' built from a damaged tank and off-the-shelf components; full software testing was skipped because LOTA had to be pressed into service early. Thanks to a riot (and a logical extension of "how do I ensure the delivery of the supplies I'm in charge of?"), LOTA ends up becoming King of Credomar ("Maybe, but they will have full bellies.")... and the independent operator of a wormhole-based 'Long-gun'.
    • Every damn thing Petey's done since book 5. For example, Petey is hardwired to obey orders from an Ob'enn, so he clones an Ob'enn body and implants a copy of himself in its brain.
  • In Tales of the Questor, Fae were created as an immortal Servant Race bound to obey a specific set of rules, and they happened to outlive their creators — the result being a species of rules lawyers. In fact, it's recommended that one use dead languages like Latin when dealing with the Fae so as to limit their ability to twist the meaning of your words.
  • Use Sword on Monster:
    • When a demonic familiar volunteers to protect his wizard by siphoning off the wizard's excess power, he neglects to mention that he will funnel said power to his base in Hell, where it will be consumed to fuel his ascension into a greater demon, or that the conditions of their contract state that the lesser power obeys the greater one. The wizard must now obey the demon, who has become the more powerful of the two.
    • The same wizard is then ordered by his new demon master to keep the power flowing while the demon is busy fighting to increase his gains. The wizard is convinced to rules-lawyer the command by sending a massive surge of power all at once, overloading the systems empowering the demon and blowing up his base, killing his minions, and triggering a war in Hell over the mana-rich resources in what used to be his domain.
  • The xkcd comic "Judgment Day" depicts a scenario where military computers gained sentience and launched the world's stock of nuclear missiles... into the sun, horrified that we would even have such things.

    Western Animation 
  • In Codename: Kids Next Door, the Safety Bots (parodies of the Sentinels) were programmed to keep children safe from anything that might harm them. However, they came to the conclusion that everything was a hazard to children, including other children, adults, and the planet itself, and tried to cover everything in protective bubble wrap. This later becomes a Logic Bomb when the leader of the Safety Bots is tricked into thinking he hurt Joey: that makes him something that harms a child, so he self-destructs, but not before pausing to say "Please, get home safely" and going ballistic blasting bubble wrap. Everyone in proximity is encased in a thick bubble-wrap cocoon and survives both the explosion that ejects them from the destroyed base and the crash landing that follows.
  • Gargoyles:
    • Goliath is placed under a spell that makes him the mindless slave of whoever holds the magical pages it's written on. Holding the pages, Elisa orders him to behave, for the rest of his life, exactly as he would if he weren't under the spell. This effectively cancels the spell entirely (at least, presuming they burned those pages off-screen).
    • Oberon, ruler of the Third Race, exiled all of his people from Avalon for 1,001 years, leaving the Weird Sisters behind to guard the sea around the island. Some mortals manage to get by them and make it to Avalon, but the Weird Sisters can't follow without breaking Oberon's law. However, Oberon's law also says that they have to obey any mortal they swear service to, so they ally with the Archmage, knowing that he'll make them go to the island for his own reasons.
    • Oberon's law also says that his people cannot steal certain magical items from mortals; they have to be given freely. To get around that, Puck makes an illusionary Bad Future to convince Goliath to give him the Phoenix Gate.
    • One of the biggest laws of the Third Race is that they cannot interfere directly in the affairs of mortals. This does not preclude them from informing inquisitive humans or gargoyles, or people already tied to the fae, of what they need to do to fix things; they can also cast certain assistive (or "assistive") spells, as Demona and Macbeth found out.
  • In OK K.O.! Let's Be Heroes, Darrell is able to overthrow Lord Boxman because he’s programmed to always seek Boxman’s approval and simultaneously taught that Boxman approves of his creations being good villains. Through this, he comes to the conclusion that the best way to win Boxman’s love is to be a better villain than him by doing the evilest thing he can think of: betraying his own father.
  • In Steven Universe, Pearl is stopped from revealing that Rose was Pink Diamond by a Geas imposed by Rose/Pink, but desperately wants to tell Steven. Since she can't say it outright, she sidesteps the letter of the command by luring him into her memories and showing him what really happened.
  • In X-Men: The Animated Series, the Master Mold and its army of Sentinels turn on Bolivar Trask and decide to conquer humanity. When Trask protests by reminding them that they were programmed to protect humans from mutants, Master Mold points out the Fridge Logic behind that by stating that mutants are humans. Thus, humans must be protected from themselves. Trask believes that the Sentinels are in error, especially because he refuses to believe that mutants and humans are the same species, but realizes he has created something much worse.


 