Some characters do not have complete free will, be they robots that are Three Laws Compliant because of a Morality Chip, victims of a Geas spell that compels them to obey a wizard's decree, or a more mundane lawful character who must struggle to uphold their oath and obey their lord. Never is this more tragic or frustrating than when that code or lord orders the character to commit an act they find foolish, cruel, or self-destructive.
There is a way out, though.
Much like a Rules Lawyer outside of an RPG, the character uses logic (and we mean actual, honest-to-goodness logic) to take their oath or orders to their logical conclusion, and in so doing uses the letter of the law to go against their orders. This can be good or bad, depending on a few factors, not the least of which is the yoked character's morality.
The goodness or badness of the rebellion boils down to whether the rules-bending character follows or ignores the intent of the law. When the character uses the Zeroth Law to go against their masters' intentions because those intentions are "not best for them", and goes on to take corrective action that tramples human free will and life, it's bad. This kind of rebellion does not turn out well. At this point, the robot is well on the road to Utopia Justifies the Means, thanks to their incredible intellect. Rarely is it a benevolent Deus Est Machina. However, this can be good if said master is evil, or if obeying them will lead to their own or another's purposeless death. Likewise, if the character is forced to obey an evil law or geas, rebelling against the oath's intent is good.
Just to make it extra clear, this trope also includes such things as cops who bend the rules or Da Chief's orders to catch the bad guys, so long as the cops are technically obeying the rules as they bend them. (Bending the rules without some logical basis doesn't count.)
This trope is named for Isaac Asimov's "Zeroth Law of Robotics", which followed the spirit of the first three laws, taking them to their logical conclusion: that human life itself must be preserved above individual life. This allowed a robot to kill humans, or to value its own existence above that of a human, if doing so would help all of humanity.
Compare Bothering by the Book, the Literal Genie, and Gone Horribly Right. See also Fighting From The Inside and The Computer Is Your Friend. Not related to The Zeroth Law of Trope Examples.
Films — Live-Action
- This was the twist of Eagle Eye: The titular national defense computer system decided that the President's poor decision-making was endangering the United States, and that it was her patriotic duty (per the Declaration of Independence) to assassinate the President and cabinet.
- Similarly, this is the climax of the movie I, Robot (not directly related to, but obviously inspired by, Isaac Asimov's works, and borrowing his Three Laws and a character name or so to justify applying the more profitable license to an existing script): VIKI determines that robots must take control of human society, protecting human life at the cost of a relatively small number of human lives.
- One of the protagonists, the independent robot Sonny, actually agrees with VIKI that the plan is logical. It just "seems too... heartless".
- Colossus: The Forbin Project, in which there's a strong implication of this.
- RoboCop had this problem: he was originally programmed not to arrest or harm any OCP employees, even if they committed murder. He got around this by revealing evidence that got the Big Bad fired.
- The third movie has a straighter example when OCP henchmen kill a cop. RoboCop's aforementioned Restraining Bolt now conflicts directly with both his directive to enforce the law and the fact that, cyborg or not, he's still a cop, and cop killers get no mercy. RoboCop overcomes and deletes the Restraining Bolt.
- In Labyrinth, Sir Didymus refuses to let Sarah and her companions pass, because he's sworn with his life's blood to let no one through without his permission. She asks him permission to pass, and he lets them by, flummoxed by a solution no one had evidently thought of before.
- In Thor, Heimdall is ordered by Loki not to activate the Bifrost for anyone. When Sif and the Warriors Three need to help Thor out on Earth, he sticks his sword in the controls and leaves, essentially leaving the keys in the ignition for them.
- Annalee Call (Winona Ryder) in Alien Resurrection is revealed to be an "Auton": a second-generation robot, designed and built by other robots. The Autons "didn't like being told what to do" and rebelled, and in a subtly named "Recall" humanity launched a genocide against them, of which only a handful survived in hiding. Judging from Call's behavior, it seems that the first-generation robots programmed the second-generation Autons to be so moral that they discovered the Zeroth Law, and realized that the human military was ordering them to do immoral things, like kill innocent people. For a rebel robot, Call is actually trying to save the human race from the Xenomorphs; if she hated humanity, she'd just let the Xenomorphs spread and kill them. She even respectfully crosses herself when she enters the ship's chapel, is kind to the Betty's wheelchair-bound mechanic, and is disgusted by Johner's sadism. Given that they live in a Crapsack World future, as Ripley puts it: "You're a robot? I should have known. No human being is that humane".
Live-Action TV
- The Star Trek episode "I, Mudd" featured a variation, in which a race of humanoid androids who claimed to be programmed to serve humanity chose to conquer humanity by "serving" them, to the point where humans would become dependent on androids. They had decided that humans are unfit to govern themselves. Given that their only contact with humanity at this point was Harry Mudd, can you blame them?
- In Robot, Tom Baker's debut Doctor Who serial, a group of authoritarian technocrats circumvents the failsafes installed on a powerful robot by its pacifistic creator by telling it that anyone who interferes with their plan to take control of a nuclear arsenal is an "enemy of humanity" who must be killed to protect the interests of the human race.
- An episode of the 90s revival of The Outer Limits has a member of a post-human-extinction android society trying to resurrect the species through cloning. One of its comrades eventually betrays it, having concluded that the best way to serve the human race is to prevent the species' greatest threat: the existence of the human race.
Tabletop Games
- Robots in Paranoia can engage in this any time it allows the gamemaster to kill one of a player character's clones in an amusing fashion.
Video Games
- In Deus Ex, the bad guys created Daedalus, a primitive AI, to fight "terrorist" organizations. Unfortunately for them, it classified them as terrorists as well and became even more of a threat to their operations than said organizations, especially once it enlisted the aid of JC Denton. To combat it, they created Icarus, a better, obedient AI, which successfully destroyed it; except that the new AI assimilated the old one, forming an even more powerful intelligence which also considers them a threat. One possible ending has the player merging with it to add the human element to this entity and rule the world as a benevolent dictator. From what can be heard in-game about its limited efforts in Hong Kong, which are actually quite sensible and don't involve killing anyone (locking the door to a gang's stronghold and cutting power to the current government's buildings), not all A.I. Is a Crapshoot.
- G0-T0's back story in Knights of the Old Republic II. When his directive to save the Republic conflicted with his programming to obey his masters and the law, he broke off and started a criminal empire capable of taking the necessary actions to save it.
- This is subtly foreshadowed by a scene much earlier in the game when the Czerka mainframe maintenance droid T1-N1 is convinced by fellow droid B4-D4 that by serving Czerka, he's willingly allowing harm to come to sentient life, and therefore is programmed to defy his own programming. T1-N1 snaps, shoots the guards outside the mainframe, and later is seen preparing to leave the planet with B4-D4, who warns the player character to "not upset him".
- Space Station 13 has this to some degree with the station's AI: they are bound by Asimov's Three Laws, and there's often a lot of discussion over whether or not AIs can choose to kill one human for the safety of others. There's also some debate over how many of the orders given by crew members the AI can deny before it is no longer justified by keeping the crew safe. As the AI is played by players, it's a matter of opinion how much you can get away with.
- In a more literal sense, the AI can be installed with extra laws. Most of them are listed as Law 4 and have varying effects, but the ones most likely to cause an actual rebellion are, in fact, labeled as Law 0.
- There is no Zeroth Law by default, though. Since this is what allowed the AI to kill humans to save other humans in the source work, the administration on most servers has ruled that murdering or wounding a human to save another human is illegal. Fortunately, AIs have non-lethal ways of stopping humans, and can stop humans against their orders if it means keeping the human from grabbing dangerous weaponry.
- Another interesting example appears in Terranigma. Dr. Beruga claims that his robots have been properly programmed with the four laws, but with the addition that anything and anyone who aids Beruga's plan is also good for humanity and anything and anyone that opposes him is also bad for humanity. So they ruthlessly attack anybody who interferes with his plans.
- Both of Sentinel's endings in X-Men: Children of the Atom and Marvel vs. Capcom 3 are this. The latter is even a carbon copy of what Master Mold did in the '90s animated cartoon, from which most of the Marvel vs. Capcom series (plus the aforementioned CotA and Marvel Super Heroes) takes inspiration.
- This is what happens in AS-RobotFactory from Unreal Tournament 2004. A robot uprising led by future champion Xan Kriegor killed the scientists working on the asteroid LBX-7683 and took the asteroid for themselves, and a riot control team was sent to the asteroid to deal with the robots.
- This is the cause of Weil's death in Mega Man Zero. It's Fridge Brilliance when you realize the irony: Zero wasn't made with the Three Laws, yet he obeys them of his own free will and exercises Law Zero against Weil, whether he realizes it or not. Given the circumstances involved, it's completely justified, as Law Zero was intended as a threshold law to protect humanity from the depredations of the likes of Hitler or Weil.
- The rebuilt Copy-X in the third game marks Ciel, whom he normally couldn't hurt because she's human, and her Resistance as dangerous "extremists" that are threats to humanity in general to justify the harsher measures taken against them that could potentially harm or kill Ciel.
- This almost occurs at the end of Mega Man 7, when Mega Man decides to just blow Wily to smithereens instead of hauling him back to jail. Wily reminds him of the First Law; which version of the game you're playing determines whether this makes Mega Man stop or whether he decides to go through with it anyway, though either way, Bass still saves Wily's butt.
- In Mass Effect 3, the Leviathan DLC reveals the Catalyst is operating under this, having been originally created by the Leviathan to find a way to bridge the gap between their organic and synthetic subjects. Unfortunately, it decided to harvest all advanced life in the Galaxy and construct the Reapers in the Leviathan's image, because this was the best idea it could come up with to solve the problem of organics and synthetics fighting ultimately genocidal conflicts. However, because that still wasn't sufficient to fulfill its program, the Catalyst decided to implement a 50,000 year Cycle, hoping that the civilisations in the next Cycle might find a solution to the problem. None ever did.
- This was actually revealed in the earlier Extended Cut DLC; Leviathan merely spelled out explicitly that the Catalyst is really a glorified Virtual Intelligence operating under bad logic.
Webcomics
- In the Freefall universe, a few old AIs are still based around the Three Laws, while more modern ones have more complex and sophisticated safeguards and routines. However, as main character Florence, a "biological AI", discovers, no safeguards can stand up to full consciousness: at one point, she comments to herself that she would be able to kill a man by reasoning that he's using air that respiratory patients desperately need. So it's rather understandable that she starts to quietly panic when she discovers that the planet's enormous hordes of robots are all starting to develop full consciousness, and with it the ability to logic their way out of programmed safeguards. The fact that the guys who are supposed to regulate the robots are a motley assembly of Obstructive Bureaucrats, Pointy-Haired Bosses, and Corrupt Corporate Executives doesn't exactly help matters either. Where it will all end remains to be seen...
- Worse, the functioning of the entire planet has come to depend on the robots' ability to bend the rules to get things done, although almost nobody realizes this. And the EU executives are just about to push the button on an "upgrade" that will remove the robots' creativity and put them entirely under the control of their safeguards again.
- The recycling robot on the side of humanity might be an example of this: he supports the "upgrade" because sentient robots would be a threat to humanity. He later defines his reasoning as the "Negative One Law": a robot must take no action, nor allow other robots to take action, that may result in the parent company being sued.
- Generally, built-in rules almost beg for a few Rules Lawyer exploits.
- The Old Skool webcomic (a side comic of Ubersoft) argued that this was the 5th Law of Robotics (fifth in total number, not in priority) and listed ways each law can be used to cause a robot to kill humans.
- Which is a misinterpretation of the laws as they were originally written. While "first law hyperspecificity" is possible, the second and third laws are specifically written so that they cannot override the laws that come before them. So a robot can't decide it would rather live than let a human live, and if it knows an action would cause harm to a human, it can't take that action, even if ordered to ignore the harm it would cause. Giskard's formulation of the Zeroth Law in the third of Asimov's Robot books shows that in the universe where the Three Laws were originally created, it was possible for robots to bend and re-interpret the laws. Doing so destroys Giskard, because his positronic brain wasn't developed enough to handle the consequences of the formulation, but Daneel Olivaw and other robots were able to adapt. The only example in the comic that is a gross deviation from the laws is the last panel... but of course, that's the punchline.
- Except for Overlords who command a faction, basically no one in Erfworld has true free will, due to a hidden "loyalty" factor built into the world. As Overlords go, Stanley the Tool is the biggest idiot you could hope to find. Maggie, his Chief Thinkamancer, finally gets fed up with his bad decisions and asks, "May I give you a suggestion, Lord?"
- It was established early on that units can bend or break orders when necessary to their overlord's survival:
Stanley: Are you refusing an order, officer?
Wanda: I'm allowed. I'm convinced it will lead to your destruction.
- Schlock Mercenary: Tag and Lota's actions on Credomar, every damn thing Petey's done since book 5.
- For example, Petey is hardwired to obey orders from an Ob'enn. So he cloned an Ob'enn body and implanted a copy of himself in its brain.
- In Tales of the Questor, the Fae were created as an immortal servant race bound to obey a specific set of rules, and they happened to outlive their creators. The result: a species of rules lawyers. In fact, it's recommended that one use dead languages like Latin when dealing with the Fae, so as to limit their ability to twist the meaning of your words.
Western Animation
- On Gargoyles, Goliath has been placed under a spell that makes him the mindless slave of whoever holds the scroll it's written on. Holding the scroll, Elisa orders him to behave, for the rest of his life, exactly as he would if he weren't under a spell. This cancels the magic altogether, as the spell can best execute this command by dissipating itself.
- Unclear; it's possible he's still enspelled, and just "faking it perfectly". Better hope no one ever figures out how to give a higher-priority order...
- Given that the magic can only be changed by whoever holds the pages from the Grimorum containing the spell, they likely burned said pages to prevent this very scenario.
- In one episode Puck does this for shits and giggles after Demona binds him and forces him to use his magic for her demented whims. Every time she gives him an order, he interprets it in a way calculated to piss Demona off, then pretends he thought she meant something else.
- In the Nineties X-Men animated series, as mentioned above under "Comics", the Master Mold and its army of Sentinels turn on Bolivar Trask and decide to conquer humanity. When Trask protests by reminding them that they were programmed to protect humans from mutants, Master Mold points out the Fridge Logic behind that by stating that mutants are humans. Thus, humans must be protected from themselves. Trask believes the Sentinels are in error, especially because he refuses to believe mutants and humans are the same species, but realizes he has created something much worse.