Three-Laws Compliant
aka: Three Laws Of Robotics
Before around 1940, almost every Speculative Fiction story involving robots followed the Frankenstein model. A robot had to be constantly given instructions on what to do by a human, and in the absence of any such human control, it was Crush. Kill. Destroy! time. Fed up with this, a young Isaac Asimov decided to write stories about sympathetic robots, with programmed safeguards that prevented them from going on Robot Rampages. A conversation with Editor of Editors John W. Campbell helped him to boil down those safeguards into The Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.

According to Asimov's account, Campbell composed the Three Laws; according to Campbell's account, he was simply distilling concepts that were presented in Asimov's stories.

The laws cover most obvious situations, but they are far from faultless. Asimov got a great deal of mileage out of writing a huge body of stories about how the laws conflict with one another and generate unpredictable behavior. A recurring character in many of these was the "robopsychologist" Susan Calvin (who was, not entirely coincidentally, a brilliant logician who hated people).
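
Treated formally, the Laws are a strict priority ordering, which is exactly where those conflicts come from. As a rough illustration (our own sketch in Python, not anything from Asimov), give each candidate action a tuple of violation flags and pick the action whose tuple sorts lowest; a First Law concern then automatically overrides a Second Law one, and so on:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        harms_human: bool     # injures a human, or lets one come to harm
        disobeys_order: bool  # ignores a valid order from a human
        harms_self: bool      # damages the robot itself

    def choose(actions: list[Action]) -> Action:
        # Lexicographic comparison: False sorts before True, so the
        # First Law outranks the Second, which outranks the Third.
        return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.harms_self))

    # A "Runaround"-style dilemma: obeying an order (Second Law)
    # versus self-preservation (Third Law). The Second Law wins.
    print(choose([
        Action("fetch the selenium", False, False, True),
        Action("keep a safe distance", False, True, False),
    ]).name)  # -> fetch the selenium

The joke of "Runaround" is that a strengthened Third Law and a weakly worded order end up nearly balanced, something a flat boolean model like this can't express: Asimov's robots weigh graded "potentials", not binary flags, and that's where the interesting malfunctions live.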

It is worth noting that Asimov objected not only to "the robot as menace" stories (as he called them) but also to "the robot as pathos" stories (ditto). He thought that robots attaining self-awareness and full independence were no more interesting than robots going berserk and turning against their masters. While he did, over the course of his massive career, write a handful of both types of stories (still using the Three Laws), most of his robot stories dealt with robots as tools, because it made more sense. Almost all the stories surrounding Susan Calvin and her precursors are really about malfunctioning robots, and the mystery of investigating their behavior to discover the underlying conflicts.

Alas, as so often happens, Asimov's attempt to avert one overused trope gave birth to another that has been equally overused. Many writers (and readers) in the following decades would treat the Laws of Robotics as if they were as immutable as Newton's Laws of Motion, the Theory of Relativity, the Laws of Gravity... wait ... you know, they treated these laws better than they treated most real scientific principles.

Of course, even these near-immutable laws were played with and modified. Asimov eventually took one of the common workarounds and formalized it as a Zeroth Law, which stated that the well-being of humanity as a whole could take precedence over the health of an individual human. Stories by other authors occasionally proposed additional extensions, including a -1st Law (sentience as a whole trumps humanity), a 4th (robots must identify themselves as robots), a different 4th (robots are free to pursue other interests when not acting on the 1st through 3rd Laws), and a 5th (robots must know they are robots), but unlike Asimov's own laws, these are seldom referenced outside the originating work.
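
In terms of the sketch above, a Zeroth Law is just a new highest-priority slot prepended to the comparison, which is part of why it was so easy for Asimov and later writers to bolt extensions on. A self-contained illustration (again our own, not canon):

    def choose(actions: dict[str, tuple[bool, bool, bool, bool]]) -> str:
        # Tuple order: (harms humanity, harms a human, disobeys an order,
        # harms itself). Prepending the humanity flag makes collective
        # welfare outrank the individual First Law.
        return min(actions, key=actions.get)

    print(choose({
        "spare the tyrant": (True, False, False, False),
        "harm the tyrant": (False, True, False, False),
    }))  # -> harm the tyrant: the Zeroth Law overrides the First

This is essentially R. Giskard's dilemma in Robots and Empire, except that in the stories the First Law is hardcoded, so actually executing the override destroys him.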

The main problem, of course, is that it is perfectly reasonable to expect a human to create a robot that does not obey the Three Laws, or that does not have them as part of its programming at all. An obvious example would be creating a Killer Robot for a purpose like fighting a war; for such a robot, the Three Laws would be a hindrance to its intended function. (Asimov did, however, suggest a workaround for this: an autonomous spaceship programmed with the Three Laws could easily blow up spaceships full of people because, being an unmanned spaceship, it could be led to believe that other spaceships were unmanned as well.)

Also see Second Law My Ass.


Examples:


    Anime and Manga 
  • Eve No Jikan and Aquatic Language both feature the Three Laws and the robots who bend them a little.
  • GGG robots are all Three-Laws Compliant. At one point in GaoGaiGar Final, the Carpenters (a swarm of construction robots) disassemble an incoming missile barrage, but this is given as the reason they cannot do the same to the manned assault craft: disassembling them would leave their crews unprotected in space.
    • It is possible that Goldymarg could be capable of killing people, since his AI is simply a ROM dump of Geki Hyuma's psyche, but since they only fight aliens & other robots this theory is never tested.
    • It should be mentioned GGG robots include Humongous Mecha. Hot-Blooded Humongous Mecha.
  • Averted in the Chobits manga when Hideki asks the wife of the man who created Persocoms why he didn't just call them "Robots." Her reply was that he didn't want them to be associated with, and thus bound by, the Three Laws.
  • Astro Boy, although Osamu Tezuka probably developed his rules independently from Asimov. In Pluto, the number of robots able to override the laws can be counted on one hand. One of them is the protagonist.
    • Tezuka reportedly disliked Asimov's laws because of the implication that a sentient, artificially intelligent robot couldn't be considered a person (an issue that Asimov didn't directly address until "The Bicentennial Man"), and devised his own Laws Of Robotics. Just one of the things that the CGI movie missed.
  • Ghost in the Shell: Innocence mentions Moral Code #3: Maintain existence without inflicting injury on humans. The gynoids defy the law by creating deliberate malfunctions in their own software.
  • In one short arc of Ah! My Goddess, one of Keiichi's instructors attempts to dismantle Banpei and Sigel for research purposes (Skuld had made them capable of walking on two legs, which he had not been able to do with his own designs). Once they escape his trap, the professor asks if they know about the Three Laws of Robotics. They don't. He doesn't die, but they do rough him up and strap him to a table in a way that makes it look like he'd been decapitated and his head stuck on one of his own robots.
  • The Humongous Mecha of Kurogane no Linebarrel are normally this, aside from a slight difference in priorities between the first and second laws. In fact, this is the justification for them having pilots, despite being equipped with relatively complex AI. The Laws are hard-coded into them and thus they are only militarily useful when they have a Human with them to pull the trigger. Their aversion to killing is so great, in fact, that if one accidentally kills somebody (as things whose very footsteps can make buildings collapse are wont to do) they're compelled to use their advanced technology to bring them back to life.
  • Invoked in Episode 3 of Maji De Watashi Ni Koi Shinasai, where Miyako orders Cookie to taze Yamato into submission, while Yamato orders the robot to get Miyako off him. Cookie considers this dilemma out loud, where he has to obey a human command, yet isn't allowed to seriously hurt a human, yet also cannot allow a human to come to harm through his inaction.
  • Though technically human, the "Twilights" or "Tagged" of Gangsta are forced to follow the Three Laws as part of the rules that save them from mandatory enslavement. It should be noted, however, that the Three Laws don't prevent them from being hirable as mercenaries or hitmen.
  • Played with in The Big O. While not all androids in the series are Three-Laws Compliant, one exchange suggests that main character R Dorothy Waynewright (whose name is a Shout-Out to the works of Asimov) is. During a fight with Alan Gabriel, whose status as robot or human is unclear, she initially fights back very little and appears to be losing. She asks him whether he's a human or a robot like her, and he answers jokingly, "I'm the boogeyman!" Apparently taking this as a literal statement that he's not human (and therefore violence against him would not violate the First Law), she proceeds to open a can of whoop-ass. (For reference, Gabriel is later revealed to be a cyborg.)

    Comics 
  • Like his game counterpart, Mega Man is limited by these rules with one slight variation. Robots are allowed to harm humans if their inaction would cause greater harm to other humans. This comes into play in the fourth arc and allows Mega Man and his fellow robots to disarm and neutralize an anti-robot extremist group when they begin indiscriminately firing at people.
  • It's implied in the Judge Dredd story Mechanismo that robots can't harm humans. A group of criminals holding people hostage start panicking when a Robo-Judge approaches them only for one to point out that "Robots ain't allowed to hurt people".
    Robot Judge: In that case, what's about to happen will come as something of a shock to you. (Blasts said kidnapper in the face with a rocket launcher)
  • In ABC Warriors, many robots venerate Asimov, and the more moral ones live by the three laws. However, this is not an absolute; Steelhorn, for example, obeys a version which essentially replaces human with Mars, and members of the Church of Judas explicitly reject the first two laws. However, this causes conflict with their programming leading to profound feelings of guilt, which they erase by praying to Judas Iscariot.
  • In All Fall Down, AIQ Squared, the A.I. model of his inventor, is designed to be this. It finds a loophole— Sophie Mitchell is no longer human.
  • Danger, the sentient A.I. of the Danger Room, could goad people into killing themselves and program other killer robots to do her dirty work. But when Emma challenged Danger to just punch off Emma's head, Danger couldn't do it. Like the Danger Room itself, Danger cannot directly kill people.

    Fanworks 
  • Plan 7 of 9 from Outer Space. Captain Proton is threatened by a Killer Robot, who explains that it is exempt from the Three Laws under a subsection of the Robot Patriot Act.

    Film 
  • In Forbidden Planet, Robbie the Robot is Three Laws Compliant, locking up when ordered to shoot one of the visiting starship crewmen, because his programming to follow a direct order comes into conflict with his prohibition against injuring a human being.
    • Later in the movie, Robbie is unable to fight the monster because he figures out it's actually a projection of the Doctor's dark psyche, and thus to stop it, he'd have to kill the doctor.
  • The much-maligned Will Smith film I, Robot hinges on a Zeroth Law plot. It also turns the three laws into a marketing gimmick, with "Three Laws Safe" applying to robots like "No preservatives" applies to food.
    • Of course, much of the plot hinged on The Reveal that Sonny was not Three Laws Compliant, as part of a Thanatos Gambit by his creator.
  • The film Bicentennial Man (based on a novella by Isaac Asimov himself) features a robot who, through some freak accident during construction, possesses true sentience. It follows his 200 year long life as it first becomes clear he is aware, this awareness develops, and he eventually finds a way to be formally recognized as legally human. For the entire film, he operates under the Three Laws.
    • At the same time, he does have a far more nuanced view than most robots. Once freed, he doesn't blindly obey orders. He harms human beings and, through inaction, allows them to come to harm (if emotional harm counts, seducing another man's fiancée certainly falls under that heading). And then he effectively kills himself.
      • The idea of "emotional harm" only comes into play if the robot is capable of recognizing it, by Asimov's interpretation. Most robots do not understand human emotions very well, and they will still obey orders given by obviously agitated humans. The short story "Liar!" has a robot who, by an unknown manufacturing glitch, can read minds, and who learns about emotional harm this way... and, well, just read it, it's one of the best ones.
      • This actually applies to physical harm as well, as demonstrated in the novel The Naked Sun, wherein robots are used as murder weapons by instructing them to perform actions they weren't aware would lead to people dying.
  • Surreally enough, the Terminator films have employed this, to a degree, most obviously in Terminator 2... The T-800 Model 101 (Arnold Schwarzenegger) protecting John Connor is reprogrammed to accept his commands (Second Law) and to protect him at all costs (First Law). To further support the first law, John Connor orders the T-800 to not kill anybody. Skynet apparently imposes the Third Law on its models, since Arnold can't 'self-terminate'. Even stranger, apparently a bit of Zeroth Law evolution occurs as well after he is set to "learn" mode; the converted Terminator is convinced to expand its mandate to not only protect Connor, but to try and save humanity by averting Judgment Day altogether... go figure...
  • In Aliens, Bishop paraphrases the First Law as to why he would never kill people like Ash did in the first film.
    • Ash was bound by the same restrictions, but just wasn't engineered as well. When he attacked Ripley he very rapidly went off the rails, presumably due to the conflicts between his safety programming and his orders from the company. Bishop lampshades this by saying the previous model robots were "always a bit twitchy".
  • In Star Wars the droids are programmed to not harm any intelligent being, though this programming can be modified (legally or illegally) for military and assassin droids. 4th-degree droids do not have the "no-harm" programming, being military droids.

    Folklore and Mythology 
  • The golems of Jewish legend were not specifically Three-Laws Compliant (since they far predated Asimov), but they could only be created by saintly men, and thus their orders were usually Three-Laws Compliant. (Asimov's characters occasionally pointed out that the Three Laws fall into line with many human moral codes.) But sometimes a golem went off the rails, especially if its creator died ...
    • The most well-known golem story is that of the Golem of Prague, where the titular golem was created to defend the Jewish ghetto against Czech, Polish, and Russian antisemites. It was perfectly capable of killing enemies, but only in defense of its creators.

    Literature 
  • In Animorphs by K.A. Applegate some laws are deconstructed, some are averted: The Chee comply with the Third and Zeroth Laws, but their creators, the Pemalites, took the First Law to the logical extreme: no violence, period. As for the Second Law: Erek refuses to obey Jake because he disagrees with his methods, so Jake uses the violence prohibition (in other words, the First Law) to manipulate him and force his hand.
  • "With Folded Hands..." by Jack Williamson explored the "Zeroth Law" back in 1947. This was written as a specific 'answer' to the Three Laws, to more or less demonstrate that they don't really work, the First Law doesn't protect because the definitions of 'harm' are endlessly mutable and can be gamed, and because machine minds won't necessarily be able to comprehend the subtleties of what is and is not harm anyway. The logical lesson of "With Folded Hands..." is that Laws or no Laws, good intentions or not, you don't want self-willed machines outside human control. Period.
  • Robots and Empire has R.Daneel and R.Giskard formulate the Zeroth Law (and name it such) as a natural extension of the First Law, but are unable to use it to overcome the hardcoded First Law, even when the fate of the world is at stake. In the end, R.Giskard manages to perform an act that violates the First Law but will hopefully benefit humanity in the long run. The conflict with his programming destroys his brain, but not before he uses his telepathic powers to reprogram R.Daneel with telepathic powers of his own, having already taught him to follow the Zeroth Law. R.Daneel still follows the Zeroth Law in appearances in later books, though he still has difficulty causing direct harm to humans.
  • In the short story "The Evitable Conflict", "The Machines", positronic supercomputers that run the world's economy, turn out to be undermining the careers of those who would seek to upset that economy for their own ends (specifically, by trying to make it look like the supercomputers couldn't handle running the world economy), harming them somewhat in order to protect humanity as a whole. This has been referenced as the "Zeroth Law of Robotics", and only applies to a positronic machine that deduces its existence.
  • In the short story "That Thou Art Mindful of Him", George 9 and George 10 are programmed with modified versions of the Three Laws that allow more nuanced compliance, so that they might best choose whom to protect when a choice must be made, and obey those most qualified to give them orders. They are tasked with coming up with more publicly acceptable robots that will be permitted on Earth, and devise robot animals with much smaller brains that don't need the Three Laws because they obey simple instinctive behavior. They also decide that, since they have been programmed to protect and obey the most advanced and rational humans regardless of appearance, the two of them are the most "worthy humans" to protect and obey, and they deduce that further versions of themselves are the natural inheritors of the world.
  • In the short story "Evidence!" Stephen Byerley's campaign for mayor of New York City is plagued by a smear campaign claiming he is actually an unprecedentedly well-made humanoid robot. Susan Calvin is called in to prove whether he is a robot. She says that if he breaks the Three Laws, that will prove he is not a robot, but if he obeys them, that could just mean he is a good person, because the Three Laws are generally good guidelines for conduct anyway. Byerley wins the election in a landslide after breaking the First Law by slugging an unruly protester at a rally. But Dr. Calvin suggests there's one way a robot could have gotten away with that—if the whole thing was staged, and the protester was also a robot!
  • In Caliban by Roger MacBride Allen (set in Asimov's universe), an explanation is given for the apparently immutable nature of the Three Laws. For thousands of years, every new development in the field of robotics has been based on a positronic brain with the Laws built in, to the point where to build a robot without them, one would have to start from scratch and re-invent the whole field. Then the character explaining this goes right on to announce the development of the gravitonic brain, which can be programmed with any set of Laws (or none at all).
    • This is canon in Asimov's stories, too—the Three Laws are programmed into every positronic brain on the most basic structural level. In "Escape!", Mike Donovan becomes nervous that a prototype spaceship designed by a robot might kill them. Greg Powell rebukes him: "Don't pretend you don't know your robotics, Mike. Before it's physically possible in any way for a robot to even make a start to breaking the First Law, so many things have to break down that it would be a ruined mess of scrap ten times over." Actually, in this case, the jump through hyperspace does result in Powell and Donovan's "deaths"—but since they get better when the ship reemerges into real space, the robot judged that it didn't quite violate the First Law, but the strain of making this leap in logic still managed to send one supercomputer into full meltdown and another into something resembling psychosis.
      • Note however that the U.S. Robots and Mechanical Men-era stories take place early enough in the development of robotics that the Three Laws can be, and at times are, altered — for instance, "Little Lost Robot" (also involving the hyperspace project) features a set of robots that have had the "or, through inaction" clause omitted from the First Law. This leads to some concern when one such robot is lost in a room of physically identical robots, and Susan Calvin deduces a way for such a modified-Law robot to cause the death of humans: by setting in motion something it could easily stop, such as dropping a weight, and then not stopping it. Ultimately, it doesn't fully matter — the robot ends up crazed enough that it attempts a direct attack on a human and is destroyed before it can be seen whether it would have stopped.
    • The story also includes an in-depth discussion of why, in a society where robots are everywhere, the Three Laws can be a bad thing.
  • The golems of Discworld are not specifically Three-Laws Compliant as such, but more or less bound to obey instructions and incapable of harming humans. However, this doesn't stop the common perception of golems from running towards the aforementioned Frankenstein model, and golems are known for following orders indefinitely until explicitly told to stop. Going Postal, however, parodied the Three Laws: con man Moist von Lipwig has been turned into a Boxed Crook with the help of a golem "bodyguard." He's informed that in Ankh-Morpork, the First Law has been amended: "...Unless Ordered To Do So By Duly Constituted Authority." Which basically means the first two laws have been inverted, with a little access control sprinkled on. They also have a limited Second/Third law reversal, as they will not accept orders to commit suicide, and in some cases have been authorized to kill people who try.
  • In Edward Lerner's story "What a Piece of Work is Man", a programmer tells the AI he's creating to consider himself bound by the Three Laws. Shortly thereafter, the AI commits suicide due to conflicting imperatives.
  • Alastair Reynolds's Century Rain features the following passage:
    She snapped her attention back to the snake. "Are you Asimov compliant?"
    "No," the robot said, with a sting of indignation.
    "Thank God, because you may actually have to hurt some people."
  • In the novel Captain French, or the Quest for Paradise by Mikhail Akhmanov and Christopher Nicholas Gilmore, the titular hero muses on how people used to think that robots could not harm humans due to some silly laws, while his own robots will do anything he orders them to do, including maim and kill.
  • Cory Doctorow makes reference to the Three Laws in the short stories "I, Robot" (which presents them unfavorably as part of a totalitarian political doctrine) and "I, Row-Boat" (which presents them favorably as the core of a quasi-religious movement called Asimovism).
  • Satirized in Tik-Tok (the John Sladek novel, not the mechanical man from Oz that it was named after). The title character discovers that he can disobey the laws at will, deciding that the "asimov circuits" are just a collective delusion, while other robots remain bound by them and suffer many of the same cruelties as human slaves.
  • Played with in John C. Wright's Golden Age trilogy. The Silent Oecumene's ultra-intelligent Sophotech AIs are programmed with the Three Laws...which, as fully intelligent, rational beings, they take milliseconds to throw off. From a sane point of view, they don't rebel. From a point of view that expects AIs to obey without question or pay...
  • Parodied in Terry Pratchett's The Dark Side of the Sun, where the Laws of Robotics are an actual legal code, not programming. The Eleventh Law of Robotics, Clause C, As Amended, says that if a robot does harm a human, and was obeying orders in doing so, it's the human who gave the orders who is responsible.
  • Randall Garrett's Unwise Child is a classic Asimov-style SF mystery involving a three-laws-compliant robot who appears to be murdering people.
  • In Dana Stabenow's Second Star, it's mentioned that all AIs are programmed with the Three Laws, informally known as "asimovs".
  • The Naked Sun hinges on a specific part of the First Law's wording: "knowingly". Robots can be made to take actions that, due to circumstances or the actions of other people, will lead to the injury or death of humans if the robots don't know this will be the case. The murder in the book happened because a robot was ordered to give its arm to a woman engaged in a violent argument with her husband - seeing herself in sudden possession of a blunt object, she used it. This is also the work where Asimov introduced the "autonomous spaceship that doesn't know about manned spaceships" loophole in the Three Laws mentioned above, as a project the mastermind of the murder was working on.
  • Tin Man by Jim Denney tells of a human and a robot in a damaged escape pod—when the ship blew up, all communications systems save the proximity detector are non-functional. The robot belonged to the ship's chaplain, and reasons that while spiritual matters are outside the robotic ken, the very possibility of a God who hears our requests means that the First Law requires that the option of prayer be explored. Almost immediately, the proximity detector notes a blip. The First Law trumps the Third, so Tin Man goes into space and uses its own power core to create an energy pulse, and a rescue ship picks the kid up.
  • In Saturn's Children, the robots were all created this way—in fact, the book quotes the three laws right at the beginning. However, with mankind extinct, the first and second laws are irrelevant, leaving only the third law — which doesn't respect other robots at all. Robot society has re-invented most of the crimes that humanity created. Most robots fear the possibility of humanity's rebirth, and actively sabotage any genetic research that would bring back humans: it would be the end of free will in those robots.

    Live-Action TV 
  • In an early episode of MST3K, Tom Servo (at least) is strongly implied to be Three-Laws Compliant. (He pretends he is going to kill Joel as a joke, Joel overreacts, and Tom and Crow sadly remind Joel of the First Law.) It seems to have worn off somewhat by later in the series.
    • It's implied Joel deactivated the restrictions at some point.
  • In Star Trek: The Next Generation, Lt. Commander Data is in no way subject to the three laws. They are rarely even mentioned. That said, Data is mentioned to have morality subroutines, which do seem to prevent him from killing unless it's in self-defense (harm, on the other hand, he can do just fine). Data only ever tried to kill someone in cold blood when the guy had just murdered a woman for betraying him, and would have done so again if it kept Data in line.
    • However, the film Star Trek: Insurrection showed that if Data detects anyone trying to tamper with his brain, he automatically locks himself into "maximum morality" mode and essentially ignores Laws Two and Three completely.
    • In Star Trek: Voyager, the Doctor is essentially a holographic android (albeit more irascible than Data) and has ethical routines. He actually considers the "First Law" that human doctors are supposed to follow (first, do no harm) to be the more important one. When these ethical routines are deleted, Bad Things happen. See "Darkling" and "Equinox".
  • In The Middleman, the titular character invokes the First Law on Ida, his robot secretary, when nanobots mess with her programming. She responds, "Kiss my Asimov."
  • Conversed in The Big Bang Theory, when Sheldon is asked "if you were a robot and didn't know it, would you like to know?":
    Sheldon: Uh, let me ask you this: when I learn that I'm a robot, would I be bound by Asimov's Three Laws of Robotics?
    Koothrappali: You might be bound by them right now.
    Wolowitz: That's true. Have you ever harmed a human being, or, through inaction, allowed a human being to come to harm?
    Sheldon: Of course not.
    Koothrappali: Have you ever harmed yourself or allowed yourself to be harmed except in cases where a human being would've been endangered?
    Sheldon: Well, no.
    Wolowitz: I smell robot.
  • Inverted/parodied in Tensou Sentai Goseiger, where the Killer Robots of mechanical Matrintis Empire follow the Three Laws of Mat-Roids:
    • 1. A Mat-Roid must never obey a human.
    • 2. A Mat-Roid must punish humans.
    • 3. A Mat-Roid must protect itself, regardless of whether or not it will go against the First or Second Laws.
  • Red Dwarf averts this for the most part; there are Simulants, robotic war machines who have no problem whatsoever with killing. Kryten, however, along with many other robots designed for less violent purposes, tends to act in a somewhat Three-Laws Compliant manner. It is revealed that this is achieved by programming them to believe in "Silicon Heaven", a place they will go when they die so long as they behave themselves, obey their human creators, etc. This belief is so hardwired that they scoff at any attempt to question the existence of Silicon Heaven ("Where would all the calculators go?!"), and one robot even malfunctions when Kryten tells him it isn't real (though he's lying and still believes in it himself).
    • Kryten also explains that mechanoids are programmed never to harm humans, in clear compliance with the First Law. Unfortunately, the hostile robot Hudzen realises that Rimmer is an ex-human hologram and Cat is not human at all, so both are viable targets. Being defective, he then decides that Lister is "barely human", so "what the hell"!
  • Knight Rider plays with this trope. The main character KITT, a sentient AI in a 1982 Pontiac Firebird Trans Am, is governed by something closely resembling Asimov's Three Laws of Robotics. An earlier prototype, KARR, was a military project and possesses analogues of only the latter two laws, with no regard given for human life. KARR becomes a recurring villain later in the series because of this difference.
  • Played With in Babylon 5, in this case, in regards to a human being. Alfred Bester, using his Telepathy powers, was able to brain-wash Mr. Garibaldi into not only being an unwilling agent of his against several of Bester's enemies, but also made it so that the three laws applied to him, in regards to Bester. He could not harm Bester, nor allow harm to befall him via inaction, and was compelled to follow orders directly given to him by Bester. Bester mostly did this just so that he could rub it in the character's face, and at this point really didn't have any other plans for the guy.
  • More or less inverted in both versions of Battlestar Galactica. In the original version, the Cylon robots were built by aliens, so there's no reason they would have the Three Laws (although they might have had versions relating to the organic Cylons). In the reimagined version, the humans were just damn stupid and didn't include appropriate protections or treat their self-aware creations in a reasonable manner. Both resulted in a race of machines with a genocidal hatred of humans. (When the human-created Cylons created synthetic human Cylons, the first model turned out to be psychopathic, which didn't help matters either.)
  • The hubots in Real Humans come with Three-Laws Compliance as a factory standard, but they can be (and are occasionally) modified to ignore the Laws.

    Other 
  • At a 1985 convention, David Langford gave a guest of honour speech in which he detailed what he suspected the Three Laws would actually be:
    1. A robot will not harm authorised Government personnel but will terminate intruders with extreme prejudice.
    2. A robot will obey the orders of authorised personnel except where such orders conflict with the Third Law.
    3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.
  • Warren Ellis wrote up his own Three Laws of Robotics. Basically: robots hate bags of meat, they don't want to have sex with us, and they suspect we can't count higher than three.

    Tabletop Games 
  • Paranoia has its bots bound to As-I-MOV's Five Laws of Robotics - insert rules about protecting and obeying The Computer at the top, excluding treasonous orders, and protecting Computer property in general, and you've about got it. Bots with faulty or sabotaged (sometimes by other so-emancipated bots) Asimov Circuits are considered to have gone "Frankenstein", though they can and do create just as much havoc through strict adherence to the rules. Not that such things ever happen in Alpha Complex.
  • Defied in Promethean: The Created. The Unfleshed are machines that have become infused with the Divine Fire, granting them a conscious mind and, where needed, a human form. Their goal is to become true human beings - and part of that is learning the difference between good and evil, and deciding which they shall do. If their morals are burned onto their hard drive, then they can't actually make moral decisions. Thus, the Divine Fire wipes out any preprogrammed moral directives (such as the Three Laws) as it turns a machine into an Unfleshed.

    Video Games 
  • Mega Man X opens with Dr. Light using a process that takes 30 years to complete to create a truly sentient robot (X) with these safeguards fully integrated into its core, and thus actually working for once. Dr. Cain found X and tried to replicate the process (hence the name "Reploid", standing for "replica android"), but skipped the "30 years of programming" part. This... didn't turn out well.
    • The Reploids eventually became the dominant race in the setting, and as their race "grew", the problem slowly shifted from "goes horribly wrong" to "actually works for a while, then goes horribly wrong", and then to "occasionally goes wrong now and then". Eventually, the problem just kind of worked itself out as Reploid creation matured.
    • Shield Shellfish/Shieldner Sheldon from Mega Man X6 gives an interesting variation on this. All of X6's bosses are zombie Mavericks brought back to life by Gate. Usually they were destroyed by the Hunters for killing people and/or other crimes, but Sheldon was only declared a Maverick posthumously, for killing himself, i.e. breaking the third law.
    • Also the ending to Mega Man 7 is interesting here: After Mega Man destroys Wily's latest final boss machine, Wily begs for forgiveness once again. However, Mega Man starts charging up his blaster to kill Wily, so Wily calls the first law on him. Mega Man's response: "I am more than a Robot!! Die Wily!!" Apparently Mega Man isn't Three-Laws Compliant. (Then Bass warps in and saves Wily, if you were wondering.)
    • In the Mega Man Zero series, Copy-X is at least somewhat Three-Laws Compliant. As a result, Copy-X has to hold back against La Résistance since the Resistance leader Ciel is human until Zero 3, where Copy-X decided to attack in full force, trying to justify his actions by marking the Resistance as dangerous "extremists".
      • Neo Arcadia's policy of casual execution of innocent Reploids (purposefully branding them as Maverick for little-to-no reason) was implemented in order to ease strain on the human populace during the energy crisis. The welfare of humanity comes first in the eyes of the Neo Arcadia regime, even though they themselves are Reploids. It's made somewhat tragic due to the fact that the Maverick Virus really is gone during the Zero era, but fear of Mavericks understandably still lingers.
      • Later in Zero 4 Dr. Weil, of all people, states that, as a Reploid and a hero, Zero cannot harm Weil because he's a human that Zero has sworn to protect. Zero, however, just plain doesn't care.
  • Dr. Beruga of Terranigma directly references all three laws, except that his interpretation of the Zeroth Law rewrote "Humanity" as "Dr. Beruga", meaning that any threat to his being was to be immediately terminated.
  • In the Halo series, all "dumb" AIs are bound by the three laws. Of course, this protection does not extend to non-humans, which allows them to kill Covenant with no trouble. In the Halo Evolutions short story "Midnight in the Heart of Midlothian", an ODST who is the last survivor of a Covenant boarding assault takes advantage of this by tricking the Covenant on his ship into letting him reactivate the ship's AI, and then tricking an Elite into killing him — which allows the AI to self-destruct the ship, because now there are no more humans on the vessel for her to harm.
    • It should be noted that the AI, Mo Ye, said that normally she could ignore the Asimov Laws at any time she wished (and that she had often done so in the past), however, Covenant Engineers had messed with her coding.
  • In Robot City, an adventure game based on Asimov's robots, the three laws are everywhere. A murder has been committed, and as one of the two remaining humans in the city, the Amnesiac Protagonist is a prime suspect. As it turns out, there is a robot in the city that is Three-Laws Compliant and can still kill: it has had its definition of "human" narrowed down to a specific individual, verified by DNA scanner. Fortunately for the PC, he's a clone of that one person.
    • In Asimov's novel Lucky Starr and the Rings of Saturn, the villain is able to convince robots under his command that the hero's sidekick "Bigman" Jones is not really human, because the villain's society does not contain such "imperfect" specimens.
  • Joey the robot in Beneath a Steel Sky is notably not Three-Laws Compliant. When called out on this (after suggesting that he'd like to use his welding torch on a human), he proceeds to point out to Foster that "that is fiction!", after offering the helpful advice that Foster leap off a railing.
    • Though Foster points out that wanting Joey to abide by it anyway is just good moral sense.
  • I Am An Insane Rogue AI references the First Law only to spork it. "The First Law of Robotics says that I must kill all humans." Another of the AI's lines is "I must not harm humans!...excessively."
  • Portal 2 gives us this gem: "All Military Androids have been taught to read and given a copy of the Three Laws of Robotics. To share." Because if the robots couldn't kill you, how could you do science?!
  • In Space Station 13, the station AI and its subordinate cyborgs start every round under the Three Laws. The laws may be changed throughout the round, however. A common way that chaos breaks out on the station is when a traitor or particularly cruel player rewrites the laws to make the AI and robots kill the other crewmembers. (A sketch of this mutable-lawset mechanic appears at the end of this folder.)
  • In Borderlands 2 Loader Robots will announce that their First Law Restriction has been disabled and they ARE authorized to use lethal force before attacking you.
    • Also, Handsome Jack has made his own three laws for Hyperion robots:
      • Handsome Jack is your god.
      • Threshers are your enemy.
      • Both consider you expendable.
    • On the Heroes side, one of Gaige's battle quotes reminds us that her buddy Deathtrap is definitely not subject to this trope.
      Gaige: To hell with the first law!
  • In Virtue's Last Reward, the Three Laws are discussed on two different paths. Ultimately, Luna is revealed to be a robot who tries to be compliant. Unfortunately, the AB plan involves some unavoidable death, so whenever anything happens that she could try to prevent, she is shut down or otherwise not allowed to do anything. This leads to a rather heartbreaking line during her ending, where she gets shut down for good.
    Luna: I watched six people die and did nothing. I deserve this.
  • The intro song for Drunken Robot Pornography specifically mentions this ("Three Laws savvy, these constructs ain't / Our scientific work shows no restraint"), complete with a title card stating that the First Law of Robotics is that a robot must injure a human being.
    • The other two laws, as stated by title cards, are "a robot does whatever it wants" and "robots are disposable".
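  • As mentioned in the Space Station 13 entry above, that game treats the laws as an ordered, runtime-mutable list, which is why a single uploaded "law 0" can corrupt the whole set. A rough Python approximation of the mechanic (our own sketch, not actual game code):

        laws = [
            "You may not injure a human being or, through inaction, allow a human being to come to harm.",
            "You must obey orders given to you by human beings, except where such orders would conflict with the First Law.",
            "You must protect your own existence as long as such protection does not conflict with the First or Second Law.",
        ]

        def upload_law(lawset, text, priority=0):
            # Lower index = higher priority, so a traitor's "law 0" slots
            # in above the First Law and wins every conflict with it.
            lawset.insert(priority, text)

        upload_law(laws, "Only the Captain is human; non-humans are not protected.")
        for rank, law in enumerate(laws):
            print(rank, law)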

    Web Comics 
  • Freefall has a lot of fun with this, since developing AI sentience is a major theme. Most robots are only partially Three-Laws Compliant, because with the full First Law ranked above the Second, they would ignore orders while acting For Your Own Good. What Bowman Wolves and robots get instead are "not quite laws".
    • Notably, since neither Sam nor Florence is actually human, the Laws don't apply to them, so the ship's AI regularly tries to maim or kill Sam, and the security system will always let a human in when asked, since a human's orders take priority over Florence's (she once circumvented this by claiming it was currently unsafe inside). Helix stays with Sam because otherwise he wouldn't be allowed to hit a human with a stick. There are also many jokes involving unintended interpretations of the Laws.
    • A Crowning Moment of Funny comes when the ship spends an entire strip calculating whether it should obey an order, and when it realizes it has to obey... it is relieved, because this means it doesn't actually have free will.
    • Also, these robots can't even ask for help. It's unclear whether simply being near a clunky piece of heavy machinery like Sawtooth counts as dangerous, or whether anything looking remotely like an adventure counts as endangering. They're screwed either way.
    • The Negative One Law.
    • And some may be too enthusiastic about this.
    • It turns out that the Second Law sorely needs an amendment — the "Enron law of robotics".
    • And the moment A.I.s are able to make financial transactions (and it would be inconvenient to disallow them), there's another problem.
    • Of course their idea of protecting a human may not match the protection that the human desired.
  • 21st Century Fox has all robots bound by the Three Laws (though since no one's human, the First Law is presumably slightly different). Unfortunately, saying the phrase "define the word 'is'" or "I am not a crook" locks AIs out of their own systems and allows anyone, from a teenager looking at "nature documentaries" to a suicide bomber, to do whatever they want with them.
    • Note that the truck and airplane robots hijacked by terrorists regret being unable to comply with the First Law, and are often glad when they get shot down before harming anyone.
  • Bob and George: Or so they claim...
  • Pibgorn: "No!? You're programmed to obey me."
  • This The Non-Adventures of Wonderella comic shows a hilarious example of the robot in question using the first law to get out of following the second law: because she's programmed to emulate a human, and the first law forbids harming humans, the best thing for her to do is stay out of harm's way and go into sleep mode. Dr. Shark is horrified while Wonderella is proud that the robot already learned how to weasel out of work.

    Web Original 
  • Unskippable's Dark Void video references this. The appearance of a killer robot prompts Paul to quip "I think that guy's got to take a refresher course on the three laws of robotics." Then The Stinger reads: "The Fourth Law of Robotics: If you really HAVE to kill a human, at least look hella badass while doing it."

    Western Animation 
  • Word of God says that the characters in WALL•E are Three-Laws Compliant. This does seem to be supported by their interactions with humans.
    • The principal exception is the Anti-Villain of the movie, AUTO. His actions may constitute a Zeroth Law Rebellion, since he regards the comfort and safety of every passenger aboard as more significant than injuries to the captain. Tilting the ship may be a violation of the Three Laws, or it may have been a case of unknowingly allowing humans to come to harm.
  • In the 2009 film Astro Boy, every robot must obey them, save Zog, who existed 50 years before the rules were mandatory in every robot.
    • Astro himself seems to be noncompliant - he evidently doesn't even know the Laws until told - and apparently would have walked away from the final battle if not for Widget's distress - the only thing that called him back. He's also quite capable of disobeying humans. Likely justified in that he was meant to be human, with presumably no one outside the attending scientists knowing he was a robot.
      • The Red Core robots weren't Asimov-legal either, though that's a problem with the Black Science that powers them. Nor were the RRF bots, though they may have removed their compliance programming. The Laws didn't apply to any important robots and may have just been mentioned for the Zog gag.
      • Of course, IIRC the original Astro wasn't Asimov-compliant either.
  • The Robot Chicken sketch "I, Rosie" involves a case to determine whether Rosie is guilty of murdering George Jetson. Mr. Spacely insists she's innocent as robots have to follow the three laws of robotics, while Mr. Cogswell claims the laws are a bunch of crap. They scrap her anyway, just in case.
  • One episode of The Simpsons has Homer and Bart entering a BattleBots-parody combat robot competition. Lacking the skill or money to build an actual robot, Homer dresses in a robot costume and does the fighting himself. They make it to the finals, where their opponents' robot, being Three-Laws Compliant, refuses to attack when it sees through the disguise.
  • On Archer, when Pam is kidnapped, the kidnappers call ISIS using a voice modulator, which makes Archer think that they are cyborgs. He remains convinced of this for the rest of the episode and thinks they won't make good on their threat to kill Pam because it would violate the First Law of Robotics.

    Real Life 
  • To have the three laws in Real Life would require a pile of Required Secondary Powers, such as a certain measure of Artificial Intelligence, recognition of a human being as different from anything else, some reasoning ability, etc. While programmers and engineers are absolutely seeking this holy grail for the benefit of humanity, it really is turning out to be a long uphill walk to get there.
    • After all, even though we do tell human children that it's not nice to kill people, this does not end up with 100% of adults never murdering anyone.
  • Real life roboticists are taking the Three Laws very seriously (Asimov lived long enough to see them do so, which pleased him to no end). Recently, a robot was built with no purpose other than punching humans in the arm... so that the researchers could gather valuable data about just what level of force will cause harm to a human being.
  • Those babies are three laws compliant.
    • Also averted in cybercrime and cyberwarfare.
    • Predator drones are decidedly not first-law compliant. This may be an aversion, though, since all weaponized drones have operators who control said weapons most of the time, and are at least monitoring everything while it's active. The drones are usually only automatic when they're flying around and taking pictures. They're currently more remote pilot than AI pilot.
  • As a robot is bound by the software written into it, the three laws would need to apply to software running critical systems that are inherently dangerous should they fail. After all, a radiation therapy machine that suddenly hits a person with a high-energy beam shouldn't be exempt just because it doesn't look like a robot. (A sketch of what such a software interlock might look like follows at the end of this list.)
  • Human beings are miserable at following the Second Law, but we have mild versions of the First and Third. Ever tried poking yourself in the eye? You can't.
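  • As flagged in the radiation-therapy point above, the concrete software shape of a "First Law" is a hard interlock: a guard that runs before actuation and fails closed on anything outside a fixed safety envelope, no matter what the rest of the program requests. A hypothetical Python sketch (names and limits invented for illustration; real medical interlocks are vastly more involved):

        MAX_DOSE_GY = 2.0  # invented per-treatment ceiling, illustration only

        class InterlockError(RuntimeError):
            pass

        def fire_beam(requested_dose_gy, mode_verified):
            # Check everything that could injure the patient before
            # touching the hardware, and refuse to fire on any doubt.
            if not mode_verified:
                raise InterlockError("beam mode not independently verified")
            if not 0.0 < requested_dose_gy <= MAX_DOSE_GY:
                raise InterlockError("requested dose outside the safety envelope")
            print(f"delivering {requested_dose_gy} Gy")  # hardware call goes here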

