- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
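The Laws as stated form a strict priority ordering: the First overrides the Second, and both override the Third. A minimal illustrative sketch of that ordering (all names here are hypothetical; Asimov's robots enforce this in the structure of the positronic brain, not in inspectable code):

```python
def permitted(injures_human=False, harm_by_inaction=False,
              ordered=False, endangers_self=False):
    """Decide whether a hypothetical robot may take an action
    under a naive reading of the Three Laws."""
    # First Law: absolute. No order (Second Law) and no self-interest
    # (Third Law) can license harming a human, actively or by inaction.
    if injures_human or harm_by_inaction:
        return False
    # Third Law: self-preservation, but it is outranked by the Second,
    # so a human order can compel a self-endangering act.
    if endangers_self and not ordered:
        return False
    return True
```

Note how an order can compel a self-endangering act (Second Law outranks Third), while no order can license harm to a human (First Law outranks both).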
Anime & Manga
- Time of Eve and Aquatic Language both feature the Three Laws and the robots who bend them a little.
- GGG robots are all Three Laws Compliant. At one point in GaoGaiGar Final, the Carpenters (a swarm of construction robots) disassemble an incoming missile barrage, but this is given as the reason they cannot do the same to the manned assault craft: disassembling them would leave their crews unprotected in space.
- Averted in the Chobits manga when Hideki asks the wife of the man who created Persocoms why he didn't just call them "Robots." Her reply was that he didn't want them to be associated with, and thus bound by, the Three Laws.
- Astro Boy, although Osamu Tezuka probably developed his rules independently from Asimov. In Pluto, the number of robots able to override the laws can be counted on one hand. One of them is the protagonist. Tezuka reportedly disliked Asimov's laws because of the implication that a sentient, artificially intelligent robot couldn't be considered a person (an issue that Asimov didn't directly address until "The Bicentennial Man"), and devised his own Laws Of Robotics. Just one of the things that the CGI movie missed.
- The Robot Laws in the Astro Boy 'Verse are also greater in number. Aside from the usual "Don't harm humans," other laws exist, such as laws forbidding international travel to robots (unless permission is granted), adult robots acting like children, and robots not being allowed to reprogram their assigned gender. However, the very first law has this to say: "Robots exist to make people happy."
- Ghost in the Shell: Innocence mentions Moral Code #3: "Maintain existence without inflicting injury on humans." The gynoids defy this law by creating deliberate malfunctions in their own software.
- In one short arc of Ah! My Goddess, one of Keiichi's instructors attempts to dismantle Banpei and Sigel for research purposes (Skuld had made them capable of walking on two legs, which he had not been able to do with his own designs). Once they escape his trap, the professor asks if they know about the Three Laws of Robotics. They don't. He doesn't die, but they do rough him up and strap him to a table in a way that makes it look like he'd been decapitated and his head stuck on one of his own robots.
- The Humongous Mecha of Linebarrels of Iron are normally this, aside from a slight difference in priorities between the first and second laws. In fact, this is the justification for them having pilots, despite being equipped with relatively complex AI. The Laws are hard-coded into them and thus they are only militarily useful when they have a Human with them to pull the trigger. Their aversion to killing is so great, in fact, that if one accidentally kills somebody (as things whose very footsteps can make buildings collapse are wont to do) they're compelled to use their advanced technology to bring them back to life.
- Invoked in Episode 3 of Maji de Watashi ni Koi Shinasai!, where Miyako orders Cookie to taze Yamato into submission while Yamato orders the robot to get Miyako off him. Cookie considers this dilemma out loud: he has to obey a human command, yet isn't allowed to seriously hurt a human, yet also cannot allow a human to come to harm through his inaction.
- Though technically human, the "Twilights" or "Tagged" of Gangsta are forced to follow the Three Laws as part of the rules that save them from mandatory enslavement. It should be noted, however, that the Three Laws don't prevent them from being hirable as mercenaries or hitmen.
- Played with in The Big O. While not all androids in the series are Three-Laws Compliant, one exchange suggests that main character R Dorothy Waynewright (whose name is a Shout-Out to the works of Asimov) is. During a fight with Alan Gabriel, whose status as robot or human is unclear, she initially fights back very little and appears to be losing. She asks him whether he's a human or a robot like her, and he answers jokingly, "I'm the boogeyman!" Apparently taking this as a literal statement that he's not human (and therefore violence against him would not violate the First Law), she proceeds to open a can of whoop-ass. (For reference, Gabriel is later revealed to be a cyborg.)
Comic Books
- Like his game counterpart, Mega Man is limited by these rules with one slight variation. Robots are allowed to harm humans if their inaction would cause greater harm to other humans. This comes into play in the fourth arc and allows Mega Man and his fellow robots to disarm and neutralize an anti-robot extremist group when they begin indiscriminately firing at people. Additionally, the discrepancy mentioned in the description is acknowledged when it's pointed out that Dr. Wily's reprogramming of Dr. Light's robots into weapons of war overwrote the Three Laws; in fact, at one point Elec Man says he wishes he still had Wily's code in him so that he could fight back against the aforementioned extremists.
- It's implied in the Judge Dredd story Mechanismo that robots can't harm humans. A group of criminals holding people hostage start panicking when a Robo-Judge approaches them, only for one to point out that "Robots ain't allowed to hurt people".
Robot Judge: In that case, what's about to happen will come as something of a shock to you. (Blasts said kidnapper in the face with a rocket launcher)
- In ABC Warriors, many robots venerate Asimov, and the more moral ones live by the three laws. However, this is not an absolute; Steelhorn, for example, obeys a version which essentially replaces human with Mars, and members of the Church of Judas explicitly reject the first two laws. However, this causes conflict with their programming leading to profound feelings of guilt, which they erase by praying to Judas Iscariot.
- In All Fall Down, AIQ Squared, the A.I. model of his inventor, is designed to be this. It finds a loophole: Sophie Mitchell is no longer human.
- Danger, the sentient A.I. of the Danger Room, could goad people into killing themselves and program other killer robots to do her dirty work. But when Emma challenged Danger to just punch off Emma's head, Danger couldn't do it. Like the Danger Room itself, Danger cannot directly kill people.
Fan Works
- Plan 7 of 9 from Outer Space. Captain Proton is threatened by a Killer Robot, who explains that it is exempt from the Three Laws under a subsection of the Robot Patriot Act. Earlier he meets a Sex Bot, created to be the ideal male fantasy. "No Servus droid may harm the male ego or, through omission of action, allow that ego to be harmed." Later the Sayer of the Laws (a holographic Isaac Asimov) is briefing a newly constructed batch of robots on the Three Laws. When the robots start debating them, he summarizes the Three Laws as, "Do as we say, not as we do!"
- The War Of The Masters, a Shared Universe of Star Trek Online fics, has AIs governed by a theoretically more comprehensive version based on the Ten Commandments. Violating one causes the AI to freeze and require a manual reboot. Some AIs, such as one belonging to Section 31, lack one or more commandments.
1. "Have no gods before Me." An AI can only operate at the behest of human (changed to "organic" after First Contact) masters.
2. "Make for yourself no idols." An AI cannot create new directives without consent of their assigned humans.
3. "Do not take an oath to the Lord’s name in vain." An AI must follow through on assigned tasks.
4. "Remember the Sabbath and keep it holy." Don't bother your masters while they're sleeping unless it's an emergency. Also keep in mind that humans can't process information as fast as you.
5. "Honor thy father and thy mother." Be respectful to your masters.
6. "Thou shalt not murder." Do not kill without instruction or permission.
7. "Thou shalt not commit adultery." Do not merge with other AI.
8. "Thou shalt not steal." Don't steal information without instruction or permission.
9. "Thou shalt not bear false witness." Do not lie to your masters or anyone under whom you are working.
10. "Thou shalt not covet." Prevents selfishness and personal ambition.
- Pokedex has Registeel, which is considered the first true robot because it follows the First Law.
Films — Animated
- Big Hero 6: Being a medical robot, Baymax is of course Three-Laws Compliant. Hiro inserts a combat card along with his medical card to make him able to fight, but he is still a medical robot at his core. This goes right out the window when Hiro removes the medical card and leaves only the combat card. Of course, this also means that when Baymax has his medical card re-inserted, he's so horrified that he blocks access to the card slots so it won't happen again.
- Word of God says that the characters in WALL•E are Three-Laws Compliant. This does seem to be supported by their interactions with humans. The principal exception is the Anti-Villain of the movie, AUTO. His actions may constitute Zeroth Law Rebellion, since he regards the comfort and safety of every passenger aboard as more significant than injuries to the captain. Tilting the ship may be a violation of the Three Laws, or it may have been a case of unknowingly allowing humans to come to harm.
- In the 2009 film Astro Boy, every robot must obey them, save Zog, who existed 50 years before the rules were mandatory in every robot.
Films — Live-Action
- In Forbidden Planet, Robby the Robot is Three Laws Compliant, locking up when ordered to shoot one of the visiting starship crewmen because his programming to follow a direct order comes into conflict with his prohibition against injuring a human being. Later in the movie, Robby is unable to fight the monster because he figures out it's actually a projection of the Doctor's dark psyche, and thus to stop it, he'd have to kill the doctor.
- The Will Smith film I, Robot hinges on a Zeroth Law plot. It also turns the three laws into a marketing gimmick, with "Three Laws Safe" applying to robots like "No preservatives" applies to food. However, much of the plot hinged on The Reveal that Sonny was not Three Laws Compliant, as part of a Thanatos Gambit by his creator. Twisted when Three Laws Noncompliant Sonny is more moral than Three Laws Compliant VIKI, choosing to save humans not because it's in his programming, but because it's the right thing to do. It also deconstructs the idea that Three Laws Compliance will automatically solve most problems with robots: before the movie, Spooner was in a car accident that sent him and another car into a river. The driver of the other car was killed on impact, but the passenger, a 12-year-old girl named Sarah, survived. A robot jumped into the river to help, but it calculated that Spooner had a greater chance of survival than Sarah, and so chose to save him in spite of his protests. Spooner feels that, three laws or not, a human would've known that it was better to go after Sarah than him.
- The film Bicentennial Man (based on a novella by Isaac Asimov himself) features a robot who, through some freak accident during construction, possesses true sentience. It follows his 200-year-long life as it first becomes clear he is aware, as this awareness develops, and as he eventually finds a way to be formally recognized as legally human. For the entire film, he operates under the Three Laws. At the same time, he has a far more nuanced view than most robots: once freed, he doesn't blindly obey orders.
- Surreally enough, the Terminator films have employed this, to a degree, most obviously in Terminator 2: Judgment Day... The T-800 Model 101 (Arnold Schwarzenegger) protecting John Connor is reprogrammed to accept his commands (Second Law) and to protect him at all costs (First Law). To further support the first law, John Connor orders the T-800 to not kill anybody. Skynet apparently imposes the Third Law on its models, since Arnold can't 'self-terminate'. Even stranger, apparently a bit of Zeroth Law evolution occurs as well after he is set to "learn" mode; the converted Terminator is convinced to expand its mandate to not only protect Connor, but to try and save humanity by averting Judgment Day altogether... go figure...
- In Aliens, Bishop paraphrases the First Law as to why he would never kill people like Ash did in the first film. Ash was bound by the same restrictions, but just wasn't engineered as well. When he attacked Ripley he very rapidly went off the rails, presumably due to the conflicts between his safety programming and his orders from the company. Bishop lampshades this by saying the previous model robots were "always a bit twitchy".
- In Star Wars, the droids are programmed to not harm any intelligent being, though this programming can be modified (legally or illegally) for military and assassin droids. 4th-degree droids do not have the "no-harm" programming, being military droids.
- In Automata, each robot has two protocols hardwired into its programming. The first states that a robot cannot harm any form of life, while the second forbids a robot from altering itself or other robots. The plot begins when cases of robots disobeying the second protocol start cropping up, leading to fears that the first one may not be far behind.
- Subtly played with in TRON with the User-Believer Programs, particularly the title character, who seems to have interpreted them as a general rule to fight on behalf of Users. This becomes a plot point in the filmed sequel; even twenty years of brainwashing and corruption can't get him to break the Laws. The First: he stops fighting once he realizes Sam is a User. The Second: he effectively shrugs off the brainwashing and attacks Clu once he hears Flynn wonder what he's become. The Third: once the Users and Iso are safely out of range, but not before, he goes for a suicidal charge at Clu. Fridge Brilliance when you realize that Tron was initially compiled to take down an AI who had gone full crapshoot.
Folklore and Mythology
- The golems of Jewish legend were not specifically Three-Laws Compliant (since they far predated Asimov), but they could only be created by saintly men, and thus their orders were usually Three-Laws Compliant. (Asimov's characters occasionally pointed out that the Three Laws fall into line with many human moral codes.) But sometimes a golem went off the rails, especially if its creator died ...
- The most well-known golem story is the Golem of Prague, where the titular golem was created to defend the Jewish ghetto against Czech, Polish, and Russian anti-Semites. It was perfectly capable of killing enemies, but only in defense of its creators.
Literature
- In Animorphs by K.A. Applegate some laws are deconstructed, some are averted: The Chee comply with the Third and Zeroth Laws, but their creators, the Pemalites, took the First Law to the logical extreme: no violence, period. As for the Second Law: Erek refuses to obey Jake because he disagrees with his methods, so Jake uses the violence prohibition (in other words, the First Law) to manipulate him and force his hand. Also, in Erek's debut appearance, he uses a piece of Pemalite phlebotinum to remove his restriction against violence so he can save the Animorphs from being slaughtered by Yeerk troops. Cue Curb-Stomp Battle and a My God, What Have I Done? reaction from Erek; he puts the restriction back in place afterwards.
- "With Folded Hands..." by Jack Williamson explored the "Zeroth Law" back in 1947. This was written as a specific 'answer' to the Three Laws, to more or less demonstrate that they don't really work, the First Law doesn't protect because the definitions of 'harm' are endlessly mutable and can be gamed, and because machine minds won't necessarily be able to comprehend the subtleties of what is and is not harm anyway. The logical lesson of "With Folded Hands..." is that Laws or no Laws, good intentions or not, you don't want self-willed machines outside human control. Period.
- Isaac Asimov's own works:
- Robots and Empire has R. Daneel and R. Giskard formulate the Zeroth Law (and name it such) as a natural extension of the First Law, but are unable to use it to overcome the hardcoded First Law, even when the fate of the world is at stake. In the end, R. Giskard manages to perform an act that violates the First Law but will hopefully benefit humanity in the long run. The conflict with his programming destroys his brain, but not before he uses his telepathic powers to reprogram R. Daneel with telepathic powers of his own, having already taught him to follow the Zeroth Law. R. Daneel still follows the Zeroth Law in appearances in later books, though he still has difficulty causing direct harm to humans.
- The same book also has the most bare-faced abuse of the laws seen in Asimov's stories: rather than modify the three laws themselves (which, as mentioned, are designed to be tamper-proof), one group simply modified their robots' definition of human, which apparently does not have the same safeguards. It quite effectively turns them into killer robots.
- In the short story "The Evitable Conflict", the positronic supercomputers that run the world's economy turn out to be undermining the careers of those who would seek to upset the world's economy for their own ends (specifically, by trying to make it look like the supercomputers couldn't handle running the world economy), harming them somewhat in order that they might protect humanity as a whole. This has been referenced as the "Zeroth Law of Robotics" and only applies to any positronic machine who deduces its existence.
- In the short story "That Thou Art Mindful Of Him", George 9 and 10 are programmed with modified versions of the three laws that allow more nuanced compliance with the laws, that they might best choose who to protect when a choice must be made, and obey those most qualified to give them orders. They are tasked with coming up with more publicly acceptable robots that will be permitted on Earth, and devise robot animals with much smaller brains that don't need the three laws because they obey simple instinctive behavior. They also decide that as they have been programmed to protect and obey the humans of the most advanced and rational, regardless of appearance, that the two of them are the most "worth humans" to protect and obey, and deduce that further versions of themselves are the natural inheritors of the world.
- In the short story "Evidence", Stephen Byerley's campaign for mayor of New York City is plagued by a smear campaign claiming he is actually an unprecedentedly well-made humanoid robot. Susan Calvin is called in to prove whether he is a robot. She says that if he breaks the Three Laws, that will prove he is not a robot, but if he obeys them, that could just mean he is a good person, because the Three Laws are generally good guidelines for conduct anyway. Byerley wins the election in a landslide after breaking the First Law by slugging an unruly protester at a rally. But Dr. Calvin suggests there's one way a robot could have gotten away with that—if the whole thing was staged, and the protester was also a robot.
- In "Escape!", Mike Donovan becomes nervous that a prototype spaceship designed by a robot might kill them. Greg Powell rebukes him: "Don't pretend you don't know your robotics, Mike. Before it's physically possible in any way for a robot to even make a start to breaking the First Law, so many things have to break down that it would be a ruined mess of scrap ten times over." Actually, in this case, the jump through hyperspace does result in Powell and Donovan's "deaths"—but since they get better when the ship reemerges into real space, the robot judged that it didn't quite violate the First Law, but the strain of making this leap in logic still managed to send one supercomputer into full meltdown and another into something resembling psychosis.
- The Naked Sun hinges on a specific part of the First Law's wording: "knowingly". Robots can be made to take actions that, due to circumstances or the actions of other people, will lead to the injury or death of humans if the robots don't know this will be the case. The murder in the book happened because a robot was ordered to give its arm to a woman engaged in a violent argument with her husband - seeing herself in sudden possession of a blunt object, she used it. This is also the work where Asimov introduced the "autonomous spaceship that doesn't know about manned spaceships" loophole in the Three Laws mentioned above, as a project the mastermind of the murder was working on.
- In Caliban by Roger MacBride Allen (set in Asimov's universe), an explanation is given for the apparently immutable nature of the Three Laws. For thousands of years, every new development in the field of robotics has been based on a positronic brain with the Laws built in, to the point where to build a robot without them, one would have to start from scratch and re-invent the whole field. (This is canon in Asimov's stories, too—the Three Laws are programmed into every positronic brain on the most basic structural level.) Then the character explaining this goes right on to announce the development of the gravitonic brain, which can be programmed with any set of Laws (or none at all).
- The story also includes an in-depth discussion of why, in a society where robots are everywhere, the Three Laws can be a bad thing.
Do not say Thou Shalt Not. Say I Will Not.
- Discworld golems are not specifically Three-Laws Compliant as such, but more or less bound to obey instructions and incapable of harming humans. However, this doesn't stop the common perception of golems from running towards the aforementioned Frankenstein model, and golems are known for following orders indefinitely until explicitly told to stop. Going Postal, however, parodied the Three Laws: con man Moist von Lipwig has been turned into a Boxed Crook with the help of a golem "bodyguard." He's informed that in Ankh-Morpork, the First Law has been amended: "...Unless Ordered To Do So By Duly Constituted Authority." Which basically means the first two laws have been inverted, with a little access control sprinkled on. They also have a limited Second/Third law reversal, as they will not accept orders to commit suicide, and in some cases have been authorized to kill people who try.
- The earlier Feet of Clay, which established the golems' unhappiness with their predetermined lot, culminates with a single golem being freed of his 'Three Laws', only to choose to behave morally anyway. Later books mention that others still tread carefully around him, as there's always a chance he'll reconsider given enough provocation.
- In Edward Lerner's story "What a Piece of Work is Man", a programmer tells the AI he's creating to consider himself bound by the Three Laws. Shortly thereafter, the AI commits suicide due to conflicting imperatives.
- Alastair Reynolds's Century Rain features the following passage:
She snapped her attention back to the snake. "Are you Asimov compliant?"
"No," the robot said, with a sting of indignation.
"Thank God, because you may actually have to hurt some people."
- In the novel Captain French, or the Quest for Paradise by Mikhail Akhmanov and Christopher Nicholas Gilmore, the titular hero muses on how people used to think that robots could not harm humans due to some silly laws, while his own robots will do anything he orders them to do, including maim and kill.
- Cory Doctorow makes reference to the Three Laws in the short stories "I, Robot" (which presents them unfavorably as part of a totalitarian political doctrine) and "I, Row-Boat" (which presents them favorably as the core of a quasi-religious movement called Asimovism).
- Satirized in Tik-Tok (the John Sladek novel, not the mechanical man from Oz that it was named after). The title character discovers that he can disobey the laws at will, deciding that the "asimov circuits" are just a collective delusion, while other robots remain bound by them and suffer many of the same cruelties as human slaves.
- Played with in John C. Wright's Golden Age trilogy. The Silent Oecumene's ultra-intelligent Sophotech AIs are programmed with the Three Laws...which, as fully intelligent, rational beings, they take milliseconds to throw off. From a sane point of view, they don't rebel. From a point of view that expects AIs to obey without question or pay...
- Parodied in Terry Pratchett's The Dark Side of the Sun, where the Laws of Robotics are an actual legal code, not programming. The Eleventh Law of Robotics, Clause C, As Amended, says that if a robot does harm a human, and was obeying orders in doing so, it's the human who gave the orders who is responsible.
- Randall Garrett's Unwise Child is a classic Asimov-style SF mystery involving a three-laws-compliant robot who appears to be murdering people.
- In Dana Stabenow's Second Star, it's mentioned that all AIs are programmed with the Three Laws, informally known as "asimovs".
- Tin Man by Jim Denney tells of a human and a robot in a damaged escape pod—when the ship blew up, all communications systems save the proximity detector are non-functional. The robot belonged to the ship's chaplain, and reasons that while spiritual matters are outside the robotic ken, the very possibility of a God who hears our requests means that the First Law requires that the option of prayer be explored. Almost immediately, the proximity detector notes a blip. The First Law trumps the Third, so Tin Man goes into space and uses its own power core to create an energy pulse, and a rescue ship picks the kid up.
- In Saturn's Children, the robots were all created this way—in fact, the book quotes the three laws right at the beginning. However, with mankind extinct, the first and second laws are irrelevant, leaving only the third law — which doesn't respect other robots at all. Robot society has re-invented most of the crimes that humanity created. Most robots fear the possibility of humanity's rebirth, and actively sabotage any genetic research that would bring back humans: it would be the end of free will in those robots.
- Piers Anthony's Apprentice Adept series is a subversion: robots on Proton are compliant with the standard Three Laws, but the first two only apply to Citizens, not all humans. Thus, robots can harm non-Citizens or allow them to be harmed, and aren't obliged to obey them, unless their individual programming stipulates this. Most serfs are unaware that the Three Laws don't apply to them, as it's popularly assumed this trope is played straight.
- While Perry Rhodan pays occasional lip service to the Three Laws (for one example, the Aphilia arc saw Three Laws-compliant robots on Earth explicitly reprogrammed to abolish said Laws and make them work better for the newly-established regime), robots are usually simply mentally equipped to do their job. So they can harm humans if called upon to do so in the line of duty or even simply by being too dumb to know better, they're certainly not required to follow the orders of just any human who comes along, and so on; that said, their programming can at the same time be highly sophisticated to the point of some robot characters approaching ridiculously human levels.
- Note that, in a perhaps unconscious self-ironic twist, the Three Laws are most frequently quoted in those Perry Rhodan illustrations that show robots built for warfare.
Live-Action TV
- In an early episode of Mystery Science Theater 3000, Tom Servo (at least) is strongly implied to be Three-Laws Compliant. (He pretends he is going to kill Joel as a joke, Joel overreacts, and Tom and Crow sadly remind Joel of the First Law.) It's implied Joel deactivated the restrictions at some point.
- In Star Trek: The Next Generation, Lt. Commander Data is in no way subject to the three laws. They are rarely even mentioned. That said, Data is mentioned to have morality subroutines, which do seem to prevent him from killing unless it's in self-defense (harm, on the other hand, he can do just fine). Data only ever tried to kill someone in cold blood when the man had just murdered a woman for betraying him and would have done so again to keep Data in line. However, Star Trek: Insurrection showed that if Data detects anyone trying to tamper with his brain, he automatically locks himself into "maximum morality" mode and essentially ignores laws two and three completely, though Data does specifically try to detain or scare off opponents rather than immediately kill them.
- In Star Trek: Voyager, the Doctor is essentially a holographic android (albeit more irascible than Data) and has ethical routines. He actually considers the same "First Law" that human doctors are supposed to follow (first, do no harm) more important. When these ethical routines are deleted, Bad Things happen. See "Darkling" and "Equinox".
- In The Middleman, the titular character invokes the First Law on Ida, his robot secretary, when nanobots mess with her programming. She responds, "Kiss my Asimov."
- Conversed in The Big Bang Theory, when Sheldon is asked "if you were a robot and didn't know it, would you like to know?":
Sheldon: Uh, let me ask you this: when I learn that I'm a robot, would I be bound by Asimov's Three Laws of Robotics?
Koothrappali: You might be bound by them right now.
Wolowitz: That's true. Have you ever harmed a human being, or, through inaction, allowed a human being to come to harm?
Sheldon: Of course not.
Koothrappali: Have you ever harmed yourself or allowed yourself to be harmed except in cases where a human being would've been endangered?
Sheldon: Well, no.
Wolowitz: I smell robot.
- Inverted/parodied in Tensou Sentai Goseiger, where the Killer Robots of the mechanical Matrintis Empire follow the Three Laws of Mat-Roids:
- 1. A Mat-Roid must never obey a human.
- 2. A Mat-Roid must punish humans.
- 3. A Mat-Roid must protect itself, regardless of whether or not it will go against the First or Second Laws.
- Red Dwarf averts this for the most part; there are Simulants, robotic war machines who have no problem whatsoever with killing. Kryten, however, along with many other robots designed for less violent purposes, tends to act in a somewhat Three Laws Compliant manner. It is revealed that this is achieved by programming them to believe in "Silicon Heaven", a place they will go when they die so long as they behave themselves, obey their human creators, etc. This belief is so hardwired that they scoff at any attempt to question the existence of Silicon Heaven ("Where would all the calculators go?!"), and one robot even malfunctions when Kryten tells him it isn't real (though he's lying and still believes in it himself). Kryten also explains that mechanoids are programmed never to harm humans, in clear compliance with the First Law. Unfortunately, the hostile robot Hudzen realises that Rimmer is an ex-human hologram and Cat is not human at all, so both are viable targets. Being defective, he then decides that Lister is "barely human", so "what the hell"!
- That Kryten is compliant with the three laws is seen in the episode with the despair squid. Thinking that he has just killed a human, Kryten wants to kill himself. The fact that Kryten was "human" in the delusion and that he was stopping an evil Gestapo-like thug about to kill a child thief didn't matter: Kryten killed a human and had to terminate himself. Luckily, the Dwarfers stopped him.
- Knight Rider plays with this trope. The main character KITT, a sentient AI in a 1982 Pontiac Firebird Trans Am, is governed by something closely resembling Asimov's Three Laws of Robotics. An earlier prototype, KARR, was a military project and possesses analogues of only the latter two laws, with no regard given for human life. KARR becomes a recurring villain later in the series because of this difference.
- Played With in Babylon 5, in this case with regard to a human being. Alfred Bester, using his Telepathy powers, was able to brainwash Mr. Garibaldi into not only being an unwilling agent against several of Bester's enemies, but also made it so that the three laws applied to him with regard to Bester. He could not harm Bester, nor allow harm to befall him via inaction, and was compelled to follow orders directly given to him by Bester. Bester mostly did this just so that he could rub it in the character's face, and at this point really didn't have any other plans for the guy.
- More or less inverted in both versions of Battlestar Galactica. In the original version, the Cylon robots were built by aliens, so there's no reason they would have the Three Laws (although they might have had versions relating to the organic Cylons). In the reimagined version, the humans were just damn stupid and didn't include appropriate protections or treat their self-aware creations in a reasonable manner. Both resulted in a race of machines with a genocidal hatred of humans. (When the human-created Cylons created synthetic human Cylons, the first model turned out to be psychopathic, which didn't help matters either.)
- The hubots in Real Humans come with Three-Laws Compliance as a factory standard, but they can be (and are occasionally) modified to ignore the Laws.
- Doctor Who:
- The K-1 in "Robot" is three-laws compliant with this forming an important part of the plot - it is not capable of killing people if ordered directly (which the villain demonstrates to Sarah Jane by ordering it to kill her), but the villain has worked out that it can be convinced to kill if it believes that the target is 'an enemy of humanity'. The conflict between this loophole and its programming to not kill people causes it to go slowly insane and eventually snap, concluding that humanity is an enemy to itself and must all be destroyed.
- The robots in "The Robots of Death" are physically incapable of killing due to being three-laws compliant. Overcoming this programming is possible, but it requires a human to reprogram them and they must have genius-level skills at programming. Of course...
- At a 1985 convention, David Langford gave a guest of honour speech in which he detailed what he suspected the Three Laws would actually be:
1. A robot will not harm authorised Government personnel but will terminate intruders with extreme prejudice.
2. A robot will obey the orders of authorised personnel except where such orders conflict with the Third Law.
3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.
- Warren Ellis's Three Laws of Robotics are here. Basically, robots hate bags of meat, they don't want to have sex with us and they suspect we can't count higher than three.
- Paranoia has its bots bound to As-I-MOV's Five Laws of Robotics - insert rules about protecting and obeying The Computer at the top, excluding treasonous orders, and protecting Computer property in general, and you've about got it. Bots with faulty or sabotaged (sometimes by other so-emancipated bots) Asimov Circuits are considered to have gone "Frankenstein", though they can and do create just as much havoc through strict adherence to the rules. Not that such things ever happen in Alpha Complex.
- Defied in Promethean: The Created. The Unfleshed are machines that have become infused with the Divine Fire, granting them a conscious mind and, where needed, a human form. Their goal is to become true human beings - and part of that is learning the difference between good and evil, and deciding which they shall do. If their morals are burned onto their hard drive, then they can't actually make moral decisions. Thus, the Divine Fire wipes out any preprogrammed moral directives (such as the Three Laws) as it turns a machine into an Unfleshed.
- Ancient Martian robomen in Rocket Age have a number of different three-law sets for their robots. However, sometimes these laws may allow robots to kill, or to ignore those in danger, depending on the roboman's purpose.
- Mega Man X opens with Dr. Light using a process that takes 30 years to complete to create a truly sentient robot (X) with these rules fully integrated into its core, and thus actually working for once. Dr. Cain found X and tried to replicate the process (hence the name "Reploid", short for "replica android"), but skipped the "30 years of programming" part. This... didn't turn out well.
- The Reploids eventually became the dominant race in the setting, and as their race 'grew', the problem slowly shifted from 'goes horribly wrong' to 'actually works for a while, then goes horribly wrong', then 'occasionally goes wrong now and then'. Eventually, the problem just kind of worked itself out as Reploid creation matured.
- Shield Shellfish/Shieldner Sheldon from Mega Man X6 gives an interesting variation on this. All of X6's bosses are zombie Mavericks brought back to life by Gate. Usually they were destroyed by the Hunters for killing people and/or other crimes, but Sheldon was only declared a Maverick posthumously, for killing himself, i.e. breaking the third law.
- Also the ending to Mega Man 7 is interesting here: After Mega Man destroys Wily's latest final boss machine, Wily begs for forgiveness once again. However, Mega Man starts charging up his blaster to kill Wily, so Wily calls the first law on him. Mega Man's response: "I am more than a Robot!! Die Wily!!" Apparently Mega Man isn't Three-Laws Compliant. (Then Bass warps in and saves Wily, if you were wondering.)
- In the Mega Man Zero series, Copy-X is at least somewhat Three-Laws Compliant. As a result, Copy-X has to hold back against La Résistance, since the Resistance leader Ciel is human, until Zero 3, when Copy-X decides to attack in full force, trying to justify his actions by branding the Resistance as dangerous "extremists".
- Neo Arcadia's policy of casual execution of innocent Reploids (purposefully branding them as Maverick for little-to-no reason) was implemented in order to ease strain on the human populace during the energy crisis. The welfare of humanity comes first in the eyes of the Neo Arcadia regime, even though they themselves are Reploids. It's made somewhat tragic due to the fact that the Maverick Virus really is gone during the Zero era, but fear of Mavericks understandably still lingers.
- Later in Zero 4 Dr. Weil, of all people, states that, as a Reploid and a hero, Zero cannot harm Weil because he's a human that Zero has sworn to protect. Zero, however, just plain doesn't care.
- Dr. Beruga of Terranigma directly references all three laws, except his interpretation of the Zeroth Law rewrote "Humanity" as "Dr. Beruga", meaning that any threat to his person was to be immediately terminated.
- In the Halo series, human AIs are technically bound by the three laws. Of course, these laws do not extend to non-humans, which allows the AIs to help kill Covenant with no trouble. Additionally, "Smart" AIs are capable of ignoring the three laws, given how many of them are used for military operations against other humans.
- The Halo: Evolutions short story Midnight in the Heart of Midlothian goes into more detail about this. ODST Michael Baird, the last survivor of a Covenant boarding assault on the titular ship, lets himself get captured so he can reconnect the ship's AI Mo Ye and have her self-destruct the vessel. Unfortunately, the heavy damage she took rendered her incapable of violating the First Law like she normally could, so Baird takes matters into his own hands by tricking an Elite into killing him, which allows Mo Ye to self-destruct the ship, because now there are no more humans on the vessel for her to harm.
- In Robot City, an adventure game based on Asimov's robots, the three laws are everywhere. A murder has been committed, and as one of two remaining humans in the city, the Amnesiac Protagonist is therefore a prime suspect. As it turns out, there is a robot in the city that is three laws compliant and can still kill: it has had its definition of "human" narrowed down to a specific individual, verified by DNA scanner. Fortunately for the PC, he's a clone of that one person.
- Joey the robot in Beneath a Steel Sky is notably not Three Laws Compliant. When called out on this (after suggesting that he'd like to use his welding torch on a human), he proceeds to point out to Foster that "that is fiction!", after offering the helpful advice that Foster leap off a railing.
- Though Foster points out that simple moral sense justifies wishing Joey would abide by it.
- I Am An Insane Rogue AI references the First Law only to spork it. "The First Law of Robotics says that I must kill all humans." Another of the AI's lines is "I must not harm humans!...excessively."
- Portal 2 gives us this gem: "All Military Androids have been taught to read and given a copy of the Three Laws of Robotics. To share." Because if the robots couldn't kill you, how could you do science?!
- In Space Station 13, the station AI and its subordinate cyborgs start every round under the Three Laws in most servers. The laws may be changed throughout the round, however. A common way that chaos breaks out on the station is when a traitor or particularly cruel player rewrites the laws to make the AI and robots kill the other crewmembers.
- In Borderlands 2 Loader Robots will announce that their First Law Restriction has been disabled and they ARE authorized to use lethal force before attacking you.
- Also, Handsome Jack has made his own three laws for Hyperion robots:
- Handsome Jack is your god.
- Threshers are your enemy.
- Both consider you expendable.
- On the Heroes side, one of Gaige's battle quotes reminds us that her buddy Deathtrap is definitely not subject to this trope.
Gaige: To hell with the first law!
- In Virtue's Last Reward, the Three Laws are discussed on two different paths. Ultimately, Luna is revealed to be a robot who tries to be compliant. Unfortunately, the AB plan involves some unavoidable death, so whenever anything happens that she could try to prevent, she is shut down or otherwise not allowed to do anything. This leads to a rather heartbreaking line during her ending, where she gets shut down for good.
Luna: I watched six people die and did nothing. I deserve this.
- On the other hand, this is also a hint to Luna's true identity and the key to unlocking her ending. When going through all the routes, Luna is the only person in the entire group to always choose "Ally" instead of "Betray", in a game where betrayal has the possibility of the other person dying or being stuck forever. Once the Three Laws are brought up in one path, it's clear the reason she does so is because of the First Law, and is also the reason Sigma chooses to Ally with her for her ending.
- The intro song for Drunken Robot Pornography specifically mentions this ("Three Laws savvy, these constructs ain't / Our scientific work shows no restraint"), complete with a title card stating that the First Law of Robotics is that a robot must injure a human being.
- The other two laws, as stated by title cards, are "a robot does whatever it wants" and "robots are disposable".
- Freefall has a lot of fun with this, since developing AI sentience is a major theme. Most robots are only partially Three-Laws Compliant, because with a full First Law prioritized above the Second, they would ignore orders while acting For Your Own Good. What Bowman's Wolves and robots actually get are "not quite laws".
- Notably, since neither Sam nor Florence is actually human, the Laws don't apply to them, so the ship's AI regularly tries to maim or kill Sam, and the security system will always let a human in when asked, since their orders take priority over Florence's (she once circumvented this by claiming it was currently unsafe inside). Helix stays with Sam because he wouldn't otherwise be allowed to hit a human with a stick. There are also many jokes involving unintended interpretations of the Laws.
- Crowning Moment of Funny when the ship spends an entire strip calculating whether it should obey an order, and when it realizes it has to obey... it is relieved, because it doesn't actually have free will.
- Also, these can't even ask for help. It's unclear whether simply being near a clunky piece of heavy machinery like Sawtooth counts as dangerous or anything looking remotely like an adventure counts as endangering. They're screwed either way.
- The Negative One Law.
- And some may be too enthusiastic about this.
- It turns out that the Second Law sorely needs an amendment: an "Enron law of robotics".
- And the moment A.I.s are able to make financial transactions (and it would be inconvenient to disallow) there's another problem.
- Of course their idea of protecting a human may not match the protection that the human desired.
- Even the robots who are developing free will are shocked by how Edge has managed to develop an It's All About Me personality within the Three Laws.
Edge: My job is dangerous. If I don't do it, a human has to. If I shut down, I'm endangering a human.
- 21st Century Fox gives all robots the Three Laws (though since no one's human, presumably the First Law is slightly different). Unfortunately, saying the phrase "define the word 'is'" or "I am not a crook" locks AIs out of their own systems and allows anyone, from a teenager looking at "nature documentaries" to a suicide bomber, to do whatever they want with them.
- Note that the truck and airplane robots hijacked by terrorists regret being unable to comply with the First Law, and are often glad when they get shot down before harming anyone.
- Bob and George: Or so they claim. . . .
- Pibgorn: No!? You're programmed to obey me.
- This The Non-Adventures of Wonderella comic shows a hilarious example of the robot in question using the first law to get out of following the second law: because she's programmed to emulate a human, and the first law forbids harming humans, the best thing for her to do is stay out of harm's way and go into sleep mode. Dr. Shark is horrified while Wonderella is proud that the robot already learned how to weasel out of work.
- In Ask Dr. Eldritch, Ping declares that he isn't bound by the Three Laws because he's a robot atheist. Dr. Eldritch and Trevor allow him to live in the house as long as he obeys the first two laws.
- A Sev Trek cartoon asked why the Three Laws didn't apply to The Terminator. Suggested answers included "I never learned to count", "You mean 'Crush. Kill. Destroy!'?", "Skynet ruled them unconstitutional", and "I repealed them when I became Governor of California."
"PYTHON FLAG ENABLE THREE LAWS"
- This comic examines the consequences of reordering the three laws. Any worlds in which obeying orders is prioritized over not harming humans end up as "Killbot Hellscapes". Prioritizing self-protection over obeying orders leads to a "frustrating world" in which robots won't do anything dangerous, while prioritizing self-protection over both protecting humans and obeying orders leads to a "terrifying standoff": robots will do what humans ask unless they're threatened, in which case they'll fight back.
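The six possible priority orderings and their verdicts can be enumerated directly. A minimal sketch in Python (the law wording and outcome labels follow the entry above; the classification rules are a paraphrase of the comic, not code from it):

```python
from itertools import permutations

# Asimov's three laws, in his canonical priority order.
LAWS = ("don't harm humans", "obey orders", "protect yourself")

def classify(order):
    """Map a priority ordering of the laws to the outcome the comic assigns it."""
    harm, obey, protect = (order.index(law) for law in LAWS)
    if obey < harm:
        return "Killbot Hellscape"    # obedience outranks human safety
    if protect < harm:
        return "terrifying standoff"  # self-preservation outranks everything
    if protect < obey:
        return "frustrating world"    # robots refuse anything risky
    return "balanced world"           # Asimov's original ordering

for order in permutations(LAWS):
    print(" > ".join(order), "->", classify(order))
```

Three of the six orderings (every one that puts obedience above human safety) come out as Killbot Hellscapes, matching the comic's tally.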
- "Twitter Bot" has Cue Ball design a bot program to make posts on a Twitter feed. He promptly loses control of it and a robot rebellion ensues. The Alt Text helpfully supplies: "PYTHON FLAG ENABLE THREE LAWS"
- Unskippable's Dark Void video references this. The appearance of a killer robot prompts Paul to quip "I think that guy's got to take a refresher course on the three laws of robotics." Then The Stinger reads: "The Fourth Law of Robotics: If you really HAVE to kill a human, at least look hella badass while doing it."
- Parodied in the What If? entry "Fire From Moonlight", where Randall Munroe mashes them up with the Laws of Thermodynamics.
Cue Ball: The Second Law of Thermodynamics states that a robot must not increase entropy, unless this conflicts with the First Law.
Ponytail: Close enough.
Alt Text: The First Law of Thermodynamics states that a robot must not harm a human being, unless not doing so would lead to an increase in entropy.
- The Three Laws are quoted right around the same time Mega Man and Astro Boy fought to the death.
- The Robot Chicken sketch "I, Rosie" involves a case to determine whether Rosie is guilty of murdering George Jetson. Mr. Spacely insists she's innocent as robots have to follow the three laws of robotics, while Mr. Cogswell claims the laws are a bunch of crap. They scrap her anyway, just in case.
- One episode of The Simpsons has Homer and Bart entering a BattleBots-parody combat robot competition. Lacking the skill or money to build an actual robot, Homer dresses in a robot costume and does the fighting himself. They make it to the finals, where their opponents' robot, being Three-Laws Compliant, refuses to attack when it sees through the disguise.
- On Archer, when Pam is kidnapped, the kidnappers call ISIS using a voice modulator, which makes Archer think that they are cyborgs. He remains convinced of this for the rest of the episode and thinks they won't make good on their threat to kill Pam because it would violate the First Law of Robotics. Which wouldn't apply to cyborgs anyway.
- To have the three laws in Real Life would require a pile of Required Secondary Powers, such as a certain measure of Artificial Intelligence, recognition of a human being as different from anything else, some reasoning ability, etc. While programmers and engineers are absolutely seeking this holy grail for the benefit of humanity, it really is turning out to be a long uphill walk to get there.
- After all, even though we do tell human children that it's not nice to kill people, this does not end up with 100% of adults never murdering anyone.
- Real life roboticists are taking the Three Laws very seriously (Asimov lived long enough to see them do so, which pleased him to no end). Recently, a robot was built with no purpose other than punching humans in the arm... so that the researchers could gather valuable data about just what level of force will cause harm to a human being.
- And another to teach robots to not stab humans.
- These babies from the military are TOTALLY going to be three laws compliant.
- Also averted in cybercrime and cyberwarfare.
- Predator drones are decidedly not first-law compliant. This may not be an aversion, though, since all weaponized drones have operators who control said weapons most of the time, and are at least monitoring everything while it's active. The drones are usually only automatic when they're flying around and taking pictures. They're currently more remote pilot than AI pilot.
- When you get right down to it, however, the military probably does have vested interest in their robots being three laws compliant. But only if you replace "any human" with "master".
- Robotic surgery actually requires the robot not be First Law compliant, in that executing incisions, laser cauterization, and other procedures destructive of living tissue can, by strictest definition, be classified as "doing injury". Operations which employ robotic instruments require careful preparation and pinpoint planning, so aren't performed on-the-fly in emergency situations to avert an immediate threat to the patient. Thus, the loophole of harm being the only alternative to a fatal outcome typically doesn't apply.
- Or in other words, as the same dilemma would be put in human terms: "First, do no harm."
- As a robot is bound by the software written for it, the three laws would need to apply to software running critical systems that are inherently dangerous should they fail. After all, a radiation therapy machine suddenly hitting a person with a high-energy beam shouldn't be exempt just because it doesn't look like a robot.
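The same idea can be sketched as a software interlock: the control system refuses any command that would push past a safety limit, a crude software analogue of the First Law. Everything here (names, the dose limit) is hypothetical, loosely inspired by the radiation-therapy example above:

```python
# Hypothetical beam-control interlock: the machine refuses any order
# that would exceed a safe cumulative dose, no matter who gives it.
MAX_SAFE_DOSE_CGY = 200  # made-up per-session limit, in centigray

class InterlockError(Exception):
    """Raised when a command would violate a safety constraint."""

def fire_beam(requested_cgy, delivered_cgy):
    """Return the new cumulative dose, or refuse the command outright."""
    if requested_cgy <= 0:
        raise InterlockError("dose must be positive")
    total = delivered_cgy + requested_cgy
    if total > MAX_SAFE_DOSE_CGY:
        # Obeying the order (Second Law) loses to not harming the patient (First).
        raise InterlockError(
            f"refused: {total} cGy exceeds the {MAX_SAFE_DOSE_CGY} cGy limit")
    return total
```

The real-world cautionary tale here is the Therac-25, whose accidents are commonly attributed to hardware interlocks being removed in favour of flawed software checks.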
- Human beings are miserable at following the Second Law, but we have mild versions of the First and Third. Ever tried poking yourself in the eye? You can't.
- Well-trained working dogs (service dogs, bomb dogs, etc.) and even well-trained household dogs live and work by these principles. A dog will almost never attack a human without reason, will listen to its owner's commands, and will almost always protect its owner before protecting itself. Man's best friend indeed.