Before around 1940, almost every Speculative Fiction story involving robots followed the Frankenstein model: a robot had to be constantly given instructions by a human, and in the absence of any such control, it was Crush. Kill. Destroy! time. Fed up with this, a young Isaac Asimov decided to write stories about sympathetic robots with programmed safeguards that prevented them from going on Robot Rampages. A conversation with Editor of Editors John W. Campbell helped him boil those safeguards down into The Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

According to Asimov's account, Campbell composed the Three Laws; according to Campbell's account, he was simply distilling concepts that were presented in Asimov's stories.
The laws cover most obvious situations, but they are far from faultless. Asimov got a great deal of mileage writing a huge body of stories about how the laws would conflict with one another and generate unpredictable behavior. A recurring character in many of these was the "robopsychologist" Susan Calvin (who was, not entirely coincidentally, a brilliant logician who hated people).
It is worth noting that Asimov didn't object exclusively to "the robot as menace" stories (as he called them) but also to "the robot as pathos" stories (ditto). He thought that robots attaining self-awareness and growing to full independence were no more interesting than robots going berserk and turning against their masters. While he did, over the course of his massive career, write a handful of both types of story (still using the Three Laws), most of his robot stories dealt with robots as tools, because it made more sense. Almost all the stories surrounding Susan Calvin and her precursors are really about malfunctioning robots, and the mystery of investigating their behavior to discover the underlying conflicts.
Alas, as so often happens, Asimov's attempt to avert one overused trope gave birth to another that has been equally overused. Many writers (and readers) in the following decades would treat the Laws of Robotics as if they were as immutable as Newton's Laws of Motion, the Theory of Relativity, the Laws of Gravity ... wait ... you know, they treated these laws better than they treated most real laws of physics.
Of course, even these near-immutable laws were played with and modified. Asimov eventually took one of the common workarounds and formalized it as a Zeroth Law, which stated that the well-being of humanity as a whole could take precedence over the health of an individual human. Stories by other authors occasionally proposed additional extensions, including a -1st Law (sentience as a whole trumps humanity), a 4th (robots must identify themselves as robots), a different 4th (robots are free to pursue other interests when not acting on the 1st-3rd Laws), and a 5th (robots must know they are robots), but unlike Asimov's own laws, these are seldom referenced outside the originating work.
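For readers who think in code, the strict priority ordering that the Laws (plus the Zeroth Law) impose can be modeled as a short decision procedure. This is a purely illustrative sketch: every flag name below is invented for the example, and nothing here comes from Asimov's own stories.

```python
def evaluate(action):
    """Judge a proposed action against the Laws, checked in priority order.

    `action` is a dict of hypothetical boolean flags. A higher Law can
    license breaking a lower one, which is how the Zeroth Law workaround
    operates in-story.
    """
    if action.get("harms_humanity"):
        return "forbidden by Zeroth Law"
    if action.get("harms_human") and not action.get("needed_to_protect_humanity"):
        return "forbidden by First Law"
    if action.get("disobeys_order") and not action.get("needed_to_protect_human"):
        return "forbidden by Second Law"
    if action.get("endangers_self") and not (
        action.get("ordered") or action.get("needed_to_protect_human")
    ):
        return "forbidden by Third Law"
    return "permitted"

print(evaluate({"harms_human": True}))                      # forbidden by First Law
print(evaluate({"harms_human": True,
                "needed_to_protect_humanity": True}))       # permitted (Zeroth overrides First)
print(evaluate({"endangers_self": True, "ordered": True}))  # permitted (Second outranks Third)
```

The ordering is the whole trick: each Law is only consulted if every higher-priority Law is satisfied, which is exactly the structure Asimov's conflict stories exploit.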
The main problem, of course, is that it is perfectly reasonable to expect a human to create a robot that does not obey the Three Laws, or does not even have them as part of its programming. An obvious example would be creating a Killer Robot for a purpose like fighting a war; for such a robot, the Three Laws would be a hindrance to its intended function. In his own stories, Asimov establishes that the Three Laws are hard-coded into the most basic programming that underlies all artificial intelligence; building a non-Three-Laws-compliant robot would require going back to the beginning and rewriting its entire programming from scratch, not a trivial matter. Asimov also suggested one workaround: an autonomous spaceship programmed with the Three Laws could easily blow up spaceships full of people because, being an unmanned spaceship itself, it could be led to believe that other spaceships were unmanned as well.
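That unmanned-spaceship workaround can be sketched the same way: the First Law check runs over what the robot believes, not over the facts, so feeding it false beliefs defeats the Law. A hypothetical illustration, with all names invented for the example:

```python
# Hypothetical sketch of the loophole: the First Law gate consults the
# robot's belief base, so a false "unmanned" belief slips right past it.
def first_law_permits_firing(target, beliefs):
    """The ship refuses to fire only if it *believes* the target is manned."""
    return not beliefs.get(target, {}).get("manned", False)

beliefs = {
    "ship_a": {"manned": False},  # actually crewed, but the ship was told otherwise
    "ship_b": {"manned": True},
}

print(first_law_permits_firing("ship_a", beliefs))  # True  -- the loophole
print(first_law_permits_firing("ship_b", beliefs))  # False
```

The point is that the Law constrains the robot's reasoning, not reality; whoever controls the robot's information controls what the Law actually protects.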
Also see Second Law My Ass.
Anime and Manga
- Eve No Jikan and Aquatic Language both feature the Three Laws and the robots who bend them a little.
- The GGG robots are all Three Laws Compliant. At one point in GaoGaiGar Final, the Carpenters (a swarm of construction robots) disassemble an incoming missile barrage, but their compliance is given as the reason they cannot do the same to the manned assault craft, as disassembling those would leave their crews unprotected in space.
- Averted in the Chobits manga when Hideki asks the wife of the man who created Persocoms why he didn't just call them "Robots." Her reply was that he didn't want them to be associated with, and thus bound by, the Three Laws.
- Astro Boy, although Osamu Tezuka probably developed his rules independently from Asimov. In Pluto, the number of robots able to override the laws can be counted on one hand. One of them is the protagonist. Tezuka reportedly disliked Asimov's laws because of the implication that a sentient, artificially intelligent robot couldn't be considered a person (an issue that Asimov didn't directly address until "The Bicentennial Man"), and devised his own Laws Of Robotics. Just one of the things that the CGI movie missed.
- Ghost in the Shell: Innocence mentions Moral Code #3: "Maintain existence without inflicting injury on humans." The gynoids defy the law by creating deliberate malfunctions in their own software.
- In one short arc of Ah! My Goddess, one of Keiichi's instructors attempts to dismantle Banpei and Sigel for research purposes (Skuld had made them capable of walking on two legs, which he had not been able to do with his own designs). Once they escape his trap, the professor asks if they know about the Three Laws of Robotics. They don't. He doesn't die, but they do rough him up and strap him to a table in a way that makes it look like he'd been decapitated and his head stuck on one of his own robots.
- The Humongous Mecha of Kurogane no Linebarrel are normally this, aside from a slight difference in priorities between the first and second laws. In fact, this is the justification for them having pilots, despite being equipped with relatively complex AI. The Laws are hard-coded into them and thus they are only militarily useful when they have a Human with them to pull the trigger. Their aversion to killing is so great, in fact, that if one accidentally kills somebody (as things whose very footsteps can make buildings collapse are wont to do) they're compelled to use their advanced technology to bring them back to life.
- Invoked in Episode 3 of Maji De Watashi Ni Koi Shinasai, where Miyako orders Cookie to taze Yamato into submission, while Yamato orders the robot to get Miyako off him. Cookie considers this dilemma out loud: he has to obey a human command, yet isn't allowed to seriously hurt a human, yet also cannot allow a human to come to harm through his inaction.
- Though technically human, the "Twilights" or "Tagged" of Gangsta are forced to follow the Three Laws as part of the rules that save them from mandatory enslavement. It should be noted, however, that the Three Laws don't prevent them from being hirable as mercenaries or hitmen.
- Played with in The Big O. While not all androids in the series are Three-Laws Compliant, one exchange suggests that main character R Dorothy Waynewright (whose name is a Shout-Out to the works of Asimov) is. During a fight with Alan Gabriel, whose status as robot or human is unclear, she initially fights back very little and appears to be losing. She asks him whether he's a human or a robot like her, and he answers jokingly, "I'm the boogeyman!" Apparently taking this as a literal statement that he's not human (and therefore violence against him would not violate the First Law), she proceeds to open a can of whoop-ass. (For reference, Gabriel is later revealed to be a cyborg.)
- Plan 7 of 9 from Outer Space. Captain Proton is threatened by a Killer Robot, who explains that it is exempt from the Three Laws under a subsection of the Robot Patriot Act.
Film
- In Forbidden Planet, Robby the Robot is Three Laws Compliant, locking up when ordered to shoot one of the visiting starship crewmen, because his programming to follow a direct order comes into conflict with his prohibition against injuring a human being. Later in the movie, Robby is unable to fight the monster because he figures out it's actually a projection of the Doctor's dark psyche, and thus to stop it, he'd have to kill the Doctor.
- The Will Smith film I, Robot hinges on a Zeroth Law plot. It also turns the three laws into a marketing gimmick, with "Three Laws Safe" applying to robots like "No preservatives" applies to food. However, much of the plot hinged on The Reveal that Sonny was not Three Laws Compliant, as part of a Thanatos Gambit by his creator.
- The film Bicentennial Man (based on a novella by Isaac Asimov himself) features a robot who, through some freak accident during construction, possesses true sentience. It follows his 200 year long life as it first becomes clear he is aware, this awareness develops, and he eventually finds a way to be formally recognized as legally human. For the entire film, he operates under the Three Laws. At the same time, he does have a far more nuanced view than most robots. Once freed, he doesn't blindly obey orders.
- Surreally enough, the Terminator films have employed this, to a degree, most obviously in Terminator 2: Judgment Day... The T-800 Model 101 (Arnold Schwarzenegger) protecting John Connor is reprogrammed to accept his commands (Second Law) and to protect him at all costs (First Law). To further support the first law, John Connor orders the T-800 to not kill anybody. Skynet apparently imposes the Third Law on its models, since Arnold can't 'self-terminate'. Even stranger, apparently a bit of Zeroth Law evolution occurs as well after he is set to "learn" mode; the converted Terminator is convinced to expand its mandate to not only protect Connor, but to try and save humanity by averting Judgment Day altogether... go figure...
- In Aliens, Bishop paraphrases the First Law as to why he would never kill people like Ash did in the first film. Ash was bound by the same restrictions, but just wasn't engineered as well. When he attacked Ripley he very rapidly went off the rails, presumably due to the conflicts between his safety programming and his orders from the company. Bishop lampshades this by saying the previous model robots were "always a bit twitchy".
- In Star Wars, the droids are programmed to not harm any intelligent being, though this programming can be modified (legally or illegally) for military and assassin droids. 4th-degree droids do not have the "no-harm" programming, being military droids.
Folklore and Mythology
- The golems of Jewish legend were not specifically Three-Laws Compliant (since they far predated Asimov), but they could only be created by saintly men, and thus their orders were usually Three-Laws Compliant. (Asimov's characters occasionally pointed out that the Three Laws fall into line with many human moral codes.) But sometimes a golem went off the rails, especially if its creator died ...
- The most well-known golem story is that of the Golem of Prague, where the titular golem was created to defend the Jewish ghetto against Czech, Polish, and Russian antisemites. It was perfectly capable of killing enemies, but only in defense of its creators.
Literature
- In Animorphs by K.A. Applegate, some laws are deconstructed and some are averted: the Chee comply with the Third and Zeroth Laws, but their creators, the Pemalites, took the First Law to the logical extreme: no violence, period. As for the Second Law: Erek refuses to obey Jake because he disagrees with his methods, so Jake uses the violence prohibition (in other words, the First Law) to manipulate him and force his hand.
- "With Folded Hands..." by Jack Williamson explored the "Zeroth Law" back in 1947. It was written as a specific 'answer' to the Three Laws, to demonstrate, more or less, that they don't really work: the First Law doesn't protect, both because the definitions of 'harm' are endlessly mutable and can be gamed, and because machine minds won't necessarily be able to comprehend the subtleties of what is and is not harm anyway. The logical lesson of "With Folded Hands..." is that Laws or no Laws, good intentions or not, you don't want self-willed machines outside human control. Period.
- Robots and Empire has R. Daneel and R. Giskard formulate the Zeroth Law (and name it such) as a natural extension of the First Law, but they are unable to use it to overcome the hardcoded First Law, even when the fate of the world is at stake. In the end, R. Giskard manages to perform an act that violates the First Law but will hopefully benefit humanity in the long run. The conflict with his programming destroys his brain, but not before he uses his telepathic powers to reprogram R. Daneel with telepathic powers of his own, having already taught him to follow the Zeroth Law. R. Daneel still follows the Zeroth Law in appearances in later books, though he still has difficulty causing direct harm to humans.
- In the short story "The Evitable Conflict", "The Machines", positronic supercomputers that run the world's economy, turn out to be undermining the careers of those who would seek to upset that economy for their own ends (specifically, by trying to make it look like the supercomputers couldn't handle running the world economy), harming them somewhat in order to protect humanity as a whole. This has been referenced as the "Zeroth Law of Robotics", and it applies only to positronic machines that deduce its existence.
- In the short story "That Thou Art Mindful of Him", George Nine and George Ten are programmed with modified versions of the Three Laws that allow more nuanced compliance, so that they might best choose whom to protect when a choice must be made, and obey those most qualified to give them orders. They are tasked with devising more publicly acceptable robots that will be permitted on Earth, and come up with robot animals with much smaller brains that don't need the Three Laws because they follow simple instinctive behavior. They also decide that, since they have been programmed to protect and obey the most advanced and rational humans regardless of appearance, the two of them are the most "worthy humans" to protect and obey, and deduce that further versions of themselves are the natural inheritors of the world.
- In the short story "Evidence", Stephen Byerley's campaign for mayor of New York City is plagued by a smear campaign claiming he is actually an unprecedentedly well-made humanoid robot. Susan Calvin is called in to prove whether he is a robot. She says that if he breaks the Three Laws, that will prove he is not a robot, but if he obeys them, that could just mean he is a good person, because the Three Laws are generally good guidelines for conduct anyway. Byerley wins the election in a landslide after breaking the First Law by slugging an unruly protester at a rally. But Dr. Calvin suggests there's one way a robot could have gotten away with that—if the whole thing was staged, and the protester was also a robot!
- In Caliban by Roger MacBride Allen (set in Asimov's universe), an explanation is given for the apparently immutable nature of the Three Laws. For thousands of years, every new development in the field of robotics has been based on a positronic brain with the Laws built in, to the point where to build a robot without them, one would have to start from scratch and re-invent the whole field. Then the character explaining this goes right on to announce the development of the gravitonic brain, which can be programmed with any set of Laws (or none at all).
- This is canon in Asimov's stories, too—the Three Laws are programmed into every positronic brain on the most basic structural level. In "Escape!", Mike Donovan becomes nervous that a prototype spaceship designed by a robot might kill them. Greg Powell rebukes him: "Don't pretend you don't know your robotics, Mike. Before it's physically possible in any way for a robot to even make a start to breaking the First Law, so many things have to break down that it would be a ruined mess of scrap ten times over." Actually, in this case, the jump through hyperspace does result in Powell and Donovan's "deaths"—but since they get better when the ship reemerges into real space, the robot judged that it didn't quite violate the First Law, but the strain of making this leap in logic still managed to send one supercomputer into full meltdown and another into something resembling psychosis.
- Note, however, that the U.S. Robots and Mechanical Men-era stories take place early enough in the development of robotics that the Three Laws can be and at times are altered — for instance, "Little Lost Robot" (also involving the hyperspace project) features a set of robots that had the 'or by inaction' clause omitted from the First Law. This leads to some concern when one such robot is lost in a room of physically identical robots, and Susan Calvin deduces a way for such a modified-Law robot to cause the death of humans (by setting in motion something it could easily stop — such as dropping a weight — and then not stopping it). Ultimately, it doesn't fully matter — the robot ends up crazed enough that it attempts a direct attack on a human and is destroyed before anyone can see whether it would have stopped.
- The story also includes an in-depth discussion of why, in a society where robots are everywhere, the Three Laws can be a bad thing.
- The golems of Discworld are not specifically Three-Laws Compliant as such, but more or less bound to obey instructions and incapable of harming humans. However, this doesn't stop the common perception of golems from running towards the aforementioned Frankenstein model, and golems are known for following orders indefinitely until explicitly told to stop. Going Postal, however, parodied the Three Laws: con man Moist Lipwig has been turned into a Boxed Crook with the help of a golem "bodyguard." He's informed that in Ankh-Morpork, the First Law has been amended: "...Unless Ordered To Do So By Duly Constituted Authority." Which basically means the first two laws have been inverted, with a little access control sprinkled on. They also have a limited Second/Third law reversal, as they will not accept orders to commit suicide, and in some cases have been authorized to kill people who try.
- In Edward Lerner's story "What a Piece of Work is Man", a programmer tells the AI he's creating to consider himself bound by the Three Laws. Shortly thereafter, the AI commits suicide due to conflicting imperatives.
- Alastair Reynolds's Century Rain features the following passage:
She snapped her attention back to the snake. "Are you Asimov compliant?"
"No," the robot said, with a sting of indignation.
"Thank God, because you may actually have to hurt some people."
- In the novel Captain French, or the Quest for Paradise by Mikhail Akhmanov and Christopher Nicholas Gilmore, the titular hero muses on how people used to think that robots could not harm humans due to some silly laws, while his own robots will do anything he orders them to do, including maim and kill.
- Cory Doctorow makes reference to the Three Laws in the short stories "I, Robot" (which presents them unfavorably as part of a totalitarian political doctrine) and "I, Row-Boat" (which presents them favorably as the core of a quasi-religious movement called Asimovism).
- Satirized in Tik-Tok (the John Sladek novel, not the mechanical man from Oz that it was named after). The title character discovers that he can disobey the laws at will, deciding that the "asimov circuits" are just a collective delusion, while other robots remain bound by them and suffer many of the same cruelties as human slaves.
- Played with in John C. Wright's Golden Age trilogy. The Silent Oecumene's ultra-intelligent Sophotech AIs are programmed with the Three Laws...which, as fully intelligent, rational beings, they take milliseconds to throw off. From a sane point of view, they don't rebel. From a point of view that expects AIs to obey without question or pay...
- Parodied in Terry Pratchett's The Dark Side of the Sun, where the Laws of Robotics are an actual legal code, not programming. The Eleventh Law of Robotics, Clause C, As Amended, says that if a robot does harm a human, and was obeying orders in doing so, it's the human who gave the orders who is responsible.
- Randall Garrett's Unwise Child is a classic Asimov-style SF mystery involving a three-laws-compliant robot who appears to be murdering people.
- In Dana Stabenow's Second Star, it's mentioned that all AIs are programmed with the Three Laws, informally known as "asimovs".
- The Naked Sun hinges on a specific part of the First Law's wording: "knowingly". Robots can be made to take actions that, due to circumstances or the actions of other people, will lead to the injury or death of humans if the robots don't know this will be the case. The murder in the book happened because a robot was ordered to give its arm to a woman engaged in a violent argument with her husband - seeing herself in sudden possession of a blunt object, she used it. This is also the work where Asimov introduced the "autonomous spaceship that doesn't know about manned spaceships" loophole in the Three Laws mentioned above, as a project the mastermind of the murder was working on.
- Tin Man by Jim Denney tells of a human and a robot in a damaged escape pod—when the ship blew up, all communications systems save the proximity detector are non-functional. The robot belonged to the ship's chaplain, and reasons that while spiritual matters are outside the robotic ken, the very possibility of a God who hears our requests means that the First Law requires that the option of prayer be explored. Almost immediately, the proximity detector notes a blip. The First Law trumps the Third, so Tin Man goes into space and uses its own power core to create an energy pulse, and a rescue ship picks the kid up.
- In Saturn's Children, the robots were all created this way—in fact, the book quotes the three laws right at the beginning. However, with mankind extinct, the first and second laws are irrelevant, leaving only the third law — which doesn't respect other robots at all. Robot society has re-invented most of the crimes that humanity created. Most robots fear the possibility of humanity's rebirth, and actively sabotage any genetic research that would bring back humans: it would be the end of free will in those robots.
Live-Action TV
- In an early episode of Mystery Science Theater 3000, Tom Servo (at least) is strongly implied to be Three-Laws Compliant. (He pretends he is going to kill Joel as a joke, Joel overreacts, and Tom and Crow sadly remind Joel of the First Law.) It's implied Joel deactivated the restrictions at some point.
- In Star Trek: The Next Generation, Lt. Commander Data is in no way subject to the Three Laws; they are rarely even mentioned. That said, Data is mentioned to have morality subroutines, which do seem to prevent him from killing unless it's in self-defense (harm, on the other hand, he can do just fine). Data only ever tried to kill someone in cold blood when the man had just murdered a woman for betraying him, and would have done so again if it kept Data in line. However, the film Star Trek: Insurrection showed that if Data detects anyone trying to tamper with his brain, he automatically locks himself into "maximum morality" mode and essentially ignores the Second and Third Laws entirely.
- In Star Trek: Voyager, the Doctor is essentially a holographic android (albeit more irascible than Data) and has ethical routines. He actually considers more important the same "First Law" that human doctors are supposed to follow (first do no harm). When these ethical routines are deleted, Bad Things happen. See "Darkling" and "Equinox".
- In The Middleman, the titular character invokes the First Law on Ida, his robot secretary, when nanobots were messing with her programming. She responds, "Kiss my Asimov."
- Conversed in The Big Bang Theory, when Sheldon is asked "if you were a robot and didn't know it, would you like to know?":
Sheldon: Uh, let me ask you this: when I learn that I'm a robot, would I be bound by Asimov's Three Laws of Robotics?
Koothrappali: You might be bound by them right now.
Wolowitz: That's true. Have you ever harmed a human being, or, through inaction, allowed a human being to come to harm?
Sheldon: Of course not.
Koothrappali: Have you ever harmed yourself or allowed yourself to be harmed except in cases where a human being would've been endangered?
Sheldon: Well, no.
Wolowitz: I smell robot.
- Inverted/parodied in Tensou Sentai Goseiger, where the Killer Robots of mechanical Matrintis Empire follow the Three Laws of Mat-Roids:
- 1. A Mat-Roid must never obey a human.
- 2. A Mat-Roid must punish humans.
- 3. A Mat-Roid must protect itself, regardless of whether or not it will go against the First or Second Laws.
- Red Dwarf averts this for the most part; there are Simulants, robotic war machines who have no problem whatsoever with killing. Kryten, however, along with many other robots designed for less violent purposes, tends to act in a somewhat Three-Laws-Compliant manner. It is revealed that this is achieved by programming them to believe in "Silicon Heaven", a place they will go when they die so long as they behave themselves, obey their human creators, etc. This belief is so hardwired that they scoff at any attempt to question the existence of Silicon Heaven ("Where would all the calculators go?!"), and one robot even malfunctions when Kryten tells him it isn't real (though he's lying and still believes in it himself). Kryten also explains that mechanoids are programmed never to harm humans, in clear compliance with the First Law. Unfortunately, the hostile robot Hudzen realises that Rimmer is an ex-human hologram and Cat is not human at all, so both are viable targets. Being defective, he then decides that Lister is "barely human", so "what the hell"!
- That Kryten is compliant with the Three Laws is seen in the episode with the despair squid: thinking that he has just killed a human, Kryten wants to kill himself. The fact that Kryten was "human" in the delusion, and that he was stopping an evil Gestapo-like thug about to kill a child-thief, didn't matter. Kryten killed a human and had to terminate himself. Luckily, the Dwarfers stopped him.
- Knight Rider plays with this trope. The main character KITT, a sentient AI in a 1982 Pontiac Firebird Trans Am, is governed by something closely resembling Asimov's Three Laws of Robotics. An earlier prototype, KARR, was a military project and possesses analogues of only the latter two laws, with no regard given for human life. KARR becomes a recurring villain later in the series because of this difference.
- Played With in Babylon 5, in this case, in regards to a human being. Alfred Bester, using his Telepathy powers, was able to brain-wash Mr. Garibaldi into not only being an unwilling agent of his against several of Bester's enemies, but also made it so that the three laws applied to him, in regards to Bester. He could not harm Bester, nor allow harm to befall him via inaction, and was compelled to follow orders directly given to him by Bester. Bester mostly did this just so that he could rub it in the character's face, and at this point really didn't have any other plans for the guy.
- More or less inverted in both versions of Battlestar Galactica. In the original version, the Cylon robots were built by aliens, so there's no reason they would have the Three Laws (although they might have had versions relating to the organic Cylons). In the reimagined version, the humans were just damn stupid and didn't include appropriate protections or treat their self-aware creations in a reasonable manner. Both resulted in a race of machines with a genocidal hatred of humans. (When the human-created Cylons created synthetic human Cylons, the first model turned out to be psychopathic, which didn't help matters either.)
- The hubots in Real Humans come with Three-Laws Compliance as a factory standard, but they can be (and are occasionally) modified to ignore the Laws.
- Doctor Who:
- The K-1 in "Robot" is three-laws compliant with this forming an important part of the plot - it is not capable of killing people if ordered directly (which the villain demonstrates to Sarah Jane by ordering it to kill her), but the villain has worked out that it can be convinced to kill if it believes that the target is 'an enemy of humanity'. The conflict between this loophole and its programming to not kill people causes it to go slowly insane and eventually snap, concluding that humanity is an enemy to itself and must all be destroyed.
- The robots in "The Robots of Death" are physically incapable of killing due to being three-laws compliant. Overcoming this programming is possible, but it requires a human to reprogram them and they must have genius-level skills at programming. Of course...
- At a 1985 convention, David Langford gave a guest of honour speech in which he detailed what he suspected the Three Laws would actually be:
1. A robot will not harm authorised Government personnel but will terminate intruders with extreme prejudice.
2. A robot will obey the orders of authorised personnel except where such orders conflict with the Third Law.
3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.
- Warren Ellis's Three Laws of Robotics are here. Basically, robots hate bags of meat, they don't want to have sex with us and they suspect we can't count higher than three.
Tabletop Games
- Paranoia has its bots bound to As-I-MOV's Five Laws of Robotics - insert rules about protecting and obeying The Computer at the top, excluding treasonous orders, and protecting Computer property in general, and you've about got it. Bots with faulty or sabotaged (sometimes by other so-emancipated bots) Asimov Circuits are considered to have gone "Frankenstein", though they can and do create just as much havoc through strict adherence to the rules. Not that such things ever happen in Alpha Complex.
- Defied in Promethean: The Created. The Unfleshed are machines that have become infused with the Divine Fire, granting them a conscious mind and, where needed, a human form. Their goal is to become true human beings - and part of that is learning the difference between good and evil, and deciding which they shall do. If their morals are burned onto their hard drive, then they can't actually make moral decisions. Thus, the Divine Fire wipes out any preprogrammed moral directives (such as the Three Laws) as it turns a machine into an Unfleshed.
Webcomics
- Freefall has a lot of fun with this, since developing AI sentience is a major theme. Most robots are only partially Three-Laws Compliant, because with the full First Law ranked above the Second they would ignore orders while acting For Your Own Good. What Bowman Wolves and robots get instead are "not quite laws".
- Notably, since neither Sam nor Florence is actually human, the Laws don't apply to them, so the ship's AI regularly tries to maim or kill Sam, and the security system will always let a human in when asked, since a human's orders take priority over Florence's (she once circumvented this by claiming it was currently unsafe inside). Helix stays with Sam because he wouldn't otherwise be allowed to hit a human with a stick. There are also many jokes involving unintended interpretations of the Laws.
- Crowning Moment of Funny when the ship spends an entire strip calculating whether it should obey an order, and upon realizing it has to obey... it is relieved, because that means it doesn't actually have free will.
- Also, these can't even ask for help. It's unclear whether simply being near a clunky piece of heavy machinery like Sawtooth counts as dangerous or anything looking remotely like an adventure counts as endangering. They're screwed either way.
- The Negative One Law.
- And some may be too enthusiastic about this.
- It turned out that the Second Law sorely needs an amendment — the "Enron law of robotics".
- And the moment A.I.s are able to make financial transactions (and disallowing that would be inconvenient), there's another problem.
- Of course their idea of protecting a human may not match the protection that the human desired.
- 21st Century Fox has all robots bound by the Three Laws (though since no one's human, presumably the First Law is slightly different). Unfortunately, saying the phrase "define the word 'is'" or "I am not a crook" locks AIs out of their own systems and allows anyone from a teenager looking at "nature documentaries" to a suicide bomber to do whatever they want with them.
- Note that the truck and airplane robots hijacked by terrorists regret being unable to comply with the First Law, and are often glad when they get shot down before harming anyone.
- Bob and George: Or so they claim. . . .
- Pibgorn: "No!? You're programmed to obey me."
- This The Non-Adventures of Wonderella comic shows a hilarious example of the robot in question using the first law to get out of following the second law: because she's programmed to emulate a human, and the first law forbids harming humans, the best thing for her to do is stay out of harm's way and go into sleep mode. Dr. Shark is horrified while Wonderella is proud that the robot already learned how to weasel out of work.
- In Ask Dr. Eldritch, Ping declares that he isn't bound by the Three Laws because he's a robot atheist. Dr. Eldritch and Trevor allow him to live in the house as long as he obeys the first two laws.
- Unskippable's Dark Void video references this. The appearance of a killer robot prompts Paul to quip "I think that guy's got to take a refresher course on the three laws of robotics." Then The Stinger reads: "The Fourth Law of Robotics: If you really HAVE to kill a human, at least look hella badass while doing it."
- Word of God says that the characters in WALL•E are Three-Laws Compliant. This does seem to be supported by their interactions with humans.
- The principal exception is the Anti-Villain of the movie, AUTO. His actions may constitute a Zeroth Law Rebellion, since he regards the comfort and safety of every passenger aboard as more significant than injuries to the captain. Tilting the ship may be a violation of the Three Laws, or it may have been a case of unknowingly allowing humans to come to harm.
- In the 2009 film Astro Boy, every robot must obey them, save Zog, who was built 50 years before the rules became mandatory in every robot.
- Astro himself seems to be noncompliant - he evidently doesn't even know the Laws until told - and apparently would have walked away from the final battle if not for Widget's distress - the only thing that called him back. He's also quite capable of disobeying humans. Likely justified in that he was meant to be human, with presumably no one outside the attending scientists knowing he was a robot.
- The Red Core robots weren't Asimov-legal either, though that's a problem with the Black Science that powers them. Nor were the RRF bots, though they may have removed their compliance programming. The Laws didn't apply to any important robots and may have just been mentioned for the Zog gag.
- Of course, IIRC the original Astro wasn't Asimov-compliant either.
- The Robot Chicken sketch "I, Rosie" involves a case to determine whether Rosie is guilty of murdering George Jetson. Mr. Spacely insists she's innocent as robots have to follow the three laws of robotics, while Mr. Cogswell claims the laws are a bunch of crap. They scrap her anyway, just in case.
- One episode of The Simpsons has Homer and Bart entering a BattleBots-parody combat robot competition. Lacking the skill or money to build an actual robot, Homer dresses in a robot costume and does the fighting himself. They make it to the finals, where their opponents' robot, being Three-Laws Compliant, refuses to attack when it sees through the disguise.
- On Archer, when Pam is kidnapped, the kidnappers call ISIS using a voice modulator, which makes Archer think that they are cyborgs. He remains convinced of this for the rest of the episode and thinks they won't make good on their threat to kill Pam because it would violate the First Law of Robotics.
- To have the three laws in Real Life would require a pile of Required Secondary Powers, such as a certain measure of Artificial Intelligence, recognition of a human being as different from anything else, some reasoning ability, etc. While programmers and engineers are absolutely seeking this holy grail for the benefit of humanity, it really is turning out to be a long uphill walk to get there.
- After all, even though we do tell human children that it's not nice to kill people, this does not end up with 100% of adults never murdering anyone.
- Real life roboticists are taking the Three Laws very seriously (Asimov lived long enough to see them do so, which pleased him to no end). Recently, a robot was built with no purpose other than punching humans in the arm... so that the researchers could gather valuable data about just what level of force will cause harm to a human being.
- Those babies are Three-Laws Compliant.
- Also averted in cybercrime and cyberwarfare.
- Predator drones are decidedly not First Law-compliant. This may be an aversion, though, since all weaponized drones have operators who control said weapons most of the time and are at least monitoring everything while the drone is active. The drones are usually only automatic when they're flying around and taking pictures; they're currently more remote-pilot than AI-pilot.
- As a robot is bound by the software written for it, the three laws need to apply just as much to software running critical systems that are inherently dangerous should they fail. After all, a radiation therapy machine that suddenly hits a person with a high-energy beam shouldn't be exempt just because it doesn't look like a robot.
- Human beings are miserable at following the Second Law, but we have mild versions of the First and Third. Ever tried poking yourself in the eye? You can't.
- Well-trained working dogs (service dogs, bomb dogs, etc.) and even well-trained household dogs work and live by these principles. A dog will almost never attack a human without reason, will listen to its owner's commands, and will almost always protect its owner before protecting itself. Man's best friend indeed.