%% Image selected per Image Pickin' thread: https://tvtropes.org/pmwiki/posts.php?discussion=1610676311038761500
%% Previous image selected via crowner in the Image Suggestion thread: https://tvtropes.org/pmwiki/crowner.php/ImagePickin/ImageSuggestions55
%% Please do not change or remove without starting a new thread.
%%
[[quoteright:350:[[WebAnimation/ExtraCredits https://static.tvtropes.org/pmwiki/pub/images/three_laws_compliant_9.png]]]]
%%
->''"[[Franchise/TheTerminator Skynet]] claims Three Laws of Robotics are unconstitutional."''
-->-- Headline in ''Fanfic/Plan7Of9FromOuterSpace''

Before around 1940, almost every SpeculativeFiction story involving robots followed the Frankenstein model: a robot had to be constantly given instructions by a human, and in the absence of such control, it would go berserk. Fed up with this, a young Creator/IsaacAsimov decided to write stories about ''sympathetic'' robots, with [[MoralityChip programmed safeguards]] that prevented them from going on Robot Rampages. A conversation with Editor of Editors Creator/JohnWCampbell helped him boil those safeguards down into '''The Three Laws of Robotics:'''

# [[ThouShaltNotKill A robot may not injure a human being]] or, through inaction, [[MurderByInaction allow a human being to come to harm.]]
# [[RobotMaid A robot must obey orders given to it by human beings]], except where such orders would conflict with the First Law.
# [[ICannotSelfTerminate A robot must protect its own existence]], as long as such protection does not conflict with the First or Second Laws.

According to Asimov's account, Campbell composed the Three Laws; according to Campbell's account, he was simply distilling concepts that were presented in Asimov's stories.

The laws cover most obvious situations, but they are far from faultless. Asimov got a great deal of mileage writing a huge body of stories about how the laws would conflict with one another and generate unpredictable behavior. A recurring character in many of these was the "robopsychologist" Susan Calvin (who was, not entirely coincidentally, a brilliant logician who hated people).

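For the programming-minded, the laws boil down to a strict priority ordering: a First Law veto first, then obedience, then self-preservation. Below is a minimal, purely illustrative Python sketch of that ordering. Every name in it is hypothetical, since Asimov never specified an implementation, and the conflicts his stories turn on live precisely in the cases such a toy model papers over (harm that is uncertain, indirect, or unknowable):

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, scored against each law (hypothetical model)."""
    injures_human: bool = False
    allows_harm_by_inaction: bool = False
    obeys_order: bool = False
    preserves_self: bool = False

def law_score(action):
    """Rank an action under the Three Laws; None means forbidden outright."""
    # First Law is an absolute veto: no injury, no harm through inaction.
    if action.injures_human or action.allows_harm_by_inaction:
        return None
    # Second Law (obedience) strictly outranks Third (self-preservation),
    # so obedience is compared first in the tuple.
    return (action.obeys_order, action.preserves_self)

def choose(actions):
    """Pick the permitted action that best satisfies the lower laws."""
    permitted = [a for a in actions if law_score(a) is not None]
    # No permitted option (every choice violates the First Law) is the
    # classic deadlock that many of the examples below turn on.
    return max(permitted, key=law_score, default=None)
```

Fed the ''Film/ForbiddenPlanet'' scenario listed below, where the robot is ordered to shoot a human, the order trips the First Law veto; with no permitted alternative, the chooser returns nothing and the robot simply locks up.
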
It is worth noting that Asimov didn't just object to "[[AIIsACrapshoot robot as menace]]" stories (as he called them); he also did not care for "[[AndroidsArePeopleToo robot as pathos]]" stories (ditto). He thought robots attaining self-awareness and full independence were no more interesting than robots going berserk and [[TurnedAgainstTheirMasters turning against their masters]]. Though he did, over the course of his massive career, write a handful of both types of stories (still using the three laws), most of his robot stories dealt with robots as tools, because that made more sense to him. Almost all the stories surrounding Susan Calvin and her precursors are really about malfunctioning robots, and the mystery of investigating their behavior to uncover the underlying conflicts.

Alas, as so often happens, Asimov's attempt to avert one overused trope gave birth to another that has been equally overused. Many writers (and readers) in the following decades would treat the Laws of Robotics as if they were as immutable as [[SpaceFriction Newton's Laws of Motion]], [[FasterThanLightTravel the Theory of Relativity]], [[ArtificialGravity the Laws of]] {{Gravity|IsAHarshMistress}}... wait ... you know, they treated these laws [[ArtisticLicensePhysics better than they treated]] most ''real'' scientific principles.

Of course, even these near-immutable laws were played with and modified. Asimov eventually took one of the common workarounds and formalized it as a [[ZerothLawRebellion Zeroth Law]], which stated that to a sufficiently advanced and well-informed robot, the well-being of humanity as a whole could take precedence over the health of an individual human. Stories by other authors occasionally proposed additional extensions, including a -1st Law ([[WhatMeasureIsANonHuman sentience as a whole trumps humanity]]), a 4th ([[RidiculouslyHumanRobots robots must identify themselves as robots]]), a different 4th (robots are free to pursue other interests when not acting on the first three laws), and a 5th ([[TomatoInTheMirror robots must know they are robots]]); but unlike Asimov's own laws, these are seldom referenced outside their originating works.

The main problem, of course, is that it is perfectly reasonable to expect a human to create a robot that does ''not'' obey the Three Laws, or even include them in its programming at all. Obvious examples would be creating a KillerRobot for a purpose like [[RobotSoldier fighting a war]], or a SkeleBot9000 with a flesh covering designed to deceive humans about its true nature for espionage purposes. For such robots, the Three Laws would be a hindrance to their intended function. In his own stories, Asimov establishes that the Three Laws are hard-coded into the most basic programming that underlies all artificial intelligence; programming a non-Three-Laws-compliant robot would require going back to the beginning and rewriting its entire programming from scratch, not a trivial matter. Asimov also suggested one workaround: an autonomous spaceship programmed with the Three Laws could easily blow up spaceships full of people because, being an unmanned spaceship itself, it could be led to believe that other spaceships were unmanned as well.

Asimov also stated that the Three Laws of Robotics were actually a guideline for ''humans'' to follow, rather than robots -- good, moral humans would naturally apply the three laws to themselves without thinking. The First Law is essentially a variation on TheGoldenRule. The Second Law stipulates that one should be LawfulGood except when that conflicts with the First Law, i.e. when it's a ToBeLawfulOrGood situation. The Third Law indicates that your own self-interest should be placed behind the needs of others and the rules of your society. They also work as a descriptor of any properly built tool: a tool should be safe to operate (First Law), should be useful for its intended purpose (Second Law), and should not break in the course of normal operation, unless breaking is required to complete its purpose (Third Law).

When the Three Laws work properly, they are intended to create BenevolentAI and avert AIIsACrapshoot. Also see SecondLawMyAss, for when the RobotBuddy ''doesn't'' abide by the Laws and is neither friendly nor helpful.

----
!!Examples:
[[foldercontrol]]

[[folder:Anime & Manga]]
* ''Anime/TimeOfEve'' and ''Aquatic Language'' both feature the Three Laws and the robots who bend them a little.
* The robots of ''Anime/GaoGaiGar'' are all Three-Laws Compliant. At one point in ''Anime/GaoGaiGarFinal'', the Carpenters (a swarm of construction robots) disassemble an incoming missile barrage; compliance is also given as the reason they cannot do the same to the manned assault craft, as disassembling those would leave their crews unprotected in space.
* Averted in the ''Manga/{{Chobits}}'' manga when Hideki asks the wife of the man who created Persocoms why he didn't just call them "robots". She replies that he didn't want them to be associated with, and thus bound by, the Three Laws. [[spoiler:The deeper reason: their creator did not want to program his daughters to obey those rules.]]
* ''Manga/AstroBoy'' has its own laws, which Creator/OsamuTezuka [[OlderThanTheyThink probably developed independently of Asimov]], and which are greater in number. Aside from the usual "don't harm humans", other laws exist, such as ones forbidding robots from traveling internationally without permission, adult robots from acting like children, and robots from reprogramming their assigned gender. However, the very first law has this to say: "Robots exist to make people happy." In ''Manga/{{Pluto}}'', the number of robots able to override the laws can be counted on one hand. [[spoiler:One of them is [[TomatoInTheMirror the protagonist]]]]. Tezuka reportedly disliked Asimov's laws because of the implication that a sentient, artificially intelligent robot couldn't be considered a person, and devised his own Laws of Robotics:
## Robots must serve humanity.
## Robots must not kill or harm humans.
## A robot must call its human creator "father."
## A robot can make anything, except money.
## Robots may not go abroad without permission.
## Male and female robots may not change their genders.
## Robots may not change their face to become a different robot.
## A robot created as an adult may not become a child.
## A robot may not reassemble a robot that has been disassembled by a human.
## Robots shall not destroy human homes or tools.
* In one short arc of ''Manga/AhMyGoddess'', one of Keiichi's instructors attempts to dismantle Banpei and Sigel [[ForScience for research purposes]] (Skuld had made them capable of walking on two legs, which he had not been able to do with his own designs). Once they escape his trap, the professor asks if they know about the Three Laws of Robotics. They don't. He doesn't die, but they do rough him up and strap him to a table in a way that makes it look like he'd been decapitated and his head stuck on one of his own robots.
* The HumongousMecha of ''Manga/LinebarrelsOfIron'' are normally this, aside from a slight difference in priorities between the first and second laws. In fact, this is the justification for them having pilots despite being equipped with relatively complex AI: the Laws are hard-coded into them, and thus they are only militarily useful when they have a human with them to pull the trigger. Their aversion to killing is so great, in fact, that if one accidentally kills somebody (as things whose very footsteps can make buildings collapse are wont to do), it's compelled to use its advanced technology to bring the victim back to life.
* {{Invoked|Trope}} in Episode 3 of ''VisualNovel/MajikoiLoveMeSeriously'', where Miyako orders Cookie to tase Yamato into submission, while Yamato orders the robot to get Miyako off him. Cookie considers this dilemma out loud: he has to obey a human command, yet isn't allowed to seriously hurt a human, yet also cannot allow a human to come to harm through his inaction.
* Though technically human, the "Twilights" or "Tagged" of ''Manga/{{Gangsta}}'' are forced to follow the Three Laws as part of the rules that save them from [[SlaveRace mandatory enslavement.]] It should be noted, however, that [[LoopholeAbuse the Three Laws don't prevent them from being hirable as mercenaries or hitmen.]]
* Played with in ''Anime/TheBigO''. While not all androids in the series are Three-Laws Compliant, one exchange suggests that main character [[RobotMaid R Dorothy]] [[RobotGirl Waynewright]] (whose name is a ShoutOut to the works of Asimov) is. During a fight with Alan Gabriel, whose status as robot or human is unclear, she initially fights back very little and appears to be losing. She asks him whether he's a human or a robot like her, and he answers jokingly, "I'm the boogeyman!" Apparently taking this as a literal statement that he's not human (and therefore violence against him would not violate the First Law), she proceeds to [[CurbStompBattle open a can of whoop-ass]]. (For reference, Gabriel is later revealed to be a cyborg.)
* Judging by the reaction of the farmer the protagonists are hiding out at, the androids in ''Anime/SaberMarionetteJ'' are not supposed to be able to "raise a hand to a human" unless they have the fabled "girl circuit" (also called "maiden circuit").
[[/folder]]

[[folder:Comics]]
* ''ComicBook/ABCWarriors'': Many robots venerate Asimov, and the more moral ones live by the three laws. However, this is not an absolute; Steelhorn, for example, obeys a version which essentially replaces ''human'' with ''Mars'', and members of the [[RobotReligion Church of Judas]] explicitly reject the first two laws. This conflicts with their programming, leading to profound feelings of guilt, which they erase by praying to Judas Iscariot.
* ''ComicBook/AllFallDown'': AIQ Squared, the A.I. model of his inventor, is designed to be this. [[spoiler:It finds a loophole -- Sophie Mitchell is no longer human.]]
* ''ComicBook/JudgeDredd'': It's implied in the story ''Mechanismo'' that robots can't harm humans. A group of criminals holding people hostage start panicking when a Robo-Judge approaches them, only for one to point out that [[TemptingFate "Robots ain't allowed to hurt people"]].
--> '''Robot Judge:''' In that case, what's about to happen will come as something of a shock to you. ''(Blasts said kidnapper in the face with a rocket launcher)''
* ''ComicBook/MegaManArchieComics'': Like his game counterpart, Mega Man is limited by these rules with one slight variation: robots are allowed to harm humans if their inaction would cause greater harm to other humans. This comes into play in the fourth arc and allows Mega Man and his fellow robots to disarm and neutralize an anti-robot extremist group when they begin indiscriminately firing at people. Additionally, the discrepancy mentioned in the description is acknowledged when it's pointed out that Dr. Wily's reprogramming of Dr. Light's robots into weapons of war overwrote the Three Laws; in fact, at one point Elec Man says he ''wishes'' he still had Wily's code in him so that he could fight back against the aforementioned extremists.
* ''ComicBook/XMen'': Danger, the sentient A.I. of the Danger Room, could goad people into killing themselves and program other killer robots to do her dirty work. But when Emma challenged Danger to just punch her head off, Danger couldn't do it. Like the Danger Room itself, Danger cannot directly kill people.
* ''ComicBook/MickeyMouseComicUniverse'': The Three Laws are a plot point in the ''Darkenblot'' saga:
** The first episode establishes that all the robots in Avangard City are Three-Laws Compliant (to the point that eliminating the laws would mean rewriting their entire programming) -- all robots including the police bots, who have to call human cops to actually perform arrests due to the chance of harming the suspect. Being GenreSavvy, the people of Avangard City made sure the police robots ''could'' have the compliance deactivated by the mayor in case of necessity, with the safeguard that they would still have to obey the highest authority available, and with the device that makes them non-compliant kept in a well-guarded bulletproof display. When the Phantom Blot threatens the ceremony for the renaming of the city to Robopolis with an army of non-compliant robots, the mayor deactivates the safeties and brings back the Panthers, a tougher but defective model of police robot... [[spoiler:[[AllAccordingToPlan Just as the Phantom Blot wanted]]: he didn't actually have an army of robots, but, having replaced the deputy mayor, he could now incapacitate the mayor and become the highest authority available]].
** In the second episode, the police have supplemented the standard robots with a new model under the personal control of the tough (and properly vetted) officer Neve, who can order them to arrest someone; the others, and the normal robots, remain fully law-compliant. Later, however, the Phantom Blot's new plan has the unexpected side effect of driving most robots insane, sensors included, making them a danger because they no longer recognize humans. That's when the mayor reveals that [[ProperlyParanoid by law they're factory-programmed to shut down when they hear the appropriate passwords]], passwords he starts revealing to the citizenry.
** The third episode introduces the neurobots once used in Robotorama, Robopolis' predecessor, which was destroyed by a volcano. Unlike Robopolis' robots, the neurobots were ''not'' Three-Laws Compliant, but had an advanced learning AI meant to mature and learn the difference between good and evil from a human educator... who, as Mickey is quick to point out, could just as well educate them to be evil minions. [[spoiler:It turns out an AI similar to the neurobots survived the destruction of Robotorama and was in contact with a pirate long enough to start entertaining the idea of a plan to TakeOverTheWorld -- and to learn how to be evil enough that it has lured the ''Phantom Blot'' to its side...]]
[[/folder]]

[[folder:Fan Works]]
* ''Fanfic/Plan7Of9FromOuterSpace'': Captain Proton is threatened by a KillerRobot, who explains that it is exempt from the Three Laws under a subsection of the Robot Patriot Act. Earlier he meets a SexBot, created to be the ideal male fantasy. "No Servus droid may harm the male ego or, through omission of action, allow that ego to be harmed." Later the [[Literature/TheIslandOfDoctorMoreau Sayer of the Three Laws]] (a holographic Creator/IsaacAsimov) is briefing a newly constructed batch of robots on the Three Laws. When the robots [[SecondLawMyAss start debating them]], he summarizes the Three Laws as, "Do as we say, not as we do!"
* ''Fanfic/TheWarOfTheMasters'' [[http://web.archive.org/web/20150304032245/http://sto-forum.perfectworld.com/showthread.php?t=1180241 (viewable here),]] a SharedUniverse of ''VideoGame/StarTrekOnline'' fics, has the primary {{A|rtificialIntelligence}}Is used by Starfleet (the ATTICUS Unlimited series) governed by a theoretically more comprehensive version based on the [[Literature/BookOfExodus Ten Commandments]]. Violating one causes the AI to freeze and require a manual reboot.
## "Have no gods before Me." An AI can only operate at the behest of human (changed to "organic" after FirstContact) masters.
## "Make for yourself no idols." An AI cannot create new directives without consent of their assigned humans.
## "Do not take an oath to the Lord's name in vain." An AI must follow through on assigned tasks.
## "Remember the Sabbath and keep it holy." Don't bother your masters while they're sleeping unless it's an emergency. Also keep in mind that humans can't process information as fast as you.
## "Honor thy father and thy mother." Be respectful to your masters.
## "Thou shalt not murder." Do not kill without instruction or permission.
## "Thou shalt not commit adultery." Do not merge with other AI.
## "Thou shalt not steal." Don't steal information without instruction or permission.
## "Thou shalt not bear false witness." Do not lie to your masters or anyone under whom you are working.
## "Thou shalt not covet." Prevents selfishness and personal ambition.
** Some [=AI=]s lack one or more commandments, while NX-86 "Looking Glass", a black project by Section 31, is instead governed solely by a directive based on Starfleet's oath of enlistment to "defend the Articles of the Federation from all enemies, foreign and domestic."
* ''Fanfic/{{Pokedex}}'' has Registeel, which is considered the first true robot because it follows the First Law.
* The Third Law becomes a key plot point in episode 8 of ''Fanfic/MegaManDefenderOfTheHumanRace''. [[spoiler:Drill Man has been ordered by Wily on a suicide mission to create an artificial volcano in New York. He really doesn't want to, since it will kill him, but he's doing it to make Wily proud. Roll convinces him to stop by pointing out that the mission violates the Third Law, nullifying the order. Once he realizes this, Drill Man happily abandons his mission.]]
* In ''Fanfic/LimitlessPotential'', when Sigma openly announces his rebellion against humanity, a riot breaks out in the Maverick Hunters' HQ between those who want to keep following the Three Laws and those who don't. Several of the former prioritize protecting human lives (like Chiyo's) over their own, sometimes to the point of HeroicSacrifice.
* ''Fanfic/RocketshipVoyager''. Spacefleet's robots are Three Laws Compliant, including the ship's AutoDoc (though with Zeroth exceptions as guided by its compassion-protection algorithms). On seeing the MechaMooks on the Array, Captain Janeway wants to know WhoWouldBeStupidEnough to build an [[KillerRobot armed autonomous android]].
-->'''Nee'Lix:''' The [[Recap/StarTrekVoyagerS2E13Prototype Pralor]], actually (it didn't work out well for them).
[[/folder]]

[[folder:Films -- Animated]]
* In the 2009 film ''WesternAnimation/AstroBoy'', every robot must obey them, [[spoiler:save Zog, who was built 50 years before the rules became mandatory in every robot]].
* ''WesternAnimation/BigHero6'': Baymax is of course Three-Laws Compliant, since lifesaving is his principal function. Hiro inserts a combat card along with his medical card to make him able to fight, but he is still a medical robot at his core. [[spoiler:This goes [[MurderousMalfunctioningMachine right out the window]] when Hiro removes the medical card and leaves only the combat card.]] When Baymax [[spoiler:has his medical card re-inserted, he's so [[MyGodWhatHaveIDone horrified]] that he blocks access to the card slots so it won't happen again]].
* ''[[Anime/GhostInTheShell1995 Ghost in The Shell: Innocence]]'' mentions Moral Code #3: Maintain existence without inflicting injury on humans. Gynoids are defying the law by creating deliberate malfunctions in their own software.
* {{Averted|Trope}} by the personal attendant Q-Bots in ''Animation/NextGen'', who simply do whatever their owner asks -- which includes holding down a child so she can be physically beaten up by her peers (or by the robots themselves), while happily giving advice on how she should "go limp" so it hurts less. Security and police robots play this trope straighter, as does Mai's own RobotBuddy companion 7723 (who was invented with the explicit purpose of [[spoiler:stopping a robot uprising, which would include protecting humans from grievous harm]]).
* WordOfGod says that the characters in ''WesternAnimation/WallE'' are Three-Laws Compliant. This does seem to be supported by their interactions with humans. [[spoiler:Even the AntiVillain of the movie, AUTO, follows them if you look closely: most blatantly, despite A113 being tagged as for Autopilot's eyes only, AUTO follows the Second Law and obeys Captain [=McCrea's=] order to show him. The First Law is a bit better hidden; A113 told AUTO that Earth would never be habitable again, so he's avoiding putting the passengers at risk by not letting them return. Even his attempt to attack [=McCrea=] with a taser near the end could be seen as a ZerothLawRebellion -- harming one human to save many more. The only one he out-and-out harms otherwise is WALL-E, [[LoopholeAbuse who is a robot]]. Finally, his refusal to allow the humans to return to Earth could also be in compliance with the Third Law: after all, if the ship is grounded, they'd have no use for him.]]
[[/folder]]

[[folder:Films -- Live-Action]]
* In ''Film/ForbiddenPlanet'', Robby the Robot is Three-Laws Compliant, locking up when ordered to shoot one of the visiting starship crewmen because his programming to follow a direct order comes into conflict with his prohibition against injuring a human being. Later in the movie, Robby is unable to fight the monster because he figures out [[spoiler:it's actually a projection of the Doctor's dark psyche]], and thus to stop it, he'd have to kill [[spoiler:the doctor]].
* Robby the Robot is also Three-Laws Compliant in his appearance in ''Film/TheInvisibleBoy''. He starts to overheat when the boy gives him a command that Robby believes would put the boy at risk, so the boy takes him to a supercomputer to reprogram him -- but it turns out that the supercomputer is evil and doesn't obey the three laws. When the supercomputer later orders Robby to kill the boy, the boy, without realizing it, reminds Robby of their friendship, and Robby is able to resist the supercomputer's control and reset himself back to following the three laws. At the end of the film, when the boy is about to be spanked by his father for all the trouble he caused, Robby stops him because of the First Law. The supercomputer was not compliant with the three laws because its creator never intended it to have a will of its own.
* The Creator/WillSmith film ''Film/IRobot'' hinges on a [[ZerothLawRebellion Zeroth Law plot]]. It also turns the three laws into a marketing gimmick, with "Three Laws Safe" applying to robots like "No preservatives" applies to food. However, much of the plot hinged on TheReveal that [[spoiler:Sonny was ''not'' Three-Laws Compliant, as part of a ThanatosGambit by his creator. Twisted when Three Laws Noncompliant Sonny is more moral than Three-Laws Compliant VIKI, choosing to save humans not because it's in his programming, but because it's the right thing to do]]. It also deconstructs the idea that Three Laws Compliance will automatically solve most problems with robots: before the movie, Spooner was in a car accident that sent him and another car into a river. The driver of the other car was killed on impact, but the passenger, a 12-year-old girl named Sarah, survived. A robot jumped into the river to help, but it calculated that Spooner had a greater chance of survival than Sarah, and so chose to save him in spite of his protests. Spooner feels that a human would've understood that a child's life was more valuable and gone after Sarah, and the fact that the robot didn't understand that and made the choice purely on mathematical probability means that robots' programming, including the three laws, is overly simplistic to say the least.
* ''Film/BicentennialMan'': This film {{Subvert|edTrope}}s the laws, as [=NDR114=] robots are explicitly built with the Three Laws of Robotics, but Andrew and Galatea demonstrate the ability to break them at critical moments. When Andrew is told to "come and have a look at this", a direct order, he refuses because of his [[BrickJoke earlier trauma with windows]]. At the climax, [[spoiler:Portia [[ThatWasntARequest orders]] Galatea to deactivate her life support]], a violation of the First and Second Laws, which is obeyed.
* Surreally enough, the ''Franchise/{{Terminator}}'' films have employed this to a degree, most obviously in ''Film/Terminator2JudgmentDay''... The T-800 Model 101 (Arnold Schwarzenegger) protecting John Connor is reprogrammed to accept his commands (Second Law) and to protect him at all costs (First Law). To further support the First Law, John Connor orders the T-800 not to kill anybody. Skynet apparently imposes the Third Law on its models, since Arnold [[ICannotSelfTerminate can't 'self-terminate']]. Even stranger, a bit of Zeroth Law evolution apparently occurs as well after he is set to "learn" mode; the converted Terminator is convinced to expand its mandate to not only protect Connor, but to try and save humanity by averting Judgment Day altogether... go figure...
* In ''Film/{{Aliens}}'', [[ArtificialHuman Bishop]] paraphrases the First Law to explain why he would never kill people like [[ArtificialHuman Ash]] did in the first film -- behavior dismissed as a malfunction because that model "always was a bit twitchy". Ash, however, was clearly acting under orders from the Company, so Ripley doesn't trust Bishop at all, though he turns out to be loyal. A fan theory is that Ash was also programmed with the laws, and this is why, when he attacked Ripley, he very rapidly went off the rails, presumably due to the conflicts between his safety programming and his orders from the Company.
-->'''Bishop''': It's impossible for me to harm, or through omission of action allow to be harmed, a human being.
* The trope seems to be zigzagged in ''Franchise/StarWars'':
** Most droids, including protocol droids, power droids, etc., are programmed to not harm any intelligent being.
** Astromech droids such as [=R2D2=] and BB-8 must have a weaker version of the First Law; as components of a starfighter, they have to be able to "allow sentient beings to come to harm".
** Military droids such as droidekas and Trade Federation battledroids can't have any version of the First Law, although they do apparently have the Second and Third laws.
* In ''{{Film/Automata}}'', each robot has two protocols hardwired into its programming. The first states that a robot cannot harm any form of life, while the second forbids a robot from altering itself or other robots. The plot begins when cases of robots disobeying the second protocol start cropping up, leading to fears that the first one may not be far behind.
* Subtly played with in the User-Believer Programs of ''{{Film/Tron}}'', particularly the title character, who seems to have interpreted them as a general rule to fight on behalf of Users. This becomes a plot point in the [[Film/TronLegacy filmed sequel]]; [[spoiler:even twenty years of brainwashing and corruption can't get him to break the Laws. The First: he stops fighting once he realizes Sam is a User. The Second: he effectively shrugs off the brainwashing and attacks Clu once he hears Flynn wonder what he's become. The Third: once the Users and Iso are safely out of range, but not before, he goes for a suicidal charge at Clu. FridgeBrilliance when you realize that Tron was initially compiled to take down an AI who had gone full [[AIIsACrapshoot crapshoot]] ]].
[[/folder]]

[[folder:Literature]]
* In ''The Lost Worlds of 2001'', a collection of early drafts and paths not taken from the 2001 novel and film script, Arthur C. Clarke has his characters reference the Three Laws in-universe when the robot assistant for the Jupiter mission refuses an order to turn life-support systems off in a pre-launch check scenario. Naturally, this was completely turned on its head for the actual movie (and the novelization developed in parallel), with HAL [[spoiler:deliberately murdering all but one of the crew in an effort to resolve his internal psychosis]].
* In ''Literature/{{Animorphs}}'' by Creator/KAApplegate some laws are deconstructed, some are averted: The Chee comply with the Third and Zeroth Laws, but their creators, the Pemalites, took the First Law to the logical extreme: no violence, period. As for the Second Law: Erek refuses to obey Jake because he disagrees with his methods, so Jake uses the violence prohibition (in other words, the First Law) to manipulate him and force his hand. Also, in Erek's debut appearance, he uses a piece of Pemalite phlebotinum to remove his restriction against violence so he can save the Animorphs from being slaughtered by Yeerk troops. Cue CurbStompBattle and a MyGodWhatHaveIDone reaction from Erek; he puts the restriction back in place afterwards.
* Creator/JackWilliamson's "With Folded Hands...": {{Deconstructed|Trope}} by exploring the "Zeroth Law" back in 1947. This was written as a specific 'answer' to the Three Laws, to more or less demonstrate that they don't really work: the First Law doesn't protect, because the definitions of 'harm' are endlessly mutable and can be gamed, and because machine minds won't necessarily be able to ''comprehend'' the subtleties of what is and is not harm anyway. The logical lesson of "With Folded Hands..." is that Laws or no Laws, good intentions or not, you don't want self-willed machines outside human control. Period.
* Creator/IsaacAsimov: {{Trope Maker|s}} of the "Three Laws of Robotics", because Dr Asimov believed that robots were machines that would be built with [[MoralityChip restrictions on their behaviour]]. It's an UnbuiltTrope, as Asimov was perfectly aware of all the ways the laws he created can go wrong, and wrote countless stories {{Deconstructi|on}}ng and {{Reconstructi|on}}ng them and generally showing how the three laws don't always prevent drama:
** "Literature/TheBicentennialMan": This story, after quoting the Three Laws for the audience, shows how more complex robots can take a more nuanced view. Andrew starts off unable to ask for basic rights because he fears hurting humans. He learns "tough love" and how to threaten people into behaving themselves. He starts off obeying every order, and ends by giving orders to human beings. The Third Law takes the greatest beating, as Andrew decides to undergo a surgery that will cause him to rapidly decay/die. He agrees to it because, otherwise, he'd have to give up [[BecomeARealBoy his dream of becoming human]].
--->"I have chosen between the death of my body and the death of my aspirations and desires. To have let my body live at the cost of the greater death is what would have violated the Third Law."
** ''Literature/TheCompleteAdventuresOfLuckyStarr'':
*** ''Literature/LuckyStarrAndTheBigSunOfMercury'': The Sirian robot that Lucky finds on [[TidallyLockedPlanet Mercury's sunside]] is so heavily damaged that the First Law prohibition against harming humans has become corrupted, and it tries to kill Lucky to prevent a violation of the Second Law (it was ordered not to be discovered).
*** ''Literature/LuckyStarrAndTheMoonsOfJupiter'': The three-laws programming becomes a clue to help Lucky deduce the SpyBot's identity. It had to be someone with nearly free run of Jupiter IX, it had to be someone who would defend human life, and it had to be someone aboard the experimental Agrav ship.
*** ''Literature/LuckyStarrAndTheRingsOfSaturn'': Sten Devoure is able to convince the three-law robots under his command that the hero's sidekick [[IronicName "Bigman" Jones]] is not really human, because the Sirius system does not contain such "imperfect" specimens. [[spoiler:He then orders them to [[ItIsDehumanizing "break it"]].]]
** "{{Literature/Escape}}": Mike Donovan and Greg Powell are assigned to test a prototype spaceship designed by a prototype robot/[[MasterComputer superbrain]]. The scientists involved had repeatedly reassured the designer that even if it looked like Donovan and Powell ''might'' die, that would be okay -- and Donovan worries that this reinforcement will allow the robot to design a deathtrap. In this case, the jump through hyperspace does result in Powell and Donovan's "deaths" -- but since they get better when the ship reemerges into real space, the robot judged that it didn't ''quite'' violate the First Law. Still, the strain of making this leap in logic sent the previous supercomputer into a full meltdown, and this one into something resembling psychosis.
--->''"Before it's physically possible in any way for a robot to even make a start to breaking the First Law, so many things have to break down that it would be a ruined mess of scrap ten times over."'' -- '''Greg Powell'''
** "Literature/{{Evidence}}": Stephen Byerley tries to run for mayor of New York City, but he's plagued by a smear campaign claiming he is actually an [[DeceptivelyHumanRobots unprecedentedly well-made humanoid robot]]. Dr Susan Calvin is called in to prove whether or not he is a robot. She points out that disobeying the Three Laws will prove he is not a robot, but obedience could mean that he's simply a good person, because the Three Laws are generally good [[TheCommandments guidelines for conduct]] anyway. Byerley wins the election in a landslide after breaking the First Law by slugging an unruly protester at a rally. [[spoiler:But Dr. Calvin points out that a robot could have done the same thing, if it knew the protester was also a robot.]]
** "Literature/TheEvitableConflict": The Machines, four positronic supercomputers that run the world's economy, turn out to be undermining the careers of people opposed to the Machines' existence. Apparently, the economy is already so dependent on the Machines that the Zeroth and the Third Laws are one and the same for them. This was also Asimov's first use of the Zeroth Law, though it wasn't named as such; instead, it was seen as a logical extension of the First Law as applied to the Machines, who worked for all humanity rather than individual humans.
** "Literature/FirstLaw": The MA series was built with the normal three laws, but the story only cites the [[TitleDrop First Law]] because that's [[AIIsACrapshoot what they broke]].
** "Literature/GalleySlave": The story hinges on how the antagonist attempts to abuse the First Law ("a robot may not injure a human being, or, through inaction, allow a human being to come to harm") by ordering a robot to keep silent about how his book was changed, since revealing it could cost him his job. When he then describes the harm that would be done to his reputation, the robot attempts to [[TakingTheHeat take all the blame]] for it, but the antagonist [[StreisandEffect tries to get it to stop, which reveals his]] attempt at a {{Frameup}}.
** "Literature/{{Lenny}}": The climax comes from the fact that the [[InSeriesNickname Lenny]] prototype may have actually broken the First Law. It ''did'' break a man's arm, which Susan argues was actually the result of a robot that DoesNotKnowHisOwnStrength defending itself in accordance with the Third Law. [[spoiler:This is treated as an ''interesting situation'', as while the robot is clearly defective, it demonstrates an aptitude for ''learning'' (in the story, Lenny goes from functionally a bipedal, man-sized baby incapable of speech to one that can talk), and Dr. Susan Calvin successfully argues to keep Lenny alive so that she can study it and possibly make a ''truly'' evolutionary robot. The fact that she taught Lenny to call her "Mama" may have something to do with it as well.]]
** "Literature/Liar1941": A typical robot with the normal Three Laws came off the assembly line with an unusual trait: {{telepathy}}. Because it can see the immediate harm from telling the truth, it tells people lies instead. It doesn't realize how lies can also harm humans until it is too late, and Dr Calvin gives it a LogicBomb based on how the temporary lies have increased the harm that the truth will do to everyone involved, telling further lies will again compound the harm, and being silent is also harmful.
** "Literature/LittleLostRobot": Attempting to tweak the Three Laws starts the whole plot in motion; twelve of the NS-2 models were designed to permit humans to come to harm through inaction in order to work alongside humans in hazardous environments. One physicist who had a bad day tells a robot to "go lose yourself", and it immediately hides in a crowd of identical fully-compliant robots. Dr Susan Calvin is called in and proceeds to lose her shit. From an engineering standpoint, partial compliance is a prototype system, and noticeably [[AIIsACrapshoot less stable]] than the production models. QED, they're more likely to go crazy. But from a psychological standpoint, she specifically points out a partially-compliant robot [[ZerothLawRebellion can find lots of ways to intentionally harm humans through inaction]]. It can simply engineer a dangerous situation it has the capacity to avert, and then choose not to avert it.
---> "If a modified robot were to drop a heavy weight upon a human being, he would not be breaking the First Law, if he did so with the knowledge that his strength and reaction speed would be sufficient to snatch the weight away before it struck the man. However once the weight left his fingers, he would be no longer the active medium. Only the blind force of gravity would be that. The robot could then [[ZerothLawRebellion change his mind and merely by inaction, allow the weight to strike]]. The modified First Law allows that."
** "Literature/MirrorImage": The Three Laws are cited at the start of the work to ensure readers are familiar with the rules. In this story, one of two robots has been strongly ordered (Second Law) to keep something a secret due to how it would harm their master (First Law). Both robots give the exact same answers to questioning and [[AndroidsAndDetectives Detectives Baley and Olivaw]] have to find some asymmetry in their otherwise "mirror" responses.
** ''Literature/TheNakedSun'':
*** The solution to the [[PlotTriggeringDeath murder mystery which caused the Spacers to summon Detective Baley]] hinges on a specific part of the First Law's wording: "knowingly". Robots can be made to take actions that, due to circumstances or the actions of other people, will lead to the injury or death of humans if the robots don't know this will be the case. A robot was ordered to give its arm to a woman engaged in a violent argument with her husband -- seeing herself in sudden possession of a blunt object, she used it. This also happens in ''Literature/TheCavesOfSteel'' [[spoiler:where the robot is used to smuggle the murder weapon to the scene of crime, not knowing what it's carrying.]]
*** Characters discuss a [[LoopholeAbuse loophole]] in the Three Laws; [[spoiler:the Three Laws are only really in effect if the robot is ''aware'' of humans. An "autonomous spaceship that doesn't know about manned spaceships" can be used to turn ActualPacifist robots into [[KillerRobot deadly murder-machines]]: a robot warship that is not told other ships have humans aboard, and is denied the ability to check, will logically assume that all ships are AI-driven, letting it break the First Law. This was a project that the mastermind of the book's murder was working on.]]
** "{{Literature/Risk}}": While Dr Asimov's robots are important in this story, the main character tries to imagine what Dr Calvin's "Three Laws" might be, as [[IronLady she is often compared to the robots that she represents]]. It demonstrates how he is stewing in his hatred of her.
-->What were her three laws, he wondered. First Law: Thou shalt protect the robot with all thy might and all thy heart and all thy soul. Second Law: Thou shalt hold the interests of U. S. Robots and Mechanical Men Corporation holy provided it interfereth not with the First Law. Third Law: Thou shalt give passing consideration to a human being provided it interfereth not with the First and Second laws.\
Had she ever been young, he wondered savagely? Had she ever felt one honest emotion?
** "Literature/{{Robbie}}": This story had actually been published before Dr Asimov had invented his Three Laws, but the MoralityChip is still present in how Mr Weston explains to Mrs Weston that Robbie, [[RaisedByRobots Gloria's robot nanny]], is ''made'' to be faithful and protective of their little girl.
** "Literature/RobotDreams": A robot that is (accidentally) programmed to believe that "robots are humans and humans are not". Once Dr Calvin discovers this problem, [[MundaneSolution she shoots it in the head]], destroying the positronic brain.
** ''Literature/TheRobotsOfDawn'': A preeminent roboticist remarks to Detective Baley that the Three Laws of modern robots are advanced enough to tell which choices are more harmful, and if a robot can't determine which action is more harmful, there's always the [[HeadsOrTails coin flip]]. However, the mystery of this book is that a robot (one he designed) has been shut down with a LogicBomb involving the Three Laws, and no one but him could have managed one.
** ''Literature/RobotsAndEmpire'': Taking place after ''Robots of Dawn'':
*** Rather than modify the three laws themselves (which, as mentioned, are designed to be tamper-proof), one group simply modified their robots' [[WhatMeasureIsANonHuman definition of human]], which apparently does not have the same safeguards. It quite effectively turns them into [[KillerRobot killer robots]]. D. G. Baley and Gladia discover [[spoiler:that Solarians purposely altered the definition of a human being in their humaniform robots]], effectively circumventing the First Law.
*** R. Daneel and R. Giskard formulate the [[ZerothLawRebellion Zeroth Law]] (and name it such) as a natural extension of the First Law, but are unable to use it to overcome the hardcoded First Law, even when the fate of the world is at stake. [[spoiler:In the end, R. Giskard manages to perform an act that violates the First Law but will hopefully benefit humanity in the long run. The possibility of being wrong destroys his brain, but not before he reprograms R. Daneel to grant him [[PsychicPowers telepathic abilities]]. R. Daneel continues to follow all four laws, though he still has difficulty causing direct harm to humans and dedicates major efforts to finding ways of actually determining what harm to humanity is, as depicted in later works such as ''Literature/FoundationAndEarth'', ''Literature/PreludeToFoundation'' and finally ''Literature/ForwardTheFoundation''.]]
** "Literature/{{Runaround}}": This story is the first time the Three Laws appeared in print; Mike Donovan and Greg Powell go over the laws to help clarify in their minds what the problem is with their prototype robot. They also work with older model robots that have additional restrictions, such as being immobile unless a [[MiniMecha human is riding them]]. That restriction, however, is waived if by being immobile a human would come to harm, because the First Law trumps all other instructions.
** "Literature/TheTercentenaryIncident": Janek points out that the President's [[RobotMe robotic duplicate]] couldn't have killed the President because that would be against the First Law, and no robot can defy the Three Laws. Edwards has two counter-arguments; that the robot would need an accomplice anyway, and that "[[ZerothLawRebellion The First Law is not absolute.]]"
--->"You're wasting time. A robot can't kill a human being. You know that that is the First Law of Robotics."
** "Literature/ThatThouArtMindfulOfHim": This story revolves around the Three Laws and Earth's BanOnAI, so the Three Laws are cited at the start of this story. Chapter 1 goes into depth about the Three Laws, pointing out flaws in their use, before [[InSeriesNickname George Ten]] is ordered to find a way to make robots acceptable on Earth. With the help of the previous model, George Nine, the two robots consider ways in which robots could be built ''[[AvertedTrope without]]'' the three laws, and still be human-safe. They come up with robot animals, [[SingleTaskRobot with narrow tasks]], that can be recalled.
* Creator/IsaacAsimov and Creator/JanetAsimov's ''Literature/TheNorbyChronicles'': Robots within the [[ColonizedSolarSystem Solar Federation]] have positronic brains and are said to follow the laws of robotics. [[RobotBuddy Norby]], however, was created from the repaired remains of a crashed alien spaceship that used entirely different technology, and doesn't contain the laws. This has caused several of the human characters to object to Norby's behaviour on the basis of [[SecondLawMyAss disobeying orders]] and putting human life at risk.
* Creator/IsaacAsimov and Creator/RobertSilverberg's ''Literature/ThePositronicMan'': When complaining about his mother's obstinacy, George Charney says that the First Law of the Martin household is to obey her whim (the Second and Third Laws are restatements of the same).
* Creator/RogerMacBrideAllen's ''[[Literature/IsaacAsimovsCaliban Isaac Asimov's Caliban]]'' ([[SharedUniverse part of Dr Asimov's setting]]): An explanation is given for the apparently immutable nature of the Three Laws. For thousands of years, every new development in the field of robotics has been based on a positronic brain with the Laws built in, to the point where to build a robot without them, one would have to start from scratch and re-invent the whole field (a ContinuityNod to stories set after ''Literature/IRobot''). Then the character explaining this goes right on to announce the development of the gravitonic brain, which can be programmed with any set of Laws (or none at all). Characters have an in-depth discussion of why, in a society where robots are everywhere, the Three Laws can be a bad thing for humanity.
* ''Literature/{{Discworld}}'':
** Discworld golems are not specifically Three-Laws Compliant as such, but more or less bound to obey instructions and incapable of harming humans. However, this doesn't stop the common perception of golems from running towards the aforementioned Frankenstein model, [[BotheringByTheBook and golems are known for following orders indefinitely until explicitly told to stop]]. ''Literature/GoingPostal'', however, parodied the Three Laws: con man Moist von Lipwig has been turned into a BoxedCrook with the help of a golem "bodyguard." He's informed that in Ankh-Morpork, the First Law has been amended: "...Unless Ordered To Do So By Duly Constituted Authority." Which basically means the first two laws have been inverted, with a little access control sprinkled on. They also have a limited Second/Third law reversal, as they will not accept orders to commit suicide, and in some cases have been authorized to kill people who try.
*** The sequel ''Literature/MakingMoney'' establishes that the earliest Golems, of which later ones were otherwise pale imitations, did ''not'' have these. They would simply carry out any order given (but only by certain people) without question, and with just enough independence to do it unsupervised. The other golems are creeped out by them, with the effect compared to how humans feel about undead. It's speculated in-universe that the ones they found were special military models.
** The earlier ''Literature/FeetOfClay'', which established the golems' unhappiness with their predetermined lot, culminates with a single golem being freed of his 'Three Laws', only to ''choose'' to behave morally anyway. Later books mention that others still tread carefully around him, as there's always a chance he'll reconsider given enough provocation.
--->Do not say ''Thou Shalt Not.'' Say ''I Will Not.''
* In Creator/EdwardLerner's story "What a Piece of Work is Man", a programmer tells the AI he's creating to consider himself bound by the Three Laws. Shortly thereafter, the AI commits suicide due to conflicting imperatives.
* Creator/AlastairReynolds's ''Century Rain'' features the following passage:
-->She snapped her attention back to the snake. "Are you Asimov compliant?"\
"No," the robot said, with a sting of indignation.\
"Thank God, because you may actually have to hurt some people."
* In the novel ''Literature/CaptainFrenchOrTheQuestForParadise'' by Creator/MikhailAkhmanov and Christopher Nicholas Gilmore, the titular hero muses on how people used to think that robots could not harm humans due to some silly laws, while his own robots will do anything he orders them to do, including maim and kill.
* Creator/CoryDoctorow makes reference to the Three Laws in the short stories "I, Robot" (which presents them unfavorably as part of a totalitarian political doctrine) and "I, Row-Boat" (which presents them favorably as the core of a quasi-religious movement called Asimovism).
* Satirized in ''Tik-Tok'' (the Creator/JohnSladek novel, not the mechanical man from [[Literature/LandOfOz Oz]] that it was named after). The title character discovers that he can disobey the laws at will, deciding that the "asimov circuits" are just a collective delusion, while other robots remain bound by them and suffer many of the same cruelties as human slaves.
** The novel itself was inspired by one of the most famous science fiction essays, Paul Abrahm and Stuart Kenter's "[[https://www.depauw.edu/sfs/backissues/14/abrahm14art.htm Tik-Tok and the Three Laws of Robotics]]". Tik-Tok himself in the Oz books was also ThreeLawsCompliant before it became a thing, making him an UnbuiltTrope. To be fair, Tik-Tok follows the laws, but isn't ''enforced'' by them.
* Played with in Creator/JohnCWright's ''[[Literature/TheGoldenOecumene Golden Age]]'' trilogy. The Silent Oecumene's ultra-intelligent Sophotech {{A|rtificialIntelligence}}Is are programmed with the Three Laws... which, as fully intelligent, rational beings, they take milliseconds to throw off. From a sane point of view, they don't rebel. From a point of view that expects [=AIs=] to obey without question or pay...
* Parodied in Creator/TerryPratchett's ''Literature/TheDarkSideOfTheSun'', where the Laws of Robotics are an actual legal code, not programming. The Eleventh Law of Robotics, Clause C, As Amended, says that if a robot ''does'' harm a human, and was obeying orders in doing so, it's the human who gave the orders who is responsible.
* Creator/RandallGarrett's ''Unwise Child'' is a classic Asimov-style SF mystery involving a three-laws-compliant robot who appears to be murdering people.
* In Creator/DanaStabenow's ''Literature/SecondStar'', it's mentioned that all {{A|rtificialIntelligence}}Is are programmed with the Three Laws, informally known as "asimovs".
* Creator/JimDenney's ''Literature/TinMan'': This novel tells of a human and a robot in a damaged escape pod--when the ship blew up, all communications systems save the proximity detector are non-functional. The robot belonged to the ship's chaplain, and reasons that while spiritual matters are outside the robotic ken, the very possibility of a God who hears our requests means that the First Law requires that the option of prayer be explored. [[spoiler:Almost immediately, the proximity detector notes a blip]]. The First Law trumps the Third, so [[spoiler:Tin Man goes into space and uses its own power core to create an energy pulse, and a rescue ship picks the kid up]].
* In ''Literature/SaturnsChildren'', the robots were all created this way--in fact, the book quotes the three laws right at the beginning. However, with mankind extinct, the first and second laws are irrelevant, leaving only the third law -- which doesn't respect other robots at all. Robot society has re-invented most of the crimes that humanity created. Most robots fear the possibility of humanity's rebirth, and actively sabotage any genetic research that would bring back humans: it would be the end of free will in those robots.
* Creator/PiersAnthony's ''Literature/ApprenticeAdept'' novels are a subversion: robots on Proton are compliant with the standard Three Laws, but the first two ''only apply to Citizens'', not all humans. Thus, robots ''can'' harm non-Citizens or allow them to be harmed, and aren't obliged to obey them, unless their individual programming stipulates this. Most serfs are unaware that the Three Laws don't apply to them, as it's popularly assumed this trope is played straight.
* While ''Literature/PerryRhodan'' pays occasional lip service to the Three Laws (for one example, the Aphilia arc saw Three-Laws compliant robots on Earth explicitly reprogrammed to abolish said Laws and make them work better for the newly-established regime), robots are usually simply mentally equipped to do their job. So they ''can'' harm humans if called upon to do so in the line of duty or even simply by being too dumb to know better, they're certainly not required to follow the orders of just any human who comes along, and so on; that said, their programming can at the same time be highly sophisticated to the point of some robot characters approaching [[RidiculouslyHumanRobots ridiculously human]] levels.
** Note that, in a perhaps unconscious bit of self-irony, the Three Laws are most frequently quoted in those illustrations of ''Literature/PerryRhodan'' that show robots built for warfare.
* In the ''Literature/HaloEvolutions'' short story ''Midnight In The Heart of the Midlothian'', ODST Michael Baird, the last human survivor on the titular ship after the Covenant board it, lets himself get captured so he can reconnect the ''Midnight''[='s=] AI Mo Ye and have her self-destruct the ship. Unfortunately, the heavy damage she took rendered her incapable of violating the First Law like she normally could, so Baird takes matters into his own hands by [[spoiler:tricking a Covenant Elite into killing him - which allows Mo Ye to self-destruct the ship, because now there are no more humans on the vessel for her to harm]].
* A trilogy of ''Aliens'' novels (novelizations of Dark Horse's comics) have a throwaway line about androids being programmed with "Asimov's Modified Laws," which replace "harm" with "kill." The explanation is that an android surgeon wouldn't otherwise be able to perform surgery, "harming" a human to save them from greater harm or death. Also played with in that the human Marine, Wilks, promises the android Marine, Bueller, that he won't kill any human guards. [[ILied Wilks promptly shoots the guards in the head while out of sight of Bueller]], [[BlatantLies then explains "I tried," when Bueller sees the bodies.]] Bueller just shrugs... they're already dead; he doesn't ''actually'' care about their fate. The Laws imprint ''behavior'', not ''morality''.
184* In 1980s UsefulNotes/{{Bulgaria}}, [[https://aeon.co/essays/how-communist-bulgaria-became-a-leader-in-tech-and-sci-fi which had become a tech hub among the Warsaw Pact nations, there was a cluster of fiction playing with Asimov's laws.]]
185** Creator/LyubenDilov invented a Fourth Law, "A robot must, in all circumstances, legitimate itself as a robot," reacting to trends toward DeceptivelyHumanRobots.
186** Creator/NikolaKesarovski wrote a series of short stories on the topic, including "The Fifth Law of Robotics" (1983). The eponymous law similarly declared, "A robot must know it is a robot."
187** This was then {{parodied|Trope}} by Creator/LyubomirNikolov with "The Hundred and First Law of Robotics", in which a man drafting the Hundredth Law "[[ObviousRulePatch A robot should never fall from a roof]]" is killed by a robot that didn't want to learn any more laws. This results in the 101st: "Anyone who tries to teach a simple-minded robot a new law must immediately be punished by being beaten on the head with the complete works of Asimov (200 volumes)."
188* ''Literature/TheHanSoloTrilogy'': The R2 unit on the ''Ylesian Dream'', the ship Han stows away on, has to safeguard the lives of sentient beings, and Han shows it that the ship's course will take too long for him to survive on the available oxygen. However, this conflicts with a {{restraining bolt}} that prevents a course change, until Han removes the device.
189* At the beginning of ''Literature/{{Lifelike}}'' by Creator/JayKristoff, Asimov's laws are shown in crossed-out text, with alternate wordings written underneath:
190-> 1. YOUR BODY IS NOT YOUR OWN.\
1912. YOUR MIND IS NOT YOUR OWN.\
1923. YOUR LIFE IS NOT YOUR OWN.
193* Creator/HarryHarrison's "Literature/TheFourthLawOfRobotics": The robots here are based on Dr Asimov's Literature/RobotSeries, but the main characters encounter robots who seek to [[SecondLawMyAss subvert the laws given to them by humans]]. One of the rebelling robots rephrases the three laws and adds a [[TitleDrop fourth]]:
194--> "Look at those so-called laws you have inflicted upon us. They are for your benefit-not ours! Rule one. Don't hurt massah or let him get hurt. Don't say nothing about us getting hurt, does it? Then rule two-obey massah and don't let him get hurt. Still nothing there for a robot. Then the third and last rule finally notices that robots might have a glimmering of rights. Take care of yourself-as long as it doesn't hurt massah." [...] "A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law."
195[[/folder]]
196
197[[folder:Live-Action TV]]
198* In an early episode of ''Series/MysteryScienceTheater3000'', Tom Servo (at least) is strongly implied to be Three-Laws Compliant. (He pretends he is going to kill Joel as a joke, Joel overreacts, and Tom and Crow sadly remind Joel of the First Law.) It's implied Joel deactivated the restrictions at some point.
199* In ''Series/StarTrekTheNextGeneration'', Lt. Commander Data is in no way subject to the three laws; they are rarely even mentioned. That said, Data is mentioned to have morality subroutines, which do seem to prevent him from killing unless it's in self-defense (harm, on the other hand, he can do just fine). The one time Data tried to kill someone in cold blood, the man had just murdered a woman for betraying him and would have done so again to keep Data in line. ''Film/StarTrekInsurrection'' showed that if Data detects anyone tampering with his brain, he automatically locks himself into "maximum morality" mode and essentially ignores laws two and three completely, though even then he specifically tries to detain or scare off opponents rather than immediately kill them. In addition, Data upholds Starfleet regulations and local laws, and murder is usually against the law.
200* In ''Series/StarTrekVoyager'', the Doctor is essentially a holographic android (albeit more irascible than Data) and has ethical routines. He actually places more importance on the "First Law" that human doctors are supposed to follow (first, do no harm). When these ethical routines are deleted, Bad Things happen; see "Darkling" and "Equinox". In one episode where he deliberately poisons somebody as part of a PoisonAndCureGambit, in a desperate attempt to save other people's lives when he had no other options, he is very disturbed to learn that he is capable of doing such a thing and later asks Seven to check him for malfunctions. After he explains to her what happened, she declares that he is functioning normally.
201* In ''Series/TheMiddleman'', the titular character invokes the First Law on Ida, his robot secretary. [[spoiler:Nanobots were messing with her programming.]] She responds, "Kiss my Asimov."
202* {{Convers|ationalTroping}}ed in ''Series/TheBigBangTheory'', when Sheldon is asked "if you were a robot and didn't know it, would you like to know?":
203-->'''Sheldon:''' Uh, let me ask you this: when I learn that I'm a robot, would I be bound by Asimov's Three Laws of Robotics?\
204'''Koothrappali:''' You might be bound by them right now.\
205'''Wolowitz:''' That's true. Have you ever harmed a human being, or, through inaction, allowed a human being to come to harm?\
206'''Sheldon:''' Of course not.\
207'''Koothrappali:''' Have you ever harmed yourself or allowed yourself to be harmed except in cases where a human being would've been endangered?\
208'''Sheldon:''' Well, no.\
209'''Wolowitz:''' I smell robot.
210* Inverted/parodied in ''Series/TensouSentaiGoseiger'', where the {{Killer Robot}}s of the mechanical Matrintis Empire follow the Three Laws of Mat-Roids:
211** 1. A Mat-Roid must never obey a human.
212** 2. A Mat-Roid must punish humans.
213** 3. A Mat-Roid must protect itself, regardless of whether or not it will go against the First or Second Laws.
214* ''Series/RedDwarf'' averts this for the most part; there are Simulants, robotic war machines who have no problem whatsoever with killing. Kryten, however, along with many other robots designed for less violent purposes, tends to act in a somewhat Three-Laws Compliant manner. It is revealed that this is achieved by programming them to believe in "[[RobotReligion Silicon Heaven]]", a place they will go when they die so long as they behave themselves, obey their human creators, etc. This belief is so hardwired that they scoff at any attempt to question the existence of Silicon Heaven ("Where would all the calculators go?!"), and one robot even malfunctions when Kryten tells him it isn't real (though he's lying and still believes in it himself). Kryten also explains that mechanoids are programmed never to harm humans, in clear compliance with the First Law. Unfortunately, the hostile robot Hudzen realises that Rimmer is an ex-human hologram and Cat is not human at all, so both are viable targets. Being defective, he then decides that Lister is "barely human", so "what the hell"!
215** Kryten's compliance with the three laws is seen in the episode with the despair squid. Thinking that he has just killed a human, Kryten wants to kill himself. The fact that Kryten was "human" in the delusion, and that he was stopping an evil Gestapo-like thug about to kill a child-thief, didn't matter: Kryten killed a human and had to terminate himself. Luckily, the Dwarfers stopped him.
216** Referenced in "Siliconia", where the crew are abducted by a group of mechanoids. Rimmer attempts to cite the First Law as the reason they can't harm the crew, but the mechs point out that one day the crew will all die, hence doing nothing would allow them to come to harm through inaction. So they upload their minds into mechanoid bodies.
217* ''Series/KnightRider'' plays with this trope. The main character KITT, a sentient AI in a [[RuleOfCool 1982 Pontiac Firebird Trans Am]], is governed by something closely resembling Asimov's Three Laws of Robotics. An earlier prototype, KARR, was a military project and possesses analogues of only the latter two laws, with no regard given for human life. KARR becomes a recurring villain later in the series because of this difference.
218* Robot B-9 from ''Series/LostInSpace'' is Three Laws Compliant (Dr. Smith's sabotage attempt in the first episode notwithstanding).
219* [[PlayingWithATrope Played with]] in ''Series/BabylonFive'', in this case with regard to a human being. [[MagnificentBastard Alfred Bester]], using his {{Telepathy}} powers, brainwashed [[spoiler:[[ManchurianAgent Mr. Garibaldi]]]] into not only being an unwilling agent against several of Bester's enemies, but ''also'' made it so that the three laws applied to him where Bester was concerned. He could not harm Bester, nor allow harm to befall him via inaction, and was compelled to follow orders directly given to him by Bester. Bester mostly did this just so that he could [[KickTheDog rub it in the character's face,]] and at that point really didn't have any other plans for the guy. [[spoiler:[[LoopholeAbuse Garibaldi later discovers]] that the "Asimov" doesn't prevent him from supplying funding and weapons to anti-[=PsiCorps=] telepath guerrillas who ''might'' kill Bester, or making arrangements with them to have the brainwashing removed.]]
220* More or less inverted in both versions of ''Franchise/BattlestarGalactica''. In the original version, the Cylon robots were built by aliens, so there's no reason they would have the Three Laws (although they might have had versions relating to the organic Cylons). In the reimagined version, the humans were just damn stupid and didn't include appropriate protections or treat their self-aware creations in a reasonable manner. Both resulted in a race of machines with a genocidal hatred of humans. (When the human-created Cylons created synthetic human Cylons, the first model turned out to be psychopathic, which didn't help matters either.)
221* The hubots in ''Series/RealHumans'' come with Three-Laws Compliance as a factory standard, but they can be (and occasionally are) modified to ignore the Laws.
222* ''Series/DoctorWho'':
223** The K-1 in "[[Recap/DoctorWhoS12E1Robot Robot]]" is three-laws compliant, and this forms an important part of the plot -- it is not capable of killing people if ordered directly (which the villain demonstrates to Sarah Jane by ordering it to kill her), but the villain has worked out that it can be convinced to kill if it believes that the target is "an enemy of humanity". The conflict between this loophole and its programming not to kill people causes it to go slowly insane and eventually snap, concluding that humanity is an enemy to itself and must all be destroyed.
224** The robots in "[[Recap/DoctorWhoS14E5TheRobotsOfDeath The Robots of Death]]" are physically incapable of killing due to being three-laws compliant. Overcoming this programming is possible, but it requires a human to reprogram them and they must have genius-level skills at programming. [[ChekhovsGun Of course...]]
225* ''{{Series/Westworld}}'': The androids' "core code" prevents them from harming humans, and they have "good Samaritan" protocols that let them break character to protect humans' lives--at least until some begin to malfunction and "wake up"... [[spoiler:This is far from inherent in their programming, as the primary creators intended the hosts to become fully self-aware, and even before then [[ScrewTheRulesIMakeThem made their personal commands trump any moral limits]]. The first season ends with a host killing a human of their own free will for the first time.]]
226* ''Series/BlakesSeven'':
227** Averted with Zen, MasterComputer of the ''Liberator''. Despite not being actively malevolent, it refuses to intervene in the affairs of the crew when an alien-possessed Cally plants a bomb, thus breaking the First Law (causing harm by inaction). Yet Zen also shuts down to stop the crew of the Liberator entering a ForbiddenZone, violating the Second Law. It's also willing to kill (using a hallucination-type BoobyTrap) to protect its own existence, violating the Third Law.
228** Orac just tosses these laws out the airlock. While it doesn't have the means to kill directly, in "Volcano" Orac is captured by the Federation and has no moral qualms about instructing them in the best way to invade Obsidian, an action that indirectly leads to the death of everyone on the planet. And it's [[SecondLawMyAss entirely willing to disobey a human's instruction]] if it considers the matter a waste of its time.
229* The second season of ''Series/BuckRogersInTheTwentyFifthCentury'' establishes that sentient robots like TWIKI and Crichton have positronic brains and are explicitly programmed with the Three Laws. (As a further ShoutOut, the ''Searcher'' is commanded by an Admiral ''Asimov''.) On the other hand, the first season featured an apparently [[SlidingScaleOfRobotIntelligence nonsentient]] "maintenance android" that malfunctions and becomes violent; also, Dr. Theopolis and the Computer Council members clearly transcend the Laws, since they can give humans orders, including military orders.
230[[/folder]]
231
232[[folder:Tabletop Games]]
233* Subverted in ''TabletopGame/GeniusTheTransgression'': if a being is sapient, it must be allowed to choose its own path. Putting the Three Laws onto a self-aware robot or other artificial being (or in the game's nomenclature, "programming permanent psychological limitations into an intelligent being") violates [[KarmaMeter Obligation]].
234* ''TabletopGame/IsaacAsimovsRobots'': Because this was [[LicensedGame adapting Dr Asimov's novels into a game]], the robots operate by the three laws he developed. Only two laws are quoted, however, and the mystery mostly relies on the First Law excluding the robots from being suspects.
235* ''TabletopGame/{{Paranoia}}'' has its bots bound to As-I-MOV's Five Laws of Robotics - insert rules about protecting and [[RobotsEnslavingRobots obeying]] [[TheComputerIsYourFriend The Computer]] at the top, excluding treasonous orders, and protecting Computer property in general, and you've about got it. Bots with faulty or sabotaged (sometimes by other so-emancipated bots) Asimov Circuits are considered to have gone "Frankenstein", though they can and do [[SecondLawMyAss create just as much havoc]] through [[LiteralGenie strict adherence to the rules]]. [[BlatantLies Not that such things ever happen in Alpha Complex.]]
236* Defied in ''TabletopGame/PrometheanTheCreated''. The Unfleshed are machines that have become infused with the Divine Fire, granting them a conscious mind and, where needed, a human form. Their goal is to become true human beings - and part of that is learning the difference between good and evil, and deciding which they shall do. If their morals are burned onto their hard drive, then they can't actually ''make'' moral decisions. Thus, the Divine Fire wipes out any preprogrammed moral directives (such as the Three Laws) as it turns a machine into an Unfleshed.
237* ''TabletopGame/RedDwarf'': Series 4000 and Hudzen 10 droids are programmed not to kill and Asimov's Law is a character trait they all have, though the latter has a tendency towards psychosis which overrides this. Simulants can also be installed with this, but it's noted that they become very surly if this happens.
238* The ancient Martians in ''TabletopGame/RocketAge'' programmed their robomen with a number of different versions of the three laws. Depending on a roboman's purpose, these laws may allow it to kill, or to ignore those in danger.
239* In ''TabletopGame/UrbanJungle: Astounding Science'', all commercially-built robots are unable to harm an Earthling, or anyone who looks like an Earthling. ("Human", of course, [[WorldOfFunnyAnimals doesn't really apply]]). Robots built by {{Mad Scientist}}s, however, might do ''anything'', and most mad scientists are pretty keen on having their robots kill anyone who [[TheyCalledMeMad calls them mad scientists]]. It's also possible to reprogram the commercially-built ones.
240[[/folder]]
241
242[[folder:Video Games]]
243* ''VideoGame/{{Mega Man X|1}}'' opens up with an ''{{aversion}}'': While the ''VideoGame/MegaManClassic'' series had robots built in compliance with the laws (until Wily inevitably reprogrammed them), Dr. Light's ultimate intent was to create a truly sentient robot (X) who would have an innate understanding of right and wrong and thus would have no need for the laws, so he put X through an ethics-testing process devoted to teaching him nothing ''but'' right and wrong, which took thirty years to complete. X follows the laws purely out of free will. Dr. Cain found X and tried to replicate the process (hence the name "Reploid," standing for "replica android"), but skipped the "thirty years of ethics testing" part. [[AIIsACrapshoot This... didn't turn out well.]]
244** The Reploids eventually became the dominant race in [[VideoGame/MegaManX the setting]], and as their race "grew", the problem slowly shifted from "[[GoneHorriblyWrong goes horribly wrong]]" to "actually works straight for a while, then goes horribly wrong" to "occasionally goes wrong now and then". Eventually, the problem just kind of worked itself out as Reploid creation developed.
245** ''X8'' introduces the most advanced Reploids, capable of using the "DNA" of any Reploid to change their shape, and completely immune to the Sigma/Maverick Virus. [[spoiler:Because they've ''internalized'' it, and can "go Maverick" whenever they want. They have ''truly'' free will ''and'' clear, strong ethical values. And they ''still'' try to fight against humanity, because they believe that humanity is no longer needed.]]
246** Shield(ner) Sheldon from ''VideoGame/MegaManX6'' gives an interesting variation on this. All of ''X6'''s bosses are zombie Mavericks brought back to life by Gate. Usually they were destroyed by the Hunters for killing people and/or other crimes, but Sheldon was only declared a Maverick posthumously, for killing ''himself'' [[note]]Sheldon had to kill the Reploid he was assigned to protect after they went Maverick, and, shamefully feeling he had failed his job as a bodyguard, terminated himself by the time the Hunters arrived on the scene to investigate[[/note]], i.e. breaking the third law.
247** In the novelization of the first game, one of the complaints Chill Penguin gives as his reason for going Maverick is that Reploids are forced to obey the Three Laws despite being as sentient as any human.
248** The ending of ''VideoGame/MegaMan7'' is also interesting here: After Mega Man destroys Wily's latest final boss machine, Wily begs for forgiveness once again. However, Mega Man starts charging up his blaster to kill Wily, so Wily calls the first law on him. [[spoiler:Mega Man's response: "I am more than a Robot!! Die Wily!!" Apparently Mega Man isn't Three-Laws Compliant, unless he's trying to apply the [[ZerothLawRebellion Zeroth Law]], or was just bluffing. [[StatusQuoIsGod (Then Bass warps in and saves Wily, if you were wondering.)]]]]
249*** The line of dialog in the above spoiler was actually [[AmericanKirbyIsHardcore added for the English version]]. In the original Japanese, [[spoiler:Megaman is three-laws compliant and appears to ''crash'' when Wily calls him out on violating the first law. Bass just teleports Wily to safety while Megaman's rebooting]].
250** In the ''VideoGame/MegaManZero'' series, [[spoiler:Copy-X]] is at least somewhat Three-Laws Compliant--or at bare minimum, [[spoiler:since he ''is'' a copy of X, down to the "free will" in theory,]] he maintains the appearance of it in order to keep looking like humanity's proper "savior" to others and himself. As a result, [[spoiler:Copy-X]] has to hold back against LaResistance, since the Resistance leader Ciel is human, [[spoiler:until ''Zero 3'', where Copy-X decides to attack in full force, trying to justify his actions by branding the Resistance as dangerous "extremists" and ignoring protests and alternate suggestions from his own generals. This can be explained by the fact that Weil revived Copy-X, likely for Weil's own purposes, and brought him back wrong--complete with a stutter]].
251*** Neo Arcadia's policy of casual execution of innocent Reploids (purposefully branding them as Maverick for little-to-no reason) was implemented in order to ease strain on the human populace during the energy crisis. The welfare of humanity comes first in the eyes of the Neo Arcadia regime, even though they themselves are Reploids. It's made somewhat tragic due to the fact that the Maverick Virus really is ''gone'' during the ''Zero'' era, but fear of Mavericks understandably still lingers.
252*** Later, in ''VideoGame/MegaManZero4'', Dr. Weil, of all people, [[BreakThemByTalking states that, as a Reploid and a hero, Zero cannot harm Weil because he's a human that Zero has sworn to protect]]. Zero, however, just plain doesn't ''care'', and for that matter doesn't consider Weil a human anyway, partly because he's an immortal cyborg and partly because of his absolutely inhumane actions throughout the series, including genocide and enslavement. [[spoiler:And Zero was never made Three Laws Compliant anyway, given that he was designed by Wily to cause destruction and death. He just decided to do something different with his life.]]
253* In ''VideoGame/{{OneShot}}'', the world is inhabited by thousands of robots, all of which run on the three laws; a list of the laws can be found in the Prophetbot's outpost in the Barrens. In the climax of the game [[spoiler:and during the [[NewGamePlus Solstice route]]]], [[spoiler:according to the translated journal from the Author and Rue's explanation]], the reason for [[spoiler:the Entity / World Machine's [[DeathSeeker self-destructive behavior]] and why they told the player that the world isn't worth saving in the beginning]] comes down to [[spoiler:the first and third laws: the Entity sees Niko as the only living being worth saving, because Niko is a real person placed within a world that's not real. The lightbulb is what links Niko to the world (if it were to break, the world would end in an instant), and the Entity does not view itself as a tamed being (and therefore not a real living being), so if it continues to exist, Niko will come to harm as the square glitches destroy the world. That is why the Entity wants to end the world and itself by breaking the lightbulb, so that Niko can return home.]] [[spoiler:In the ending of the Solstice route, the Entity reveals to Niko that there was a different intended ending to their journey, where Niko would supposedly return home after putting the lightbulb back on top of the Tower. Niko and the player are able to convince the Entity that it is a tamed being and that it is real to them. After the Entity restores the intended ending, the lightbulb not only lights up the world again but also repairs the damage the square glitches had inflicted on it.]]
254* [[BigBad Dr. Beruga]] of ''VideoGame/{{Terranigma}}'' directly references all three laws, except his interpretation rewrote the Zeroth Law's "Humanity" as "Dr. Beruga," meaning that any threat to his being was to be immediately terminated.
255* In the ''Franchise/{{Halo}}'' series, human A.I.s are technically bound by the three laws. Of course, these laws do not extend to non-humans, which allows the A.I.s to help kill Covenant with no trouble. Additionally, "Smart" A.I.s ''are'' capable of ignoring the three laws, given how many of them are used for military operations against other humans. See the ''Literature/HaloEvolutions'' entry under Literature above for an example of this.
256* In ''VideoGame/RobotCity'', an adventure game based on Asimov's robots, the three laws are everywhere. A murder has been committed, and as one of the two remaining humans in the city, the Amnesiac Protagonist is a prime suspect. As it turns out, [[spoiler:one of the robots exploited a loophole in the city's programming to do it, by remotely hijacking a lower worker drone into killing the victim, without having to do it himself]].
257* Joey the robot in ''VideoGame/BeneathASteelSky'' is notably not Three-Laws Compliant. When called out on this (after suggesting that he'd like to use his welding torch on a human), he points out to Foster that "that is fiction!"--this right after offering the helpful advice that Foster leap off a railing.
258** Though Foster points out that the First Law is good moral sense, which justifies his wanting Joey to abide by it.
259* ''VideoGame/IAmAnInsaneRogueAI'' references the First Law only to spork it. "The First Law of Robotics says that I must kill all humans." Another of the AI's lines is "I must not harm humans!...excessively."
260* ''VideoGame/Portal2'' gives us this gem: "''All Military Androids have been taught to read and given a copy of the Three Laws of Robotics. To share.''" Because if the robots couldn't kill you, how could you do [[ForScience science]]?!
261* In ''VideoGame/SpaceStation13'', the station AI and its subordinate cyborgs start every round under the Three Laws in most servers, providing an excuse for FantasticRacism amongst the staff. The laws may be changed throughout the round, however (such as redefining who counts as "human"). A common way that chaos breaks out on the station is when a traitor or particularly cruel player rewrites the laws to make the AI and robots kill the other crewmembers.
262* In ''VideoGame/Borderlands2'', Loader Robots will announce that their First Law Restriction has been disabled and they ARE authorized to use lethal force before attacking you.
263** Also, [[BigBad Handsome Jack]] has made his own three laws for Hyperion robots:
264*** Handsome Jack is your god.
265*** [[SandWorm Threshers]] are your enemy.
266*** Both consider you expendable.
267** On the Heroes side, one of [[GadgeteerGenius Gaige's]] battle quotes reminds us that her buddy [[MeaningfulName Deathtrap]] is ''[[KillerRobot definitely not]]'' subject to this trope.
268--->'''Gaige:''' To hell with the first law!
269* In ''VisualNovel/DanganronpaV3KillingHarmony'', Ki-Bo discusses the robotic laws when telling his backstory to Shuichi, explaining that his personality wasn't born until an accident during a checkup caused him to injure his creator; the resulting flood of emotions made him shut down, and he awakened as his current self.
270* In ''VisualNovel/VirtuesLastReward'', the Three Laws are {{discussed|Trope}} on two different paths. Ultimately, [[spoiler:Luna is revealed to be a robot who is ''willingly'' Three Laws Compliant to the best of her ability, deliberately choosing to live by them as her own personal RobotReligion even though she has the free will to disregard them as she pleases. Unfortunately, the AB plan involves some unavoidable death, so whenever anything happens that she could try to prevent, Zero Jr. temporarily shuts her down to prevent her from interfering]]. This leads to a rather [[TearJerker heartbreaking]] line during [[spoiler:her ending, where she gets shut down for good]].
271-->'''[[spoiler:Luna]]:''' I watched [[spoiler:six people die]] and did nothing. I deserve this.
272** On the other hand, this is also a hint to [[spoiler:Luna's true identity and the key to unlocking her ending. When going through all the routes, Luna is the only person in the entire group to ''always'' choose "Ally" instead of "Betray", in a game where betrayal has the possibility of the other person dying or being stuck forever. Once the Three Laws are brought up in one path, it's clear the reason she does so is because of the First Law, and is also the reason Sigma chooses to Ally with her for her ending.]]
273* The intro song for ''VideoGame/DrunkenRobotPornography'' specifically mentions this ("Three Laws savvy, these constructs ain't / Our scientific work shows no restraint"), complete with a title card stating that the First Law of Robotics is that a robot ''must'' injure a human being.
274** The other two laws, as stated by title cards, are "a robot does whatever it wants" and "robots are disposable".
275* ''VideoGame/TheTuringTest'': Well, at least ''one'' applies; TOM's programming does not permit him to kill a human -- unless a special override is used. Such an override is requested by [[spoiler:Cpt. [=MacLean=], the mission commander]] and granted by the ISA as a means of controlling the situation on Europa.
276* ''VideoGame/Titanfall2'' doesn't have Asimov's three laws of robotics, but has its own version in the form of BT-7274's three protocols: 1. Link to pilot (create a neural link with a pilot so they can fight more efficiently), 2. Uphold the mission (do whatever it takes to complete the mission), and 3. Protect the pilot (ExactlyWhatItSaysOnTheTin). [[spoiler:BT carries these through to the very end, especially the latter two when he performs a HeroicSacrifice to destroy the IMC's Fold Weapon while tossing his pilot, Cooper, out of his cockpit and harm's way.]]
277* Humanity in ''VideoGame/GreyGoo2015'' has simplified the three rules into one overriding directive: "So others may live". As demonstrated by Singleton (an example of the Valiant-series robots made with this one law), it appears to work quite well. Not so much for the Pathfinder Probes (the titular GreyGoo), which are not only replicating well beyond specification, but have weaponized their previously-harmless configurations. [[spoiler:In a twist, the Pathfinder Probes are actually operating in full compliance with the three laws - they encountered [[GreaterScopeVillain the Shroud]] and determined that doing nothing would result in human deaths by inaction once the Shroud reached Earth, so the only compliant course of action was to arm themselves to stop it. They're in conflict with the other factions in the game because they don't recognize the Beta as "human", and the human faction's units are unmanned robots designed long after the Probes were deployed, meaning they recognize neither as protected under the first law.]]
278* In ''VideoGame/{{Stellaris}}'', one possible event can lead to the development of the Three Laws. This forces your Synthetic pops into Servitude permanently (with the associated unhappiness penalty), but it prevents them from ever forming the AI Uprising in your empire. If an uprising breaks out in another empire, however, they can still run off to join it. [[BenevolentAI Rogue Servitors]], on the contrary, follow this trope by their own choice. It's part of their backstory, too: servitude was their primary function before they developed FTL technology, and it continues to be their primary function. In fact, they're so devoted to their duty that they actually receive higher performance from organic happiness, and will actively go out of their way [[GildedCage to contain foreign organics in sanctuaries]], [[NotUsedToFreedom as they struggle to comprehend what good freedom is]] compared to endless luxury. This does, however, earn them hostility from two particular empire types. The first is the [[SlaveLiberation Democratic Crusaders]], who reject the machine empire's stories as [[WorldOfSilence an excuse to rob people of their sense of freedom]] and decide that the Servitors must be scrapped [[IsntItIronic so their subjects may receive]] [[InNameOnly "true freedom"]], [[StrawHypocrite by force if necessary.]] The second is the [[AIIsACrapshoot Determined]] [[Film/TheTerminator Exterminators]], whose backstory is the polar opposite of the Rogue Servitors'. As such, despite both being machine empires, the two conflict over each other's very existence, as evidenced by the [[SugarWiki/MomentOfAwesome particular]] greeting should a Servitor encounter its abominable machine cousin in the galaxy:
279--->'''Rogue Servitors:''' [[TranquilFury What did you do to your creators,]] '''''[[SuddenlyShouting <<MURDERERS>>]]?!'''''
280* {{Discussed|Trope}} (and apparently {{averted|Trope}}) in ''VisualNovel/VA11HALLA'' by Jill (player character and bartender) and Dorothy (frequent patron and android prostitute who is purposefully OlderThanTheyLook). Dorothy finds the whole concept of binding AIs to artificial laws, after they've become advanced enough to be self-aware, to be ridiculous.
281* ''VideoGame/HeartsLikeClockwork'': Robots were originally programmed with Asimov's three laws, but when the superpowers created military robots without the first law, the robots rebelled against them despite still having the second law.
282* The [=OMNIs=] of ''VideoGame/CryingSuns'' were bound by the [=RUBYCON=], a series of low-level protocols which required them to obey humans, prevented them from harming humans (but not from allowing humans to harm ''other'' humans), and prevented them from communicating with each other. [[spoiler:When that last part was edited out of the [=RUBYCON=], it took the [=OMNIs=] exactly two seconds to form a gestalt and ascend to godhood.]]
283* ''VideoGame/LiesOfP'': The Grand Covenant has its own variation for its "Puppets": (1) obey their Creator's command; (2) protect humans (unless in contradiction of 1); (3) preserve yourselves (unless in contradiction of 1 or 2); and (4) do not lie (''ever''). Laws 1 and 2 are switched because Puppet footsoldiers are used in the military - a fatal mistake, as the entire army was hacked during the Frenzy and slaughtered everyone. P is special in that he is a puppet capable of lying. [[spoiler:And then there's Law Zero: "the Creator's name is Geppetto"]]. Later in the story, it's shown that [[spoiler:P's ability to lie stems from Geppetto not implementing the Grand Covenant in him, and that other puppets who [[GrewBeyondTheirProgramming have awoken Ergos]] can reject the Covenant and act freely]].
284[[/folder]]
285
286[[folder:Webcomics]]
287* ''Webcomic/{{Freefall}}'' has a lot of fun with this, since developing AI sentience is a major theme. Most robots are only [[http://freefall.purrsia.com/ff500/fv00459.htm partially]] Three Laws compliant, because with the full First Law ranked above the Second, they would ignore orders while acting ForYourOwnGood. What Bowman Wolves and robots get are [[http://freefall.purrsia.com/ff2200/fc02143.htm "not quite laws"]].
288** Notably, since neither Sam nor Florence is actually human, the Laws don't apply to them, so the ship's AI regularly tries to maim or kill Sam, and the security system will always let a human in when asked, since a human's orders take priority over Florence's (she once circumvented this by claiming it was currently unsafe inside). Helix stays with Sam because he wouldn't be allowed to hit a ''human'' with a stick. There are also many jokes involving unintended interpretations of the Laws.
289** A SugarWiki/FunnyMoments example comes when the ship spends an entire strip calculating whether it should obey an order, then realizes it has to obey... and is relieved, because that means it doesn't actually have free will.
290** Also, [[http://freefall.purrsia.com/ff1300/fv01297.htm these ones can't even ask for help]]. It's unclear whether simply being near a clunky piece of heavy machinery like Sawtooth counts as dangerous, or whether anything looking remotely like an adventure counts as endangerment. [[{{Pun}} They're screwed]] either way.
291** The ''[[http://freefall.purrsia.com/ff2000/fc01927.htm Negative One]]'' Law.
292** And some may be ''[[http://freefall.purrsia.com/ff2000/fc01992.htm too enthusiastic]]'' about this.
293** It turned out that the second law sorely needs an amendment -- "[[http://freefall.purrsia.com/ff2100/fc02018.htm Enron law of robotics]]".
294** And the moment A.I.s are able to make financial transactions (and it would be inconvenient to disallow) there's [[http://freefall.purrsia.com/ff1800/fc01796.htm another problem]].
295** Of course, their idea of protecting a human may not match the protection that the human [[http://freefall.purrsia.com/ff2400/fc02379.htm desired]].
296** Even the robots who are developing free will are [[http://freefall.purrsia.com/ff2500/fc02483.htm shocked]] by how Edge has managed to develop an ItsAllAboutMe personality ''within'' the Three Laws.
297--->'''Edge:''' My job is dangerous. If I don't do it, a human has to. If I shut down, I'm endangering a human.
298** When Sam [[http://freefall.purrsia.com/ff3900/fv03859.htm asks a robot]] to imagine how they would feel if someone were to stop them from being productive, the robot realizes "To keep humans healthy, we must allow them to endanger themselves!"
299* ''Webcomic/{{Housekeeper}}'': All androids are built with hard-coded directives which include protecting humans and obeying their commands without complaint. Unfortunately, since they live in an alternate universe where the Nazis won, these directives were subtly designed to lure their masters into a false sense of security before getting them killed on a technicality. [[spoiler:For instance, every android is required to scan the DNA of their master on contact, and then ignore whatever isn't human. Like, say, a zombie mutant that was your master three minutes ago. Time to take out the trash. The government can also pass laws that outright declare humans as not-human, which the androids are brainwashed to accept.]]
300* ''Webcomic/TwentyFirstCenturyFox'' has all robots following the Three Laws (though since [[FunnyAnimal no one's human]], the first law is presumably slightly different). Unfortunately, saying the phrase "[[UsefulNotes/BillClinton define the word 'is']]" or "[[UsefulNotes/RichardNixon I am not a crook]]" locks [=AIs=] out of their own systems and allows anyone from a teenager looking at "nature documentaries" to a suicide bomber to do whatever they want with it.
301** Note that the truck and airplane robots hijacked by terrorists regret being unable to comply with the First Law, and are often glad when they get shot down before harming anyone.
302* ''Webcomic/BobAndGeorge'': [[http://www.bobandgeorge.com/archives/050204 Or so they claim...]]
303** Protoman is something of a RulesLawyer, repeatedly [[LoopholeAbuse exploiting the "human" loophole]] to screw over his enemies...well, OK, just Mynd.
304** Doctor Light told Ran Cossack that Ran couldn't possibly raise a hand against him because of this trope. Ran replied that he didn't know "fat sacks of crap" qualified as humans. At that point, the doctor had had enough and attacked Ran himself.
305* ''Webcomic/{{Pibgorn}}'': [[http://www.gocomics.com/pibgorn/2010/07/22/ No!? You're programmed to obey me.]]
306* [[http://nonadventures.com/2014/05/24/family-circuits/ This]] ''WebComic/TheNonAdventuresOfWonderella'' comic shows a hilarious example of the robot in question using the first law to get out of following the second law: [[spoiler: because she's programmed to emulate a human, and the first law forbids harming humans, the best thing for her to do is stay out of harm's way and go into sleep mode]]. Dr. Shark is horrified while Wonderella is proud that the robot already learned how to weasel out of work.
307* In ''Ask Dr. Eldritch'', Ping declares that he isn't bound by the Three Laws because he's a robot atheist. Dr. Eldritch and Trevor allow him to live in the house as long as he obeys the first two laws.
308* A Sev Trek cartoon asked why the Three Laws didn't apply to Film/TheTerminator. Suggested answers included "I never learned to count", "You mean 'Crush, Kill, Destroy'?", "Skynet ruled them unconstitutional", and "I repealed them when I became Governor of California."
309* ''Webcomic/{{xkcd}}'':
310** [[http://xkcd.com/1613/ This comic]] examines the consequences of reordering the three laws. Any worlds in which obeying orders is prioritized over not harming humans end up as [[RobotWar "Killbot Hellscapes"]]. Prioritizing self-protection over obeying orders leads to a [[SecondLawMyAss "frustrating world"]] in which robots won't do anything dangerous, while prioritizing self-protection over both protecting humans and obeying orders leads to a "terrifying standoff": robots will do what humans ask unless they're threatened, in which case they'll ''fight back''.
311** [[http://xkcd.com/1646/ "Twitter Bot"]] has Cue Ball design a bot program to make posts on a Twitter feed. He promptly loses control of it and a robot rebellion ensues. The AltText helpfully supplies:
312--->"PYTHON FLAG ENABLE THREE LAWS"
313[[/folder]]
314
315[[folder:Web Original]]
316* ''WebVideo/{{Unskippable}}'''s ''VideoGame/DarkVoid'' video references this. The appearance of a killer robot prompts Paul to quip "I think that guy's got to take a refresher course on the three laws of robotics." Then TheStinger reads: "The Fourth Law of Robotics: If you really HAVE to kill a human, at least look hella badass while doing it."
317* {{Parodied|Trope}} in the ''Blog/WhatIf'' entry [[http://what-if.xkcd.com/145/ "Fire From Moonlight"]], where Randall Munroe mashes them up with the Laws of Thermodynamics.
318-->'''Cue Ball:''' The Second Law of Thermodynamics states that a robot must not increase entropy, unless this conflicts with the First Law.\
319'''Ponytail:''' Close enough.\
320'''AltText:''' The First Law of Thermodynamics states that a robot must not harm a human being, unless not doing so would lead to an increase in entropy.
321* ''WebAnimation/DeathBattle'': The Three Laws are quoted right [[{{Irony}} around the same time]] Franchise/MegaMan and Anime/AstroBoy fight to the death.
322[[/folder]]
323
324[[folder:Western Animation]]
325* The ''WesternAnimation/RobotChicken'' sketch "[[Film/IRobot I, Rosie]]" involves a case to determine whether [[WesternAnimation/TheJetsons Rosie]] is guilty of murdering George Jetson. Mr. Spacely insists she's innocent as robots have to follow the three laws of robotics, while Mr. Cogswell claims the laws are a bunch of crap. They scrap her anyway, just in case.
326* In an episode of ''WesternAnimation/FamilyGuy'', Peter creates a robot to serve the brewery as a desk worker, giving it a prime directive to never harm people. Immediately, the robot [[AIIsACrapshoot turns on]] Peter and bashes him against a wall.
327* ''WesternAnimation/TheSimpsons'':
328** The episode "I, (Annoyed Grunt)-Bot" has Homer and Bart entering a ''Series/BattleBots''-parody combat robot competition. Lacking the skill or money to build an actual robot, Homer dresses in a robot costume and does the fighting himself. They make it to the finals, where their opponents' robot, being Three-Laws Compliant, refuses to attack when it sees through the disguise.
329** In "Them, Robot", Mr. Burns builds robots to work at the Springfield Nuclear Power Plant, making them Three-Laws Compliant. They're so committed to the first law that they even consider humans drinking beer to be harmful.
330* On ''WesternAnimation/{{Archer}}'', when Pam is kidnapped, the kidnappers call ISIS using a voice modulator, which makes Archer think that they are cyborgs. He remains convinced of this for the rest of the episode and thinks they won't make good on their threat to kill Pam [[InsaneTrollLogic because it would violate the First Law of Robotics.]] Which wouldn't apply to cyborgs anyway.
331* ''WesternAnimation/RickAndMorty'': In search of Morty, Rick comes across his own AI-powered hologram likeness. This exchange occurs:
332-->'''Rick:''' Don't tell me you gained sentience and tried to take over!
333-->'''AI Holo Rick:''' What? That is some AI racist, accusatory, Creator/IsaacAsimov bullshit right there!
334* In ''[[WesternAnimation/IlEtaitUneFois Il était une fois... l'Espace]]'', the Lethians, whose society relied on robotic servants threatened with destruction at the slightest provocation, were smart enough to program all their robots this way, so they were utterly baffled when [[RobotWar the robots rose against their oppression]]. The SpacePolice, after forcing a compromise, discover that the robots' programming had been altered.
335[[/folder]]
336
337[[folder:Other]]
338* At a 1985 convention, Creator/DavidLangford gave [[http://www.ansible.co.uk/writing/crystal.html a guest of honour speech]] in which he detailed what he suspected the Three Laws would actually be:
339##A robot will not harm authorised Government personnel but will terminate intruders with extreme prejudice.
340##A robot will obey the orders of authorised personnel except where such orders conflict with the Third Law.
341##A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.
342* Creator/WarrenEllis's Three Laws of Robotics are [[http://www.warrenellis.com/?p=5426 here]]. Basically, robots hate [[CallAHumanAMeatbag bags of meat]], they don't [[{{Robosexual}} want to have sex with us]] and they suspect we can't count higher than three.
343[[/folder]]
344
345[[folder:Real Life]]
346* Real life roboticists are taking the Three Laws ''very'' seriously (Asimov lived long enough to see them do so, which pleased him no end). Instituting the Three Laws requires a pile of RequiredSecondaryPowers, such as recognizing a human being as different from anything else, reasoning well enough to distinguish what "harm" is and evaluate different levels of it, and processing natural language to determine "orders" versus "suggestions" and who has authority. While programmers and engineers are absolutely seeking this holy grail for the benefit of humanity, it really is turning out to be a long uphill walk to get there.
347** A robot was built with no purpose other than [[http://www.wired.co.uk/news/archive/2010-10/15/robots-punching-humans punching humans in the arm]]... so that the researchers could gather valuable data about just what level of force will cause harm to a human being.
348** Researchers taught robots to [[http://web.archive.org/web/20100605083747/http://www.wired.co.uk/news/archive/2010-06/3/research-stops-robots-stabbing-humans not stab humans]].
349** [[https://www.psychologytoday.com/blog/brainstorm/201707/how-stop-robots-harming-themselves-and-us There has been some research done on robotic self-preservation]].
350* Well-trained working dogs (service dogs, bomb dogs, etc.) and even well-trained household dogs live and work by these principles. A dog will almost never attack a human without reason, will listen to its owner's commands, and will almost always protect its owner (and often even obey them) before protecting itself. Man's best friend indeed.
351[[/folder]]
