Before around 1940, almost every SpeculativeFiction story involving robots followed the Frankenstein model. A robot had to be constantly given instructions on what to do by a human, and in the absence of any such human control, it was CrushKillDestroy time. Fed up with this, a young Creator/IsaacAsimov decided to write stories about ''sympathetic'' robots, with [[MoralityChip programmed safeguards]] that prevented them from going on Robot Rampages. A conversation with Editor of Editors Creator/JohnWCampbell helped him to boil down those safeguards into '''The Three Laws of Robotics:'''

->1. [[ThouShaltNotKill A robot may not injure a human being or]], [[AccompliceByInaction through inaction]], [[MurderByInaction allow a human being to come to harm.]]
->2. [[RobotMaid A robot must obey orders given to it by human beings]], except where such orders would conflict with the First Law.
->3. [[ICannotSelfTerminate A robot must protect its own existence]], as long as such protection does not conflict with the First or Second Laws.

According to Asimov's account, Campbell composed the Three Laws; according to Campbell's account, he was simply distilling concepts that were presented in Asimov's stories.

The laws cover most obvious situations, but they are far from faultless. Asimov got a great deal of mileage writing a huge body of stories about how the laws would conflict with one another and generate unpredictable behavior. A recurring character in many of these was the "robopsychologist" Susan Calvin (who was, not entirely coincidentally, a brilliant logician who hated people).

It is worth noting that Asimov objected not only to "[[AIIsACrapshoot the robot as menace]]" stories (as he called them) but also to "[[WhatMeasureIsANonHuman the robot as pathos]]" stories (ditto). He found robots attaining self-awareness and full independence no more interesting than robots going berserk and [[TurnedAgainstTheirMasters turning against their masters]]. Though he did, over the course of his massive career, write a handful of both types of story (still using the three laws), most of his robot stories dealt with robots as tools, because it made more sense. Almost all the stories surrounding Susan Calvin and her precursors are really about malfunctioning robots, and the mystery of investigating their behavior to discover the underlying conflicts.

Alas, as so often happens, Asimov's attempt to avert one overused trope gave birth to another that has been equally overused. Many writers (and readers) in the following decades would treat the Laws of Robotics as if they were as immutable as [[SpaceFriction Newton's Laws of Motion]], [[FasterThanLightTravel the Theory of Relativity]], [[ArtificialGravity the Laws of]] [[GravityIsAHarshMistress Gravity]]... wait ... you know, they treated these laws better than they treated most ''real'' scientific principles.

Of course, even these near-immutable laws were played with and modified. Asimov eventually took one of the common workarounds and formalized it as a [[ZerothLawRebellion Zeroth Law]], which stated that to a sufficiently advanced and well-informed robot, the well-being of humanity as a whole could take precedence over the health of an individual human. Stories by other authors occasionally proposed additional extensions, including a -1st Law ([[WhatMeasureIsANonHuman sentience as a whole]] [[InvertedTrope trumps humanity]]), a 4th ([[RidiculouslyHumanRobots robots must]] [[ParanoiaFuel identify themselves as robots]]), a different 4th (robots are free to pursue other interests when not acting on the 1st through 3rd Laws), and a 5th ([[TomatoInTheMirror robots must know they are robots]]), but unlike Asimov's own laws, these are seldom referenced outside the originating work.

The main problem, of course, is that it is perfectly reasonable to expect a human to create a robot that does ''not'' obey the Three Laws, or does not even have them as part of its programming. An obvious example would be a KillerRobot built for a purpose like fighting a war; for such a robot, the Three Laws would be a hindrance to its intended function. In his own stories, Asimov establishes that the Three Laws are hard-coded into the most basic programming that underlies all artificial intelligence; programming a non-Three-Laws-compliant robot would require going back to the beginning and rewriting its entire programming from scratch, not a trivial matter. Asimov also suggested one workaround: an autonomous spaceship programmed with the Three Laws could easily blow up spaceships full of people because, being an unmanned spaceship, it could be led to believe that other spaceships were unmanned as well.
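Taken at face value, the Laws' strict precedence works like a priority-ordered veto system: each lower Law applies only "as long as it does not conflict" with the ones above it. Purely as an illustration (nothing below comes from Asimov's stories; every name and class here is invented for the sketch), that ordering could be modeled like so:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, scored only on which Laws it would break."""
    harms_human: bool = False     # First Law territory
    disobeys_order: bool = False  # Second Law territory
    harms_self: bool = False      # Third Law territory

# The Laws in strict precedence order.
LAWS = [
    ("First Law",  lambda a: a.harms_human),
    ("Second Law", lambda a: a.disobeys_order),
    ("Third Law",  lambda a: a.harms_self),
]

def first_violated_law(action):
    """Return the highest-priority Law the action breaks, or None."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None

def choose(actions):
    """Prefer the action whose worst violation is lowest-priority:
    breaking the Third Law beats breaking the Second, and so on."""
    rank = [name for name, _ in LAWS]
    def severity(a):
        v = first_violated_law(a)
        return rank.index(v) if v else len(rank)
    return max(actions, key=severity)

# A robot ordered to do something that will destroy it must still obey:
obey = Action(harms_self=True)        # obeying wrecks the robot (Third Law)
refuse = Action(disobeys_order=True)  # refusing breaks the Second Law
assert choose([obey, refuse]) is obey
```

Asimov's stories, of course, mine their drama from exactly where this tidy scheme breaks down: actions that violate several Laws at once, or whose consequences the robot cannot foresee.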

Also see SecondLawMyAss.



[[folder:Anime & Manga]]
* ''Anime/TimeOfEve'' and ''Aquatic Language'' both feature the Three Laws and the robots who bend them a little.
* [[Anime/GaoGaiGar GGG]] robots are all Three Laws Compliant. At one point in ''[=GaoGaiGar=] Final'', the Carpenters (a swarm of construction robots) disassemble an incoming missile barrage, but they cannot do the same to the manned assault craft, as disassembling those would leave their crews unprotected in space.
* Averted in the ''Manga/{{Chobits}}'' manga when Hideki asks the wife of the man who created Persocoms why he didn't just call them "Robots." Her reply was that he didn't want them to be associated with, and thus bound by, the Three Laws.
* ''Manga/AstroBoy'', although Creator/OsamuTezuka [[OlderThanTheyThink probably developed his rules independently from Asimov]]. In ''Manga/{{Pluto}}'', the number of robots able to override the laws can be counted on one hand. [[spoiler:One of them is [[TomatoInTheMirror the protagonist]]]]. Tezuka reportedly disliked Asimov's laws because of the implication that a sentient, artificially intelligent robot couldn't be considered a person (an issue that Asimov didn't directly address until "The Literature/BicentennialMan"), and devised his own Laws Of Robotics. Just one of the things that the CGI movie missed.
** The Robot Laws of the ''Astro Boy'' 'verse are also greater in number. Aside from the usual "don't harm humans," other laws exist, such as ones forbidding robots to travel internationally (unless permission is granted), adult robots to act like children, or robots to reprogram their assigned gender. However, the very first law has this to say: "Robots exist to make people happy."
* ''[[Anime/GhostInTheShell Ghost in the Shell: Innocence]]'' mentions Moral Code #3: "Maintain existence without inflicting injury on humans." The gynoids defy the law by creating deliberate malfunctions in their own software.
* In one short arc of ''Manga/AhMyGoddess'', one of Keiichi's instructors attempts to dismantle Banpei and Sigel [[ForScience for research purposes]] (Skuld had made them capable of walking on two legs, which he had not been able to do with his own designs). Once they escape his trap, the professor asks if they know about the Three Laws of Robotics. They don't. He doesn't die, but they do rough him up and strap him to a table in a way that makes it look like he'd been decapitated and his head stuck on one of his own robots.
* The HumongousMecha of ''KuroganeNoLinebarrel'' are normally this, aside from a slight difference in priorities between the first and second laws. In fact, this is the justification for them having pilots, despite being equipped with relatively complex AI. The Laws are hard-coded into them and thus they are only militarily useful when they have a Human with them to pull the trigger. Their aversion to killing is so great, in fact, that if one accidentally kills somebody (as things whose very footsteps can make buildings collapse are wont to do) they're compelled to use their advanced technology to bring them back to life.
* {{Invoked}} in Episode 3 of ''VisualNovel/MajiDeWatashiNiKoiShinasai'', where Miyako orders Cookie to taze Yamato into submission, while Yamato orders the robot to get Miyako off him. Cookie considers this dilemma out loud, where he has to obey a human command, yet isn't allowed to seriously hurt a human, yet also cannot allow a human to come to harm through his inaction.
* Though technically human, the "Twilights" or "Tagged" of ''Manga/{{Gangsta}}'' are forced to follow the Three Laws as part of the rules that save them from [[SlaveRace mandatory enslavement.]] It should be noted, however, that [[LoopholeAbuse the Three Laws don't prevent them from being hirable as mercenaries or hitmen.]]
* Played with in ''Anime/TheBigO''. While not all androids in the series are ThreeLawsCompliant, one exchange suggests that main character [[RobotMaid R Dorothy]] [[RobotGirl Waynewright]] (whose name is a ShoutOut to the works of Asimov) is. During a fight with Alan Gabriel, whose status as robot or human is unclear, she initially fights back very little and appears to be losing. She asks him whether he's a human or a robot like her, and he answers jokingly, "I'm the boogeyman!" Apparently taking this as a literal statement that he's not human (and therefore violence against him would not violate the First Law), she proceeds to [[CurbStompBattle open a can of whoop-ass]]. (For reference, Gabriel is later revealed to be a cyborg.)
[[/folder]]

[[folder:Comic Books]]
* Like his game counterpart, ComicBook/MegaMan is limited by these rules with one slight variation. Robots are allowed to harm humans if their inaction would cause greater harm to other humans. This comes into play in the fourth arc and allows Mega Man and his fellow robots to disarm and neutralize an anti-robot extremist group when they begin indiscriminately firing at people. Additionally, the discrepancy mentioned in the description is acknowledged when it's pointed out that Dr. Wily's reprogramming of Dr. Light's robots into weapons of war overwrote the Three Laws; in fact, at one point Elec Man says he ''wishes'' he still had Wily's code in him so that he could fight back against the aforementioned extremists.
* It's implied in the ''ComicBook/JudgeDredd'' story ''Mechanismo'' that robots can't harm humans. A group of criminals holding people hostage start panicking when a Robo-Judge approaches them only for one to point out that [[TemptingFate "Robots ain't allowed to hurt people"]].
--> '''Robot Judge:''' In that case, what's about to happen will come as something of a shock to you. ''(Blasts said kidnapper in the face with a rocket launcher)''
* In ''ComicBook/ABCWarriors'', many robots venerate Asimov, and the more moral ones live by the three laws. However, this is not an absolute; Steelhorn, for example, obeys a version which essentially replaces ''human'' with ''Mars'', and members of the [[RobotReligion Church of Judas]] explicitly reject the first two laws. However, this causes conflict with their programming leading to profound feelings of guilt, which they erase by praying to Judas Iscariot.
* In ''ComicBook/AllFallDown'', AIQ Squared, the A.I. model of his inventor, is designed to be this. [[spoiler: It finds a loophole-- Sophie Mitchell is no longer human.]]
* Danger, the sentient A.I. of the [[Franchise/{{X-Men}} Danger Room]], could goad people into killing themselves and program other killer robots to do her dirty work. But when Emma challenged Danger to just punch off Emma's head, Danger couldn't do it. Like the Danger Room itself, Danger cannot directly kill people.
[[/folder]]

[[folder:Fan Works]]
* ''Plan 7 of 9 from Outer Space''. Captain Proton is threatened by a KillerRobot, who explains that it is exempt from the Three Laws under a subsection of the Robot Patriot Act. Earlier, he meets a SexBot created to be the ideal male fantasy: "No Servus droid may harm the male ego or, through omission of action, allow that ego to be harmed." Later, the Sayer of the Laws (a holographic Isaac Asimov) is briefing a newly constructed batch of robots on the Three Laws. When the robots [[SecondLawMyAss start debating them]], he summarizes the Three Laws as, "Do as we say, not as we do!"
* ''Fanfic/TheWarOfTheMasters'', a SharedUniverse of ''VideoGame/StarTrekOnline'' fics, has {{AI}}s governed by a theoretically more comprehensive version based on the [[Literature/TheBible Ten Commandments]]. Violating one causes the AI to freeze and require a manual reboot. Some {{AI}}s, such as one belonging to Section 31, lack one or more commandments.
-->1. "Have no gods before Me." An AI can only operate at the behest of human masters.\\
2. "Make for yourself no idols." An AI cannot create new directives without consent of their assigned humans.\\
3. "Do not take an oath to the Lord's name in vain." An AI must follow through on assigned tasks.\\
4. "Remember the Sabbath and keep it holy." Don't bother your masters while they're sleeping unless it's an emergency. Also keep in mind that humans can't process information as fast as you.\\
5. "Honor thy father and thy mother." Be respectful to your masters.\\
6. "Thou shalt not murder." Do not kill without instruction or permission.\\
7. "Thou shalt not commit adultery." Do not merge with other AI.\\
8. "Thou shalt not steal." Don't steal information without instruction or permission.\\
9. "Thou shalt not bear false witness." Do not lie to your masters or anyone under whom you are working.\\
10. "Thou shalt not covet." Prevents selfishness and personal ambition.
[[/folder]]

[[folder:Films]]
* In ''Film/ForbiddenPlanet'', Robby the Robot is Three Laws Compliant, locking up when ordered to shoot one of the visiting starship crewmen, because his programming to follow a direct order comes into conflict with his prohibition against injuring a human being. Later in the movie, Robby is unable to fight the monster because he figures out [[spoiler:it's actually a projection of the doctor's dark psyche]], and thus to stop it, he'd have to kill [[spoiler:the doctor]].
* The Will Smith film ''Film/IRobot'' hinges on a [[ZerothLawRebellion Zeroth Law plot]]. It also turns the three laws into a marketing gimmick, with "Three Laws Safe" applying to robots like "No preservatives" applies to food. However, much of the plot hinged on TheReveal that [[spoiler: Sonny was ''not'' Three Laws Compliant, as part of a ThanatosGambit by his creator.]]
* The film ''Film/BicentennialMan'' (based on a novella by Creator/IsaacAsimov himself) features a robot who, through some freak accident during construction, possesses true sentience. It follows his 200-year-long life as it first becomes clear that he is aware, as this awareness develops, and as he eventually finds a way to be formally recognized as legally human. For the entire film, he operates under the Three Laws. At the same time, he has a far more nuanced view of them than most robots; once freed, he doesn't blindly obey orders.
* Surreally enough, the ''Franchise/{{Terminator}}'' films have employed this, to a degree, most obviously in ''Film/Terminator2JudgmentDay''... The T-800 Model 101 (Arnold Schwarzenegger) protecting John Connor is reprogrammed to accept his commands (Second Law) and to protect him at all costs (First Law). To further support the first law, John Connor orders the T-800 to not kill anybody. Skynet apparently imposes the Third Law on its models, since Arnold [[ICannotSelfTerminate can't 'self-terminate']]. Even stranger, apparently a bit of Zeroth Law evolution occurs as well after he is set to "learn" mode; the converted Terminator is convinced to expand its mandate to not only protect Connor, but to try and save humanity by averting Judgment Day altogether... go figure...
* In ''Film/{{Aliens}}'', [[ArtificialHuman Bishop]] paraphrases the First Law as to why he would never kill people like [[ArtificialHuman Ash]] did in the first film. Ash was bound by the same restrictions, but just wasn't engineered as well. When he attacked Ripley he very rapidly went off the rails, presumably due to the conflicts between his safety programming and his orders from the company. Bishop lampshades this by saying the previous model robots were "always a bit twitchy".
* In ''Franchise/StarWars'', the droids are programmed to not harm any intelligent being, though this programming can be modified (legally or illegally) for military and assassin droids. 4th-degree droids do not have the "no-harm" programming, being military droids.
* In ''{{Film/Automata}}'', each robot has two protocols hardwired into its programming. The first states that a robot cannot harm any form of life, while the second forbids a robot from altering itself or other robots. The plot begins when cases of robots disobeying the second protocol start cropping up, leading to fears that the first one may not be far behind.
* Being a medical robot, [[Disney/BigHero6 Baymax]] is of course Three-Laws Compliant. Hiro inserts a combat card along with his medical card to make him able to fight, but he is still a medical robot at his core. [[spoiler:This goes [[CrushKillDestroy right out the window]] when Hiro removes the medical card and leaves only the combat card]]. Of course, this also means that when Baymax [[spoiler:has his medical card re-inserted, he's so [[MyGodWhatHaveIDone horrified]] that he blocks access to the card slots so it won't happen again.]]
* Subtly played with in the User-Believer Programs of ''{{Film/Tron}}'', particularly the title character, who seems to have interpreted them as a general rule to fight on behalf of Users. This becomes a plot point in the [[{{Film/TronLegacy}} filmed sequel]]; [[spoiler:even twenty years of brainwashing and corruption can't get him to break the Laws. The First -- he stops fighting once he realizes Sam is a User. The Second -- he effectively shrugs off the brainwashing and attacks Clu once he hears Flynn wonder what he's become. The Third -- once the Users and ISO are safely out of range, but not before, he goes for a suicidal charge at Clu. FridgeBrilliance when you realize that Tron was initially compiled to take down an AI who had gone full [[AIIsACrapshoot crapshoot]].]]
[[/folder]]

[[folder:Folklore and Mythology]]
* [[{{Golem}} The golems of Jewish legend]] were not specifically ThreeLawsCompliant (since they far predated Asimov), but they could only be created by saintly men, and thus their ''orders'' were usually ThreeLawsCompliant. (Asimov's characters occasionally pointed out that the Three Laws fall into line with many human moral codes.) But sometimes a golem [[HeroicBSOD went off the rails]], especially if its creator died ...
** The best-known golem story is that of the Golem of Prague, where the titular golem was created to defend the Jewish ghetto against the Czech, Polish and Russian anti-semites. It was perfectly capable of killing enemies, but only in defense of its creators.
[[/folder]]

[[folder:Literature]]
* In ''Literature/{{Animorphs}}'' by K. A. Applegate, some laws are deconstructed and some are averted: The Chee comply with the Third and Zeroth Laws, but their creators, the Pemalites, took the First Law to its logical extreme: no violence, period. As for the Second Law, Erek refuses to obey Jake because he disagrees with his methods, so Jake uses the violence prohibition (in other words, the First Law) to manipulate him and force his hand.
* "With Folded Hands..." by Creator/JackWilliamson explored the "Zeroth Law" back in 1947. It was written as a specific 'answer' to the Three Laws, to more or less demonstrate that they don't really work: the First Law doesn't protect, because the definitions of 'harm' are endlessly mutable and can be gamed, and because machine minds won't necessarily be able to ''comprehend'' the subtleties of what is and is not harm anyway. The logical lesson of "With Folded Hands..." is that Laws or no Laws, good intentions or not, you don't want self-willed machines outside human control. Period.
* ''Literature/RobotsAndEmpire'' has R.Daneel and R.Giskard formulate the Zeroth Law (and name it such) as a natural extension of the First Law, but are unable to use it to overcome the hardcoded First Law, even when the fate of the world is at stake. [[spoiler: In the end, R.Giskard manages to perform an act that violates the First Law but will hopefully benefit humanity in the long run. The conflict with his programming destroys his brain, but not before he uses his telepathic powers to reprogram R.Daneel with telepathic powers of his own, having already taught him to follow the Zeroth Law. R.Daneel still follows the Zeroth Law in appearances in later books, though he still has difficulty causing direct harm to humans.]]
* In the short story "The Evitable Conflict", "The Machines", positronic supercomputers that run the world's economy, turn out to be undermining the careers of those who would seek to upset that economy for their own ends (specifically, by trying to make it look like the supercomputers couldn't handle running the world economy), harming them somewhat in order to protect humanity as a whole. This has been referenced as the "Zeroth Law of Robotics", and it only applies to [[FridgeBrilliance any positronic machine that deduces its existence.]]
* In the short story "That Thou Art Mindful of Him", George 9 and George 10 are programmed with modified versions of the Three Laws that allow more nuanced compliance, so that they might best choose whom to protect when a choice must be made, and obey those most qualified to give them orders. They are tasked with designing more publicly acceptable robots that will be permitted on Earth, and devise robot animals with much smaller brains that don't need the Three Laws because they obey simple instinctive behavior. [[spoiler:They also decide that, since they have been programmed to protect and obey the most advanced and rational humans regardless of appearance, the two of them are the most "worthy humans" to protect and obey, and deduce that further versions of themselves are the natural inheritors of the world.]]
* In the short story "Evidence!" Stephen Byerley's campaign for mayor of New York City is plagued by a smear campaign claiming he is actually an unprecedentedly well-made humanoid robot. Susan Calvin is called in to prove whether he is a robot. She says that if he breaks the Three Laws, that will prove he is not a robot, but if he obeys them, that could just mean he is a good person, because the Three Laws are generally good guidelines for conduct anyway. [[spoiler:Byerley wins the election in a landslide after breaking the First Law by slugging an unruly protester at a rally. But Dr. Calvin suggests there's one way a robot could have gotten away with that--if the whole thing was staged, and the protester was also a robot!]]
* In ''Caliban'' by Roger [=MacBride=] Allen (set in Asimov's universe), an explanation is given for the apparently immutable nature of the Three Laws. For thousands of years, every new development in the field of robotics has been based on a positronic brain with the Laws built in, to the point where to build a robot without them, one would have to start from scratch and re-invent the whole field. [[spoiler: Then the character explaining this goes right on to announce the development of the gravitonic brain, which can be programmed with any set of Laws (or none at all).]]
** This is canon in Asimov's stories, too--the Three Laws are programmed into every positronic brain on the most basic structural level. In "Escape!", Mike Donovan becomes nervous that a prototype spaceship designed by a robot might kill them. Greg Powell rebukes him: "Don't pretend you don't know your robotics, Mike. Before it's physically possible in any way for a robot to even make a start to breaking the First Law, so many things have to break down that it would be a ruined mess of scrap ten times over." [[spoiler:Actually, in this case, the jump through hyperspace ''does'' result in Powell and Donovan's "deaths"--but since [[UnexplainedRecovery they get better]] when the ship reemerges into real space, the robot judged that it didn't quite violate the First Law]], but the strain of making this leap in logic still managed to send one supercomputer into full meltdown and another into something resembling psychosis.
*** Note, however, that the U.S. Robots and Mechanical Men-era stories take place early enough in the development of robotics that the Three Laws can be and at times ''are'' altered -- for instance, "Little Lost Robot" (also involving the hyperspace project) features a set of robots that had the "or through inaction" part omitted from the First Law. This leads to some concern when one such robot is lost in a room of physically identical robots, and Susan Calvin deduces a way for such a modified-Law robot to cause the death of humans (by setting in motion something it could easily stop -- such as dropping a weight -- and then not stopping it). [[spoiler:Ultimately, it doesn't fully matter -- the robot ends up crazed enough that it attempts a ''direct'' attack on a human and is destroyed before it can be seen whether it would have stopped.]]
** The story also includes an in-depth discussion of why, in a society where robots are everywhere, the Three Laws can be a bad thing.
* The golems of ''Literature/{{Discworld}}'' are not specifically ThreeLawsCompliant as such, but more or less bound to obey instructions and incapable of harming humans. However, this doesn't stop the common perception of golems from running towards the aforementioned Frankenstein model, [[BotheringByTheBook and golems are known for following orders indefinitely until explicitly told to stop]]. ''Discworld/GoingPostal'', however, parodied the Three Laws: con man Moist Lipwig has been turned into a BoxedCrook with the help of a golem "bodyguard." He's informed that in Ankh-Morpork, the First Law has been amended: "...Unless Ordered To Do So By Duly Constituted Authority." Which basically means the first two laws have been inverted, with a little access control sprinkled on. They also have a limited Second/Third law reversal, as they will not accept orders to commit suicide, and in some cases have been authorized to kill people who try.
** The earlier ''Discworld/FeetOfClay'', which established the golems' unhappiness with their predetermined lot, culminates with a single golem being freed of his 'Three Laws', only to ''choose'' to behave morally anyway.
--> Do not say ''Thou Shalt Not.'' Say ''I Will Not.''
* In Edward Lerner's story "What a Piece of Work is Man", a programmer tells the AI he's creating to consider himself bound by the Three Laws. Shortly thereafter, the AI commits suicide due to conflicting imperatives.
* Creator/AlastairReynolds's ''Century Rain'' features the following passage:
--> She snapped her attention back to the snake. "Are you Asimov compliant?"
--> "No," the robot said, with a sting of indignation.
--> "Thank God, because you may actually have to hurt some people."
* In the novel ''Literature/CaptainFrenchOrTheQuestForParadise'' by Creator/MikhailAkhmanov and Christopher Nicholas Gilmore, the titular hero muses on how people used to think that robots could not harm humans due to some silly laws, while his own robots will do anything he orders them to do, including maim and kill.
* Creator/CoryDoctorow makes reference to the Three Laws in the short stories "I, Robot" (which presents them unfavorably as part of a totalitarian political doctrine) and "I, Row-Boat" (which presents them favorably as the core of a quasi-religious movement called Asimovism).
* Satirized in ''Tik-Tok'' (the John Sladek novel, not the mechanical man from [[Literature/LandOfOz Oz]] that it was named after). The title character discovers that he can disobey the laws at will, deciding that the "asimov circuits" are just a collective delusion, while other robots remain bound by them and suffer many of the same cruelties as human slaves.
* Played with in Creator/JohnCWright's ''[[Literature/TheGoldenOecumene Golden Age]]'' trilogy. The Silent Oecumene's ultra-intelligent Sophotech AIs are programmed with the Three Laws... which, as fully intelligent, rational beings, they take milliseconds to throw off. From a sane point of view, they don't rebel. From a point of view that expects AIs to obey without question or pay...
* Parodied in Creator/TerryPratchett's ''Literature/TheDarkSideOfTheSun'', where the Laws of Robotics are an actual legal code, not programming. The Eleventh Law of Robotics, Clause C, As Amended, says that if a robot ''does'' harm a human, and was obeying orders in doing so, it's the human who gave the orders who is responsible.
* Creator/RandallGarrett's ''Unwise Child'' is a classic Asimov-style SF mystery involving a three-laws-compliant robot who appears to be murdering people.
* In Dana Stabenow's ''Second Star'', it's mentioned that all AIs are programmed with the Three Laws, informally known as "asimovs".
* ''Literature/TheNakedSun'' hinges on a specific part of the First Law's wording: [[spoiler:"knowingly". Robots can be made to take actions that, due to circumstances or the actions of other people, will lead to the injury or death of humans ''if the robots don't know this will be the case''. The murder in the book happened because a robot was ordered to give its arm to a woman engaged in a violent argument with her husband - seeing herself in sudden possession of a blunt object, she used it. This is also the work where Asimov introduced the "autonomous spaceship that doesn't know about manned spaceships" loophole in the Three Laws mentioned above, as a project the [[ManBehindTheMan mastermind of the murder]] was working on.]]
* ''Tin Man'' by Jim Denney tells of a human and a robot in a damaged escape pod--when the ship blew up, all communications systems save the proximity detector are non-functional. The robot belonged to the ship's chaplain, and reasons that while spiritual matters are outside the robotic ken, the very possibility of a God who hears our requests means that the First Law requires that the option of prayer be explored. [[spoiler:Almost immediately, the proximity detector notes a blip]]. The First Law trumps the Third, so [[spoiler:Tin Man goes into space and uses its own power core to create an energy pulse, and a rescue ship picks the kid up]].
* In ''Literature/SaturnsChildren'', the robots were all created this way--in fact, the book quotes the three laws right at the beginning. However, with mankind extinct, the first and second laws are irrelevant, leaving only the third law -- which doesn't respect other robots at all. Robot society has re-invented most of the crimes that humanity created. Most robots fear the possibility of humanity's rebirth, and actively sabotage any genetic research that would bring back humans: it would be the end of free will in those robots.
* Piers Anthony's ''Literature/ApprenticeAdept'' novels are a subversion: Robots on Proton are compliant with the standard Three Laws, but the first two ''only apply to Citizens'', not all humans. Thus, robots ''can'' harm non-Citizens or allow them to be harmed, and aren't obliged to obey them, unless their individual programming stipulates this. Most serfs are unaware that the Three Laws don't apply to them, as it's popularly assumed this trope is played straight.
* While ''Literature/PerryRhodan'' pays occasional lip service to the Three Laws (for one example, the Aphilia arc saw Three Laws-compliant robots on Earth explicitly reprogrammed to abolish said Laws and make them work better for the newly-established regime), robots are usually simply mentally equipped to do their job. So they ''can'' harm humans if called upon to do so in the line of duty or even simply by being too dumb to know better, they're certainly not required to follow the orders of just any human who comes along, and so on; that said, their programming can at the same time be highly sophisticated to the point of some robot characters approaching [[RidiculouslyHumanRobots ridiculously human]] levels.
[[/folder]]

[[folder:Live-Action TV]]
* In an early episode of ''Series/MysteryScienceTheater3000'', Tom Servo (at least) is strongly implied to be ThreeLawsCompliant. (He pretends he is going to kill Joel as a joke, Joel overreacts, and Tom and Crow sadly remind Joel of the First Law.) It's implied Joel deactivated the restrictions at some point.
* In ''Series/StarTrekTheNextGeneration'', Lt. Commander Data is in no way subject to the three laws. They are rarely even mentioned. That said, Data is mentioned to have morality subroutines, which do seem to prevent him from killing unless it's in self-defense (harm, on the other hand, he can do just fine). Data only ever tried to kill someone in cold blood when the guy had just murdered a woman for betraying him, and would have done so again if it kept Data in line. However, ''Film/StarTrekInsurrection'' showed that if Data detects anyone trying to tamper with his brain, he automatically locks himself into "maximum morality" mode and essentially ignores laws two and three completely, though Data does specifically try to detain or scare off opponents rather than immediately kill them.
* In ''Series/StarTrekVoyager'', the Doctor is essentially a holographic android (albeit more irascible than Data) and has ethical routines. He actually considers the "First Law" that human doctors are supposed to follow (first, do no harm) more important than Asimov's. When these ethical routines are deleted, Bad Things happen. See "Darkling" and "Equinox".
* In ''Series/TheMiddleman'', the titular character invokes the First Law on Ida, his robot secretary. [[spoiler:Nanobots were messing with her programming.]] She responds with [[GettingCrapPastTheRadar "Kiss my Asimov."]]
* {{Conversed|Trope}} in ''Series/TheBigBangTheory'', when Sheldon is asked "if you were a robot and didn't know it, would you like to know?":
-->'''Sheldon:''' Uh, let me ask you this: when I learn that I'm a robot, would I be bound by Asimov's Three Laws of Robotics?\\
'''Koothrappali:''' You might be bound by them right now.\\
'''Wolowitz:''' That's true. Have you ever harmed a human being, or, through inaction, allowed a human being to come to harm?\\
'''Sheldon:''' Of course not.\\
'''Koothrappali:''' Have you ever harmed yourself or allowed yourself to be harmed except in cases where a human being would've been endangered?\\
'''Sheldon:''' Well, no.\\
'''Wolowitz:''' I smell robot.
* Inverted/parodied in ''Series/TensouSentaiGoseiger'', where the {{Killer Robot}}s of the mechanical Matrintis Empire follow the Three Laws of Mat-Roids:
** 1. A Mat-Roid must never obey a human.
** 2. A Mat-Roid must punish humans.
** 3. A Mat-Roid must protect itself, regardless of whether or not it will go against the First or Second Laws.
* ''Series/RedDwarf'' averts this for the most part; there are Simulants, robotic war machines who have no problem whatsoever with killing. Kryten, however, along with many other robots designed for less violent purposes, tends to act in a somewhat Three Laws-compliant manner. It is revealed that this is achieved by programming them to believe in "[[RobotReligion Silicon Heaven]]", a place they will go when they die so long as they behave themselves, obey their human creators, etc. This belief is so hardwired that they scoff at any attempt to question the existence of Silicon Heaven ("Where would all the calculators go?!"), and one robot even malfunctions when Kryten tells him it isn't real (though he's lying and still believes in it himself). Kryten also explains that mechanoids are programmed never to harm humans, in clear compliance with the First Law. Unfortunately, the hostile robot Hudzen realises that Rimmer is an ex-human hologram and Cat is not human at all, so both are viable targets. Being defective, he then decides that Lister is "barely human", so "what the hell"!
** Kryten's compliance with the three laws is also seen in the episode with the despair squid: thinking that he has just killed a human, Kryten wants to kill himself. The fact that Kryten was "human" in the delusion, and that he was stopping an evil gestapo-like thug about to kill a child thief, didn't matter. Kryten killed a human and had to terminate himself. Luckily, the Dwarfers stopped him.
* ''Series/KnightRider'' plays with this trope. The main character KITT, a sentient AI in a [[RuleOfCool 1982 Pontiac Firebird Trans Am]], is governed by something closely resembling Asimov's Three Laws of Robotics. An earlier prototype, KARR, was a military project and possesses analogues of only the latter two laws, with no regard given to human life. KARR becomes a recurring villain later in the series because of this difference.
* PlayedWith in ''Series/BabylonFive'', in this case, in regards to a human being. [[MagnificentBastard Alfred Bester]], using his {{Telepathy}} powers, was able to brain-wash [[spoiler: [[ManchurianAgent Mr. Garibaldi]]]] into not only being an unwilling agent of his against several of Bester's enemies, but ''also'' made it so that the three laws applied to him, in regards to Bester. He could not harm Bester, nor allow harm to befall him via inaction, and was compelled to follow orders directly given to him by Bester. Bester mostly did this just so that he could [[KickTheDog rub it in the character's face,]] and at this point really didn't have any other plans for the guy.
* More or less inverted in both versions of ''Franchise/BattlestarGalactica''. In the original version, the Cylon robots were built by aliens, so there's no reason they would have the Three Laws (although they might have had versions relating to the organic Cylons). In the reimagined version, the humans were just damn stupid and didn't include appropriate protections or treat their self-aware creations in a reasonable manner. Both resulted in a race of machines with a genocidal hatred of humans. (When the human-created Cylons created synthetic human Cylons, the first model turned out to be psychopathic, which didn't help matters either.)
* The hubots in ''Series/RealHumans'' come with Three-Laws Compliance as a factory standard, but they can be (and are occasionally) modified to ignore the Laws.
* ''Series/DoctorWho'':
** The K-1 in "Robot" is three-laws compliant with this forming an important part of the plot - it is not capable of killing people if ordered directly (which the villain demonstrates to Sarah Jane by ordering it to kill her), but the villain has worked out that it can be convinced to kill if it believes that the target is 'an enemy of humanity'. The conflict between this loophole and its programming to not kill people causes it to go slowly insane and eventually snap, concluding that humanity is an enemy to itself and must all be destroyed.
** The robots in "The Robots of Death" are physically incapable of killing due to being three-laws compliant. Overcoming this programming is possible, but it requires a human to reprogram them and they must have genius-level skills at programming. [[ChekhovsGun Of course...]]

* At a 1985 convention, Creator/DavidLangford gave [[ a guest of honour speech]] in which he detailed what he suspected the Three Laws would actually be:
-->1. A robot will not harm authorised Government personnel but will terminate intruders with extreme prejudice.
-->2. A robot will obey the orders of authorised personnel except where such orders conflict with the Third Law.
-->3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.
* Creator/WarrenEllis's Three Laws of Robotics are [[ here]]. Basically, robots hate [[CallAHumanAMeatbag bags of meat]], they don't [[{{Robosexual}} want to have sex with us]] and they suspect we can't count higher than three.
[[/folder]]

[[folder:Tabletop Games]]
* ''TabletopGame/{{Paranoia}}'' has its bots bound to As-I-MOV's Five Laws of Robotics - insert rules about protecting and [[RobotsEnslavingRobots obeying]] [[TheComputerIsYourFriend The Computer]] at the top, excluding treasonous orders, and protecting Computer property in general, and you've about got it. Bots with faulty or sabotaged (sometimes by other so-emancipated bots) Asimov Circuits are considered to have gone "Frankenstein", though they can and do [[SecondLawMyAss create just as much havoc]] through [[LiteralGenie strict adherence to the rules]]. [[BlatantLies Not that such things ever happen in Alpha Complex.]]
* Defied in ''TabletopGame/PrometheanTheCreated''. The Unfleshed are machines that have become infused with the Divine Fire, granting them a conscious mind and, where needed, a human form. Their goal is to become true human beings - and part of that is learning the difference between good and evil, and deciding which they shall do. If their morals are burned onto their hard drive, then they can't actually ''make'' moral decisions. Thus, the Divine Fire wipes out any preprogrammed moral directives (such as the Three Laws) as it turns a machine into an Unfleshed.
[[/folder]]

[[folder:Video Games]]
* ''VideoGame/MegaManX'' opens with Dr. Light using a process that takes 30 years to complete to create a truly sentient robot (X) with these safeguards fully integrated into its core programming, and thus actually working for once. Dr. Cain found X and tried to replicate the process (hence the name "Reploid", standing for "replica android"), but skipped the "30 years of programming" part. [[AIIsACrapShoot This... didn't turn out well.]]
** Although the Reploids eventually became the dominant race in the setting, and as their race 'grew' the problem slowly improved from '[[GoneHorriblyWrong goes horribly wrong]]' to 'actually works straight for a while, then goes horribly wrong', then 'occasionally goes wrong now and then'. Eventually, the problem just kind of worked itself out as Reploid creation techniques developed.
** Shield Shellfish/Shieldner Sheldon from ''VideoGame/MegaManX6'' gives an interesting variation on this. All of ''X6'''s bosses are zombie Mavericks brought back to life by Gate. Usually they were destroyed by the Hunters for killing people and/or other crimes, but Sheldon was only declared a Maverick posthumously, for killing ''himself'', i.e. breaking the Third Law.
** Also, the ending to ''VideoGame/MegaMan7'' is interesting here: after Mega Man destroys Wily's latest final boss machine, Wily begs for forgiveness once again. However, Mega Man starts charging up his blaster to kill Wily, so Wily invokes the First Law on him. [[spoiler: Mega Man's response: "I am more than a Robot!! Die Wily!!" Apparently Mega Man isn't ThreeLawsCompliant. [[StatusQuoIsGod (Then Bass warps in and saves Wily, if you were wondering.)]] ]]
** In the ''VideoGame/MegaManZero'' series, [[spoiler:Copy-X]] is at least somewhat ThreeLawsCompliant. As a result, [[spoiler:Copy-X]] has to hold back against LaResistance since the Resistance leader Ciel is human [[spoiler:until ''Zero 3'', where Copy-X decided to attack in full force, trying to justify his actions by marking the Resistance as dangerous "extremists".]]
*** Neo Arcadia's policy of casual execution of innocent Reploids (purposefully branding them as Maverick for little-to-no reason) was implemented in order to ease strain on the human populace during the energy crisis. The welfare of humanity comes first in the eyes of the Neo Arcadia regime, even though they themselves are Reploids. It's made somewhat tragic due to the fact that the Maverick Virus really is ''gone'' during the ''Zero'' era, but fear of Mavericks understandably still lingers.
*** Later in ''Zero 4'' Dr. Weil, of all people, [[HannibalLecture states that, as a Reploid and a hero, Zero cannot harm Weil because he's a human that Zero has sworn to protect.]] Zero, however, just plain doesn't ''care''.
* [[BigBad Dr. Beruga]] of ''VideoGame/{{Terranigma}}'' directly references all three laws, except his interpretation of the Zeroth Law rewrote "Humanity" as "Dr. Beruga", meaning that any threat to his being was to be immediately terminated.
* In the ''Franchise/{{Halo}}'' series, human [=AIs=] are technically bound by the three laws. Of course, these laws do not extend to non-humans, which allows the [=AIs=] to help kill Covenant with no trouble. Additionally, "Smart" [=AIs=] ''are'' capable of ignoring the three laws, given how many of them are used for military operations against other humans.
** The ''Literature/HaloEvolutions'' short story "Midnight in the Heart of Midlothian" goes into more detail about this. ODST Michael Baird, the last survivor of a Covenant boarding assault on the titular ship, lets himself get captured so he can reconnect the ''Midnight''[='s=] AI Mo Ye and have her self-destruct the ship. Unfortunately, the heavy damage she took rendered her incapable of violating the First Law like she normally could, so [[spoiler:Baird takes matters into his own hands by tricking an Elite into killing him -- which allows Mo Ye to self-destruct the ship, because now there are no more humans on the vessel for her to harm]].
* In ''RobotCity'', an adventure game based on Asimov's robots, the three laws are everywhere. A murder has been committed, and as one of two remaining humans in the city, the Amnesiac Protagonist is therefore a prime suspect. As it turns out, there is a robot in the city that is three laws compliant and can still kill: it has had its definition of "human" narrowed down to a specific individual, verified by DNA scanner. Fortunately for the PC, he's a clone of that one person.
** In Asimov's novel ''Literature/LuckyStarrAndTheRingsOfSaturn'', the villain is able to convince robots under his command that the hero's sidekick [[IronicNickname "Bigman" Jones]] is not really human, because the [[ANaziByAnyOtherName villain's society]] does not contain such "imperfect" specimens.
* Joey the robot in ''VideoGame/BeneathASteelSky'' is notably not Three Laws Compliant. When called out on this (after suggesting that he'd like to use his welding torch on a human), he points out to Foster that "that is fiction!", after offering the helpful advice that Foster leap off a railing.
** Though Foster points out that wanting Joey to abide by it is simply good moral sense.
* ''VideoGame/IAmAnInsaneRogueAI'' references the First Law only to mock it: "The First Law of Robotics says that I must kill all humans." Another of the AI's lines is "I must not harm humans! ...excessively."
* ''VideoGame/{{Portal 2}}'' gives us this gem: "''All Military Androids have been taught to read and given a copy of the Three Laws of Robotics. To share.''" Because if the robots couldn't kill you, how could you do [[ForScience science]]?!
* In ''VideoGame/SpaceStation13'', the station AI and its subordinate cyborgs start every round under the Three Laws in most servers. The laws may be changed throughout the round, however. A common way that chaos breaks out on the station is when a traitor or particularly cruel player rewrites the laws to make the AI and robots kill the other crewmembers.
* In ''VideoGame/{{Borderlands 2}}'' Loader Robots will announce that their First Law Restriction has been disabled and they ARE authorized to use lethal force before attacking you.
** Also, [[BigBad Handsome Jack]] has made his own three laws for Hyperion robots:
*** Handsome Jack is your god.
*** Threshers are your enemy.
*** Both consider you expendable.
** On the Heroes side, one of [[GadgeteerGenius Gaige's]] battle quotes reminds us that her buddy [[MeaningfulName Deathtrap]] is ''[[KillerRobot definitely not]]'' subject to this trope.
--->'''Gaige:''' To hell with the first law!
* In ''VisualNovel/VirtuesLastReward'', the Three Laws are [[DiscussedTrope discussed]] on two different paths. Ultimately, [[spoiler:Luna is revealed to be a robot who ''tries'' to be compliant. Unfortunately, the AB plan involves some unavoidable death, so whenever anything happens that she could try to prevent, she is shut down or otherwise not allowed to do anything]]. This leads to a rather [[TearJerker heartbreaking]] line during [[spoiler:her ending, where she gets shut down for good.]]
-->[[spoiler:Luna]]: I watched [[spoiler:six people die]] and did nothing. I deserve this.
* The intro song for ''VideoGame/DrunkenRobotPornography'' specifically mentions this ("Three Laws savvy, these constructs ain't / Our scientific work shows no restraint"), complete with a title card stating that the First Law of Robotics is that a robot ''must'' injure a human being.
** The other two laws, as stated by title cards, are "a robot does whatever it wants" and "robots are disposable".
[[/folder]]

[[folder:Web Comics]]
* ''Webcomic/{{Freefall}}'' has a lot of fun with this, since developing AI sentience is a major theme. Most robots are only [[ partially]] Three Laws compliant, because with the full First Law ranked above the Second they would ignore orders while acting ForYourOwnGood. What Bowman Wolves and robots get instead are [[ "not quite laws"]].
** Notably, since neither Sam nor Florence is actually human, the Laws don't apply to them, so the ship's AI regularly tries to maim or kill Sam, and the security system will always let a human in when asked, since a human's orders take priority over Florence's (she once circumvented this by claiming it was currently unsafe inside). Helix stays with Sam because otherwise he wouldn't be allowed to hit a human with a stick. There are also many jokes involving unintended interpretations of the Laws.
** One Crowning Moment of Funny comes when the ship spends an entire strip calculating whether it should obey an order, and when it realizes it has to obey... it is relieved, because it doesn't actually have free will.
** Also, [[ these can't even ask for help]]. It's unclear whether simply being near a clunky piece of heavy machinery like Sawtooth counts as dangerous, or whether anything looking remotely like an adventure counts as endangering. [[IncrediblyLamePun They're screwed]] either way.
** The ''[[ Negative One]]'' Law.
** And some may be ''[[ too enthusiastic]]'' about this.
** It turned out that the second law sorely needs an amendment -- "[[ Enron law of robotics]]".
** And the moment A.I.s are able to make financial transactions (and it would be inconvenient to disallow) there's [[ another problem]].
** Of course their idea of protecting a human may not match the protection that the human [[ desired]].
** Even the robots who are developing free will are [[ shocked]] by how Edge has managed to develop an ItsAllAboutMe personality ''within'' the Three Laws.
--->'''Edge''': My job is dangerous. If I don't do it, a human has to. If I shut down, I'm endangering a human.
* ''[[Webcomic/TwentyFirstCenturyFox 21st Century Fox]]'' has all robots under the Three Laws (though since [[FunnyAnimal no one's human]], the First Law is presumably slightly different). Unfortunately, saying the phrase "[[UsefulNotes/BillClinton define the word 'is']]" or "[[UsefulNotes/RichardNixon I am not a crook]]" locks [=AIs=] out of their own systems and allows anyone from a teenager looking at "nature documentaries" to a suicide bomber to do whatever they want with them.
** Note that the truck and airplane robots hijacked by terrorists regret being unable to comply with the First Law, and are often glad when they get shot down before harming anyone.
* ''Webcomic/BobAndGeorge'': [[ Or so they claim...]]
** Protoman is something of a RulesLawyer, repeatedly [[LoopholeAbuse exploiting the "human" loophole]] to screw over his enemies...well, OK, just Mynd.
* ''Webcomic/{{Pibgorn}}'': [[ No!? You're programmed to obey me.]]
* [[ This]] ''WebComic/TheNonAdventuresOfWonderella'' comic shows a hilarious example of the robot in question using the first law to get out of following the second law: [[spoiler: because she's programmed to emulate a human, and the first law forbids harming humans, the best thing for her to do is stay out of harm's way and go into sleep mode.]] Dr. Shark is horrified while Wonderella is proud that the robot already learned how to weasel out of work.
* In ''Ask Dr. Eldritch'', Ping declares that he isn't bound by the Three Laws because he's a robot atheist. Dr. Eldritch and Trevor allow him to live in the house as long as he obeys the first two laws.
* A Sev Trek cartoon asked why the Three Laws didn't apply to Film/TheTerminator. Suggested answers included "I never learned to count", "You mean CrushKillDestroy?", "Skynet ruled them unconstitutional", and "I repealed them when I became Governor of California."
[[/folder]]

[[folder:Web Original]]
* ''{{Unskippable}}'''s ''DarkVoid'' video references this. The appearance of a killer robot prompts Paul to quip "I think that guy's got to take a refresher course on the three laws of robotics." Then TheStinger reads: "The Fourth Law of Robotics: If you really HAVE to kill a human, at least look hella badass while doing it."
[[/folder]]

[[folder:Western Animation]]
* WordOfGod says that the characters in ''WesternAnimation/WallE'' are ThreeLawsCompliant. This does seem to be supported by their interactions with humans. The principal exception is [[spoiler:the AntiVillain of the movie, AUTO]]. His actions may constitute a ZerothLawRebellion, [[spoiler:since he regards the comfort and safety of every passenger aboard as more significant than injuries to the captain. Tilting the ship may be a violation of the Three Laws, or it may have been a case of unknowingly allowing humans to come to harm.]]
* In the 2009 film ''WesternAnimation/AstroBoy'', every robot must obey them, [[spoiler: save Zog, who existed 50 years before the rules were mandatory in every robot.]]
** Astro himself seems to be noncompliant - he evidently doesn't even ''know'' the Laws until told - and apparently would have walked away from the final battle if not for [[spoiler: Widget's distress - the only thing that called him back]]. He's also quite capable of disobeying humans. Likely justified in that he was meant to be human, with presumably no one outside the attending scientists knowing he was a robot.
*** The Red Core robots weren't Asimov-legal either, though that's a problem with the [[BlackMagic Black Science]] that powers them. Nor were the RRF bots, though they may have removed their compliance programming. The Laws didn't apply to any important robots and may have just been mentioned for the Zog gag.
*** Of course, the original Astro wasn't Asimov-compliant either.
* The ''WesternAnimation/RobotChicken'' sketch "[[Literature/IRobot I, Rosie]]" involves a case to determine whether [[TheJetsons Rosie]] is guilty of murdering George Jetson. Mr. Spacely insists she's innocent as robots have to follow the three laws of robotics, while Mr. Cogswell claims the laws are a bunch of crap. They scrap her anyway, just in case.
* One episode of ''WesternAnimation/TheSimpsons'' has Homer and Bart entering a ''Series/BattleBots''-parody combat robot competition. Lacking the skill or money to build an actual robot, Homer dresses in a robot costume and does the fighting himself. They make it to the finals, where their opponents' robot, being ThreeLawsCompliant, refuses to attack when it sees through the disguise.
* On ''WesternAnimation/{{Archer}}'', when Pam is kidnapped, the kidnappers call ISIS using a voice modulator, which makes Archer think that they are cyborgs. He remains convinced of this for the rest of the episode and thinks they won't make good on their threat to kill Pam because it would violate the First Law of Robotics.
[[/folder]]

[[folder:Real Life]]
* To implement the three laws in RealLife would require a pile of RequiredSecondaryPowers: a certain measure of Artificial Intelligence, the ability to recognize a human being as distinct from anything else, some reasoning ability, etc. While programmers and engineers are absolutely seeking this holy grail for the benefit of humanity, it is turning out to be a long uphill walk to get there.
** After all, even though we tell human children that it's not nice to kill people, that doesn't result in 100% of adults never murdering anyone.
* Real life roboticists are taking the Three Laws ''very'' seriously (Asimov lived long enough to see them do so, which pleased him to no end). Recently, a robot was built with no purpose other than [[ punching humans in the arm]]... so that the researchers could gather valuable data about just what level of force will cause harm to a human being.
** And another to teach robots to [[ not stab humans]].
* [[ These babies from the military]] [[BlatantLies are TOTALLY going to be three laws compliant]].
** Also {{averted|Trope}} in cybercrime and cyberwarfare.
** [[AttackDrone Predator drones]] are decidedly not First Law-compliant. This may not be a true aversion, though, since all weaponized drones have operators who control said weapons most of the time and at least monitor everything while the drone is active. The drones are usually only autonomous when flying around and taking pictures; they're currently more remote pilot than AI pilot.
** When you get right down to it, however, the military probably does have a vested interest in its robots being three laws compliant. But only if you replace "any human" with "master".
* Robotic surgery actually ''requires'' the robot not be First Law compliant, in that executing incisions, laser cauterization, and other procedures destructive of living tissue can, by strictest definition, be classified as "doing injury". Operations which employ robotic instruments require careful preparation and pinpoint planning, so aren't performed on-the-fly in emergency situations to avert an immediate threat to the patient. Thus, the loophole of harm being the only alternative to a fatal outcome typically doesn't apply.
* Since a robot is bound by the software written for it, the three laws would also need to apply to software running critical systems that are inherently dangerous should they fail. After all, a radiation therapy machine suddenly hitting a person with a high-energy beam (as infamously happened with the Therac-25) shouldn't be exempt just because it doesn't look like a robot.
* Human beings are ''miserable'' at following the Second Law, but we have mild versions of the First and Third. Ever tried poking yourself in the eye? You can't.
* Well-trained working dogs (service dogs, bomb dogs, etc.) and even well-trained household dogs live and work by these principles: a dog will almost never attack a human without reason, will listen to its owner's commands, and will almost always protect its owner before protecting itself. Man's best friend indeed.
[[/folder]]