History Main / ThreeLawsCompliant

7th Feb '16 12:39:38 PM valos
Added DiffLines:
** Or in other words, as the same dilemma would be put in human terms: "[[HarmfulHealing First, do no harm.]]"
28th Jan '16 10:34:52 AM TeraChimera
* The Will Smith film ''Film/IRobot'' hinges on a [[ZerothLawRebellion Zeroth Law plot]]. It also turns the three laws into a marketing gimmick, with "Three Laws Safe" applying to robots like "No preservatives" applies to food. However, much of the plot hinged on TheReveal that [[spoiler: Sonny was ''not'' Three Laws Compliant, as part of a ThanatosGambit by his creator.]]
to:
* The Will Smith film ''Film/IRobot'' hinges on a [[ZerothLawRebellion Zeroth Law plot]]. It also turns the three laws into a marketing gimmick, with "Three Laws Safe" applying to robots like "No preservatives" applies to food. However, much of the plot hinged on TheReveal that [[spoiler:Sonny was ''not'' Three Laws Compliant, as part of a ThanatosGambit by his creator. Twisted when Three Laws Noncompliant Sonny is more moral than Three Laws Compliant VIKI, choosing to save humans not because it's in his programming, but because it's the right thing to do.]]
** It also deconstructs the idea that Three Laws Compliance will automatically solve most problems with robots: before the movie, Spooner was in a car accident that sent him and another car into a river. The driver of the other car was killed on impact, but the passenger, a 12-year-old girl named Sarah, survived. A robot jumped into the river to help, but it calculated that Spooner had a greater chance of survival than Sarah, and so chose to save him in spite of his protests. Spooner feels that, three laws or not, a human would've known that it was better to go after Sarah than him.
28th Jan '16 10:11:37 AM Willbyr
* The HumongousMecha of ''KuroganeNoLinebarrel'' are normally this, aside from a slight difference in priorities between the first and second laws. In fact, this is the justification for them having pilots, despite being equipped with relatively complex AI. The Laws are hard-coded into them and thus they are only militarily useful when they have a human with them to pull the trigger. Their aversion to killing is so great, in fact, that if one accidentally kills somebody (as things whose very footsteps can make buildings collapse are wont to do) they're compelled to use their advanced technology to bring them back to life.
to:
* The HumongousMecha of ''Manga/LinebarrelsOfIron'' are normally this, aside from a slight difference in priorities between the first and second laws. In fact, this is the justification for them having pilots, despite being equipped with relatively complex AI. The Laws are hard-coded into them and thus they are only militarily useful when they have a human with them to pull the trigger. Their aversion to killing is so great, in fact, that if one accidentally kills somebody (as things whose very footsteps can make buildings collapse are wont to do) they're compelled to use their advanced technology to bring them back to life.
7th Jan '16 10:20:08 PM PaulA
->1. [[ThouShaltNotKill A robot may not injure a human being or]], [[AccompliceByInaction through inaction]], [[MurderByInaction allow a human being to come to harm.]]
->2. [[RobotMaid A robot must obey orders given to it by human beings]], except where such orders would conflict with the First Law.
->3. [[ICannotSelfTerminate A robot must protect its own existence]], as long as such protection does not conflict with the First or Second Laws.
to:
# [[ThouShaltNotKill A robot may not injure a human being or]], [[AccompliceByInaction through inaction]], [[MurderByInaction allow a human being to come to harm.]]
# [[RobotMaid A robot must obey orders given to it by human beings]], except where such orders would conflict with the First Law.
# [[ICannotSelfTerminate A robot must protect its own existence]], as long as such protection does not conflict with the First or Second Laws.

* ''Literature/RobotsAndEmpire'' has R.Daneel and R.Giskard formulate the Zeroth Law (and name it such) as a natural extension of the First Law, but are unable to use it to overcome the hardcoded First Law, even when the fate of the world is at stake. [[spoiler: In the end, R.Giskard manages to perform an act that violates the First Law but will hopefully benefit humanity in the long run. The conflict with his programming destroys his brain, but not before he uses his telepathic powers to reprogram R.Daneel with telepathic powers of his own, having already taught him to follow the Zeroth Law. R.Daneel still follows the Zeroth Law in appearances in later books, though he still has difficulty causing direct harm to humans.]]
* In the short story ''The Evitable Conflict'' "The Machines", positronic supercomputers that run the worlds economy, turn out to be undermining the careers of those who would seek to upset the world's economy for their own ends (specifically, by trying to make it look like the supercomputers couldn't handle running the world economy), harming them somewhat in order that they might protect humanity as a whole. This has been referenced as the "Zeroth Law of Robotics" and only applies to [[FridgeBrilliance any positronic machine who deduces its existence.]]
* In the short story ''That Thou Art Mindful Of Him'' George 9 and 10 are programmed with modified versions of the 3 laws that allow more nuanced compliance with the laws, that they might best choose who to protect when a choice must be made, and obey those most qualified to give them orders. They are tasked with coming up with more publicly acceptable robots that will be permitted on Earth, and devise robot animals with much smaller brains that don't need the three laws because they obey simple instinctive behavior. [[spoiler: They also decide that as they have been programmed to protect and obey the humans of the most advanced and rational, regardless of appearance, that the two of them are the most "worthy humans" to protect and obey, and deduce that further versions of themselves are the natural inheritors of the world.]]
* In the short story "Evidence!" Stephen Byerley's campaign for mayor of New York City is plagued by a smear campaign claiming he is actually an unprecedentedly well-made humanoid robot. Susan Calvin is called in to prove whether he is a robot. She says that if he breaks the Three Laws, that will prove he is not a robot, but if he obeys them, that could just mean he is a good person, because the Three Laws are generally good guidelines for conduct anyway. [[spoiler:Byerley wins the election in a landslide after breaking the First Law by slugging an unruly protester at a rally. But Dr. Calvin suggests there's one way a robot could have gotten away with that--if the whole thing was staged, and the protester was also a robot!]]
* In ''Caliban'' by Roger [=MacBride=] Allen (set in Asimov's universe), an explanation is given for the apparently immutable nature of the Three Laws. For thousands of years, every new development in the field of robotics has been based on a positronic brain with the Laws built in, to the point where to build a robot without them, one would have to start from scratch and re-invent the whole field. [[spoiler: Then the character explaining this goes right on to announce the development of the gravitonic brain, which can be programmed with any set of Laws (or none at all).]]
** This is canon in Asimov's stories, too--the Three Laws are programmed into every positronic brain on the most basic structural level. In "Escape!", Mike Donovan becomes nervous that a prototype spaceship designed by a robot might kill them. Greg Powell rebukes him: "Don't pretend you don't know your robotics, Mike. Before it's physically possible in any way for a robot to even make a start to breaking the First Law, so many things have to break down that it would be a ruined mess of scrap ten times over." [[spoiler:Actually, in this case, the jump through hyperspace ''does'' result in Powell and Donovan's "deaths"--but since [[UnexplainedRecovery they get better]] when the ship reemerges into real space, the robot judged that it didn't quite violate the First Law]], but the strain of making this leap in logic still managed to send one supercomputer into full meltdown and another into something resembling psychosis.
*** Note however that the U.S. Robotics and Mechanical Men-era stories take place early enough in the development of robotics that the Three Laws can and at times ''are'' altered -- for instance, ''Little Lost Robot'' (also involving the hyperspace project) features a set of robots that had the 'or by inaction' part omitted from the First Law. This leads to some concern when one such robot is lost in a room of physically identical robots, and Susan Calvin deduces a way for such a modified-Law robot to cause the death of humans (by setting in motion something it could easily stop -- such as dropping a weight -- and then not stopping it). [[spoiler: Ultimately, it doesn't fully matter -- the robot ends up crazed enough that it attempts a ''direct'' attack on a human and is destroyed before it is seen if it would have stopped.]]
to:
* Creator/IsaacAsimov's own works:
** ''Literature/RobotsAndEmpire'' has R. Daneel and R. Giskard formulate the Zeroth Law (and name it such) as a natural extension of the First Law, but are unable to use it to overcome the hardcoded First Law, even when the fate of the world is at stake. [[spoiler:In the end, R. Giskard manages to perform an act that violates the First Law but will hopefully benefit humanity in the long run. The conflict with his programming destroys his brain, but not before he uses his telepathic powers to reprogram R. Daneel with telepathic powers of his own, having already taught him to follow the Zeroth Law. R. Daneel still follows the Zeroth Law in appearances in later books, though he still has difficulty causing direct harm to humans.]]
** In the short story "The Evitable Conflict", the positronic supercomputers that run the world's economy turn out to be undermining the careers of those who would seek to upset the world's economy for their own ends (specifically, by trying to make it look like the supercomputers couldn't handle running the world economy), harming them somewhat in order that they might protect humanity as a whole. This has been referenced as the "Zeroth Law of Robotics" and only applies to any positronic machine who deduces its existence.
** In the short story "That Thou Art Mindful Of Him", George 9 and 10 are programmed with modified versions of the three laws that allow more nuanced compliance with the laws, that they might best choose who to protect when a choice must be made, and obey those most qualified to give them orders. They are tasked with coming up with more publicly acceptable robots that will be permitted on Earth, and devise robot animals with much smaller brains that don't need the three laws because they obey simple instinctive behavior. [[spoiler:They also decide that as they have been programmed to protect and obey the humans of the most advanced and rational, regardless of appearance, that the two of them are the most "worthy humans" to protect and obey, and deduce that further versions of themselves are the natural inheritors of the world.]]
** In the short story "Evidence", Stephen Byerley's campaign for mayor of New York City is plagued by a smear campaign claiming he is actually an unprecedentedly well-made humanoid robot. Susan Calvin is called in to prove whether he is a robot. She says that if he breaks the Three Laws, that will prove he is not a robot, but if he obeys them, that could just mean he is a good person, because the Three Laws are generally good guidelines for conduct anyway. [[spoiler:Byerley wins the election in a landslide after breaking the First Law by slugging an unruly protester at a rally. But Dr. Calvin suggests there's one way a robot could have gotten away with that--if the whole thing was staged, and the protester was also a robot!]]
** In "Escape!", Mike Donovan becomes nervous that a prototype spaceship designed by a robot might kill them. Greg Powell rebukes him: "Don't pretend you don't know your robotics, Mike. Before it's physically possible in any way for a robot to even make a start to breaking the First Law, so many things have to break down that it would be a ruined mess of scrap ten times over." [[spoiler:Actually, in this case, the jump through hyperspace ''does'' result in Powell and Donovan's "deaths"--but since [[UnexplainedRecovery they get better]] when the ship reemerges into real space, the robot judged that it didn't quite violate the First Law]], but the strain of making this leap in logic still managed to send one supercomputer into full meltdown and another into something resembling psychosis.
** The U.S. Robotics and Mechanical Men-era stories take place early enough in the development of robotics that the Three Laws can and at times ''are'' altered -- for instance, ''Little Lost Robot'' (also involving the hyperspace project) features a set of robots that had the 'or by inaction' part omitted from the First Law. This leads to some concern when one such robot is lost in a room of physically identical robots, and Susan Calvin deduces a way for such a modified-Law robot to cause the death of humans (by setting in motion something it could easily stop -- such as dropping a weight -- and then not stopping it). [[spoiler:Ultimately, it doesn't fully matter -- the robot ends up crazed enough that it attempts a ''direct'' attack on a human and is destroyed before it is seen if it would have stopped.]]
** ''Literature/TheNakedSun'' hinges on a specific part of the First Law's wording: [[spoiler:"knowingly". Robots can be made to take actions that, due to circumstances or the actions of other people, will lead to the injury or death of humans ''if the robots don't know this will be the case''. The murder in the book happened because a robot was ordered to give its arm to a woman engaged in a violent argument with her husband - seeing herself in sudden possession of a blunt object, she used it. This is also the work where Asimov introduced the "autonomous spaceship that doesn't know about manned spaceships" loophole in the Three Laws mentioned above, as a project the [[ManBehindTheMan mastermind of the murder]] was working on.]]
* In ''Caliban'' by Roger [=MacBride=] Allen (set in Asimov's universe), an explanation is given for the apparently immutable nature of the Three Laws. For thousands of years, every new development in the field of robotics has been based on a positronic brain with the Laws built in, to the point where to build a robot without them, one would have to start from scratch and re-invent the whole field. (This is canon in Asimov's stories, too--the Three Laws are programmed into every positronic brain on the most basic structural level.) [[spoiler:Then the character explaining this goes right on to announce the development of the gravitonic brain, which can be programmed with any set of Laws (or none at all).]]

29th Dec '15 7:32:28 PM NozzDogg
Added DiffLines:
* Ancient Martian robomen in ''TabletopGame/RocketAge'' are governed by a number of different sets of three laws. However, depending on a roboman's purpose, its laws may allow it to kill, or to ignore those in danger.
23rd Dec '15 9:26:14 AM Kayube
Added DiffLines:
* [[http://xkcd.com/1613/ This]] ''Webcomic/{{xkcd}}'' comic examines the consequences of reordering the three laws. Any worlds in which obeying orders is prioritized over not harming humans end up as [[RobotWar "Killbot Hellscapes"]]. Prioritizing self-protection over obeying orders leads to a [[SecondLawMyAss "frustrating world"]] in which robots won't do anything dangerous, while prioritizing self-protection over both protecting humans and obeying orders leads to a "terrifying standoff" -- robots will do what humans ask unless they're threatened, in which case they'll ''fight back''.
17th Dec '15 7:24:44 PM Abodos
The main problem, of course, is that it is perfectly reasonable to expect a human to create a robot that does ''not'' obey the Three Laws, or even have them as part of their programming. An obvious example of this would be creating a KillerRobot for a purpose like fighting a war. For such a kind of robot, the Three Laws would be a hindrance to its intended function. In his own stories, Asimov establishes that the Three Laws are hard-coded into the most basic programming that underlies all artificial intelligence; programming a non-Three-Laws-compliant robot would require going back to the beginning and rewriting its entire programming from scratch, not a trivial matter. Asimov also suggested one workaround for this: an autonomous spaceship programmed with the Three Laws could easily blow up spaceships full of people because, being an unmanned spaceship, it could be led to believe that other spaceships were unmanned as well.
to:
The main problem, of course, is that it is perfectly reasonable to expect a human to create a robot that does ''not'' obey the Three Laws, or even have them as part of their programming. Obvious examples of this would be creating a KillerRobot for a purpose like fighting a war, or a SkeleBot9000 with flesh covering designed to deceive humans about its true nature for espionage purposes. For such robots, the Three Laws would be a hindrance to their intended function. In his own stories, Asimov establishes that the Three Laws are hard-coded into the most basic programming that underlies all artificial intelligence; programming a non-Three-Laws-compliant robot would require going back to the beginning and rewriting its entire programming from scratch, not a trivial matter. Asimov also suggested one workaround for this: an autonomous spaceship programmed with the Three Laws could easily blow up spaceships full of people because, being an unmanned spaceship, it could be led to believe that other spaceships were unmanned as well.
17th Dec '15 7:19:27 PM Abodos
Alas, as so often happens, Asimov's attempt to avert one overused trope gave birth to another that has been equally overused. Many writers (and readers) in the following decades would treat the Laws of Robotics as if they were as immutable as [[SpaceFriction Newton's Laws of Motion]], [[FasterThanLightTravel the Theory of Relativity]], [[ArtificialGravity the Laws of]] [[GravityIsAHarshMistress Gravity]]... wait ... you know, they treated these laws better than they treated most ''real'' scientific principles.
to:
Alas, as so often happens, Asimov's attempt to avert one overused trope gave birth to another that has been equally overused. Many writers (and readers) in the following decades would treat the Laws of Robotics as if they were as immutable as [[SpaceFriction Newton's Laws of Motion]], [[FasterThanLightTravel the Theory of Relativity]], [[ArtificialGravity the Laws of]] [[GravityIsAHarshMistress Gravity]]... wait ... you know, they treated these laws [[ArtisticLicensePhysics better than they treated]] most ''real'' scientific principles.
13th Dec '15 7:19:58 PM nombretomado
* Danger, the sentient A.I. of the [[Franchise/{{X-Men}} Danger Room]], could goad people into killing themselves and program other killer robots to do her dirty work. But when Emma challenged Danger to just punch off Emma's head, Danger couldn't do it. Like the Danger Room itself, Danger cannot directly kill people.
to:
* Danger, the sentient A.I. of the [[Franchise/XMen Danger Room]], could goad people into killing themselves and program other killer robots to do her dirty work. But when Emma challenged Danger to just punch off Emma's head, Danger couldn't do it. Like the Danger Room itself, Danger cannot directly kill people.
10th Nov '15 6:05:14 PM nombretomado
* ''[[Webcomic/TwentyFirstCenturyFox 21st Century Fox]]'' has all robots with the Three Laws (though since [[FunnyAnimal no one's human]] I expect the first law is slightly different). Unfortunately saying the phrase "[[UsefulNotes/BillClinton define the word 'is']]", or "[[RichardNixon I am not a crook]]" locks [=AIs=] out of their own systems and allows anyone from a teenager looking at "nature documentaries" to a suicide bomber to do whatever they want with it.
to:
* ''[[Webcomic/TwentyFirstCenturyFox 21st Century Fox]]'' has all robots with the Three Laws (though since [[FunnyAnimal no one's human]] I expect the first law is slightly different). Unfortunately, saying the phrase "[[UsefulNotes/BillClinton define the word 'is']]" or "[[UsefulNotes/RichardNixon I am not a crook]]" locks [=AIs=] out of their own systems and allows anyone from a teenager looking at "nature documentaries" to a suicide bomber to do whatever they want with it.
This list shows the last 10 events of 286.