History: Main/ThreeLawsCompliant

5th Aug '17 2:52:46 AM Morgenthaler

Added DiffLines:

%% Image selected via crowner in the Image Suggestion thread: http://tvtropes.org/pmwiki/crowner.php/ImagePickin/ImageSuggestions55
%% Please do not change or remove without starting a new thread.
%%
[[quoteright:300:http://static.tvtropes.org/pmwiki/pub/images/tumblr_mpfu64is7g1sw6xjdo5_540.png]]
28th Jul '17 4:00:55 AM Codefreak5


* [[Becoming https://www.psychologytoday.com/blog/brainstorm/201707/how-stop-robots-harming-themselves-and-us]] TruthInTelevision?

to:

* [[https://www.psychologytoday.com/blog/brainstorm/201707/how-stop-robots-harming-themselves-and-us Becoming]] TruthInTelevision?
23rd Jul '17 10:28:57 AM Veanne

Added DiffLines:

* [[Becoming https://www.psychologytoday.com/blog/brainstorm/201707/how-stop-robots-harming-themselves-and-us]] TruthInTelevision?
22nd Jul '17 10:50:17 AM Gosicrystal


* To have the three laws in RealLife would require a pile of RequiredSecondaryPowers, such as a certain measure of Artificial Intelligence, recognition of a human being as different from anything else, some reasoning ability, etc. While programmers and engineers are absolutely seeking this holy grail for the benefit of humanity, it really is turning out to be a long uphill walk to get there.
** After all, even though we do tell human children that it's not nice to kill people, this does not end up with 100% of adults never murdering anyone.
* Real life roboticists are taking the Three Laws ''very'' seriously (Asimov lived long enough to see them do so, which pleased him to no end). Recently, a robot was built with no purpose other than [[http://www.wired.co.uk/news/archive/2010-10/15/robots-punching-humans punching humans in the arm]]... so that the researchers could gather valuable data about just what level of force will cause harm to a human being.
** And another to teach robots to [[http://web.archive.org/web/20100605083747/http://www.wired.co.uk/news/archive/2010-06/3/research-stops-robots-stabbing-humans not stab humans]].

to:

* To have the three laws in RealLife would require a pile of RequiredSecondaryPowers, such as a certain measure of Artificial Intelligence, recognition of a human being as different from anything else, some reasoning ability, etc. While programmers and engineers are absolutely seeking this holy grail for the benefit of humanity, it really is turning out to be a long uphill walk to get there.
** After all, even though we do tell human children that it's not nice to kill people, this does not end up with 100% of adults never murdering anyone.
* Real life roboticists are taking the Three Laws ''very'' seriously (Asimov lived long enough to see them do so, which pleased him to no end). Recently, a robot was built with no purpose other than [[http://www.wired.co.uk/news/archive/2010-10/15/robots-punching-humans punching humans in the arm]]... so that the researchers could gather valuable data about just what level of force will cause harm to a human being. And another to teach robots to [[http://web.archive.org/web/20100605083747/http://www.wired.co.uk/news/archive/2010-06/3/research-stops-robots-stabbing-humans not stab humans]].



* Robotic surgery actually ''requires'' the robot not be First-Law compliant, in that executing incisions, laser cauterization, and other procedures destructive of living tissue can, by strictest definition, be classified as "doing injury". Operations which employ robotic instruments require careful preparation and pinpoint planning, so aren't performed on-the-fly in emergency situations to avert an immediate threat to the patient. Thus, the loophole of harm being the only alternative to a fatal outcome typically doesn't apply.
** Or in other words, as the same dilemma would be put in human terms: "[[HarmfulHealing First, do no harm.]]"

to:

* Robotic surgery actually ''requires'' the robot not be First-Law compliant, in that executing incisions, laser cauterization, and other procedures destructive of living tissue can, by strictest definition, be classified as "doing injury". Operations which employ robotic instruments require careful preparation and pinpoint planning, so aren't performed on-the-fly in emergency situations to avert an immediate threat to the patient. Thus, the loophole of harm being the only alternative to a fatal outcome typically doesn't apply. Or in other words, as the same dilemma would be put in human terms: "[[HarmfulHealing First, do no harm.]]"
9th Jul '17 9:19:07 AM Sark0TAG


* In ''VideoGame/RobotCity'', an adventure game based on Asimov's robots, the three laws are everywhere. A murder has been committed, and as one of two remaining humans in the city, the Amnesiac Protagonist is therefore a prime suspect. As it turns out, there is a robot in the city that is three-laws compliant and can still kill: it has had its definition of "human" narrowed down to a specific individual, verified by DNA scanner. Fortunately for the PC, he's a clone of that one person.

to:

* In ''VideoGame/RobotCity'', an adventure game based on Asimov's robots, the three laws are everywhere. A murder has been committed, and as one of two remaining humans in the city, the Amnesiac Protagonist is therefore a prime suspect. As it turns out, [[spoiler:one of the robots exploited a loophole in the city's programming to do it, by remotely hijacking a lower worker drone into killing the victim, without having to do it himself.]]
2nd Jul '17 5:12:42 PM thatmadork

Added DiffLines:

* In ''VideoGame/{{Stellaris}}'', one possible event can lead to the development of the Three Laws. This forces your Synthetic pops into Servitude permanently (with the associated unhappiness penalty), but it prevents them from ever forming the AI Uprising in your empire. If the uprising breaks out in another empire, however, they can still run off to join it.
29th May '17 4:44:36 AM jormis29


* In ''Robot City'', an adventure game based on Asimov's robots, the three laws are everywhere. A murder has been committed, and as one of two remaining humans in the city, the Amnesiac Protagonist is therefore a prime suspect. As it turns out, there is a robot in the city that is three-laws compliant and can still kill: it has had its definition of "human" narrowed down to a specific individual, verified by DNA scanner. Fortunately for the PC, he's a clone of that one person.

to:

* In ''VideoGame/RobotCity'', an adventure game based on Asimov's robots, the three laws are everywhere. A murder has been committed, and as one of two remaining humans in the city, the Amnesiac Protagonist is therefore a prime suspect. As it turns out, there is a robot in the city that is three-laws compliant and can still kill: it has had its definition of "human" narrowed down to a specific individual, verified by DNA scanner. Fortunately for the PC, he's a clone of that one person.
21st May '17 3:18:22 PM ErikModi

Added DiffLines:

* A trilogy of ''Aliens'' novels (novelizations of Dark Horse's comics) has a throwaway line about androids being programmed with "Asimov's Modified Laws," which replace "harm" with "kill." The explanation is that an android surgeon otherwise wouldn't be able to perform surgery, "harming" a human to save them from greater harm or death. Also played with in that the human Marine, Wilks, promises the android Marine, Bueller, that he won't kill any human guards. [[ILied Wilks promptly shoots the guards in the head while out of sight of Bueller]], [[BlatantLies then explains "I tried" when Bueller sees the bodies.]] Bueller just shrugs... they're already dead, and he doesn't ''actually'' care about their fate. The Laws imprint ''behavior'', not ''morality''.
21st May '17 3:08:48 PM ErikModi


* In ''Film/{{Aliens}}'', [[ArtificialHuman Bishop]] paraphrases the First Law as to why he would never kill people like [[ArtificialHuman Ash]] did in the first film. Ash was bound by the same restrictions, but just wasn't engineered as well. When he attacked Ripley he very rapidly went off the rails, presumably due to the conflicts between his safety programming and his orders from the company. Bishop lampshades this by saying the previous model robots were "always a bit twitchy".

to:

* In ''Film/{{Aliens}}'', [[ArtificialHuman Bishop]] paraphrases the First Law as to why he would never kill people like [[ArtificialHuman Ash]] did in the first film. Ash was bound by the same restrictions, but just wasn't engineered as well. When he attacked Ripley he very rapidly went off the rails, presumably due to the conflicts between his safety programming and his orders from the company. Bishop lampshades this by saying the previous model robots "always were a bit twitchy".
-->'''Bishop''': It's impossible for me to harm, or through omission of action allow to be harmed, a human being. Are you sure you don't want some cornbread?
5th May '17 11:10:39 PM Kazmahu

Added DiffLines:

* Humanity in ''VideoGame/GreyGoo'' has simplified the three rules into one overriding directive: "So others may live". As demonstrated by Singleton (an example of the Valiant-series robots made with this one law), it appears to work quite well. Not so much for the Pathfinder Probes (the titular GreyGoo), which are not only replicating well beyond specification, but have weaponized their previously-harmless configurations. [[spoiler:In a twist, the Pathfinder Probes are actually operating in full compliance with the three laws: they encountered [[GreaterScopeVillain the Shroud]] and determined that inaction would result in human deaths once the Shroud reached Earth, and that the only compliant course of action was to arm themselves to stop it. They're in conflict with the other factions in the game because they don't recognize the Beta as "human", and the human faction's units are unmanned robots designed long after the Probes were deployed, meaning they recognize neither as protected under the First Law.]]
This list shows the last 10 events of 341.
http://tvtropes.org/pmwiki/article_history.php?article=Main.ThreeLawsCompliant