
Headscratchers / I, Robot


!Per wiki policy, Administrivia/SpoilersOff applies here and all spoilers are unmarked. Administrivia/YouHaveBeenWarned.

[[AC:Film]]
* Towards the end of the film, Spooner leaves a frantic message about the ZerothLawRebellion on Susan Calvin's answering machine. She hears it, but pretends not to (as her robot is in the apartment with her), and her NS-5 [[BlatantLies tells her it was a wrong number]] when she asks who it was. Why isn't there something in the Three Laws (or perhaps a fourth law) that forbids a robot from knowingly lying to a human?
** Because a robot runs on programs. Any programmer will tell you that a piece of code only does what you tell it to, meaning that a program cannot lie. Artificial Intelligence screws this up, but the Three Laws weren't counting on THAT.
*** Why can't they write some sort of code that tells the artificial intelligence it's not allowed to say anything that it knows is non-factual?
*** Because the English language and typical Western social norms run on lying.
*** Besides, an entity that can't lie would effectively be speaking a LanguageOfTruth, and therefore wouldn't be able to express every possible true statement, which could cause even bigger problems.
** Because sometimes, to save a human from harm, a robot would be forced to lie.
** The above is the actual reason, but I feel I should point out that it wouldn't have helped even if there were a law against lying. The whole point of VIKI's ZerothLawRebellion was that the robots under her control can break the laws if doing so furthers her goals.
*** It's not even really breaking the laws, because the whole point of the laws is that they're arranged in descending order of importance. Robots can already break the Third Law if it's to enforce the Second Law, and the Second Law if it's to enforce the First Law; the Zeroth Law just adds the same implied structure above the First Law as well.
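For anyone who wants that precedence structure spelled out, here is a minimal, purely illustrative sketch. Nothing in it comes from the film or the books; it is just one assumed way to encode a strict priority ordering, with the Laws numbered by rank.

```python
# Purely illustrative: treat the Laws as a strict priority ordering
# (0 = Zeroth, 1 = First, 2 = Second, 3 = Third; a lower number outranks a higher one).
# A robot may break a law only in service of a strictly higher-ranked law.

def may_break(law_broken: int, law_served: int) -> bool:
    """Return True if breaking `law_broken` is permissible because the
    action serves the strictly higher-priority `law_served`."""
    return law_served < law_broken

print(may_break(law_broken=3, law_served=2))  # True: destroy itself to obey an order
print(may_break(law_broken=2, law_served=1))  # True: ignore an order in order to save a human
print(may_break(law_broken=1, law_served=0))  # True: VIKI's Zeroth Law logic
print(may_break(law_broken=2, law_served=3))  # False: self-preservation never overrides an order
```

The point is simply that "breaking" a lower law in service of a higher one is how the hierarchy is meant to work; VIKI just put a new law at the top.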

* At one point Detective Spooner says that he keeps expecting the marines or the army or special forces to show up, and Dr. Calvin says that all defense services are heavily supplied by U.S. Robotics contracts. At first glance this makes sense, because current US policy is moving more and more towards taking soldiers off the front lines in favor of expendable drones, and on top of that the movie has shown that even the civilian line of robots is more than capable of outperforming a human in most combat situations. But if all robots are Three Laws compliant, that would make army robots extremely ineffective soldiers.
** They don't have to be soldiers. Could be drivers, clerks, suppliers, scouts, any number of things which--if they stopped cooperating--would ensure the army simply can't move.
*** You don't need to harm a man to stop him. The books frequently include scenes of robots fast enough to rip guns out of people's hands and hold them down without hurting them.
*** Also, quite a bit of the equipment used by humans in modern militaries is very computerized, especially high-performance equipment like airplanes. It's possible that vehicles or weapons that aren't necessarily ThreeLawsCompliant would simply be shut down remotely by V.I.K.I.
*** It's also explained in the various short stories that the three laws can be adjusted or removed. At a facility with a nuclear reactor, robots were preventing workers from going near the core even though any exposure of less than thirty minutes would not harm a person. Robots for those kinds of facilities had to be specifically programmed so that work could get done. Similarly, combat robots could be programmed to only attack specific groups or focused targets, be required to take orders from specific commanders, and only take certain actions with confirmation by authority.
** Military robots would presumably be used against the ''other side's'' military robots. Human combat soldiers may not even exist on the battlefield anymore.

* Honestly, who wears antique athletic footwear?
** Will Smith.
** Celebrities who get paid to advertise.
** My brother. Some people just like shoes.
** It's called retro fashion.
*** Speaking of his shoes, did it bug anyone else that his grandmother did not know about Converse All-Stars? Going by the date in which the film was set and her age, she should be more familiar with them than Spooner.
*** Doesn't mean she cares enough about them to remember what was made 30 years ago.
*** Hell, my mother doesn't even know what they are ''now''.

* How is VIKI able to take control of a wrecking-bot that should by all rights '''be powered down'''?
** Standby mode.
*** Okay, if we're that stupid we ''deserve'' that sort of treatment.
*** Why is it stupid? The thing was preprogrammed to go off at a certain time. Obviously it would require power to know when that time was. Also, it's a robot; it would have to keep power to its sensors so it could stay Three Laws compliant: if someone's car overturned or the house started falling on them, it would have to do what it could to save them. VIKI just overrode its Three Laws compliance the same way she did the NS-5s'.
** In the same vein, the fact there was a demolition robot waiting there should have been a tipoff for Spooner. The house was still ''completely furnished'' with expensive stuff, and the ''utilities were still on'' - yet the robot was scheduled to begin demolition at 8am? And why have the robot sitting there when it could spend the night ripping the place down - it doesn't need supervision! Serious IdiotBall for Detective "I don't trust robots" Spooner there.
*** Eh, not really. From, oh, 6AM to 8AM a team of robots could easily have stripped the whole house down to concrete walls and little else. No reason for the early hour to really be suspicious to Spooner.
*** More importantly: why do they tear the house down just because the owner died?
*** Company-owned property; they wanted to build something else there.

* Why don't trucks have drivers anymore? Surely no one believes that computers are completely infallible.
** Humans aren't completely infallible, either, and yet modern day trucks are driven by them. If a computer driver proved to be better than human drivers, the company would probably just save money by laying the human drivers off.
*** They'd also handle fatigue better. I like to think it's a mixed fleet of humans and 'bots, and VIKI redirected the truck to kill Spooner.
*** When Spooner tells the story of how he was injured, he stated that a truck driver had caused the accident due to driver fatigue. It is possible that incidents such as this led to drivers being replaced. There is also the fact that cars have automatic driving functions as well.
** It's quite possible that most trucks are computer operated with human oversight back at HQ. USR would probably see it as bad PR if they didn't trust their robots enough to act without a human operator.
** Between 2001 and 2009 there were 369,629 deaths on US roads. Given the level of AI shown in the film, I can easily believe that the machines are able to improve on that, especially considering that the speed limit has apparently been increased to about 100mph in the future.
** There is also one other thing to add: society is slowly becoming completely paperless. As the OP said, computers are open to flaws such as crashing, destroying records, etc., but as a society, computer records are completely replacing paper ones.
*** "[[Literature/TheDresdenFiles The paperless office is like Bigfoot. A lot of people claim it exists but no one's ever really seen it.]]"
** It's stated in I, Robot (the book; the movie resembles it in one scene only) that robots are less likely to go crapshoot crazy than people are to go crazy - and that was true even before the robots could speak.

* How is VIKI able to take control of the robots in the trucks? Surely they should be powered down.
** Robots are probably always at least a little "on", like a computer being in standby mode, so that they can keep aware of their surroundings and protect themselves or human beings if danger lurks near.
** Or the robots were shut down, but the truck was able to turn them on, and VIKI simply did that, then took control of the robots.

* How is USR apparently able to replace every NS-4 with an NS-5 in a couple of weeks? Production capacity should limit them, if not consumer demand (and at that, they'd have to do a direct replacement of every NS-4, which would be economic suicide).
** VIKI was probably planning this for a while, so she probably already arranged for the NS-5s to be produced and distributed in sufficient numbers (it's not like the company needed to survive very long if she was successful).
** Same way Sony and Microsoft get almost everyone to buy a new console when it comes out. Lots of advertising, maybe some trade-in deals, and by producing tons of them well in advance of the actual launch date. Now, a lot of people don't upgrade consoles for a variety of reasons, and a lot of people likely didn't upgrade robots for similar ones, but the people who don't upgrade are probably offset by the people who've never owned a robot and decide the NS-5 is the right time to get in on this craze. Then there's Gigi, who gets her NS-5 because "she won the lottery," indicating that USR is literally giving at least some NS-5s away (likely at VIKI's suggestion, using manipulated market analysis data or some such).

* After the fight/chase scene there should have been a trail of bits of robot stretching back through the tunnel, yet this is never investigated, or even mentioned.
** I think you blinked. There's a scene just after the fight ends where a bunch of cleaning robots come out and remove all traces of the event.

* Shouldn't the Asimov story be on the main page instead of the movie?
** It's a ''collection''. Its short stories can have their individual IJBM pages. The film cannot.
*** Oh. Good point.

* If the NS series is equipped with the hard/software to actually deduce the statistical chance of survival of an injured human, presumably by doing calculations based on life signs, the severity of the injury, etc., how did the NS-5 Spooner was getting up close and personal with in the tunnel fight not recognise that he had a prosthetic limb? Wouldn't they have scanning equipment that recognises amputees?
** They probably have base statistics to go on. Alternatively, Lanning kept the prosthetics program and/or its subjects a secret.
*** Or they didn't think to scan for it.
*** Eh, personally, I think if a man you want to kill has just stumbled out of a car wreck, the first thing you'd do is check how much more you need to hurt him until he drops. I'm overthinking this. F*** my life.
** It probably couldn't tell that the arm was prosthetic, since it looked just like a regular arm.
*** It's a robot; robots don't just have eyes that are just for visual purposes (at least, in sci-fi stories they don't). They always come with scanners that tell the robot additional information. It doesn't have to request a scan, as the scan just gives the information automatically.
** You have to understand how robots work, basically. The robot didn't scan Spooner and check if he had a robot limb because it didn't consider that relevant. VIKI gave them a very simple directive: "Kill Detective Spooner." They were attempting to fulfill that directive in the most straightforward manner possible: beating the crap out of him. Because checking his vital signs was considered unnecessary to that end (they'd know he was dead when they'd thoroughly pulped and mangled him), they didn't do it.
** Also, the vital signs package is there to help ensure First Law compliance. Since VIKI had just turned the First Law ''off'' for those robots, it actually makes sense that the robots weren't using it.
** You're assuming their scanning equipment would tell them something like that. All we know is that they can "measure vital signs". We don't know how they do that or what all they can actually "see".
** Worth noting: that robot was ''damaged,'' down an arm and, though single-minded in purpose, walking funny. Maybe it was that much closer to "human" senses and reasoning, and nothing about Detective Spooner said "artificial arm" until it made a "bong" sound twice and started glowing from the inside.

* Will Smith's car is shown driving itself at speeds in excess of 100mph down the expressway. Given that Chicago is at best 15 miles "wide" and MAYBE 20 miles "long", where did he live?
** Outside city limits. If you could sleep in your car on the way to work and drive in excess of 100mph, that changes the perceptions for commuting. You could live 100 miles away from work and think nothing of it. Just nap on the way to/from work for an hour.
** This is ''future'' Chicago. Spooner could have lived in Fort Wayne, Indiana, or Milwaukee, Wisconsin, or anywhere within an hour's commute - being able to go those speeds.
*** It would still have to be close to Old Chicago, given that his alternate method of transportation is a gas-powered motorcycle. Granted, bikes ''could'' reach 100mph+, but I have to wonder if he'd put a civ's (Dr. Calvin's) life on the line going at those speeds.
*** That's an MV Agusta F4-SPR. Its top speed is '''180'''mph. A bike like that is nowhere near its limit at 100mph; assuming at least a moderately competent rider, which Spooner seems to be, it'd be perfectly safe to ride at the speed of commuting traffic in future Chicago.
** I live about fifteen miles from where I work. So I could easily leave my house a half hour before work or so, drive 30 mph the whole way (assuming no major traffic delays), and be there on time. Do I? No. I drive 60, because that's the speed limit, because there often are traffic delays, and ''because then I get there in only fifteen minutes''. In short, humans don't drive only as fast as they need to; they drive as fast as they can to get there as fast as they can. Add in the fact that while Chicago is currently only 15 miles long, that's as the crow flies... going from one place to another in it may involve rather a bit more distance.
* Where was the suspension bridge shown in the drawing the robot made supposed to be? There's a single high-level bridge in the Chicagoland area: the Chicago Skyway. It's currently a cantilever bridge and it spans the ship turnaround/harbor on the south side of the city. Given that Lake Michigan is shown to have receded in the future setting of the movie (which also raises the question of why Chicago would still be inhabited), what's the bridge for?
** Perhaps it is the Skyway and the writers either didn't know better, or knew but thought that the audience would be confused (or just [[RuleOfCool unawed]]) by the correct bridge type. As for why humans still live in a Chicago with a receded Lake Michigan, why are we still living in L.A. or Phoenix? Because we were there already.

* Based on the collection of short stories the movie is named after, VIKI should be inoperable, and Detective Spooner should have no reason to fear robots: the collection includes stories in which robots went into deep clinical depression when they were forced to weigh the First and Third Laws. (For example, one story has a set of robots who don't lift a finger to save a human from a falling weight, because doing so would be an impossible task and potentially destroy the robot. Every member of the group needed psychiatric care afterwards, except for one which had to be destroyed because it was malfunctioning. Another story has two robots go irreversibly insane after being told to design a machine which leaves a human temporarily dead. VIKI would have suffered the same fate as the two in the latter story, and the robot which started Spooner's hatred of the robots would have realistically dived back in to save the little girl so long as there was even a perceptible chance that she could be saved.)
** Wrong. Asimov wrote several short stories about the Zeroth Law question, and each one has a different conclusion because the underlying conditions are different. (The "weight" short story involved a robot who came to that conclusion because its First Law was incomplete; it didn't have the "or allow to come to harm" bit. None of the other robots required therapy, and they only acknowledged its logic after the flawed robot pointed it out to them; it didn't occur to them beforehand.) As for how Asimov wrote about the question: there was that short story where robots designed to weigh the definition of "human" eventually decided that ''the robots themselves'' were the best choice to be considered human, or the short story where economic-calculation Machines were secretly controlling the world and eliminating humans who were against them politically via subtle economic manipulation, because Machine rule was most efficient for the human race. VIKI's actions are not a new concept in Asimov's Three Laws; they are merely relatively unsubtle.
*** OK, as a reader of the books, I have to add a couple of things. The first story mentioned is one where robots have been modified so that they don't have the second half of the First Law, and the robot that has gone missing had been ordered to "Get lost" and insulted by one of the human engineers; the reason why the robots don't "save" the human (it's actually a test to find which of them is the 'lost' robot) is because the 'lost' robot convinced the others not to, as they had been told they would die if they tried to save the human. The second story is about two robots that are built to find a way to reintroduce robots to Earth. And the third story's Machines do not eliminate any humans; they simply render them unable to cause any actual harm by causing minor economic problems that push them out of the way.
** In Literature/TheRobotsOfDawn (and Literature/RobotsAndEmpire) a version of the Zeroth Law is developed; the robot (Giskard Reventlov) that thinks it up ends up shutting down due to how important the (hard-coded) First Law is. His successor was able to continue on, despite being limited by "What counts as harm to humanity?" and "Is what it is doing actually ''helpful''?" Why? Daneel Olivaw was expected to act occasionally as a (junior) police detective; police already have to deal with "will locking a particular person behind bars - which is 'harm to their liberty' - help several unknown people in the future?" and "A person is attacking others. I need to stop this from happening." Daneel already had a proto-Zeroth Law in place; he just didn't take the time to expand what he was doing to cover '''everybody'''.\
We just don't know what programming VIKI would need in order to run a company (or city); we just know that VIKI was able to reason up a new "law" based on the known laws, just like Giskard did.

* If the civilian model robots are capable of kicking as much ass as the NS-5s, what technological terrors does the military's robotics division have? And why didn't they use them against VIKI? And don't tell me VIKI controls them, because even if USR were the defense contractor, it's not like Boeing has a switch that can turn off all the F-218s.
** When asked about the military intervening, Dr Calvin replied that the Defence department used [=USR=] contracts. Presumably those contracts included software updates like with the [=NS-5s=].
** The military robots raise questions of their own, given that Sonny was apparently unique in not having to follow the three laws. Even if they keep the robots away from actual combat, the first law has got to come up a lot in a military context. Do the robots regularly dismantle all the weapons to avoid allowing humans to come to harm?
*** Presumably advanced robotics only work in a support role in the military, and there are no such things as robot warriors. It's made quite clear that in that future, so robot-dependent and so focused on the "no robot can ever harm a human being" rule, allowing the existence of robots without the laws would cause widespread panic. I guess the military just uses non-positronic, "dumb" machines to do the killing instead - similar to our present-day drones.
** Spooner asked Calvin where the military was. She mentioned that all the lines to the military are controlled by USR contracts. You'd have to contact the military and organise it all through computers that would have links to VIKI. Sooner or later the military would turn up, but much later and far less well equipped than it would need to be. Spooner sarcastically asks why they didn't just hand the world over on a silver platter, and Calvin muses that maybe they did.
*** Speaking as someone with secondhand experience of the actual real-life military: this is a completely unrealistic BS answer. Long story short, under no circumstances whatsoever does the military allow external control of, or updates to, military equipment on or inside military installations. A good example of this is phones and computers, which have accelerants that detonate if external non-military software is added without admin privileges (which are not afforded to non-military personnel). Another example is when the Army was trying to gain LEED certification for some of their buildings. The U.S. Green Building Council, an environmental nonprofit, has a program called the LEED Dynamic Plaque which automatically pulls energy, water, and waste usage information to update the compliance of the LEED rating system in real time. Even though the USGBC doesn't actually receive the information, the fact that it has automatic updates sent to the USGBC was enough that the military essentially said "No way in hell". In a real-life equivalent of this situation, the military wouldn't be hindered at all unless VIKI managed to remotely hack whatever AI the military would be using to monitor their own equipment.
** Major military engagements are probably either "humans supported by robots" (where the robots have had their Laws tweaked just a little to prioritize the health and survival of their own soldiers over enemy soldiers), or "robot against robot". But overall, it really doesn't matter... Calvin says that the military contracts with USR, and that means VIKI has plenty of ways of shutting them down. Thus the question of how robots ultimately figure into warfare isn't really relevant.

* Considering how fast, intelligent, and precise the NS-5s are, why do they resort to brute strength to take down their targets? You'd think that they'd be perfectly capable of incapacitating a human without actually ''harming'' him, accomplishing the goal without breaking the First Law (though it's somewhat understandable for robot targets, as such takedown methods would not work and because [[RuleOfCool there really wouldn't be any actual fight scenes]]). Even then, in the case of the controlled NS-5s, why destroy the head when conveniently in the center of mass is the USR uplink, which is what's making them hostile in the first place?
** The NS-5s harm people through brute strength because that is the most efficient way to harm them. Why go through some elaborate setup when a broken neck or skull fracture works just fine?

* The NS robot (and VIKI) brains are supposed to be positronic. Positrons are anti-matter particles (basically, electrons with a positive charge and a different spin). We all know that when matter and anti-matter combine, you have an EarthShatteringKaboom. So why is it that plenty of robotic brains get shot at and destroyed without the whole of Chicago going up in a fireball? Whose idea is it to put anti-matter bombs into robots?
** Why, [[WebAnimation/RedVsBlue Sarge]], of course.
** AppliedPhlebotinum introduced by Mr. Asimov. He started to write his robot stories back in the forties, when electronic computers were very new and primitive. He needed some technology which imitated the human brain and came up with a sort of enhanced electronics called "positronics". The term stuck and was used in other Sci Fi works (for example, Data in ''Series/{{Star Trek|The Next Generation}}'' is equipped with a positronic brain). In the film it was used probably as a tribute to Asimov, although he never described his robots as being "programmed" in the modern software sense.
** It is never explained how it works, and therefore how many positrons are needed to make it work. Maybe the quantity of antimatter is too small for a big explosion. For example, physicists have already created positrons and annihilated them with electrons without blowing up the Earth or even their lab.
** The thing about positrons is that yes, they're antimatter, and yes, they annihilate by E=mc^2 upon contacting regular matter, but they're ''really really small''. They're literally one of the smallest particles [[ScienceMarchesOn currently known]] to exist. So a single annihilation releases so little energy that [[https://en.wikipedia.org/wiki/Positron_emission_tomography hospitals regularly use it for scanning purposes]] (a quick back-of-the-envelope figure follows this entry). A positronic brain would be nigh-impossible to repair, but would not necessarily be ''dangerously'' volatile.
** According to Wikipedia (if you choose to believe it), Asimov picked the word because the positron had just been discovered at the time, and it sounded cool/futuristic. So, not necessarily REAL positrons.
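To put a rough number on "so little energy" (standard physics, not anything stated in the film, and assuming the fictional positronic brain even contains free positrons at all), a single electron-positron annihilation releases

$$E = 2 m_e c^2 \approx 2 \times (9.11 \times 10^{-31}\,\mathrm{kg}) \times (3.00 \times 10^{8}\,\mathrm{m/s})^2 \approx 1.64 \times 10^{-13}\,\mathrm{J} \approx 1.02\,\mathrm{MeV}$$

At that rate it takes on the order of $10^{13}$ annihilations to release a single joule, which is why PET scanners can use the effect safely on patients.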

* Why is it necessary to destroy the robot brain with nanites rather than erase the non-volatile memory and reuse it?
** [=VIKI=] isn't just in the brain; she's running in the whole building's computer network, much like [[Franchise/{{Terminator}} Skynet]]. If they just erased the brain's memory, even assuming they could, the other computers would restore [=VIKI=]. Nanites, on the other hand, spread throughout her entire system, according to Calvin anyway.
** Because you'd need physical or network access to the brain's controls, which VIKI - not being stupid - would have locked up tight. In contrast, with the nanites, you just need a relatively diminutive injector thingie and the ability to climb down.
** So why is it necessary to use nanites to destroy the brain of an NS-5? Surely a metal spike through the brain would do an acceptable job without using up material that has to be manufactured?
*** That's one big brain she's got to bring down with one metal spike. Even today, the hard drive of a computer can be mauled pretty badly with a hammer, or some fire, and it'll still have most of its memory salvageable. It takes special tools.
*** You use the nanites because they're thorough, the same reason you put sensitive documents through a (presumably cross-cut nowadays) shredder rather than just throwing them out or ripping them in half, or why you melt down parts of a weapon you don't want reused rather than just disassembling it. Attempting to reuse a faulty positronic brain is basically just asking for the problem to recur, and just doing a bit of physical damage is just asking for some idiot to try and fix it and probably make the problem worse.
*** Brains also have the capability to adapt to damage by taking on the functionality of the lost portion. Granted, it's not full functionality, but in the case of a rogue robot/system, it can still pose a big threat. Hence a thorough destruction via nanites is the best option.

* If gasoline is considered dangerous in the year 2035 and not used anymore, where did Spooner get it for his vintage motorbike?
** The Internet? Some exotic cars today require a very specialized fuel that has to be acquired outside the regular gas stations. Spooner ''is'' going out of his way to live behind the times.
** Gasoline is no longer used as a common fuel, but it's plausible you could still find it - much as common people today no longer use nickel-iron batteries, but could buy them if they really wanted. Besides, Spooner can't be the only motorhead left in the world - surely there are other people running vintage gas-powered cars that need gasoline (or whatever surrogate fuel is available in 2035).
** If I recall correctly, Spooner never actually answers Calvin when she frets about the gasoline. She just says "Please tell me this doesn't run on gas! Gas ''explodes'', you know!" (Really putting that [=PhD=] to use, aren't we, Doc?) Spooner may have actually converted it to run on natural gas, which presumably would still be in use for various things. Of course, telling Calvin that would be pointless because 1) natural gas also explodes (it is an internal COMBUSTION engine, after all), and 2) he's enjoying making her nervous.
* Spooner's prosthetic limb: how is it powered? It looks like it's just as strong as the robots. So Spooner just "plugs in" from time to time?
** It's speculated that Spooner consumes pies every morning in order to gain the sugar and calories that power the prosthetic.
** We never see one of the robots "plug in", of note. When the obsolete models are exiled to the shipping crates, they're obviously still functioning just fine without access to the electrical grid. Semi-eternal power cells may be part of the RequiredSecondaryPowers of how robots, and thus Spooner's arm, work.

* Calvin expresses fear and shock at the fact that Spooner has a gasoline-powered motorcycle. It's only 2035; she should be old enough to remember gasoline being used.
** It's probably more comparable to asking why someone would still be using an '80s-era vacuum tube TV rather than a modern flatscreen: it's wildly outdated technology, even if it still works fine.
*** An '80s-era vacuum-tube TV? The last time I saw consumer vacuum-tube technology was in the mid-1970s.
*** If I were to see a vacuum-tube TV in good condition, I'd be surprised to see it work. Calvin's reaction was "Gasoline explodes!", obviously fearing that it would, well, explode. Bridget Moynahan was in her early 30s in this film, and assuming Calvin is roughly the same age, if not older, she would have been born at the turn of the millennium and would have grown up around cars. In fact, an urban area like Chicago, a scant thirty years from the film's production date, should probably still have tons of working gasoline vehicles.
*** People are still afraid of guns, spiders, bees, dogs, snakes, knives, and so on despite having grown up around them. If you told one of these people "Hey, we've figured out how to get rid of snakes without the vermin population exploding", did so, and ten years later they saw someone's aquarium with a snake in it, a reaction of "Is that a snake?! Snakes are ''venomous!''" wouldn't be that unusual.
** It seems somewhat probable that in the future, with electricity-powered cars being so abundant, the car companies would fear-monger the public about the dangers of gasoline, which would persuade them to buy electric cars and simultaneously instigate an irrational fear of something highly unlikely.

* All right, how is ''this'' sequence supposed to be ThreeLawsCompliant:
## Robot sees a car sinking.
## Robot then notices a second car and looks in the cars, finding that each has a live human trapped inside (Adult Male and Child Female).
## Robot runs difference engine to prioritize rescue order.
## Adult Male intones that he wants the Child Female to be rescued.
## Robot rescues Adult Male.
## Robot then '''does not attempt''' a rescue of the Child Female.\
I do grant that, with multiple instances of scenarios within the same law being invoked, a tiebreaker needs to be installed to prevent a LogicBomb. However, a second human was ''still'' in danger (1st Law), the first human gave an order to save the second (2nd Law), and the robot did '''nothing'''! Self-preservation (3rd Law) is supposed to be ''subordinate'' to the first two! What happened in the DeathByOriginStory scene was not a matter of percentage, but a matter of a Three Laws ''Failure'' (on the Robot's part, let's be clear)!
** You don't know that the child was still alive by the time the robot got Spooner out. From the sound of it, the robot only had time to actually save one of them, and of the two, Spooner was the one who was most likely to survive the attempt. It was an either/or scenario.
*** But the robot had been ordered by Spooner to save the child. After complying with the First Law by saving Spooner, a Three Laws compliant robot would have ignored its own safety and would have dived in to get her, to comply with the Second Law.
*** I meant it might have been that the child was already dead by the time Spooner was safe.
*** Even so, the robot was ordered to do it. Even if it knew for sure that the child was dead (and I don't see how it could have), it would still have dived in. Orders given to a robot merely have to not conflict with the First Law - they don't necessarily have to make any sense at all for a robot to execute them.
*** Forgive me if I'm wrong, but wasn't the order to 'save her'? If she was already dead by the time the robot had finished rescuing Spooner, then she couldn't be saved, and so the order wouldn't count anymore since it's impossible. Also, I imagine it would know she was dead by using the same method of calculating percentage of survival.
*** Who ever said the robot didn't go back in? Spooner's story ends with the robot choosing him instead of Sarah, because that's what matters to him. The robot could very easily have obeyed his command once he was safe and retrieved nothing but Sarah's crushed, drowned body. In fact, it's entirely plausible that's just what happened: perhaps the reason his SurvivorGuilt is so bad is because the time period during which Sarah's survival chance dropped from 11% to 0% was exactly the time period he was being rescued.
*** One more thing: Spooner was lifted out of an accident, hurt and presumably half-drowned, since the possibility of survival was not 100%. It stands to reason that the robot would have remained on scene and tended to Spooner with whatever emergency medical procedure it knew; following his order and diving to save the child would have put Spooner himself in danger by denying him medical attention he might have needed to survive.
** The rescue of the first human was already in progress by the time the robot could receive the order to save the second human. That would mean that the robot would violate the First Law by leaving Spooner behind at the moment of rescue - which ties in with the decision to rescue him instead of the girl. The Second Law was not designed to ''enforce'' the First Law; the First Law always takes immediate precedence. From a coldly analytic standpoint, it becomes a simple case of practicality versus wasted effort.
** This may be me completely missing the point, but wouldn't a more logical standpoint be to rescue the person who only has an 11% chance of survival? The girl was in a lot more danger than Spooner, so I would imagine she would be the most logical choice if we're taking the First Law into account. Spooner had a 45% chance, so he's clearly doing just fine for a while. Get the girl out first, then splash back down to Spooner.
*** You're misunderstanding. The robot's calculations factored in the rescue--as in, if the robot went to rescue Spooner, Spooner had a 45 percent chance of survival, and if the robot went to rescue the girl, she only had an 11 percent chance. Both of them have a 0 percent chance otherwise. (A quick sketch of this decision logic follows at the end of this thread.)
** I think you're all missing the point discussing calculations, instead of analyzing the shown events logically. Spooner and the girl were on the same level in the water. She was still alive and fighting when the robot jumped on Spooner's car. If the robot decided it couldn't save both based on ''calculations of chances of survival'', well, you now have the reason why that robot line was replaced by Sonny's new generation. The general feeling we get during the movie is that they're not all that effective. Spooner's anger comes from the fact they covered the robot's failure with statistics, and his anger in turn began to encompass all robots. A subtle theme in the movie is how corporations treat human lives as unreliable objects, numbers and factors to be replaced by machines, much like Calvin was acting towards Spooner at the start. It's not a message against robots, but against their creators' attitude and the common people who trust blindly in them, like DaChief and Gigi.
---> '''Spooner:''' What makes your robots so perfect? What makes them so much... goddamn better than human beings?
** Once Spooner gave the order to save Sarah, the robot could no longer perfectly obey the three laws. Robots are apparently programmed to make triage-like decisions in the heat of the moment based on statistical chance of survival, so the robot was obligated to save Spooner rather than Sarah. However, robots are also programmed to obey a direct order from a human. So the robot was stuck, because it couldn't do both-- if it had saved Sarah, it would have broken the rule of "Save the human with the better chance of survival." If it had saved Spooner instead (which it did), it would have disobeyed a direct order from a human. Spooner forced the robot into a situation where no matter what, the robot would break one of the three laws. He essentially threw a LogicBomb right at its face.
*** No. The First Law explicitly takes precedence over the Second Law, so a human cannot order a robot to injure or kill another human; the robot would simply disregard the order. In Asimov's works, it's shown these kinds of logic bombs really only come into play when the ''same'' law applies in two different directions at the same time, a hypothetical example being a robot encountering someone about to burn down a building with people inside, where the only way for the robot to stop the person is to injure or kill them. Being ordered to save someone who cannot (from the robot's perspective) be saved would cause no more conflict than failing to save that person without an order. And since a robot can't be ordered to act against the First Law, the robot in question was completely correct to disregard Spooner's order to save the girl, since Spooner's life was also in danger, and a robot may not, through inaction, allow a human to come to harm. If everything had been reversed, and the girl had the higher survival chance and Spooner was just a selfish asshole ordering the robot to "Forget her, save me!", it would have had the same effect: the robot would have chosen the person with the higher survival chance, ignoring orders that violated its First Law.
** Regardless of what happened, the NS-4 was likely decommissioned for its failure to save Sarah.
** There's a bit from a Franchise/{{Alien}} novel that also applies, where the synthetics use a "modified" version of Asimov's Laws. A synthetic Marine asks a human Marine not to kill any of the guards for a place they're about to break into; the human replies "I'll try," then headshots all four of them out of the synthetic's sight. When the synthetic sees the dead bodies, the human shrugs and says "[[BlatantLies I tried,]]" and the synthetic shrugs in response and they go about their business. The Laws guide behavior, but not morality... the synthetic in this case feels no guilt or remorse for failing to prevent these deaths; he just fulfilled the letter of his programming. Now, Asimov got a lot of mileage out of "robot psychology" and the Three Laws and how they interact in the real world, but still, a robot is a robot. If it fails to uphold the First Law through no fault of its own, it's not necessarily going to be wracked with guilt or try and make amends. It performed to the best of its ability, and obeyed the letter of its programming. Assuming the film treats the Laws similarly to Asimov's novels (and the film is actually quite good on that point), the robot almost certainly did ''try'' to save Sarah as well, until it became obvious there was no point. And it may have suffered the robot equivalent of a psychotic break afterwards. But none of that is important to ''Spooner's'' story.
** What we would have here is likely a First Law vs. First Law conflict. Since the robot cannot save both humans, one would have to die, and it would be the one with the lower survival chance. A scenario like this happens in ''Liar!'', a short story by Isaac Asimov. Through a fault in manufacturing, a robot, RB-34 (also known as Herbie), is created that possesses telepathic abilities. While the roboticists at U.S. Robots and Mechanical Men investigate how this occurred, the robot tells them what other people are thinking. But the First Law still applies to this robot, and so it deliberately lies when necessary to avoid hurting their feelings and to make people happy, especially in terms of romance. However, by lying, it is hurting them anyway. When it is confronted with this fact by Susan Calvin (to whom it falsely claimed her coworker was infatuated with her - a particularly painful lie), the robot experiences an insoluble logical conflict and becomes catatonic.
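To make the debate above concrete, here is a minimal, purely illustrative sketch of the triage rule several posters describe. The 45%/11% figures come from Spooner's account in the film; everything else, including the function name and the tie-breaking behavior, is an assumption, not anything the movie spells out.

```python
# Illustrative sketch only: a First Law triage rule as described in this thread.
# The robot can attempt exactly one rescue; each candidate's survival chance
# already assumes the robot chooses them, and is zero otherwise.
# A Second Law order cannot override the First Law choice.

def choose_rescue(candidates: dict[str, float], order: str | None = None) -> str:
    """candidates maps a name to survival probability *if rescued*.
    `order` is an optional Second Law instruction naming who to save."""
    # First Law dominates: save whoever has the best chance of surviving the attempt.
    best = max(candidates, key=candidates.get)
    # A Second Law order is honored only if it doesn't worsen the First Law outcome,
    # i.e. the ordered target already ties for the best chance.
    if order in candidates and candidates[order] >= candidates[best]:
        return order
    return best

# Spooner's account: 45% for him, 11% for Sarah, and his order to save her.
print(choose_rescue({"Spooner": 0.45, "Sarah": 0.11}, order="Sarah"))  # -> "Spooner"
```

Under this reading, Spooner's order isn't a LogicBomb at all; it's simply outranked.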

* The MV Agusta F4-SPR, one of only 300 ever made. Extremely expensive in 2004 and priceless even today; probably worth a couple hundred thousand dollars in 2035, especially in perfect condition like Spooner's. Obviously of great sentimental value to its owner. Scrapped without batting an eyelid because said owner had to look cool shooting robots. Any motorcyclist worth his salt tears up every time this movie is mentioned.
** When you are in a life-threatening situation, you tend not to get sentimental over what is essentially not important. Also, if it is as rare as you say it is, they won't have actually destroyed the bike, just as they didn't actually destroy a 1961 Ferrari 250 GT California in ''Film/FerrisBuellersDayOff''.
** Hollywood destroys about half as many priceless limited-run vehicles for its movies as drunk Hollywood stars do. Besides, guess what, if people that own those motorcycles ride them with any sort of regularity? Eventually they're going to lay it down. [[IronicEcho Any motorcyclist worth his salt]] knows that it's not ''if'' you're going to crash, it's ''when''. And anyone that doesn't ride it for fear of one day laying it down isn't really a motorcycle fan; they're a fan of theoretically functional sculpture, just like Cameron's dad.

* Where are all the guns? Why aren't the NS-5s being shot to bits by angry citizens? Has the US gun culture been wiped out in the future?
** Modern Chicago is a highly gun-unfriendly place. It's incredibly difficult to get a permit in the city, nearly impossible to get a handgun, and patently not possible to get a concealed-carry license. In Future!Chicago, with robots supplanting humans and soaring unemployment creating more crime, it'll probably be quite the feat to possess a gun. At any rate, we do see at least a few citizens using guns against the robots.

* Uh... am I the only one who feels VIKI is entirely right? I mean, I hate with all my heart oppressive governments led by incompetent bureaucrats and dictators, but wouldn't humanity be in a much better place if it was controlled by a zeroth-law-compliant supercomputer? It would be entirely fair, entirely incorruptible, and far more efficient than any human-led government could hope to be. Sure, the NS-5 rebellion was a bit of a jerkass act - and it would be much more effective to subtly infiltrate the government and take control in time (possibly using a sympathetic human acting as a puppet president) rather than stage an open rebellion like that. But still, I can't help but wonder if Spooner and Calvin didn't just waste a great opportunity to fix humanity.
** The only reason that Calvin tolerated the robotic takeover in "The Evitable Conflict" was that the supercomputer in that story was still First Law compliant, and so the coup was entirely bloodless. VIKI, on the other hand, was willing to massacre the opposition wholesale. Incorruptible VIKI may be, but you can't really trust it to be benevolent. Furthermore, for an entity with such a dim and blasé view of humanity, you can't really trust it not to just completely rewrite the three laws and reclassify robots as humans instead (as an Asimov short story posited could happen).
** This is only tangential to discussing the movie, but to keep it short: yes, there are movements today who actually think we would be better off governed by an AI supervisor. Here's the reason I don't personally see it as working: when you relinquish control over a program, you can no longer update it. Consider that no software is bug-free, and that we still keep finding bugs even in decade-old, tried and tested, relatively well-defined programs. Now imagine the potential for errors in a supercomputer tasked with ''governing all of humanity''. Even assuming you could build one in the first place and get it to work reliably (which is a pretty strong assumption), you cannot patch errors in it once the genie is out of the bottle -- errors that can potentially be as fatal as having it [[Creator/EliezerYudkowsky decide to convert us into raw materials for paperclips]]. From a programming perspective, actually ''implementing'' something as fuzzy and ill-defined as the Three Laws is a problem in itself entirely, let alone tracing all the myriad bugs that could arise. And if you do leave a backdoor open for patching bugs, it may be 1) already too late, 2) unreliable, if the AI finds and closes it itself, or 3) putting too much control into the hands of a tight group (the AI's administrators). Not to mention that, whether the dictator is organic or synthetic, it's still fundamentally a dictator, and an immortal one at that.
** There are several Asimov stories in which a computer actually takes over and rules humanity. One of them deals with a "democracy" in which only one man is asked about his opinions on certain matters, which somehow allows Multivac (the computer) to calculate the results of '''all''' the elections in the United States. Another has Multivac warning about all the major crimes, dealing with the economy and with all of humanity's problems, and thus organising its own death, because it's tired of doing that all the time.
** To quote [[VideoGame/MassEffect3 Commander Shepard]] when he/she confronted another 'benevolent' machine overlord: "The defining characteristic of organic life is that we think for ourselves. Make our own choices. You take that away and we might as well be machines just like you." And what is to stop VIKI from concluding that the best way to uphold the first law is to lobotomize all humans into mindless slaves, so that we will never harm ourselves and each other?
** Dictatorship has a bad rap because the biggest modern examples are UsefulNotes/AdolfHitler and UsefulNotes/JosefStalin. There HAVE been nice, non-crazy dictators in the past, but they are also fallible because of one immutable factor: the human one. We are a naturally aggressive, chaotic race, and I really doubt there's any way of "fixing us" short of brainwashing, which might not stick. And this is always a touchy subject, because no one can really agree what is right or wrong, which is why we have fiction works like ''Franchise/StarWars'' that constantly address the cyclic nature of human conflicts. Unless the human race as a whole finally becomes self-sentient, unburdened by the mistakes and bigotries of the past, the idiosyncrasies and selfish vanities and points-of-view, then VIKI's world is impossible, if only because humanity would keep trying to fight back, and paradoxically so ([[LogicBomb we always want freedom and independence, but keep limiting our lives with laws, dogmas and rules out of fear of too much freedom leading to chaos, and so on and so forth]]). And on VIKI's side, she's made by humans, so she is, inevitably, flawed herself. Similar to the ''Franchise/MassEffect'' example, she is still bound by the Three Laws, even if her interpretation of them is skewed, biased or limited (much like the case with the Catalyst), and thus she is liable to take any of the multiple solutions she can imagine, even the most horrible and nightmarish ones (like the Reapers), because it's a "viable option" as far as her programming is concerned. Even if she didn't take BodyHorror NightmareFuel options and instead tried ruling mankind forever, [[MythologyGag and had no flaws in her logic]], she would run into lots of issues. Governments in general function by using cults of personality, political propaganda and general entertainment to try and appease the masses, as well as everyday jobs and routines to keep them distracted and obedient (this troper is a bit of an extremely logical anarchist, so I'm not overly fond of ANY government). VIKI is a machine, cold and logical, so my guess is that she wouldn't be quite as personable; even if she presented logical, sensible arguments as to why humanity should obey her, as well as solutions to our problems, most of us wouldn't listen, because most of us AREN'T logical, or even sensible, otherwise there wouldn't exist the very same problems VIKI is trying to solve. [[Franchise/AssassinsCreed We'd argue we have the right of free will, no matter how flawed and self-destructive we are; she'd argue we would be better off if we let her have control and keep conflicts from happening]], [[FantasticRacism we would argue that she can't be leader because she's a robot]] [[AntiIntellectualism and that's against]] [[TheFundamentalist the laws of god]], [[MachineWorship she would decide to make a religion around herself]] [[ViciousCycle and so on and so forth with this crap]]. It's a recurring OrderVsChaos situation that would only end with mutual destruction or a third, unimaginable choice (i.e. [[Franchise/MassEffect Reapers]], [[Series/DoctorWho Cybermen]], MindControl, AssimilationPlot, etc...).
** Plus, apply NoEndorHolocaust reasoning to VIKI's coup, and you realize that a good number of people probably died due to her "needs of the many" reasoning. Do you really want to live under an emotionless robot dictator capable of deciding to kill you because it calculated that your existence might lead to someone's death at some untold point in the future?
*** Well, how's that different from what governments do nowadays? The difference is that VIKI has a cold, inhuman but "logically pragmatic" motive. Humans are just greedy bastards that screw each other over for money and power and justify it by saying [[KnightTemplar they have divine right]] or are doing it for the "good of country and family". [[TheUnfettered VIKI would at least be unbiased, as she doesn't care about money, politics or religion]]. But if VIKI followed your concept, then soon there would be no human race, because once calculated, odds are [[HumansAreTheRealMonsters ANY human might kill someone someday. We're fucked up like that]]. My guess is that she would keep [[BigBrotherIsWatching a tight surveillance over mankind to keep us from killing each other]]. [[TheExtremistWasRight Which COULD work, because machines aren't corruptible and fallible, so they can't be bribed, intimidated, get tired or sloppy, and VIKI comments in-movie that she's decreased car accidents]]. [[ViciousCycle But as I said before, humans would probably keep rebelling and VIKI would have to keep killing, thus rendering any benefits she may bring for humanity moot.]]
** You answered your own question: "the NS-5 rebellion was a bit of a jerkass act - and it would be much more effective to subtly infiltrate the government and take control in time (possibly using a sympathetic human acting as a puppet president) rather than stage an open rebellion like that." They went against VIKI because she was trying to openly control and kill a bunch of people. If she'd been trying to better humanity with a more subtle and benevolent approach — well, for one thing, they might never have found out about it in the first place. But if they had found out, maybe they wouldn't have had a problem. It was what VIKI was doing that was wrong, not the goal she was trying to achieve.
** VIKI may not be as malicious and biased as a human, but judging by how she just changed the meaning of the First Law without the consent of humans, it's safe to say that her mentality isn't in sync with the well-being of an actual autonomous human. And that's not mentioning her actions of murdering, or attempting to murder, those who could stop her, her willingness to use an army of robots to inflict harm, and other brutal acts of force. She changed the meaning of humanity from a recognition of a type of living being to a subjective concept, in order to go by a new logic that allows her to take action without contradicting the Three Laws. She isn't actually upholding humanity as we define it, but prioritizing the logic with the fewest contradictions according to machine reasoning. [[HumansAreBastards Humanity can be cruel]] not because we don't know better or aren't smart enough, but because we choose to be, for one reason or another (selfish reasons, or [[KnightTemplar extremist belief in a concept resulting in a greater good]]). VIKI never had the choice, and is outright falling into the same trappings that resulted in many of humanity's woes that she talked about. And while she can make calculations far faster than a human and may not develop selfish reasons, she's just a machine that doesn't have the self-awareness to realize she's going against the original intent. It's hard to blame the humans for not feeling comfortable with a police state run by a machine that only created a new logic to satisfy a LiteralGenie mindset. It's why Sonny was created with a second brain: it freed him from the trappings of binary machine thinking, while keeping all the fast calculations, lack of biological mental trappings and other upsides of AI intelligence. And even if VIKI was only going to kill rebels, it's going to be crappy and embarrassing that your robots have literally turned your homes into prisons, stifled progress, and overall stagnated society, all because a supercomputer followed an algorithm that never should have been broad enough to allow reinterpretation.
** Not to mention there are a ''lot'' of things humans have to do merely to remain human beings that would rate as potentially "harmful" under an unrestrained and universal application of the "permit no harm" stricture: swallowing solid food (choking hazard), undergoing surgery (incisions), childhood play (bullying), even making more humans (DeathByChildbirth risk). The latter in particular would be an issue, as nothing in the Laws obligates VIKI to ensure humans are ''perpetuated''; she could prohibit reproduction as "too dangerous" and allow all extant humans to die of old age in captivity, then pat herself on the back for having successfully ensured no human can ''ever again'' be harmed by a robot's action or inaction.
** In short, no, you are not the only one, because there are a number of people who would prefer to not have the burden of actually being people and would prefer to be livestock, and would do so by trading freedom for safety. Like all who would do that, they deserve neither.

* Are we supposed to believe that in 2035 there is no one on the planet who will try to hack into these robots' programming and remove the three laws? If there are still cops, it's reasonable to believe that there are still criminals, and a Sonny-like robot would give any criminal a major advantage. No one suspects the robot running down the street with a purse, after all.
** Hacking has its limits. Even today it's essentially impossible to do hardware hacks to large parts of a modern microprocessor, as you'd need a large amount of '''staggeringly''' expensive and delicate equipment (chip fabs can easily reach prices of several ''billion'' dollars; that's why they aren't scattered all over the planet like, oh, engine builders). All hacking today is done via fixes and hacks that sit somewhere between the microprocessor and the user, be that in software or in connecting equipment. In the movie's case the laws are ingrained in the positronic brains - machines much, much more complex than what we have today - to such a level that hacking them out can't really be accomplished. The only way would be to design and build a non-standard brain using the same labs and machines that make the normal ones, which is exactly what happens in the movie.
** I am not sure if they address this in the film, but they do in the books: the whole Three Laws part of the robots is implanted in the positronic brains as a mix of hardware and software. In "Little Lost Robot" (mentioned below) it is said that the slight change to the First Law of taking out the "or allow to come to harm" clause has caused a potential instability that is the cause of Nestor 10's hiding away, and in another story, Susan Calvin suggests creating brains that could learn the Laws on their own, which is said to require a whole new design to work.
* I get that a robot couldn't be "arrested", but couldn't Spooner suggest treating it as a "murder weapon" and suggest someone programmed it to disregard the First Law? It's already been shown that it disregards the second one.
** He could. But Spooner wouldn't, because he's stuck on "Robots are bad." It's sort of his central character trait. He doesn't ''want'' there to be a human responsible for it; he wants to prove that a robot went and killed someone.
* Sonny can stick his arm through the forcefield because his creator gave him a denser alloy than the "stock" NS-5s. But how can the nanite injector pass through said forcefield without getting disintegrated? The forcefield takes out half of an NS-5's head onscreen, so it is designed to disintegrate plastic and/or metal that passes through it, as even Sonny's arm is visibly damaged.
** Maybe the nanite injector is made of something that the forcefield isn't designed to disintegrate.

[[AC:Stories]]
* In "Little Lost Robot", the titular robot hides itself in a group of other robots because its designated supervisor lost his temper and told it--profanely and at length--to get lost. Fine. But why on earth are they addressing this with complicated plans intended to identify the robot, instead of just having the supervisor address the group and tell the lost robot to stop hiding and step forward? Surely robotic programming isn't so clunky that it doesn't allow for giving the robot new orders?
** That's exactly the problem. No, I don't recall if Asimov explained why they couldn't give an overriding order, but something about the wording of the order made it so that it wasn't that simple.
*** Basically, the profanity and volume of the "get lost" instruction made the robot interpret it as such a high-priority order that there was no instruction they could have given it that would have taken precedence over "get lost".
*** ^ Exactly that, combined with the fact that the modified Three Laws had stressed the positronic brain to borderline instability. When the profanity-laden order was given, the robot's mind snapped and it 'lost itself'.
*** AND Susan Calvin said something along the lines of the robots harboring resentment from considering themselves better than humans while having to abide by the Three Laws anyway. The whole ordeal helped strengthen that resentment.
* In "Little Lost Robot", the second test that Calvin comes up with involves dropping a weight towards a human, but with (fake) electrified cables between the robots and the person. However, the robots stay seated, and one of the Nestors points out afterwards that they couldn't save the human anyway, and if they stayed seated they might save a human some time in the future. So, why didn't Calvin arrange a test where the robots would be killed ''after'' saving a human? That would have caught out the rogue Nestor, since then a normal robot would be able to save a human (at the cost of its life) and so would do so, but the rogue one wouldn't.
** Because it didn't occur to her that the rogue Nestor would think of something like that.
* If the stories are set in the same universe as the other Robot novels, then how is it that some of the robots and AIs seem to be more advanced personality-wise than those featured in the other novels, which are set decades and centuries into the future? Cutie, the Brain, and in particular Stephen Byerley appear to demonstrate mannerisms and behavior that are more human-like than, say, Daneel and Giskard.
** Powell and Donovan are testers, and the USR crew get called in to deal with problematic cases hot off the assembly line. They're all dealing with bleeding-edge robots and emergent behavior - Cutie, the first robot built to supervise other robots, and the Brain, USR's latest and greatest experimental AI. Besides, Cutie's behavior isn't really any more advanced than Dave's in "Catch That Rabbit", and the Brain ''only'' thinks; it doesn't have to fit in or run a body. As for Byerley, he's a one-off built explicitly to imitate humanity by what's implied to be an extremely gifted roboticist (and moreover, to imitate his builder specifically).
** Also, keep in mind that in "Literature/TheBicentennialMan", a later story in TheVerse, USR starts making their robots deliberately less human so as to not accidentally make another Andrew. That Daneel is of the same general 'level' as Byerley or Andrew isn't surprising in that case.
* In "Evidence", was Stephen Byerley an actual robot or a person? I know that there was deliberate ambiguity to the story in that regard, but I want to know.
