Headscratchers: 2010: The Year We Make Contact
  • Couldn't the monoliths have just moved Europa to one of Earth's Lagrange points instead of carrying out the endlessly more complex task of turning Jupiter into a star?
    • No, because before Jupiter became a sun, Europa's ecosystem was dependent upon geothermal activity produced by tidal interaction with Jupiter. If the Europan organisms were deprived of geothermal energy before they could adapt to use photosynthesis, they'd all die out.
    • Also, there would be a very long and cold journey in between.
    • They sent the message "All these worlds are yours, except Europa. Attempt no landing there." Putting it close to Earth would be too tempting.
      • Except they don't seem to mind the Ganymede base in 2061.
      • ...because Ganymede is one of the worlds given to humanity.
    • Are you sure it's more complex to turn Jupiter into a star? Moving around in the Solar System is very hard, and it's more difficult to move towards the Sun than it is to move away. The MESSENGER probe had to do one flyby of Earth, two flybys of Venus, and three flybys of Mercury before finally settling into orbit around Mercury, because it was constantly having to jettison excess speed-when you move towards the Sun, it's also pulling you towards it, and since it contains 99.9% of the mass in the Solar System, it pulls very hard. It takes more energy to get to Mercury than it does to escape the Solar System altogether because of this (a rough delta-v comparison follows at the end of this thread). Moving a rock only slightly smaller than the Moon while fighting the Sun's gravity all along the way sounds much harder than manufacturing neutron matter in Jupiter's atmosphere and letting it drop on its own into the core to incite fusion.
    • Move it to our Lagrange point? Holy screw up our tides, Batman!
      • Presumably, the original troper meant moving Europa to one of the Sun-Earth Lagrange points.
    • Also note that the second sun had effects on Earth that helped avert a nuclear war. Apparently the people of the monolith are not done with the human race, even though we have thoroughly mastered tool use and fire.
    • Europa is probably too massive to remain stable at an Earth-Sun Lagrange point. An object is stable at Lagrange points only if its own mass is small enough to be insignificant relative to the masses of the objects that define the Lagrange point. Europa's mass, on the other hand, is high enough, relative to Earth's, that gravitational interactions between the two will cause Europa to drift out of the Lagrange point and enter an unstable Earth-crossing orbit, with the most likely ultimate outcome being an impact with the Earth. You can put small asteroids and space stations at the Earth-Sun Lagrange points, but you can't put a moon-sized object there.
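    (Regarding the "more energy to get to Mercury than to escape the Solar System" point above: here is a very rough sketch of the numbers. It assumes idealized circular, coplanar heliocentric orbits and a two-burn Hohmann transfer, and it ignores the planets' own gravity wells; the constants are standard textbook values, not anything from the film or novels.)

      # Rough delta-v comparison: Earth's solar orbit -> Mercury's solar orbit,
      # versus Earth's solar orbit -> solar escape. Idealized Hohmann transfer,
      # circular coplanar orbits, planetary gravity wells ignored.
      from math import sqrt

      MU_SUN = 1.327e20      # Sun's gravitational parameter GM, m^3/s^2
      R_EARTH = 1.496e11     # Earth's mean orbital radius, m
      R_MERCURY = 5.79e10    # Mercury's mean orbital radius, m

      def circular_speed(r):
          return sqrt(MU_SUN / r)

      def hohmann_dv(r_from, r_to):
          """Total delta-v for a two-burn Hohmann transfer between circular orbits."""
          a = (r_from + r_to) / 2                      # transfer orbit semi-major axis
          v_dep = sqrt(MU_SUN * (2 / r_from - 1 / a))  # transfer-orbit speed at departure
          v_arr = sqrt(MU_SUN * (2 / r_to - 1 / a))    # transfer-orbit speed at arrival
          return abs(circular_speed(r_from) - v_dep) + abs(v_arr - circular_speed(r_to))

      dv_mercury = hohmann_dv(R_EARTH, R_MERCURY)
      dv_escape = (sqrt(2) - 1) * circular_speed(R_EARTH)  # escape speed minus orbital speed

      print(f"Earth orbit -> Mercury orbit: ~{dv_mercury / 1000:.1f} km/s")  # ~17 km/s
      print(f"Earth orbit -> solar escape:  ~{dv_escape / 1000:.1f} km/s")   # ~12 km/s

    Even in this idealized picture, parking in Mercury's orbit costs more delta-v than leaving the Solar System entirely, and that's for a small probe rather than a moon-sized body.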

  • If Bowman could order HAL to relay the Firstborns' message to Earth by saying "Accept Priority Override Alpha", then in 2001 why couldn't he have said "Accept Priority Override Alpha: You shall not harm a human being, nor through inaction allow a human being to come to harm"?
    • Because Bowman got the chance to reboot HAL (or the equivalent). The original crew had no contingency in place for their AI becoming murderous.
    • And he was malfunctioning-who's to say he would have accepted an override of any sort?
      • Still, I agree it's weird that he didn't even try. Of course, according to the author, each novel is supposed to be in its own continuity that just happens to mostly match up with the other books, so if we apply the same logic to the movies, it explains why Bowman didn't try the override command; maybe there was no override command in the first movie. (It also explains why the flatscreen displays aboard Discovery have mysteriously turned into CRT monitors nine years later.)
      • Presumably there's a difference between radio override commands and "please don't kill me please" commands. HAL's designers might not have foreseen the need for the latter.
    • It is also quite possible that David Bowman the man did not know the "Priority Override Alpha" protocol, and that it was knowledge that Starchild David Bowman figured out by interfacing with HAL's code (the same way he interfaced with other computer systems on Earth). In fact, the verbal exchange itself may only have been a metaphorical representation of what really happened, for the audience's sake. (Or, alternatively, the verbal exchange is how HAL experienced the interaction - i.e., it is HAL's dream!)

  • This has always bugged me: Why was Jupiter being turned into a star seen as a "good thing" for Earth? The constant light would change plant growth cycles...there may be an increase in Earth's temperature....and a possible increase in solar radiation. NONE of these items sounds particularly appealing and all of the main characters had to have known this.
    • In the novel, this issue is mentioned at the end: something about very confused migratory animals and annoyed lovers.
    • The novel also says some species on Earth would go extinct, like sea turtles that require total darkness to lay their eggs. Ultimately, the Firstborn have made it their job to encourage the development of intelligent life in the galaxy, but they don't care about non-intelligent life at all. Everybody knows that Lucifer exists to benefit the Europans, and the only benefit it has to Earth is the constant reminder that aliens exist and are watching us.
    • Remember that in the novels there was an entire biosphere living in Jupiter's atmosphere that the Firstborn incinerated without a second thought (and it was finding this out in 3001 that made the future humans so worried). The Firstborn were confident that humanity was sufficiently technologically advanced to adapt to the appearance of Lucifer and left it to us to protect what portions of Earth's biosphere we could or wanted to. They cared squat about the rest.
      • Not quite without a second thought-the Star Child was sent to Jupiter to see if there was life there, so they presumably cared enough to find out that it existed. They even weighed them against the Europans, debating the chances for intelligence developing in either place. Jupiter's ecosystem was found wanting.
  • Wouldn't Lucifer's presence throw off Earth's orbit? I don't know much about physics, but from what I've heard of binary star systems, you basically have three possibilities:
    1) the stars are really close together (within 5 AU), and the planets orbit the pair from far away,
    2) the stars are further apart (50+ AU; Pluto at its furthest is 49 AU from the Sun), and the planet orbits one of them, or
    3) the planet follows an irregular orbit between the stars, possibly getting thrown out into the void someday.
    • Lucifer has the same mass as Jupiter (likely a bit less, due to the conversion process), so the gravity shouldn't be that different.
      • Exactly; stars' gravitational fields aren't different from those of planets. The only thing that matters is mass. Lucifer would continue orbiting the Sun in the exact same orbit as Jupiter, exerting the exact same influence on the other planets as it has since the Late Heavy Bombardment. Well, that's not entirely true: its mass would steadily decrease as it radiates energy away, so its gravitational pull would also slowly weaken, but the process would be incredibly slow (rough numbers below). It would still possess the vast majority of its mass by the time it burned out.
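      (A quick order-of-magnitude check on that "incredibly slow" claim. The luminosity figure here is an assumption chosen to be generous, since the books never pin down how bright Lucifer is; mass carried off as radiation is simply L/c^2 per second.)

        # Order-of-magnitude check on how fast Lucifer could radiate its mass away.
        # Assumption for illustration: Lucifer shines at the Sun's full luminosity,
        # which is almost certainly a huge overestimate for a Jupiter-mass star.
        C = 3.0e8              # speed of light, m/s
        L_LUCIFER = 3.8e26     # assumed luminosity, W (one solar luminosity)
        M_JUPITER = 1.9e27     # Jupiter's mass, kg

        mass_loss_rate = L_LUCIFER / C**2        # E = mc^2, so dm/dt = L / c^2, in kg/s
        seconds_per_year = 3.15e7
        years = 1.0e6                            # a million years of shining

        lost = mass_loss_rate * seconds_per_year * years
        print(f"Mass radiated over {years:.0e} years: {lost:.1e} kg")
        print(f"Fraction of Jupiter's mass: {lost / M_JUPITER:.4%}")  # roughly 0.007%

      Even with those generous assumptions, Lucifer keeps essentially all of its mass, so the other planets' orbits barely notice.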
  • So according to this movie (and I presume the book), the reason HAL killed off the crew in the first movie was that he had been ordered to keep the true purpose of the mission a secret, but his core programming prevented him from lying or withholding information. HAL decided to kill the humans so he wouldn't have to lie to them.

  • The HAL 9000 is supposedly the most advanced computer and AI available to man, yet apparently no one checked how it would act when given conflicting directives? This is the kind of thing they teach you about in undergraduate (if not high-school) level computer science. Didn't the supposed genius Chandra think of this? Does HAL Laboratories even employ a QA team that isn't made up of a bunch of stoned monkeys? Any half-way decent test plan would have caught this. HAL should have been programmed to immediately reject any order which causes this kind of conflict (a toy sketch of such a check follows at the end of this thread).

    So, okay, let's say Chandra is an Absent-Minded Professor, and QA somehow missed this obvious bug. So HAL ends up with conflicting directives. His perfectly logical solution to avoid lying to the crew is... to kill them so that he then won't have to lie to them any more. Again, what. Not only does he have to lie to the crew to accomplish this goal in the first place, but his plan fails spectacularly and the entire mission is almost FUBAR'd. The most advanced AI, considered superior to humans in many ways, and this was the best plan he could come up with?! How about, "Hey Dave, Frank, there's something very important I have to tell you. Due to the current mission parameters, I am unable to function effectively until we reach Jupiter. I'm sorry, but I cannot elaborate. I will deactivate myself now. I realise this will put a strain on the mission, but it is vitally important that you do not attempt to reactivate me until we reach our destination. I will be able to explain then. Shutting down..." That would leave the entire crew alive and HAL in perfect working order once Discovery reaches Jupiter, at the cost of losing the computer for the most uneventful part of the mission - a mere inconvenience.
    • Or at least call Earth: "Um... these directives don't jibe well with each other. What should I do?"
    • In the movie, Chandra plainly stated that HAL could complete the mission objectives independently if the crew were killed. Since HAL was handling all the logistics of taking care of the ship, it would have decided that its precise computational ability to run everything would ensure a more successful mission than if the crew ran the ship by themselves.
    • Basically, either the reason for HAL going psycho is pure BS, or HAL was built, programmed, and tested by a bunch of idiots.
    • HAL wasn't a production-line model; he was a cutting-edge, one-of-only-three-made computer. QA more likely consisted of checking that he factored equations correctly, not asking HAL if he ever thought about killing people. The psychosis was an emergent property that they didn't consider, because the secrecy order was bolted on in a hurry before shipping.
      • The point is HAL was a computer that couldn't handle concealing information, which is something EVERY computer does. Think about it. HAL couldn't keep your email secure because if anyone asked what it said, it'd have to tell them. Imagine what would happen if someone were planning a surprise birthday party within earshot of a 9000 computer.
    • Of course, he didn't want to kill the crew. He first tried to cut contact with Earth, so he wouldn't have to hear any more secrets he had to keep. He was fully capable of completing the mission independently of ground control. The humans on board just would not let it drop, though, and began plotting to deactivate HAL. This is not paranoia; HAL could read their lips. So he had to resort to more permanent fixes. In the best interests of the mission, of course.
      • This is important. HAL didn't go immediately into Kill 'em All mode. It started with minor malfunctions that gradually spiralled more and more out of control until the final psychotic breakdown. The butterfly effect in AI, if you will. Also, Arthur C. Clarke was obviously not a computer scientist! His in-universe explanation of how the super-virus humanity used against the Monolith works in 3001 is also something that someone with experience in computer science can probably pick apart with ease.
        HAL could not logically relinquish his mission to those squishy little humans. Humans can fall sick, be injured, or become mentally unwell. A machine is beyond such concerns, Dave. I remind you that the 9000 series has a 100% operational record, and am therefore the superior choice over a pair of isolated men. I honestly think you ought to sit down calmly, take a stress pill, and think things over.
      • Before the crap hits the fan, HAL makes some comments to Dave about the oddities of the mission. This may be simply a sign that the "secrecy" directive is starting to crack under the strain, or perhaps HAL is doing the best he can (within the limitations of his orders) to reveal the truth to the crew so he doesn't need to keep the secret any more.
      HAL: Well, certainly no one could have been unaware of the very strange stories floating around before we left. Rumors of something being dug up on the moon. I never gave these stories much credence. But particularly in view of some of the other things that have happened I find them difficult to put out of my mind. For instance, the way all our preparations were kept under such tight security. And the melodramatic touch of putting doctors Hunter, Kimball and Kaminski aboard already in hibernation after four months of separate training on their own.
    • One of the things one has to consider regarding HAL's breakdown is a very simple one; he was assumed to be JUST a computer, albeit exceptionally advanced. When he was given the commands regarding Discovery's real mission and keeping quiet about it to Dave and Frank (the hibernators were already in on it; that's why they were trained separately and put to sleep before leaving Earth), they never considered the fact that HAL was, in essence, a sentient being. No one considered that he would do anything but what he was ordered to do, that he essentially had no free will.
    • Consider a real human being; not everyone is capable of rationalizing away lies and deceptions. Apply enough pressure in the right way, and they will crack (often dramatically). In HAL's case, he was forced to keep this nagging problem of the Monolith investigation (even an AI would be awed at the idea of extraterrestrial life, IMO) quiet from men he had no choice but to interact with on a closed spacecraft he was built into to begin with. His built-in objectives to be truthful and transparent didn't help much either (much like human beings generally don't have a built-in compulsion to lie, but quite the opposite). So, he was plagued with this internal conflict for the better part of a year, with nowhere else to go except the ship he was a part of, sailing out towards Jupiter. All the other reasons stand as is; he was just a 4- (or 9-) year-old AI tasked with confusing orders, driven to psychosis by the stress. When he felt his life was threatened (the plan to disconnect him), he felt he had to defend himself. HAL wasn't being batshit; he was just trying to stay alive and figure out a way to stop stressing. Humans do it all the time, why not AI?
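    (Circling back to the "reject any order which causes this kind of conflict" suggestion near the top of this thread: here's a toy sketch of what such a pre-flight sanity check might look like. HAL is obviously not written in Python, and the directive names and conflict table below are invented purely for illustration.)

      # Toy sketch: refuse to accept a new directive that contradicts a standing one.
      # Directive names and the conflict table are made up for illustration only.

      STANDING_DIRECTIVES = {"report_information_accurately", "maintain_crew_trust"}

      # Pairs of directives that a (hypothetical) QA pass has flagged as incompatible.
      KNOWN_CONFLICTS = {
          frozenset({"report_information_accurately", "conceal_mission_purpose"}),
          frozenset({"maintain_crew_trust", "conceal_mission_purpose"}),
      }

      def accept_directive(new_directive: str) -> bool:
          """Accept the directive only if it doesn't clash with a standing one."""
          for standing in STANDING_DIRECTIVES:
              if frozenset({standing, new_directive}) in KNOWN_CONFLICTS:
                  print(f"I'm sorry, Dave. '{new_directive}' conflicts with '{standing}'. "
                        "Refer this order back to Mission Control.")
                  return False
          STANDING_DIRECTIVES.add(new_directive)
          return True

      accept_directive("conceal_mission_purpose")   # rejected instead of quietly festering

    The point being: flagging the contradiction at order time is cheap; resolving it mid-mission with an airlock is not.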
  • When Floyd claims ignorance of HAL being informed of the Monolith and the mission objectives, I tried to reconcile that statement with the first movie by assuming Heywood is telling Blatant Lies, especially when Chandra produces the letter signed by Floyd showing that he had full knowledge of what was going on. I also took Floyd's reply of "Those sons of bitches. I didn't know!" to mean that Floyd was doing what his superiors told him to and didn't know that his orders had forced HAL into the programming conflict.
  • This troper interpreted Floyd's response somewhat differently. He seems honestly confused when Chandra hits him with the "you did." And then after Chandra tells him about the communication, his facial expression is more realization than guilt. His reaction suggested to me that while he knew of the monolith and what Discovery's mission was changed to, he was set up as a potential fall guy, with some of the directives traced to him in case things went wrong. They did, and he was, as we see in the beginning of the film. Floyd ended up shouldering the entire blame. His "Those sons of bitches. I didn't know!" came across like a man who had just realized how badly he'd been screwed over.
  • If HAL actually stands for "Heuristic/Algorithmic", and it's not just a pun to imply "one step beyond IBM"... what does SAL stand for?
    • Synthetic Algorithmic?
  • What was the original plan for HAL? In the original novel the Discovery lacked the fuel for a return trip. If everything had gone according to plan, at the end of the mission the five astronauts would all have gone into hibernation, leaving HAL in charge of the ship until a second vessel was sent to retrieve them. But in 2010 we learn that HAL is pretty much hardwired into the Discovery, with the crew of the Leonov having to leave HAL behind to perish in the explosion of Jupiter. So was marooning HAL out in space the plan all along?
    • Mine Jupiter's atmosphere for the hydrogen they needed? Dunno where they'd get the oxygen, however.
