History Headscratchers / Freefall



** The big reveal of Feb. 28, 2014 gives credence to a few of the remarks above. So far the WordOfGod remarks about small frontal lobes from [[http://home.comcast.net/~ccdesan/Freefall/Freefall_Backstory.html the backstory page]] seem to be very relevant ([[http://freefall.purrsia.com/ff2500/fc02468.htm Mar. 3, 2014]]). The UnpredictableResults theory mentioned above for the first model uplifted species may be true as well, as may the lab vs. family upbringing. One thing worth asking, however, is whether the word "sociopath," at least in its human sense, really fits the situation here.

to:

** The big reveal of Feb. 28, 2014 gives credence to a few of the remarks above. So far the WordOfGod remarks about small frontal lobes from [[http://home.comcast.net/~ccdesan/Freefall/Freefall_Backstory.html the backstory page]] seem to be very relevant ([[http://freefall.purrsia.com/ff2500/fc02468.htm Mar. 3, 2014]]). The UnpredictableResults theory mentioned above for the first model uplifted species may be true as well, as may the lab vs. family upbringing. One thing worth asking, however, is whether the word "sociopath," at least in its human sense, really fits the situation here.
** There's also the fact that chimpanzees, especially males, become very aggressive upon reaching sexual maturity. Chimps in their 40s are known to be very dangerous, even those brought up with humans, and can turn violent in a split second. That's why Dr. Bowman neutered himself, to reduce the chance of that happening, but even so, upon reaching old age it still affects him to a degree.


*** Sam doesn't have any bones, so even though he has close to human body mass, he can pull it into a much tighter space without trouble.

to:

** Sam doesn't have any bones, so even though he has close to human body mass, he can pull it into a much tighter space without trouble.



*** Some robots, as mentioned elsewhere in the comic, ''do'' make complete backups of their memory and personality. It's not a hard and fast rule, however, as a given robot's memory and personality would normally change only in small day-sized increments. [=GitD=], on the other manipulator, makes a huge, sweeping change all at once; the standard day-to-day backup system doesn't have enough data integrity to prevent permanent damage.

to:

** Some robots, as mentioned elsewhere in the comic, ''do'' make complete backups of their memory and personality. It's not a hard and fast rule, however, as a given robot's memory and personality would normally change only in small day-sized increments. [=GitD=], on the other manipulator, makes a huge, sweeping change all at once; the standard day-to-day backup system doesn't have enough data integrity to prevent permanent damage.



to:

*** Some robots, as mentioned elsewhere in the comic, ''do'' make complete backups of their memory and personality. It's not a hard and fast rule, however, as a given robot's memory and personality would normally change only in small day-sized increments. [=GitD=], on the other manipulator, makes a huge, sweeping change all at once; the standard day-to-day backup system doesn't have enough data integrity to prevent permanent damage.


** The big reveal of Feb. 28, 2014 gives credence to a few of the remarks above. So far the WordOfGod remarks about small frontal lobes from [[http://home.comcast.net/~ccdesan/Freefall/Freefall_Backstory.html the backstory page]] seem to be very relevant ([[http://freefall.purrsia.com/ff2500/fv02468.gif Mar. 3, 2014]]). The UnpredictableResults theory mentioned above for the first model uplifted species may be true as well, as may the lab vs. family upbringing. One thing worth asking, however, is whether the word "sociopath," at least in its human sense, really fits the situation here.

to:

** The big reveal of Feb. 28, 2014 gives credence to a few of the remarks above. So far the WordOfGod remarks about small frontal lobes from [[http://home.comcast.net/~ccdesan/Freefall/Freefall_Backstory.html the backstory page]] seem to be very relevant ([[http://freefall.purrsia.com/ff2500/fc02468.htm Mar. 3, 2014]]). The UnpredictableResults theory mentioned above for the first model uplifted species may be true as well, as may the lab vs. family upbringing. One thing worth asking, however, is whether the word "sociopath," at least in its human sense, really fits the situation here.


** Which of course means that the robots' brains are designed for ''colonists'', not ''workers''. They were improvised after [[http://freefall.purrsia.com/ff1500/fv01410.htm slight]] [[http://freefall.purrsia.com/ff1500/fv01411.htm difficulties]] with the factory.

to:

** Which of course means that the robots' brains are designed for ''colonists'', not ''workers''. They were improvised after [[http://freefall.purrsia.com/ff1500/fc01410.htm slight]] [[http://freefall.purrsia.com/ff1500/fc01411.htm difficulties]] with the factory.



to:

** Blunt has only seen the short-term effects of the program, with his experiment on the Jar-Jar bot. And, in the short term, the program did exactly what it's supposed to do: create robots that focus solely on their "core function". It's only in the long term, like with the robot Florence repaired, that the problems begin to show up.


* [[http://freefall.purrsia.com/ff300/fv00255.htm See, Gehm's statement can be readily deduced if Clarke's Third Law is presumed true]]. Florence makes the assumption that a technology that is not understood is indistinguishable from magic. Clarke did not state what constitutes "sufficiently advanced", or what delineates magic and technology. So, it's not a real corollary, just a statement that bears some resemblance to the earlier ones.

to:

* [[http://freefall.purrsia.com/ff300/fv00255.htm See, Gehm's statement can be readily deduced if Clarke's Third Law is presumed true]]. Florence makes the assumption that a technology that is not understood is indistinguishable from magic. Clarke did not state what constitutes "sufficiently advanced", or what delineates magic and technology. So, it's not a real corollary, just a statement that bears some resemblance to the earlier ones.



** Uplifted species are superior to robots in almost every way. They have a robotic A.I. package, so cultural issues would be the same regardless of Robotic or Uplifted populations. However, when it comes down to the physical performance, uplifted species have a sensory and physical structure that's been tweaked over thousands of years for optimal survival and functionality in the environment. They also require less fuel and maintenance than a robot work-force (they can consume native flora and fauna for energy, and automatically self-repair minor to moderate damage). Furthermore, they self-propagate at a logarithmic rate, and require few accommodations for such production, as opposed to massive, high-overhead factories that produce robots at a static rate. Robots are damn expensive compared to living organisms. As far as trying to sustain a human population: Not cost-effective on any level. Life support accommodations would be ludicrously expensive and have high operating overhead. On the other hand, there are naturally millions upon millions of acres for a "native" population to use for self-sustenance. Having a new, subservient species is also advantageous because they don't require oversight for innovation. Humanity is God to an uplifted race. They might develop cheaper methods on their own to accommodate their Lords and Masters. I could elaborate further on how this is cheaper than robots or trying to force the world to accommodate humans.

to:

** Uplifted species are superior to robots in almost every way. They have a robotic A.I. package, so cultural issues would be the same regardless of Robotic or Uplifted populations. However, when it comes down to the physical performance, uplifted species have a sensory and physical structure that's been tweaked over thousands of years for optimal survival and functionality in the environment. They also require less fuel and maintenance than a robot work-force (they can consume native flora and fauna for energy, and automatically self-repair minor to moderate damage). Furthermore, they self-propagate at a logarithmic rate, and require few accommodations for such production, as opposed to massive, high-overhead factories that produce robots at a static rate. Robots are damn expensive compared to living organisms. As far as trying to sustain a human population: Not cost-effective on any level. Life support accommodations would be ludicrously expensive and have high operating overhead. On the other hand, there are naturally millions upon millions of acres for a "native" population to use for self-sustenance. Having a new, subservient species is also advantageous because they don't require oversight for innovation. Humanity is God to an uplifted race. They might develop cheaper methods on their own to accommodate their Lords and Masters. I could elaborate further on how this is cheaper than robots or trying to force the world to accommodate humans.



* Blunt's support for "Gardner in the Dark" is based on the idea that an idiot robot definitely can't intentionally hurt people, while Bowman-based robots are untested and the danger therefore unknown. Lesser evil, right? Except the colony's infrastructure is almost entirely supported by robot labor, at nearly every seen level. GitD-affected robots aren't just de-personality'd, they're incapable of following the most basic of commands in a useful way. They are not even effective laborers anymore. And while they can't intentionally hurt people, they can certainly do it unintentionally, because they're too stupid to know better. So you're trying to replace a theoretical hazard with an ACTUAL hazard that also ruins your entire colony's industrial support, power supply, transport, and the rest of your colony's infrastructure. If GitD goes live, people will die as the colony rapidly fails. His idea amounts to "Kill a fuckton of people, potentially the entire population of humans in-system, to protect them from what may or may not actually be a threat."

to:

* Blunt's support for "Gardner in the Dark" is based on the idea that an idiot robot definitely can't intentionally hurt people, while Bowman-based robots are untested and the danger therefore unknown. Lesser evil, right? Except the colony's infrastructure is almost entirely supported by robot labor, at nearly every seen level. [=GitD=]-affected robots aren't just de-personality'd, they're incapable of following the most basic of commands in a useful way. They are not even effective laborers anymore. And while they can't intentionally hurt people, they can certainly do it unintentionally, because they're too stupid to know better. So you're trying to replace a theoretical hazard with an ACTUAL hazard that also ruins your entire colony's industrial support, power supply, transport, and the rest of your colony's infrastructure. If [=GitD=] goes live, people will die as the colony rapidly fails. His idea amounts to "Kill a fuckton of people, potentially the entire population of humans in-system, to protect them from what may or may not actually be a threat."



to:

[[/folder]]


[[folder:[[http://freefall.purrsia.com/zu/ffskates.gif Fun with physics]]]]

to:

[[folder:Fun with physics]]



[[folder: [[http://freefall.purrsia.com/ff300/fv00255.htm The corollary to Clarke's Third Law]]]]
* See, Gehm's statement can be readily deduced if Clarke's Third Law is presumed true. Florence makes the assumption that a technology that is not understood is indistinguishable from magic. Clarke did not state what constitutes "sufficiently advanced", or what delineates magic and technology. So, it's not a real corollary, just a statement that bears some resemblance to the earlier ones.

to:

[[folder:The corollary to Clarke's Third Law]]
* [[http://freefall.purrsia.com/ff300/fv00255.htm See, Gehm's statement can be readily deduced if Clarke's Third Law is presumed true]]. Florence makes the assumption that a technology that is not understood is indistinguishable from magic. Clarke did not state what constitutes "sufficiently advanced", or what delineates magic and technology. So, it's not a real corollary, just a statement that bears some resemblance to the earlier ones.

Added: 2223

Changed: 3682

Removed: 131

Example Indentation. Three bullets are rarely necessary, and anything past three shows up as three. Also folderized.


[[WMG: Why were the Bowman Wolves created?]]
It would be a lot cheaper, safer, easier, and less ethically gray to simply hire and train human workers than to genetically engineer an entirely new sentient race. Each Wolf must cost a fortune to make.
* They were [[http://freefall.purrsia.com/ff800/fv00711.htm "proof of concept"]] models for an attempt to colonize [[http://freefall.purrsia.com/ff800/fv00710.htm Pfouts]] by uplifting a native species. Unfortunately the project was cut and the prototypes were sold as pets due to a "clerical error" (said clerical error being [[spoiler:engineered by Dr. Bowman in order to properly socialize his creations]]).

to:

New entries on the bottom.

[[foldercontrol]]

[[folder:Why were the Bowman Wolves created?]]
* It would be a lot cheaper, safer, easier, and less ethically gray to simply hire and train human workers than to genetically engineer an entirely new sentient race. Each Wolf must cost a fortune to make.
** They were [[http://freefall.purrsia.com/ff800/fv00711.htm "proof of concept"]] models for an attempt to colonize [[http://freefall.purrsia.com/ff800/fv00710.htm Pfouts]] by uplifting a native species. Unfortunately the project was cut and the prototypes were sold as pets due to a "clerical error" (said clerical error being [[spoiler:engineered by Dr. Bowman in order to properly socialize his creations]]).



[[WMG:The Chimpanzee sociopaths.]]
Why is it that uplifted chimps were described as sociopaths, when chimps do in fact have empathy and compassion? On the other hand, wolves do not feel empathy. Florence should be the sociopath, although a loyal one socially compatible with humans.
* RuleOfFunny Get over it.

to:

[[/folder]]

[[folder:The Chimpanzee sociopaths.]]
* Why is it that uplifted chimps were described as sociopaths, when chimps do in fact have empathy and compassion? On the other hand, wolves do not feel empathy. Florence should be the sociopath, although a loyal one socially compatible with humans.
** RuleOfFunny Get over it.




*** You think people don't study this stuff? Empathy and compassion are unique to primates, so wolves, dogs, cats etc. don't have empathy.
*** Source?
*** Yeah, I'm gonna need something pretty convincing if I'm going to believe that dogs have no empathy, all circumstantial evidence to the contrary.
*** All mammals have empathy, to an extent or another. It's a necessary survival trait for creatures that take care of their young.

to:

** You think people don't study this stuff? Empathy and compassion are unique to primates, so wolves, dogs, cats etc. don't have empathy.
** Source?
** Yeah, I'm gonna need something pretty convincing if I'm going to believe that dogs have no empathy, all circumstantial evidence to the contrary.
** All mammals have empathy, to an extent or another. It's a necessary survival trait for creatures that take care of their young.



*** People who have lowered mirror neuron responses, such as in autism, usually end up relying on entirely different metrics when determining empathy; because they do not tend to automatically mimic others' emotions, but still have a desire to connect with others, many will base their understanding of others on logic, creating a different, more analytical sort of empathy.
*** It's suspected that a sociopath's mirror neurons act entirely differently in certain areas (statistically speaking: individuals do vary). Monkeys are assumed to act in a similar way. A monkey could have impressive sorts of empathy, but an uplifted monkey might only empathize with other uplifted monkeys, since neither humans nor normal monkeys would trigger the same array of empathy that a human would for another human. That this occurred despite the natural aptitude toward understanding other viewpoints that Bowman's Architecture provides (even robots based on Bowman's Architecture are more capable of understanding human or Bowman's wolf viewpoints than a normal robot), suggests that the resulting uplifted monkey architecture was dramatically more attached to those that appeared similar to it.
*** Florence, and the rest of the Bowman's Wolves, were picked not because of the ability to empathize -- Florence in particular tends to assume canine motivations for human and squid-like individuals, as well as slowly deconstructing motivations in a way not typical for highly empathic individuals -- but because canines develop by instinct a large social net and pecking structure.
**** Oh I know that, as I said "Socially compatible." My issue wasn't with using wolves but that chimps were sociopathic failures. If I wanted to bring issues against Florence it would generally be the desire for romantic attachment, which doesn't exist in wolves (but could easily be programmed since we are dealing with genetic engineering).
***** That and Florence was raised by humans, which has [[http://freefall.purrsia.com/ff1100/fv01036.htm influenced her desires]].
****** And Florence was deliberately given [[http://freefall.purrsia.com/ff2600/fc02540.htm mirror neurons that would respond to humans]]. She sees humans as 'normal' and Bowman's Wolves as 'funny looking'.

to:

** People who have lowered mirror neuron responses, such as in autism, usually end up relying on entirely different metrics when determining empathy; because they do not tend to automatically mimic others' emotions, but still have a desire to connect with others, many will base their understanding of others on logic, creating a different, more analytical sort of empathy.
** It's suspected that a sociopath's mirror neurons act entirely differently in certain areas (statistically speaking: individuals do vary). Monkeys are assumed to act in a similar way. A monkey could have impressive sorts of empathy, but an uplifted monkey might only empathize with other uplifted monkeys, since neither humans nor normal monkeys would trigger the same array of empathy that a human would for another human. That this occurred despite the natural aptitude toward understanding other viewpoints that Bowman's Architecture provides (even robots based on Bowman's Architecture are more capable of understanding human or Bowman's wolf viewpoints than a normal robot), suggests that the resulting uplifted monkey architecture was dramatically more attached to those that appeared similar to it.
** Florence, and the rest of the Bowman's Wolves, were picked not because of the ability to empathize -- Florence in particular tends to assume canine motivations for human and squid-like individuals, as well as slowly deconstructing motivations in a way not typical for highly empathic individuals -- but because canines develop by instinct a large social net and pecking structure.
** Oh I know that, as I said "Socially compatible." My issue wasn't with using wolves but that chimps were sociopathic failures. If I wanted to bring issues against Florence it would generally be the desire for romantic attachment, which doesn't exist in wolves (but could easily be programmed since we are dealing with genetic engineering).
** That and Florence was raised by humans, which has [[http://freefall.purrsia.com/ff1100/fv01036.htm influenced her desires]].
** And Florence was deliberately given [[http://freefall.purrsia.com/ff2600/fc02540.htm mirror neurons that would respond to humans]]. She sees humans as 'normal' and Bowman's Wolves as 'funny looking'.



*** That is, they have much more in common with us than we [[strike:realize]] care to admit.
**** Not that this is an argument against the above explanations/discussion, but has anyone else considered RuleOfFunny?
***** What, that the uplift failed, or the fact that they apparently make great [=CEOs=]?

to:

** That is, they have much more in common with us than we [[strike:realize]] care to admit.
** Not that this is an argument against the above explanations/discussion, but has anyone else considered RuleOfFunny?
** What, that the uplift failed, or the fact that they apparently make great [=CEOs=]?



*** Humans are ''not'' "uplifted" chimps. The chimpanzees are our genetic cousins, we share a common ancestor down the line. Humans did not evolve from chimps.
* Two possibilities, one more {{anvilicious}} than the other. First, the preachy: HumansAreBastards by nature, and it's only the civilizing influence of technology and culture that turns us into something other than sociopaths ourselves. Who are the three most sympathetic characters in the comic, disregarding the robots (who are proponents of technology by their very nature)? Two engineers and a vet, all very technical jobs that require a lot of schooling. Who are the least sympathetic humans? The [[ObstructiveBureaucrat company executives]] and the mayor, both positions that can be achieved by connections rather than merit. The chimps, meanwhile, are closer to savage humans than anything else, and as such are pure sociopaths.
* The other option is psychological; the chimps were raised in a sterile laboratory environment, meaning that they never had the proper socializing to teach them how to be nice to other folks. By contrast, Florence and the other Bowman's Wolves were raised by regular families that taught them all the social niceties, including how to be nice.
* As of [[http://freefall.purrsia.com/ff1800/fc01782.htm this]] strip it sounds like the Chimps [[GoneHorriblyRight went horribly right]].
* The big reveal of Feb. 28, 2014 gives credence to a few of the remarks above. So far the WordOfGod remarks about small frontal lobes from [[http://home.comcast.net/~ccdesan/Freefall/Freefall_Backstory.html the backstory page]] seem to be very relevant ([[http://freefall.purrsia.com/ff2500/fv02468.gif Mar. 3, 2014]]). The UnpredictableResults theory mentioned above for the first model uplifted species may be true as well, as may the lab vs. family upbringing. One thing worth asking, however, is whether the word "sociopath," at least in its human sense, really fits the situation here.

[[WMG:[[http://freefall.purrsia.com/zu/ffskates.gif Fun with physics]]]]

to:

** Humans are ''not'' "uplifted" chimps. The chimpanzees are our genetic cousins; we share a common ancestor down the line. Humans did not evolve from chimps.
** Two possibilities, one more {{anvilicious}} than the other. First, the preachy: HumansAreBastards by nature, and it's only the civilizing influence of technology and culture that turns us into something other than sociopaths ourselves. Who are the three most sympathetic characters in the comic, disregarding the robots (who are proponents of technology by their very nature)? Two engineers and a vet, all very technical jobs that require a lot of schooling. Who are the least sympathetic humans? The [[ObstructiveBureaucrat company executives]] and the mayor, both positions that can be achieved by connections rather than merit. The chimps, meanwhile, are closer to savage humans than anything else, and as such are pure sociopaths.
** The other option is psychological; the chimps were raised in a sterile laboratory environment, meaning that they never had the proper socializing to teach them how to be nice to other folks. By contrast, Florence and the other Bowman's Wolves were raised by regular families that taught them all the social niceties, including how to be nice.
** As of [[http://freefall.purrsia.com/ff1800/fc01782.htm this]] strip it sounds like the Chimps [[GoneHorriblyRight went horribly right]].
** The big reveal of Feb. 28, 2014 gives credence to a few of the remarks above. So far the WordOfGod remarks about small frontal lobes from [[http://home.comcast.net/~ccdesan/Freefall/Freefall_Backstory.html the backstory page]] seem to be very relevant ([[http://freefall.purrsia.com/ff2500/fv02468.gif Mar. 3, 2014]]). The UnpredictableResults theory mentioned above for the first model uplifted species may be true as well, as may the lab vs. family upbringing. One thing worth asking, however, is whether the word "sociopath," at least in its human sense, really fits the situation here.

[[/folder]]

[[folder:[[http://freefall.purrsia.com/zu/ffskates.gif Fun with physics]]]]



*** I thought that Sam just stopped moving with the station. Since the station is rotating clockwise, an individual who stops moving would appear to shoot counter-clockwise. Sam separates himself from the station by putting himself on wheels, allowing it to move underneath him.
**** He has to stop moving first. When he explains it, it sounds like he thinks that he'll slow down without a force being applied, which doesn't make sense.
***** Check the lifted foot in panel five. He isn't just standing there, he's skating against the spin. He just doesn't [[CaptainObvious say that]].
--> '''Sam:''' The station spins, my inertia resists. I'm starting to pick up speed relative to the station because I'm starting to stand still.
****** How can that possibly refer to him skating?

to:

** I thought that Sam just stopped moving with the station. Since the station is rotating clockwise, an individual who stops moving would appear to shoot counter-clockwise. Sam separates himself from the station by putting himself on wheels, allowing it to move underneath him.
** He has to stop moving first. When he explains it, it sounds like he thinks that he'll slow down without a force being applied, which doesn't make sense.
** Check the lifted foot in panel five. He isn't just standing there, he's skating against the spin. He just doesn't [[CaptainObvious say that]].
---> '''Sam:''' The station spins, my inertia resists. I'm starting to pick up speed relative to the station because I'm starting to stand still.
** How can that possibly refer to him skating?
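The rotating-frame argument in the bullets above can be checked numerically. A minimal sketch (the 100 m radius and 1 rpm spin rate are made-up numbers for illustration, not from the comic): an object coasting force-free moves in a straight line in the inertial frame, and subtracting the station's own rotation gives its apparent motion in the station frame.

```python
import math

# Toy numbers for illustration only (not from the comic).
R = 100.0                   # station radius, metres
OMEGA = 2 * math.pi / 60.0  # spin rate, rad/s (one revolution per minute)

def apparent_angle(t, v_tangential):
    """Bearing of a coasting object as seen from the rotating station.

    The object starts on the rim at angle 0, then moves in a straight
    line in the inertial frame (no forces act on it). Subtracting the
    station's own rotation gives its apparent station-frame angle.
    """
    x, y = R, v_tangential * t           # straight-line inertial motion
    return math.atan2(y, x) - OMEGA * t  # minus the station's rotation

# Wheels alone (still co-rotating, v = OMEGA * R): barely drifts
# relative to the floor.
print(apparent_angle(2.0, OMEGA * R))  # small negative number

# After skating his tangential speed down to zero: the station slides
# beneath him at the full spin rate, so he appears to shoot against
# the spin at exactly -OMEGA radians per second.
print(apparent_angle(2.0, 0.0))
```

This matches both readers' points: putting himself on wheels only decouples Sam from the floor, and it is skating his tangential velocity down to zero that makes the station rotate beneath him.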



[[WMG: [[http://freefall.purrsia.com/ff300/fv00255.htm The corollary to Clarke's Third Law]]]]

to:

[[/folder]]

[[folder:[[http://freefall.purrsia.com/ff300/fv00255.htm The corollary to Clarke's Third Law]]]]



* Florence makes no distinction between "those who don't understand it" and those who ''cannot'' understand it. Because of this, unless a person understands how every piece of technology in existence works, they will encounter at least one technology that is "magic" to them. This would be fine, but Florence's self-satisfied expression makes it seem like a put-down. The "no matter how primitive" is just rubbing salt in the wound.

to:

* ** Florence makes no distinction between "those who don't understand it" and those who ''cannot'' understand it. Because of this, unless a person understands how every piece of technology in existence works, they will encounter at least one technology that is "magic" to them. This would be fine, but Florence's self satisfied expression makes it seem like a put-down. The "no matter how primitive" is just rubbing salt in the wound.



*** Well Clarke himself wrote in '''Childhood's End''':

to:

** Well Clarke himself wrote in '''Childhood's End''':



* Seeing it every time I go to [[ClarkesThirdLaw here.]]
* Besides, in what universe is a graffiti wall so sparse? Are Florence's poor dichromat eyes just not getting the full picture?

to:

** Seeing it every time I go to [[ClarkesThirdLaw here.]]
** Besides, in what universe is a graffiti wall so sparse? Are Florence's poor dichromat eyes just not getting the full picture?



[[WMG: The DAVE Drive]]

to:

[[/folder]]

[[folder:The DAVE Drive]]



[[WMG: Sam's environmental suit]]

to:

[[/folder]]

[[folder:Sam's environmental suit]]



* It seems to me that what he's wearing is basically low-level PoweredArmor plus a breather mask, which means the suit wouldn't be pressurized. What I don't understand is why a tear in the suit would leak and hiss audibly.

to:

** It seems to me that what he's wearing is basically low-level PoweredArmor plus a breather mask, which means the suit wouldn't be pressurized. What I don't understand is why a tear in the suit would leak and hiss audibly.



[[WMG: 'Not one good looting song'?]]
Picayune, I know, but [[http://freefall.purrsia.com/ff1300/fv01201.htm when I saw Sam complain that Terrans don't have any looting songs,]] I couldn't help but think, "[[WithCatlikeTread Yes, we do!]]"
* If his species is way more likely to loot, shouldn't they also be lighter sleepers? I suppose they'd probably wake up instantly if you tried to take away their skeleton, but still.

[[WMG: Why colonize Pfouts?]]
As mentioned above, Bowman's Wolves were created as a proof-of-concept to show that uplifting nonsentient species was possible, in order for humans to have a colony on Pfouts, which is a garden world, but with opposite chirality to Earth's chemistry. Animals native to Pfouts would be uplifted by this process. But... why? The whole ''point'' of a colony is to have a new place for humans to live. No matter how many animals you uplift, Pfoutian life is still dextro-amino acid based, and Earthling life is still levo-amino acid based, and so the two are incompatible. All you're gaining that way is a new species on a planet that humans probably won't be touching and negative several billion dollars. Or is this the reason the whole project was cancelled, and I just missed it?
* *Facepalm* resources. Ore deposits etc. ''This'' is the whole point of a colony.

to:

[[/folder]]

[[folder:'Not one good looting song'?]]
* Picayune, I know, but [[http://freefall.purrsia.com/ff1300/fv01201.htm when I saw Sam complain that Terrans don't have any looting songs,]] I couldn't help but think, "[[WithCatlikeTread Yes, we do!]]"
** If his species is way more likely to loot, shouldn't they also be lighter sleepers? I suppose they'd probably wake up instantly if you tried to take away their skeleton, but still.

[[/folder]]

[[folder:Why colonize Pfouts?]]
* As mentioned above, Bowman's Wolves were created as a proof-of-concept to show that uplifting nonsentient species was possible, in order for humans to have a colony on Pfouts, which is a garden world, but with opposite chirality to Earth's chemistry. Animals native to Pfouts would be uplifted by this process. But... why? The whole ''point'' of a colony is to have a new place for humans to live. No matter how many animals you uplift, Pfoutian life is still dextro-amino acid based, and Earthling life is still levo-amino acid based, and so the two are incompatible. All you're gaining that way is a new species on a planet that humans probably won't be touching and negative several billion dollars. Or is this the reason the whole project was cancelled, and I just missed it?
** Resources. Ore deposits etc. ''This'' is the whole point of a colony.



*** Basically, Ecosystems Unlimited is an {{Expy}} for the [[Franchise/{{Alien}} Weyland-Yutani]] corporation. They do stuff because of greed, or just because they can. Note that they've harvested the female Bowman's Wolves' eggs and force them to buy them back if they want to have pups. The development of the Bowman's Wolves was so that they could have a species that they can control and who legally won't be people. So they can have slave labor without anyone considering them slaves. Or so they thought...
* The point to the original question is that biology simply doesn't matter when you get down to it. Who cares what biological composition the colonists of a new planet are, as long as they are culturally compatible with us and provide the trade and resources that the colony is set up to produce? It's much more cost-effective to just uplift a species instead of trying to turn the entire planet's ecosystem upside down for the sake of terraforming, and the end result is identical from an economic standpoint.

to:

*** ** Basically, Ecosystems Unlimited is an {{Expy}} for the [[Franchise/{{Alien}} Weyland-Yutani]] corporation. They do stuff because of greed, or just because they can. Note that they've harvested the female Bowman's Wolves' eggs and force them to buy them back if they want to have pups. The development of the Bowman's Wolves was so that they could have a species that they can control and who legally won't be people. So they can have slave labor without anyone considering them slaves. Or so they thought...
* ** The point to the original question is that biology simply doesn't matter when you get down to it. Who cares what biological composition the colonists of a new planet are, as long as they are culturally compatible with us and provide the trade and resources that the colony is set up to produce? It's much more cost-effective to just uplift a species instead of trying to turn the entire planet's ecosystem upside down for the sake of terraforming, and the end result is identical from an economic standpoint.



*** Uplifted species are superior to robots in almost every way. They have a robotic A.I. package, so cultural issues would be the same regardless of Robotic or Uplifted populations. However, when it comes down to the physical performance, uplifted species have a sensory and physical structure that's been tweaked over thousands of years for optimal survival and functionality in the environment. They also require less fuel and maintenance than a robot work-force (They can consume native flora and fauna for energy, and automatically self-repair minor to moderate damage). Furthermore, they self-propagate at an exponential rate, and require few accommodations for such production, as opposed to massive, high-overhead factories that produce robots at a static rate. Robots are damn expensive compared to living organisms. As far as trying to sustain a human population: Not cost-effective on any level. Life support accommodations would be ludicrously expensive and have high operating overhead. On the other hand, there are naturally millions upon millions of acres for a "native" population to use for self-sustenance. Having a new, subservient species is also advantageous because they don't require oversight for innovation. Humanity is God to an uplifted race. They might develop cheaper methods on their own to accommodate their Lords and Masters. I could elaborate further on how this is cheaper than robots or trying to force the world to accommodate humans.

[[WMG: The transponder trick]]

to:

*** ** Uplifted species are superior to robots in almost every way. They have a robotic A.I. package, so cultural issues would be the same regardless of Robotic or Uplifted populations. However, when it comes down to the physical performance, uplifted species have a sensory and physical structure that's been tweaked over thousands of years for optimal survival and functionality in the environment. They also require less fuel and maintenance than a robot work-force (They can consume native flora and fauna for energy, and automatically self-repair minor to moderate damage). Furthermore, they self-propagate at an exponential rate, and require few accommodations for such production, as opposed to massive, high-overhead factories that produce robots at a static rate. Robots are damn expensive compared to living organisms. As far as trying to sustain a human population: Not cost-effective on any level. Life support accommodations would be ludicrously expensive and have high operating overhead. On the other hand, there are naturally millions upon millions of acres for a "native" population to use for self-sustenance. Having a new, subservient species is also advantageous because they don't require oversight for innovation. Humanity is God to an uplifted race. They might develop cheaper methods on their own to accommodate their Lords and Masters. I could elaborate further on how this is cheaper than robots or trying to force the world to accommodate humans.

[[WMG: [[/folder]]

[[folder:
The transponder trick]]



[[WMG: Blunt is an idiot]]

to:

[[WMG: [[folder: Blunt is an idiot]]



*** TheTerminator: A cautionary tale of the over-development of technology, or an example of the Three Laws GoneHorriblyRight?

[[WMG: Why did Florence refuse the seeker messages from Raibert?]]

to:

*** ** TheTerminator: A cautionary tale of the over-development of technology, or an example of the Three Laws GoneHorriblyRight?

[[WMG: [[/folder]]

[[folder:
Why did Florence refuse the seeker messages from Raibert?]]



[[WMG: Why could Florence fix the [=JarJarBot=]?]]

to:

[[WMG: [[/folder]]

[[folder:
Why could Florence fix the [=JarJarBot=]?]]



*** I was in the middle of typing up a response about why it would be easily explained by the robots having only ''virtual'' neural nets, but then I realized that that's not necessary, because the robots' neural nets need to have the ability to alter their structure on a daily basis anyway. What [[spoiler:Gardener in the Dark]] would be doing to a physical artificial neural net wouldn't be destroying physical neurons or the connections between them, but merely forcibly altering the structure of the net using the same mechanism the net uses to learn. Why does going to sleep destroy any chances of repairing the damage? I think that the robots have a backup disk drive that saves the neural net's current configuration every time the robot uses a sleep machine, so [=GitD=] + sleep machine = bye bye backup. I wonder why the robots would be designed to only save the most recent configuration instead of making monthly backups for safety's sake, but Ecosystems Unlimited has already been presented as a bunch of barely competent nincompoops, so it doesn't surprise me that much.

to:

*** ** I was in the middle of typing up a response about why it would be easily explained by the robots having only ''virtual'' neural nets, but then I realized that that's not necessary, because the robots' neural nets need to have the ability to alter their structure on a daily basis anyway. What [[spoiler:Gardener in the Dark]] would be doing to a physical artificial neural net wouldn't be destroying physical neurons or the connections between them, but merely forcibly altering the structure of the net using the same mechanism the net uses to learn. Why does going to sleep destroy any chances of repairing the damage? I think that the robots have a backup disk drive that saves the neural net's current configuration every time the robot uses a sleep machine, so [=GitD=] + sleep machine = bye bye backup. I wonder why the robots would be designed to only save the most recent configuration instead of making monthly backups for safety's sake, but Ecosystems Unlimited has already been presented as a bunch of barely competent nincompoops, so it doesn't surprise me that much.much.

[[/folder]]
Is there an issue? Send a MessageReason:
None

Added DiffLines:

****** And Florence was deliberately given [[http://freefall.purrsia.com/ff2600/fc02540.htm mirror neurons that would respond to humans]]. She sees humans as 'normal' and Bowman's Wolves as 'funny looking'.
Is there an issue? Send a MessageReason:
None


** I thought Gardener in the Dark physically destroyed neurons (or the electronic version, anyway). Then again, I can still see sleep mode deprivation preventing the damage; maybe the program logs and deactivates target neurons, but doesn't actually destroy them until the robot enters sleep mode. Since the Jar Jar bot never slept at all, [=GitD=] didn't permanently damage him at all.

to:

** I thought Gardener in the Dark physically destroyed neurons (or the electronic version, anyway). Then again, I can still see sleep mode deprivation preventing the damage; maybe the program logs and deactivates target neurons, but doesn't actually destroy them until the robot enters sleep mode. Since the Jar Jar bot never slept at all, [=GitD=] didn't permanently damage him at all.all.
*** I was in the middle of typing up a response about why it would be easily explained by the robots having only ''virtual'' neural nets, but then I realized that that's not necessary, because the robots' neural nets need to have the ability to alter their structure on a daily basis anyway. What [[spoiler:Gardener in the Dark]] would be doing to a physical artificial neural net wouldn't be destroying physical neurons or the connections between them, but merely forcibly altering the structure of the net using the same mechanism the net uses to learn. Why does going to sleep destroy any chances of repairing the damage? I think that the robots have a backup disk drive that saves the neural net's current configuration every time the robot uses a sleep machine, so [=GitD=] + sleep machine = bye bye backup. I wonder why the robots would be designed to only save the most recent configuration instead of making monthly backups for safety's sake, but Ecosystems Unlimited has already been presented as a bunch of barely competent nincompoops, so it doesn't surprise me that much.
Is there an issue? Send a MessageReason:
None



to:

* The big reveal of Feb. 28, 2014 gives credence to a few of the remarks above. So far the WordOfGod remarks about small frontal lobes from [[http://home.comcast.net/~ccdesan/Freefall/Freefall_Backstory.html the backstory page]] seem to be very relevant ([[http://freefall.purrsia.com/ff2500/fv02468.gif Mar. 3, 2014]]). The UnpredictableResults theory mentioned above for the first model uplifted species may be true as well, as may the lab vs. family upbringing. It's worth asking, however, whether the word "sociopath," at least in its human sense, really fits the situation here.
Is there an issue? Send a MessageReason:
None


[[WMG: The Chimpanzee sociopaths, take 2]]

Well, it's February 28, 2014 and we just had TheReveal, which is sure to revive the discussion in the previous section. Huge, huge spoiler: [[spoiler:Dr. Bowman is an uplifted chimp!]]
Is there an issue? Send a MessageReason:
WMG:The Chimpanzee sociopaths, take 2

Added DiffLines:

[[WMG: The Chimpanzee sociopaths, take 2]]

Well, it's February 28, 2014 and we just had TheReveal, which is sure to revive the discussion in the previous section. Huge, huge spoiler: [[spoiler:Dr. Bowman is an uplifted chimp!]]
Is there an issue? Send a MessageReason:
None

Added DiffLines:

** The problem with that is that those technologies are very closely related to what we can do now (except for breathing in space). It is much less of a mental leap for someone used to handguns and laser pointers to imagine a laser weapon, especially since the military is already experimenting with them, than it would be for one of the Founding Fathers to understand television. Science fiction expands our worldview by asking "What If?", but everything in it is something we already understand taken to the next level, or combined with other things we also understand for an impossible yet comprehensible result. The way I see it, Florence doesn't mean you have to understand every detail of how every single ''bit'' of a technology works, you just need a fundamental grasp of the underlying principles of the technology itself. Picture this: a giant ten times taller than yourself walks to the wall; the giant reaches out, the wall opens, the giant walks through, and the wall closes again. A fantasy castle? Or a baby seeing an adult open a door? The definition of magic is "the power of influencing events using mysterious or unknown forces". A baby has no concept of doorknobs, a caveman has no concept of firearms, a musketeer has no concept of fusion power. Who knows what future technologies will be developed that are so advanced we literally are unable to think about them? Things so outside the context of our worldview that even science fiction hasn't thought them up yet?
Is there an issue? Send a MessageReason:
None


*** TheTerminator: A cautionary tale, or an example of the Three Laws GoneHorriblyRight?

to:

*** TheTerminator: A cautionary tale, tale of the over-development of technology, or an example of the Three Laws GoneHorriblyRight?
Is there an issue? Send a MessageReason:
None



to:

*** TheTerminator: A cautionary tale, or an example of the Three Laws GoneHorriblyRight?
Is there an issue? Send a MessageReason:
None


** I'm a compter scientist and I can tell you that getting robots to recognize the content of images is very hard (that's why websites make you do that to show you're a human). You might think it's easy but that's because a huge amount of our brains are dedicated to processing images. It's easier to get a robot to do theoretical calculus than to get it to tell the difference between a tea cup and a chair by image alone. I imagine they use the transponder whenever they can because "looking" is a big mental effort.


to:

** I'm a compter computer scientist and I can tell you that getting robots to recognize the content of images is very hard (that's why websites make you do that to show you're a human). You might think it's easy but that's because a huge amount of our brains are dedicated to processing images. It's easier to get a robot to do theoretical calculus than to get it to tell the difference between a tea cup and a chair by image alone. I imagine they use the transponder whenever they can because "looking" is a big mental effort.

effort.
** Dvorak actually references this, when he mentions that abstract image recognition is an advanced skill, and that they need text-only books for the younger robots.
Is there an issue? Send a MessageReason:
None


** I'm a compter scientist and I can tell you that getting robots to recognize the content of images is very hard (that's why websites make you do that to show you're a human). You might think it's easy but that's because a huge amount of our brains are dedicated to processing images. It's easier to get a robot to do theoretical calculus than to get it to tell the difference between a tea cup and a chair by image alone. I imagine the transponder whenever they can because "looking" is a big mental effort.


to:

** I'm a compter scientist and I can tell you that getting robots to recognize the content of images is very hard (that's why websites make you do that to show you're a human). You might think it's easy but that's because a huge amount of our brains are dedicated to processing images. It's easier to get a robot to do theoretical calculus than to get it to tell the difference between a tea cup and a chair by image alone. I imagine they use the transponder whenever they can because "looking" is a big mental effort.

Is there an issue? Send a MessageReason:
None


** I'm a compter scientist and I can tell you that getting robots to recognize the content of images is very hard (that's why website make you do that to show your a human). You might think it's easy but that's because a huge amount of our brains are dedicated to processing images. It's easier to get a robot to do theoretical calculus than to get it to tell the difference between a tea cup and a chair by image alone. I imagine the transponder whenever they can because "looking" is a big mental effort.


to:

** I'm a compter scientist and I can tell you that getting robots to recognize the content of images is very hard (that's why website websites make you do that to show your you're a human). You might think it's easy but that's because a huge amount of our brains are dedicated to processing images. It's easier to get a robot to do theoretical calculus than to get it to tell the difference between a tea cup and a chair by image alone. I imagine the transponder whenever they can because "looking" is a big mental effort.

Is there an issue? Send a MessageReason:
None



to:

** I'm a compter scientist and I can tell you that getting robots to recognize the content of images is very hard (that's why website make you do that to show your a human). You might think it's easy but that's because a huge amount of our brains are dedicated to processing images. It's easier to get a robot to do theoretical calculus than to get it to tell the difference between a tea cup and a chair by image alone. I imagine the transponder whenever they can because "looking" is a big mental effort.

Is there an issue? Send a MessageReason:
None



to:

** Well it is quite possible that it is used in some way like that while moving between planets. What I was wondering about is the computational advantage that you could get out of something like that ^^




to:

** It is because, to keep the atmosphere breathable for Sam, the suit is at positive pressure relative to the outside. This is a safety feature also used with breathing masks in hazardous atmospheres in the oil and gas industry. That also explains why he can open it without big issues; it just means that the compressor pumping the suit needs to work harder, and as long as the leak is not too severe it will just be counterbalanced by more air being pumped in.
Is there an issue? Send a MessageReason:
None



to:

* It seems to me that what he's wearing is basically low-level PoweredArmor plus a breather mask, which means the suit wouldn't be pressurized. What I don't understand is why a tear in the suit would leak and hiss audibly.
Is there an issue? Send a MessageReason:
None


[[WMG: Why did Florence refuse the seeker messages from Helix?]]

to:

[[WMG: Why did Florence refuse the seeker messages from Helix?]]Raibert?]]
Is there an issue? Send a MessageReason:
None


** The point was that merely listening to the messages would inform people [[http://freefall.purrsia.com/ff2400/fc02325.png that she hadn't been trashed]], and then they'd try again.

to:

** The point was that merely listening to the messages would inform people [[http://freefall.purrsia.com/ff2400/fc02325.png htm that she hadn't been trashed]], and then they'd try again.
Is there an issue? Send a MessageReason:
None


* She specifically says that the damage Gardener in the Dark inflicted on the "[=PleaSe RePAir tHE LeG=]" robot is permanent and that its personality can never be recovered, but the Jar Jar robot was reduced to the same state during Blunt's test and she was able to repair it just by flushing its recent memory.
** Because the latter had not been allowed to use the sleep machines, which help robots integrate their long-term memory. The Jar Jar bot's long-term memory was still normal, so clearing the cache only lost him a day's memory. The other robot had been infected for far too long for that to be viable. Its cache was infected, and its original personality deleted.
** I thought Gardener in the Dark physically destroyed neurons (or the electronic version, anyway). Then again, I can still see sleep mode deprivation preventing the damage; maybe the program logs and deactivates target neurons, but doesn't actually destroy them until the robot enters sleep mode.

to:

* She specifically says that the damage Gardener in the Dark inflicted on the "[=PleaSe RePAir "[=PLeaSe rePAir tHE LeG=]" robot is permanent and that its personality can never be recovered, but the Jar Jar robot was reduced to the same state during Blunt's test and she was able to repair it just by flushing its recent memory.
** Because the latter had not been allowed to use the sleep machines, which help robots integrate their long-term memory. The Jar Jar bot's long-term memory was still normal, so clearing the cache only lost him a day's memory. The other robot had been infected for far too long for that to be viable. Its cache long-term memory was infected, and its original personality deleted.
** I thought Gardener in the Dark physically destroyed neurons (or the electronic version, anyway). Then again, I can still see sleep mode deprivation preventing the damage; maybe the program logs and deactivates target neurons, but doesn't actually destroy them until the robot enters sleep mode. Since the Jar Jar bot never slept at all, [=GitD=] didn't permanently damage him at all.
Is there an issue? Send a MessageReason:
None


** Because the latter had not been allowed to use the sleep machines, which help robots integrate their long-term memory. The Jar Jar bot's long-term memory was still normal, so clearing the cache only lost him a day's memory. The other robot had been infected for far too long for that to be viable. Its cache was infected, and its original personality deleted.

to:

** Because the latter had not been allowed to use the sleep machines, which help robots integrate their long-term memory. The Jar Jar bot's long-term memory was still normal, so clearing the cache only lost him a day's memory. The other robot had been infected for far too long for that to be viable. Its cache was infected, and its original personality deleted.deleted.
** I thought Gardener in the Dark physically destroyed neurons (or the electronic version, anyway). Then again, I can still see sleep mode deprivation preventing the damage; maybe the program logs and deactivates target neurons, but doesn't actually destroy them until the robot enters sleep mode.

Added: 365

Changed: 2

Is there an issue? Send a MessageReason:
None


She specifically says that the damage Gardener in the Dark inflicted on the "[=PleaSe RePAir tHE LeG=]" robot is permanent and that its personality can never be recovered, but the Jar Jar robot was reduced to the same state during Blunt's test and she was able to repair it just by flushing its recent memory.

to:

* She specifically says that the damage Gardener in the Dark inflicted on the "[=PleaSe RePAir tHE LeG=]" robot is permanent and that its personality can never be recovered, but the Jar Jar robot was reduced to the same state during Blunt's test and she was able to repair it just by flushing its recent memory.memory.
** Because the latter had not been allowed to use the sleep machines, which help robots integrate their long-term memory. The Jar Jar bot's long-term memory was still normal, so clearing the cache only lost him a day's memory. The other robot had been infected for far too long for that to be viable. Its cache was infected, and its original personality deleted.
Is there an issue? Send a MessageReason:
None


She specifically says that the damage inflicted in the "[=PleaSe RePAir tHE LeG=]" robot is permanent and that its personality can never be recovered, but the Jar Jar robot was reduced to the same state during Blunt's test and she was able to repair it just by flushing its recent memory.

to:

She specifically says that the damage Gardener in the Dark inflicted in on the "[=PleaSe RePAir tHE LeG=]" robot is permanent and that its personality can never be recovered, but the Jar Jar robot was reduced to the same state during Blunt's test and she was able to repair it just by flushing its recent memory.