It would be a lot cheaper, safer, easier, and less ethically gray to simply hire and train human workers than to genetically engineer an entirely new sentient race. Each Wolf must cost a fortune to make.
They were "proof of concept" models for an attempt to colonize Pfouts by uplifting a native species. Unfortunately the project was cut and the prototypes were sold as pets due to a "clerical error" (said clerical error being engineered by Dr. Bowman in order to properly socialize his creations).
Which of course means that the robots' brains are designed for colonists, not workers. They were improvised after slight difficulties with the factory.
The Chimpanzee sociopaths.
Why is it that uplifted chimps were described as sociopaths, when chimps do in fact have empathy and compassion? On the other hand, wolves do not feel empathy. Florence should be the sociopath, although a loyal one socially compatible with humans.
Evidence that wolves "do not feel empathy"? Been mindmelded to a wolf lately or something?
You think people don't study this stuff? Empathy and compassion are unique to primates, so wolves, dogs, cats etc. don't have empathy.
Yeah, I'm gonna need something pretty convincing if I'm going to believe that dogs have no empathy, all circumstantial evidence to the contrary.
All mammals have empathy, to one extent or another. It's a necessary survival trait for creatures that take care of their young.
Maybe the uplifting process does more than just give them extra intelligence, hands (for wolves), etc. If it messes with their brains then it's conceivable that uplifted chimps might be sociopaths even if normal ones aren't. So basically, A.I. Is a Crapshoot.
Empathy is more complicated than a simple trait that one either has or lacks. Human understanding of the matter isn't great, but we do know that in neurotypical humans it depends heavily on what are called "mirror neurons," parts of the brain that echo the emotional states of others who look similar to us.
People who have lowered mirror neuron responses, such as in autism, usually end up relying on entirely different metrics when determining empathy; because they do not tend to automatically mimic others' emotions, but still have a desire to connect with others, many will base their understanding of others on logic, creating a different, more analytical sort of empathy.
It's suspected that a sociopath's mirror neurons behave quite differently in certain areas (statistically speaking; individuals do vary). Monkeys are assumed to work in a similar way. A monkey could have impressive sorts of empathy, but an uplifted monkey might only empathize with other uplifted monkeys, since neither humans nor normal monkeys would trigger the same array of responses that a human would for another human. That this occurred despite the natural aptitude for understanding other viewpoints that Bowman's Architecture provides (even robots based on Bowman's Architecture are better at understanding human or Bowman's Wolf viewpoints than a normal robot) suggests that the resulting uplifted-monkey architecture was dramatically more attached to those that appeared similar to it.
Florence and the rest of the Bowman's Wolves were picked not for their ability to empathize (Florence in particular tends to assume canine motivations for human and squid-like individuals, and slowly deconstructs motivations in a way not typical of highly empathic individuals) but because canines instinctively develop a large social network and pecking order.
Oh, I know that; as I said, "socially compatible." My issue wasn't with using wolves but with the chimps being sociopathic failures. If I wanted to raise issues with Florence, it would generally be her desire for romantic attachment, which doesn't exist in wolves (but could easily be programmed in, since we are dealing with genetic engineering).
The uplifting process for the chimpanzees was probably an earlier, inferior model that had Unpredictable Results. The investors in this comic have been depicted as dumb and short-sighted enough that they'd rather move on to another species than try again.
It could just be a reference to how wild Chimpanzees are not the cute and cuddly animal-actors we see on stage and screen but are brutally vicious simians known to hunt and kill for 'fun' as much as for food.
That is, they have much more in common with us than we care to admit.
Not that this is an argument against the above explanations/discussion, but has anyone else considered Rule of Funny?
What, that the uplift failed, or the fact that they apparently make great CEOs?
According to This Page, it's because the chimps (real or uplifted, I have no idea) have very small frontal lobes, which deal with turning thought into action. When the uplifted chimps thought about hurting an annoying person, they were already halfway to acting on the thought. Florence has a bigger frontal lobe that keeps thought and action more distinctly separate.
Wait a second... the uplifted chimps were natural sociopaths... humans are chimps uplifted by evolution... Oh.My.God.
Humans are not "uplifted" chimps. The chimpanzees are our genetic cousins, we share a common ancestor down the line. Humans did not evolve from chimps.
Two possibilities, one more anvilicious than the other. First, the preachy: humans are bastards by nature, and it's only the civilizing influence of technology and culture that turns us into something other than sociopaths ourselves. Who are the three most sympathetic characters in the comic, disregarding the robots (who are proponents of technology by their very nature)? Two engineers and a vet, all very technical jobs that require a lot of schooling. Who are the least sympathetic humans? The company executives and the mayor, both positions that can be achieved by connections rather than merit. The chimps, meanwhile, are closer to savage humans than anything else, and as such are pure sociopaths.
The other option is psychological; the chimps were raised in a sterile laboratory environment, meaning that they never had the proper socializing to teach them how to be nice to other folks. By contrast, Florence and the other Bowman's Wolves were raised by regular families that taught them all the social niceties, including how to be nice.
The big reveal of Feb. 28, 2014 gives credence to a few of the remarks above. So far the Word of God remarks about small frontal lobes from the backstory page seem to be very relevant (Mar. 3, 2014). The Unpredictable Results theory mentioned above for the first-model uplifted species may be true as well, as may the lab vs. family upbringing. It's worth asking, however, whether the word "sociopath," at least in its human sense, really fits the situation here.
Sam mangled Newton's laws and aerodynamics so much that they started working for him. Alternatively, he gives himself a kickstart off-panel.
I thought that Sam just stopped moving with the station. Since the station is rotating clockwise, an individual who stops moving would appear to shoot counter-clockwise. Sam separates himself from the station by putting himself on wheels, allowing the station to move underneath him.
He has to stop moving first. When he explains it, it sounds like he thinks that he'll slow down without a force being applied, which doesn't make sense.
Check the lifted foot in panel five. He isn't just standing there, he's skating against the spin. He just doesn't say that.
Sam: The station spins, my inertia resists. I'm starting to pick up speed relative to the station because I'm starting to stand still.
How can that possibly refer to him skating?
Here's how I see it: if the station is rotating clockwise at X rotations per minute, Sam starts skating counter-clockwise at X rotations per minute. Eventually, he stops moving with the station and becomes weightless. If he thinks his inertia will cause him to stop moving, then he absolutely mangles Newton's first law. Inertia is the tendency of an object to maintain a constant velocity, not its tendency to stand still. Currently, the only force acting on him is pushing him in, towards the center of the station.
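The arithmetic here is easy to check. Assuming a hypothetical station (the comic gives no radius or spin rate, so these figures are invented for illustration), the rim speed Sam would have to skate at falls straight out of the centripetal-acceleration formula:

```python
import math

# Hypothetical station parameters (not given in the comic):
g = 9.81        # desired apparent gravity at the rim, m/s^2
radius = 100.0  # station radius, m

# Apparent gravity on a rotating station is centripetal acceleration,
# a = v^2 / r, so the rim moves at v = sqrt(a * r).
rim_speed = math.sqrt(g * radius)            # m/s
period = 2 * math.pi * radius / rim_speed    # seconds per rotation

print(f"rim speed: {rim_speed:.1f} m/s")     # ~31.3 m/s
print(f"rotation period: {period:.1f} s")    # ~20.1 s

# To become weightless, Sam must skate *against* the spin at the full
# rim speed; his inertia then carries him in a straight line while the
# station rotates beneath him. Inertia alone never slows him down.
```

Note that 31 m/s is sprinting-on-wheels territory, which is consistent with Sam needing skates rather than just stepping off the floor.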
See, Gehm's statement can be readily deduced if Clarke's Third Law is presumed true. Florence makes the assumption that a technology that is not understood is indistinguishable from magic. Clarke did not state what constitutes "sufficiently advanced", or what delineates magic and technology. So, it's not a real corollary, just a statement that bears some resemblance to the earlier ones.
Florence makes no distinction between "those who don't understand it" and those who cannot understand it. Because of this, unless a person understands how every piece of technology in existence works, they will encounter at least one technology that is "magic" to them. This would be fine, but Florence's self satisfied expression makes it seem like a put-down. The "no matter how primitive" is just rubbing salt in the wound.
It also seems like it would depend on the person in question. Personally, if I were abducted by aliens who had laser guns, holodecks and chips that let them breathe in space, I wouldn't think "magic," I'd think "technology more advanced than I'm currently capable of explaining. But there must be a scientific explanation, since I'm witnessing it right now."
The problem with that is that those technologies are very closely related to what we can do now (except for breathing in space). It is much less of a mental leap for someone used to handguns and laser pointers to imagine a laser weapon, especially since the military is already experimenting with them, than it would be for one of the Founding Fathers to understand television. Science fiction expands our worldview by asking "What If?", but everything in it is something we already understand taken to the next level, or combined with other things we also understand for an impossible yet comprehensible result. The way I see it, Florence doesn't mean you have to understand every detail of how every single bit of a technology works; you just need a fundamental grasp of the underlying principles of the technology itself. Picture this: a giant ten times taller than yourself walks to the wall; the giant reaches out, the wall opens, the giant walks through, and the wall closes again. A fantasy castle? Or a baby seeing an adult open a door? The definition of magic is "the power of influencing events using mysterious or unknown forces". A baby has no concept of doorknobs, a caveman has no concept of firearms, a musketeer has no concept of fusion power. Who knows what future technologies will be developed that are so advanced we literally are unable to think about them? Things so outside the context of our worldview that even science fiction hasn't thought them up yet?
Well Clarke himself wrote in Childhood's End:
"Surely," protested the Herald Tribune, "there is a fundamental difference. We are accustomed to Science. On your world there are doubtless many things which we might not understand — but they wouldn't seem magic to us." "Are you quite sure of that?" said Karellen, so softly that it was hard to hear his words. "Only a hundred years lies between the age of electricity and the age of steam, but what would a Victorian engineer have made of a television set or an electronic computer. And how long would he have lived if he started to investigate their workings? The gulf between two technologies can easily become so great that it is — lethal."
As the Mage: The Ascension JBM put it, "Belief is 'I know this toaster will toast my bread in about two to five minutes because that is how toasters work, even though I'm no electrician.'" In other words, those who do not understand tech treat it as magic.
Besides, in what universe is a graffiti wall so sparse? Are Florence's poor dichromat eyes just not getting the full picture?
The planet Jean is vastly underpopulated, and its human population makes up a very small proportion of its total population. It is also in the early stages of terraforming, and a higher percentage of the human population is technically minded than would be typical elsewhere. Presuming that robots are less likely to use a graffiti wall than humans, and that technically minded humans are less likely to use the graffiti walls than the normal human populace, and that the city planners on Jean have begun construction to support the greater influx of population planned for later stages of terraforming, it's not unreasonable to conclude that Jean has many more graffiti walls per potential graffiti artist than would be typical elsewhere.
The DAVE Drive
They have a machine capable of increasing the rate at which time passes and they use it solely for transportation. They should put one around a colony and have them advance technology at insane rates. They could also use it to dramatically speed up the research on making intelligent animals, as they could grow them from babies to adults in a much shorter time. When they colonize a planet, they could use it to rapidly generate plants and animals for terraforming before sending the ship back.
Who says they can do it on that scale? And altering the density of space-time is said to be Dangerous And Very Expensive.
It's also possible that relativistic speeds are required.
Well it is quite possible that it is used in some way like that while moving between planets. What I was wondering about is the computational advantage that you could get out of something like that ^^
Sam's environmental suit
It is established that Sam's species is native to a world with a much higher air pressure than humans. Sam is unable to stay awake and conscious in Earth-level pressure, and thus needs his environmental suit while he's staying on Jean. So it really bugs me that he can take off the faceplate and even a sleeve.
Higher oxygen content, not necessarily higher pressure.
It's not that strange - after all, a human can easily survive for short periods of time in a low-oxygen environment, so there's no reason why Sam would be any different. As long as he doesn't leave his faceplate off for more than a minute or two, he wouldn't feel much of any effect. As for the sleeves, they appear to be fairly tight around the shoulder, so there wouldn't be much air leakage. On the other hand, if his suit springs a leak in a less-tight area, it could depressurize the whole thing in short order, which would be potentially fatal...
I saw that as 'take a deep breath'; I'm more concerned with the 'yank a part of your face off' bit. It's been implied several times that Sam the squidoid is not as large as the suit. For instance, when he was talking to the short 'Texan', he pulled the legs of the suit up into the torso, calling them 'armatures'. (I don't remember the strip exactly, unfortunately; it was when they were getting the contract to launch the satellites.) The suit seems to me kind of like the old guy from the first Men in Black movie: a little alien in a big suit.
Sam doesn't have any bones, so even though he has close to human body mass, he can pull it into a much tighter space without trouble.
It seems to me that what he's wearing is basically low-level Powered Armor plus a breather mask, which means the suit wouldn't be pressurized. What I don't understand is why a tear in the suit would leak and hiss audibly.
It's because the suit is kept at positive pressure relative to the outside in order to keep Sam's atmosphere breathable. This is a safety feature also used with breather masks in hazardous atmospheres in the oil and gas industry. It also explains why he can open the suit without big issues: the compressor pumping the suit just has to work harder, and as long as the leak isn't too severe, it's counterbalanced by more air being pumped in.
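The positive-pressure argument reduces to a simple steady-state flow balance; here's a minimal sketch (the function name and the flow figures are invented for illustration, not taken from the comic):

```python
def suit_holds_pressure(compressor_flow_lpm: float, leak_flow_lpm: float) -> bool:
    """A positive-pressure suit stays above ambient pressure as long as
    the compressor's makeup flow at least matches the flow lost through
    leaks (a steady-state approximation; flows in liters per minute)."""
    return compressor_flow_lpm >= leak_flow_lpm

# A pinhole or a briefly opened sleeve: easily compensated by pumping harder.
print(suit_holds_pressure(compressor_flow_lpm=200, leak_flow_lpm=15))   # True
# A large tear: the suit depressurizes despite the compressor.
print(suit_holds_pressure(compressor_flow_lpm=200, leak_flow_lpm=450))  # False
```

This is also why a small leak hisses audibly: air is continuously flowing outward through it, which is exactly the behavior that keeps the outside atmosphere from getting in.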
If his species is way more likely to loot, shouldn't they also be lighter sleepers? I suppose they'd probably wake up instantly if you tried to take away their skeleton, but still.
Why colonize Pfouts?
As mentioned above, Bowman's Wolves were created as a proof-of-concept to show that uplifting nonsentient species was possible, in order for humans to have a colony on Pfouts, which is a garden world, but with opposite chirality to Earth's chemistry. Animals native to Pfouts would be uplifted by this process. But... why? The whole point of a colony is to have a new place for humans to live. No matter how many animals you uplift, Pfoutian life is still dextro-amino acid based, and Earthling life is still levo-amino acid based, and so the two are incompatible. All you're gaining that way is a new species on a planet that humans probably won't be touching and negative several billion dollars. Or is this the reason the whole project was cancelled, and I just missed it?
*Facepalm* resources. Ore deposits etc. This is the whole point of a colony.
But then why uplift any species at all? Just ship in food every month or so. If the uplifted species is going to be workers, what happens when the resources are gone? You've just given yourself a long-term problem for short-term gain. And what happens if they object to being used like that? And why terraform any planets? If you're just looking for resources, build a self-contained facility on a dead planet or asteroid and either use robots or equip every living worker with a breath mask, just in case; it's much simpler and cheaper.
Basically, Ecosystems Unlimited is an Expy for the Weyland-Yutani corporation. They do stuff because of greed, or just because they can. Note that they've harvested the female Bowman's Wolves' eggs and force them to buy them back if they want to have pups. The Bowman's Wolves were developed so that the company could have a species that they can control and who legally won't be people. So they can have slave labor without anyone considering them slaves. Or so they thought...
The point to the original question is that biology simply doesn't matter when you get down to it. Who cares what biological composition the colonists of a new planet are, as long as they are culturally compatible with us, and providing the trade and resources that the colony is set up to produce? It's much more cost-effective to just uplift a species instead of trying to turn the entire planet's ecosystem upside down for the sake of terraforming, and the end-result is identical from economic standpoint.
But you don't even need to uplift species, if all you care about is resources. Just have a bunch of mining facilities on the planet, and either ship in food or grow it in a greenhouse. We can already grow meat in Petri dishes (albeit not very good meat). Depending on how chirality works, you might even be able to grow plants in native soil and not have to worry about native fauna eating it because it'll make them sick. You could even bypass living workers altogether and just use robots.
Uplifted species are superior to robots in almost every way. They have a robotic A.I. package, so cultural issues would be the same regardless of robotic or uplifted populations. When it comes down to physical performance, though, uplifted species have a sensory and physical structure that's been tweaked over thousands of years for optimal survival and functionality in the environment. They also require less fuel and maintenance than a robot workforce (they can consume native flora and fauna for energy, and automatically self-repair minor to moderate damage). Furthermore, they self-propagate at an exponential rate and require few accommodations for such production, as opposed to massive, high-overhead factories that produce robots at a static rate. Robots are damn expensive compared to living organisms. As for trying to sustain a human population: not cost-effective on any level. Life-support accommodations would be ludicrously expensive and have high operating overhead, whereas there are naturally millions upon millions of acres for a "native" population to use for self-sustenance. Having a new, subservient species is also advantageous because they don't require oversight for innovation. Humanity is God to an uplifted race; they might develop cheaper methods on their own to accommodate their Lords and Masters. I could elaborate further on how this is cheaper than robots or trying to force the world to accommodate humans.
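The compounding-versus-static comparison in the paragraph above can be made concrete with a toy calculation (all starting values and rates are invented purely for illustration):

```python
# Illustrative comparison: an exponentially reproducing population vs. a
# factory's fixed linear output. None of these figures come from the comic.
population = 100          # uplifted colonists, doubling each generation
factory_output = 0        # robots produced so far
robots_per_year = 500     # the factory's constant production rate
years_per_generation = 10

for year in range(0, 101, years_per_generation):
    if year > 0:
        population *= 2                                       # compounding
        factory_output += robots_per_year * years_per_generation  # linear
    print(year, population, factory_output)

# After a century the doubling population (102,400) has outstripped the
# factory's linear total (50,000), despite the factory's early lead.
```

The crossover point depends entirely on the made-up rates, of course; the structural point is just that any fixed production rate eventually loses to compounding growth.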
Judging by Helix's comment about a pink train being squeezed into his sleep cubicle when Edge was there, it seems transponders provide a visual overlay as well, like Augmented Reality. I'd imagine they could turn that off if they thought of it, but they haven't.
Robots have limited memory space, hence their need for 'sleep'. With that in mind, which sounds like a better idea: record every interaction into your memory (and no, erasing it right away if it doesn't seem important can cause issues later if it turns out to be vital and you don't even remember it existed), or run a passive scan of your surroundings via radar, radio, audio, etc., and only begin active sight once you're in a conversation and/or have determined it to be necessary?
I'm a computer scientist, and I can tell you that getting robots to recognize the content of images is very hard (that's why websites make you do that to show you're a human). You might think it's easy, but that's because a huge amount of our brains is dedicated to processing images. It's easier to get a robot to do theoretical calculus than to get it to tell the difference between a teacup and a chair by image alone. I imagine they use the transponder whenever they can because "looking" is a big mental effort.
Dvorak actually references this, when he mentions that abstract image recognition is an advanced skill, and that they need text-only books for the younger robots.
Blunt is an idiot
Blunt's support for "Gardener in the Dark" is based on the idea that an idiot robot definitely can't intentionally hurt people, while Bowman-based robots are untested and therefore the danger is unknown. Lesser evil, right? Except the colony's infrastructure is almost entirely supported by robot labor, at nearly every level we've seen. GitD-affected robots aren't just de-personality'd, they're incapable of following the most basic commands in a useful way. They are no longer even effective laborers. And while they can't intentionally hurt people, they can certainly do it unintentionally, because they're too stupid to know better. So he's trying to replace a theoretical hazard with an ACTUAL hazard that also ruins the colony's industrial support, power supply, transport, and the rest of its infrastructure. If GitD goes live, people will die as the colony rapidly fails. His idea amounts to "Kill a fuckton of people, potentially the entire human population in-system, to protect them from what may or may not actually be a threat."
This is the same guy who said "If automobile companies truly cared about their customers, they would not sell cars you can drive." So yes, he is an idiot.
I think that he's intended as an example of the problems inherent in the First Law of Robotics. If robots were programmed to prevent all physical harm to humans no matter what then you'd get them making decisions that do, technically, prevent harm to humans but in a way that is harmful to humans.
Why did Florence refuse the seeker messages from Raibert?
True, she was afraid of a direct order that could have severe consequences. But (unless I'm forgetting something) she and Sam still have the device that erases her direct order memory. If she was afraid to use it herself, she could still trust Sam to use it if he had to.
Plus, it looks to me like the reset capsule isn't a "device" so much as just a sealed scent vial, and they only have one. Best to save it for something really important.
Why could Florence fix the JarJarBot?
She specifically says that the damage Gardener in the Dark inflicted on the "PLeaSe rePAir tHE LeG" robot is permanent and that its personality can never be recovered, but the Jar Jar robot was reduced to the same state during Blunt's test and she was able to repair it just by flushing its recent memory.
Because the latter had not been allowed to use the sleep machines, which help robots integrate their long-term memory. The Jar Jar bot's long-term memory was still normal, so clearing the cache only lost him a day's memory. The other robot had been infected for far too long for that to be viable. Its long-term memory was infected, and its original personality deleted.
I thought Gardener in the Dark physically destroyed neurons (or the electronic version, anyway). Then again, I can still see sleep mode deprivation preventing the damage; maybe the program logs and deactivates target neurons, but doesn't actually destroy them until the robot enters sleep mode. Since the Jar Jar bot never slept at all, GitD didn't permanently damage him at all.
I was in the middle of typing up a response about why this would be easily explained by the robots having only virtual neural nets, but then I realized that's not necessary, because the robots' neural nets need the ability to alter their structure on a daily basis anyway. What Gardener in the Dark does to a physical artificial neural net wouldn't be destroying physical neurons or the connections between them, but merely forcibly altering the structure of the net using the same mechanism the net uses to learn. Why does going to sleep destroy any chance of repairing the damage? I think the robots have a backup disk drive that saves the neural net's current configuration every time the robot uses a sleep machine, so GitD + sleep machine = bye-bye backup. I wonder why the robots would be designed to save only the most recent configuration instead of making monthly backups for safety's sake, but Ecosystems Unlimited has already been presented as a bunch of barely competent nincompoops, so it doesn't surprise me that much.
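The monthly-backup idea above is easy to sketch. Here's a minimal illustration (the class and method names are hypothetical, not any stated Ecosystems Unlimited design) of a rotating-snapshot policy that keeps the N most recent configurations instead of overwriting a single backup:

```python
from collections import deque

class SnapshotStore:
    """Keep the N most recent neural-net snapshots instead of one.

    With a single overwriting backup, GitD plus one sleep cycle destroys
    the only clean copy; with a ring of snapshots, an uninfected
    configuration survives until N sleep cycles have passed.
    """
    def __init__(self, max_snapshots=12):
        self.snapshots = deque(maxlen=max_snapshots)

    def save(self, net_config):
        # Called at each sleep cycle; the oldest snapshot drops off the end.
        self.snapshots.append(net_config)

    def restore(self, age=0):
        # age=0 is the most recent snapshot, age=1 the one before, etc.
        return self.snapshots[-1 - age]

store = SnapshotStore(max_snapshots=3)
for config in ["clean_v1", "clean_v2", "infected"]:
    store.save(config)

print(store.restore())       # "infected" -- the latest copy is compromised
print(store.restore(age=1))  # "clean_v2" -- an older, clean copy survives
```

The design trade-off is just storage: N full neural-net configurations instead of one, which may be why a cost-cutting company would skip it.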