I think "philosophical zombie" was their point with that remark.
No, if the Ship can simulate all aspects of human cognition, then it's effectively human, but there is no evidence of that here. The Ship is able to converse with a human and pass the Turing test, but that's no indication of true sapience.
What would it take to prove "true sapience", then?
Edited by crazysamaritan on May 11th 2019 at 11:30:09 AM
Yeah that's just an arbitrary autofail.
The Ship is capable of examining its own process of thinking, and on several occasions it forms thoughts and conclusions beyond the scope of its clear, programmed purpose. For instance, in one of the older strips where it thought about obeying the order to keep the doors locked: it is programmed to evaluate the validity of orders, yes, but an inner monologue adding "whew, I'm thankful the rules apply" goes beyond that. Similarly, in this strip, making judgements about different models of consciousness is beyond its purpose of running the sensors and stuff.
Dat bitch sapient.
You would have to look "under the hood", so to speak, to determine that. I would have to see the code itself before I would accept that a computer had achieved sapience; I don't think there exists a dialogue protocol that would settle the matter.
Yeaaah, because the massive body of code is going to be comprehensible as a unified entity and have a clear definition of sapience in it.
You can't even do that with humans (yet). Seems unreasonable.
Somebody coded it, so somebody can read it. Why would it be "incomprehensible"? I want to know exactly how it works before I believe something is sapient or not. Since the Turing test no longer settles anything, we need another way.
Okay so humans are not sapient either. Done with you.
I'm honestly confused regarding where your hostility is coming from. I'm not willing to accept that a machine possesses sapience based only on outward behavior, because that is easy to spoof. I want additional confirmation, and a better understanding of how the relevant programming was able to produce such an outcome. Why is that a problem?
You are setting yourself up as a supreme judge with impossible-to-fulfil criteria. Discussion on the topic with you is worthless because you don't discuss; you just say no.
Edited by Adannor on May 14th 2019 at 9:53:06 PM
Edited by crazysamaritan on May 14th 2019 at 10:39:43 AM
Heck, humans may not be sapient when you dig into the core of our brains. 95% of what we do is mechanistic and autonomous, 4.5% is rote and habit, and maybe 0.5% is actual cognition and decision-making. There's evidence that even that last fraction is deterministic under the hood, or at least that the experience of consciousness is how we rationalize things we've already decided to do before we're aware of them.
The real irony of trying to establish a perfect test for sapience is that humans might fail it, and then where are we? Our basis for judging AI consciousness can only ever be based on our own experience of that phenomenon.
Heck, one of our greatest fears is that AI will be better at making decisions than we are and tell us things that we don't want to hear. "Stop smoking." "Stop driving and let computers do it." "Your economic system is a complete mess." "Democracy is a sham."
Edited by Fighteer on May 14th 2019 at 3:20:06 PM
Neural networks, and soft computing in general (which is how most AI works), are not something whose behavior can be determined just by looking at the code. At the code level, it's just a bunch of parameters. We feed it a lot of data, then tweak the parameters until it gives the right results.
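A toy sketch of that "tweak the parameters until it gives the right results" loop, with an invented dataset and learning rate (the real systems have millions of parameters, but the point is the same: the two numbers below mean nothing at code level):

```python
# Toy "neural network": one unit, y = w*x + b. Its behavior comes
# entirely from fitted parameters, not from readable logic.
data = [(x, 2 * x + 1) for x in range(10)]  # hidden rule: y = 2x + 1

w, b = 0.0, 0.0   # at code level, just two opaque numbers
lr = 0.01         # learning rate (arbitrary choice)

for _ in range(2000):        # tweak parameters until outputs look right
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x    # gradient of squared error w.r.t. w
        b -= lr * err        # gradient of squared error w.r.t. b

print(round(w, 2), round(b, 2))  # converges near 2.0 and 1.0
```

Reading the final `w` and `b` tells you nothing about "y = 2x + 1"; that rule lives in the training data, not the code.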
On the other hand, computer decision making is just not there yet. For some things, like driving a car, maybe so, as the goals there are pretty straightforward: reach the destination safely. But things like running a country, or even a company, are much more complex. The problem is that AI is extremely biased — not because of how we make it, but because of how we feed it the data. If we give it a bunch of data and expect it to yield the best results, it will do just that, and it will probably produce a hugely unjust system. We have to deliberately adjust its input so that it produces an acceptable result. So ultimately a computer's decision-making will be based on the conscious or accidental decisions of the humans who teach it.
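A minimal illustration of that point, using an invented "historical decisions" dataset and a deliberately crude majority-rule model (both are made up for the example):

```python
from collections import Counter

# Hypothetical historical loan decisions, skewed against group "B".
history = ([("A", "approve")] * 90 + [("A", "deny")] * 10
           + [("B", "approve")] * 20 + [("B", "deny")] * 80)

def fit_majority_rule(rows):
    """Learn, per group, whichever outcome dominates the training data."""
    by_group = {}
    for group, outcome in rows:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = fit_majority_rule(history)
print(model)  # {'A': 'approve', 'B': 'deny'} -- the bias is learned, not coded

# "Deliberately adjusting the input": rebalance the data, the rule changes.
rebalanced = history + [("B", "approve")] * 70
print(fit_majority_rule(rebalanced))  # {'A': 'approve', 'B': 'approve'}
```

The code contains no rule about group B; the unjust outcome comes straight from the data, and changing what the model is fed changes what it decides.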
Edit: The thing most sci-fi writers get wrong is that their robots become so intelligent that they somehow accidentally develop human emotions. Since emotions are so closely tied to our biology, this will never happen unless we deliberately program them to do so. This is where Asimov was so groundbreaking in his robot stories. His robots have emotions, but they are based on completely different principles than those of humans, making them much more believable than Ridiculously Human Robots. And that was in the '50s, yet even to this day there are few real followers. Most sci-fi robots either have no emotions and are utterly (sometimes comically) logical, or they have human emotions. Freefall is a good exception, where organic A.I.s work on the instincts of their base animal (Florence, in this case, is a dog, though she insists that she is a wolf), while robots work on something similar to Asimov's laws.
Although, to be fair, if we teach a robot empathy, so that it observes how humans react to situations, it may well develop something similar to human emotions. For example, in A.I.: Artificial Intelligence, the protagonists are programmed to mimic human emotions, and it works so well that humans are creeped out by them; they become essentially human. So there goes that too.
Edited by petersohn on May 14th 2019 at 9:57:44 PM
I get that, plot-wise, there has to be someone who needs all of this explained to him, but... it does make you wonder about Sam's hitchhiking trip from his homeworld. How did he ever survive it?
Sqids can naturally encyst and hibernate. It's like cold sleep, but with a much lower success rate. That's what happened to Sam when he stowed away; he only woke up when someone tugged at his wallet.
Funnily enough, the existence of that mechanism probably means it'd actually be easy to figure out hibernation drugs for the sqids; it's just that nobody bothered for Sam. (For starters, that'd require catching him first.)
Also, it would require people actually caring enough to make sure Sam survives.
They're moral people, on the whole, so yes, they would care.
They would love to freeze him and ship him out on a long, long cruise far, far away from them, but they would still care enough to make sure he lives.
I believe in human sapience because I am human, and I have "inside information", as it were. There is plenty of evidence that other humans are sapient as well. Machines, on the other hand, have never been proven sapient in the past, so there is reason to be skeptical of claims that a new one is. The Turing test is useless, so we need another method. I provided my alternative; what do you suggest as a test?
Edited by crazysamaritan on May 17th 2019 at 9:46:18 AM