That's a claim, not evidence. The p-zombie argument says that I would react exactly the same way even if I weren't sapient.
Link to TRS threads in project mode here.
Are you saying that you don't know whether you are a p-zombie or not? Surely your internal experiences are evidence to you?
Edited by DeMarquis on May 17th 2019 at 11:25:06 AM
"We learn from history that we do not learn from history."
I feel it's easier to say that "humans are not sapient" than to find a good definition of sapience. Sometimes scientific advances make us rethink things we thought were obvious. We thought that consciousness was something that directly affects our behaviour. Then we started building computers that could fool people into thinking they were human, which raises some philosophical questions. It makes us realize that we don't really know what consciousness is. It's also worth noting that computers are still far from really acting conscious; the ones that pass the Turing test are clearly faking it. If a computer ever acts consciously (whether "real" or "faked"), we'll know we are conversing with a machine. Because then it will not be a dumb machine impersonating a human ego, but a machine with a real (for some definition of real) machine ego.
The "internal evidence" thing is good, but so far it's merely philosophical. I know that I am conscious, and I see that other humans are similar to me, so I suppose they are conscious too. But I have no direct evidence of it. You could be a zombie impersonating a human, for all I know. The only thing we know now is that we keep using the words "sapience" and "consciousness", but we have even less idea what they mean than we previously thought. We are basically back in Socrates' (?) cave.
My personal opinion is that consciousness (whatever it really is) has less influence on our actions than we previously thought. It certainly has some, as we are talking about it.
The universe is under no obligation to make sense to us.
The cave was Plato's thing, shadows on the wall. And looking it up, he wrote it using Socrates as a character.
I admit that I have no direct evidence that other humans are self-aware. But I'm self-aware, and none of my other traits are unique to me among all seven billion humans on the planet — heck, even the Mary Sue-sounding combination "autistic polyamorous plural trans lesbian who identifies as a catgirl" isn't unique to me.
It is, I think, safe to assume that other humans are self-aware. It is possible that they are not. But it is also possible that they are, so that just cancels out.
Trouble Cube continues to be a general-purpose forum for those who desire such a thing.
You proposed a gatekeeper rule that would only work for certain programs; it wouldn't even work for the Google system. A test should be species-agnostic when applied. That's why I linked one particular strip earlier: take a number of factors, including the ability to hold a conversation, and treat anything that exceeds a certain total as human/sapient/person.
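The factor-scoring test described above can be sketched as a simple weighted checklist. This is only a minimal illustration; the factor names, weights, and threshold below are invented for the example, not taken from the strip:

```python
# A sketch of a species-agnostic personhood test: score a candidate
# on several observable factors and compare the weighted total
# against a threshold. All factors and weights here are hypothetical.

FACTORS = {
    "holds_conversation": 3.0,
    "models_other_minds": 2.0,
    "plans_ahead": 1.5,
    "uses_tools": 1.0,
}
THRESHOLD = 5.0


def is_person(observations: dict) -> bool:
    """Return True if the weighted total of observed factors exceeds the threshold."""
    total = sum(weight for factor, weight in FACTORS.items()
                if observations.get(factor, False))
    return total > THRESHOLD


# A candidate that converses, models other minds, and plans ahead
# scores 3.0 + 2.0 + 1.5 = 6.5, which exceeds the threshold of 5.0.
print(is_person({"holds_conversation": True,
                 "models_other_minds": True,
                 "plans_ahead": True}))  # True
```

The point of the design is that nothing in the test asks *what* the candidate is, only what it can be observed to do.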
Link to TRS threads in project mode here.
The problem is that we're trying to provide a concrete definition for an extremely intangible concept, with no referent other than ourselves.
"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Yes, you could technically be an AI pretending to be human, but provided that you are actually human and not a computer or a doppelganger deliberately posing as one, it's safe to assume that you have consciousness.
The universe is under no obligation to make sense to us.
Yeah, the only practical definition is the vague "this sorta thing we do". All the grasps at a specific definition I see amount to just wanting to say "nuh uh, it doesn't count!" at something.
Which is why I said "if the simulation is complete, then there is no effective difference between sentience and the illusion of sentience". Thought bubbles as the vague "this sorta thing we do".
Link to TRS threads in project mode here.
@Petersohn: Your point that we don't really know what self-awareness is or how to measure it is well taken. However, you are wrong that computers are far from acting as if they were conscious, at least in terms of interpersonal conversation. Within the narrow confines of a two-way conversation where the human is not warned to be skeptical (that is, no one told them they might be talking to a computer), programs have been developed that pass the Turing test with flying colors. Granted, if you deliberately try to trip them up, you still can, but how long will that last? This is the basis of my claim that you cannot necessarily know that the Savage Chicken computer is sapient, especially in light of its own claim that it isn't.
As for the rest of you, I agreed that if a computer can simulate all aspects of human cognition, then it's effectively human, but that you can't judge that on the basis of spoken behavior alone. All aspects means all aspects, including the internal experience of qualia. Since you can't know that based on external behavior, you have to look inside the mental functioning to come to some sort of informed judgement. Short of telepathy, we can't do that with humans, but in theory we should be able to with a computer: you can actually test subroutines in the program and see what they do.
Regardless of that, the criterion I use to judge the sapience of an entity is based on the following: I have deep personal experience being human, years of interacting with other humans, and an extensive knowledge base of the documented behavior of humans across time and place. I also have extensive knowledge and experience interacting with and learning about computers and machines. So far, every human I have ever heard of behaves in a manner consistent with the hypothesis that they are all self-aware in the same way that I am. By contrast, no computer ever has. Therefore, I feel justified in possessing a set of priors such that the probability of any new human I meet being self-aware is very high, but that for a computer is very low. I will be naturally skeptical of any claim that a human is not fully self-aware, or that a computer is. I therefore require additional evidence before accepting such claims.
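The priors-and-evidence reasoning above is essentially a Bayesian update. A minimal sketch of the mechanics, with all the probabilities made up purely for illustration:

```python
def posterior(prior: float, p_evidence_given_h: float,
              p_evidence_given_not_h: float) -> float:
    """Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence


# Hypothetical numbers: a strong prior that a human is self-aware,
# a weak prior that a computer is, both updated on the same piece
# of evidence (say, passing a conversational test).
human_prior, computer_prior = 0.99, 0.01
p_pass_if_sapient, p_pass_if_not = 0.9, 0.3

print(posterior(human_prior, p_pass_if_sapient, p_pass_if_not))     # ~0.997
print(posterior(computer_prior, p_pass_if_sapient, p_pass_if_not))  # ~0.029
```

The same evidence barely moves either belief, which is the point of the argument: with priors this lopsided, a computer passing a conversational test is not, by itself, enough to conclude it is self-aware.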
Edited by DeMarquis on May 18th 2019 at 11:15:36 AM
"We learn from history that we do not learn from history."
All aspects means all aspects, including the internal experience of qualia.
Duct Tape for Everything has another use! Okay, strictly speaking, I believe this is simply a variant of a use NASA already came up with during space flight, but substituting "insanity" with "boredom."
Which reminds me, interesting that nobody ever brings up "duct tape" as a reason to prevent Sam from returning home. One Sqid with access to it is dangerous enough.
Reminder: Offscreen Villainy does not count towards Complete Monster.
It's probably worth asking... is Florence the first person to think to ask Sam this? I don't just mean in the comic itself (where she certainly is), but for all of Planet Jean?
Reminder: Offscreen Villainy does not count towards Complete Monster.
A number of people have probably asked him... and regretted it a minute later.
"We learn from history that we do not learn from history."
I have to imagine that Sam's people had stories about beings from the sky that kidnap individuals (several Earth cultures did, even before "aliens" were a thing). I'm just trying to imagine how it looks to them now that it has actually happened. Hell, while he's certainly adapted, think about how Sam felt at the time.
Reminder: Offscreen Villainy does not count towards Complete Monster.
Remember, the humans spoke to the sqids. They showed them the insides of their ships, and the sqids thought they were similar because everything was fastened down. What happened to Sam is a mystery, but only because he disappeared one day without a trace. I'm sure "got abducted by those aliens we met" is on the list of theories.
Flashback incoming...
"We learn from history that we do not learn from history."
Yeah, Sam is helping illustrate one major problem with AI command structures.
"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
"Where is this evidence?"
In your head. Isn't it?
Edited by DeMarquis on May 17th 2019 at 9:44:18 AM
"We learn from history that we do not learn from history."