Freefall

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#5551: May 11th 2019 at 5:47:58 AM

It is merely simulating them, which is super creepy.
Now you're invoking the Chinese Room Experiment, which I respond to with Philosophical Zombies; if the simulation is complete, then there is no effective difference between sentience and the illusion of sentience, so you should treat anyone who manages it as human.

Link to TRS threads in project mode here.
Adannor Since: May, 2010
#5552: May 11th 2019 at 5:58:56 AM

I think philosophical zombie was their point with that remark.

DeMarquis Since: Feb, 2010
#5553: May 11th 2019 at 5:56:34 PM

No, if the Ship can simulate all aspects of human cognition, then it's effectively human, but there is no evidence of that here. Ship is able to converse with a human and pass the Turing test, but that's no indication of true sapience.

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#5554: May 11th 2019 at 8:29:44 PM

What would it take to prove "true sapience", then?

Edited by crazysamaritan on May 11th 2019 at 11:30:09 AM

Adannor Since: May, 2010
#5555: May 11th 2019 at 9:45:12 PM

[up][up] Yeah that's just an arbitrary autofail.

The ship is capable of examining its own process of thinking on several occasions, forming thoughts and conclusions beyond the scope of its clear, programmed purpose (e.g. in one of the older strips where it thought about obeying the order to keep the doors locked: it is programmed to evaluate the validity of orders, yes, but the inner dialogue adding "whew, I'm thankful the rules apply" is beyond that. Similarly, in this strip, making judgements about different models of consciousness is beyond its purpose of running the sensors and such.)

Dat bitch sapient.

DeMarquis Since: Feb, 2010
#5557: May 13th 2019 at 9:07:16 AM

You would have to look "under the hood", so to speak, to determine that. I would have to see the code itself before I would accept that a computer had achieved sapience; I don't think there exists a dialogue protocol that would settle the matter.

Adannor Since: May, 2010
#5558: May 13th 2019 at 9:22:31 AM

Yeaaah, because the massive body of code is going to be comprehensible as a unified entity and have a clear definition of sapience in it.

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#5559: May 13th 2019 at 10:11:20 AM

[up][up] You can't even do that with humans (yet). Seems unreasonable.

DeMarquis Since: Feb, 2010
#5560: May 14th 2019 at 11:06:40 AM

Somebody coded it, so somebody can read it. Why would it be "incomprehensible"? I want to know exactly how it works before I believe something is sapient or not. Since the Turing test no longer settles anything, we need another way.

Adannor Since: May, 2010
#5561: May 14th 2019 at 11:23:36 AM

Okay so humans are not sapient either. Done with you.

DeMarquis Since: Feb, 2010
#5562: May 14th 2019 at 11:32:44 AM

I'm honestly confused regarding where your hostility is coming from. I'm not willing to accept that a machine possesses sapience based only on outward behavior, because that is easy to spoof. I want additional confirmation, and a better understanding of how the relevant programming was able to produce such an outcome. Why is that a problem?

Adannor Since: May, 2010
#5563: May 14th 2019 at 11:46:27 AM

You are setting yourself up as a supreme judge with impossible-to-fulfil criteria. Discussion on the topic with you is worthless, as you don't discuss; you just say no.

Edited by Adannor on May 14th 2019 at 9:53:06 PM

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#5564: May 14th 2019 at 12:14:26 PM

Somebody coded it, somebody can read it. Why would it be "incomprehensible"?
That idea hasn't been true for at least a decade. Leaving aside Spaghetti code, company-wide programs tend to have dozens to hundreds of people adding code, revising it, and creating libraries to reference. Programs like Microsoft Word are nearly incomprehensible, Google's search algorithm is incomprehensible (it is comparable to a novel with half a billion pages or over one million novels of average length), and AlphaGo Zero (with its neural network components) is beyond even that. Check out Black Box for more information.

I'm not willing to accept that a machine possesses sapience based only on outward behavior, because that is easy to spoof.
Then you are also refusing to believe that a human possesses sapience, because you still only have access to outward behavior. You do not have the "program" that humans operate on. You cannot compare a computer's program to a human's and say "these lines represent sapience".

Edited by crazysamaritan on May 14th 2019 at 10:39:43 AM

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
#5565: May 14th 2019 at 12:17:40 PM

Heck, humans may not be sapient when you dig into the core of our brains. 95% of what we do is mechanistic and autonomous, 4.5% is rote and habit, and maybe 0.5% is actual cognition and decision-making. There's evidence that even the latter is deterministic under the hood, or at least that the experience of consciousness is how we rationalize things we've already decided to do before we're aware of them.

The real irony of trying to establish a perfect test for sapience is that humans might fail it, and then where are we? Our basis for judging AI consciousness can only ever be based on our own experience of that phenomenon.

Heck, one of our greatest fears is that AI will be better at making decisions than we are and tell us things that we don't want to hear. "Stop smoking." "Stop driving and let computers do it." "Your economic system is a complete mess." "Democracy is a sham."

Edited by Fighteer on May 14th 2019 at 3:20:06 PM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
petersohn from Earth, Solar System (Long Runner) Relationship Status: Hiding
#5566: May 14th 2019 at 12:44:51 PM

Neural networks, and soft computing in general (which is how most AI works), are not something whose behavior can be determined just by looking at the code. At the code level, it's just a bunch of parameters. We feed it a lot of data, then tweak the parameters until it gives the right results.
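To make that concrete, here's a hypothetical toy sketch in plain Python (not from any real AI system): the source contains only a generic update rule, and the behavior lives entirely in the learned parameter value, not in any readable logic.

```python
# Toy example: "learn" the relationship y = 2*x from data.
# Nothing below encodes a doubling rule as logic; the rule is
# discovered by repeatedly tweaking the parameter w.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # training samples

w = 0.0    # the single learnable parameter
lr = 0.01  # learning rate

for _ in range(1000):              # feed it the data, many times over
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # gradient of the squared error
        w -= lr * grad             # tweak the parameter toward better output

print(round(w, 3))  # w converges toward 2.0: learned, not coded
```

Reading this code tells you it performs gradient descent, but the *behavior* ("double the input") is only visible in the trained value of `w` — and a real network has millions of such parameters.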

On the other hand, computer decision-making is just not there yet. For some things, like driving a car, maybe so, as the goals there are pretty straightforward: reach the destination safely. But things like running a country, or even a company, are much more complex. The problem is that AI is extremely biased. It's not because of how we make it, but because of how we feed it data. If we give it a bunch of data and expect it to yield the best results, it will do just that, and it will probably result in a hugely unjust system. We have to deliberately adjust its input so that it produces an acceptable result. So ultimately a computer's decision-making will be based on the decisions, conscious or accidental, of the humans who teach it.
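As an entirely hypothetical illustration of that point (made-up data, trivial "model"): a system trained on a skewed historical record reproduces the skew, even though no line of its code is biased.

```python
# Hypothetical sketch: a trivial "model" trained on skewed past decisions.
# The code is neutral; the bias comes entirely from the data it is fed.
from collections import Counter

historical_decisions = ["reject"] * 90 + ["accept"] * 10  # skewed record

# "Training" here is just learning the majority outcome from the data.
model_output = Counter(historical_decisions).most_common(1)[0][0]

print(model_output)  # "reject": the past skew becomes the learned policy
```

Real systems are subtler than a majority vote, of course, but the failure mode is the same: optimize against an unjust record and you get an unjust policy.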

Edit: The thing that most sci-fi writers get wrong is that robots become so intelligent that they somehow accidentally develop human emotions. Since emotions are so closely tied to our biology, this will never happen unless we deliberately program them to do so. This is where Asimov was so groundbreaking in his robot stories. His robots have emotions, but those emotions are based on completely different principles than humans', making them much more believable than Ridiculously Human Robots. And that was in the '50s, yet even to this day there are few real followers. Most sci-fi robots either have no emotions and are utterly (sometimes comically) logical, or they have human emotions. Freefall is a good exception, where organic A.I.s work on the instincts of their base animal (Florence in this case is a dog, though she insists that she is a wolf), while robots work on something similar to Asimov's laws.

Although, to be fair, if we teach a robot empathy, so that it observes how humans react to situations, it may just develop something similar to human emotions. For example, in A.I.: Artificial Intelligence, the protagonists are programmed to mimic human emotions, and it works so well that humans become creeped out by them and they become essentially human. So there goes that too.

Edited by petersohn on May 14th 2019 at 9:57:44 PM

The universe is under no obligation to make sense to us.
Geoduck Since: Jan, 2001
#5568: May 15th 2019 at 12:14:43 PM

I get that plotwise there has to be someone who needs all of this explained to him, but... it does make you wonder about Sam's hitchhiking trip from his homeworld. How did he ever survive it?

Discar Since: Jun, 2009
#5569: May 15th 2019 at 1:48:00 PM

Sqids can naturally encyst and hibernate. It's like cold sleep, but with a much lower success rate. That's what happened to Sam when he stowed away; he only woke up when someone tugged at his wallet.

Adannor Since: May, 2010
#5570: May 15th 2019 at 9:49:42 PM

[up] Funnily enough, the existence of that mechanism probably means it'd actually be easy to figure out hibernation drugs for the sqids; it's just that nobody has bothered for Sam. (For starters, that'd require catching him first.)

32_Footsteps Think of the mooks! from Just north of Arkham Since: Jan, 2001 Relationship Status: THIS CONCEPT OF 'WUV' CONFUSES AND INFURIATES US!
#5571: May 16th 2019 at 7:13:50 AM

Also, it would require people actually caring enough to make sure Sam survives.

Reminder: Offscreen Villainy does not count towards Complete Monster.
Adannor Since: May, 2010
#5572: May 16th 2019 at 8:33:04 AM

They're moral people, on the whole, so yes, they would care.

They would love to freeze him and ship him out on a long long cruise far far away from them, but they would care to make sure he lives.

DeMarquis Since: Feb, 2010
#5574: May 17th 2019 at 6:23:01 PM

I believe in human sapience because I am human, and I have "inside information", as it were. There is plenty of evidence that other humans are sapient as well. Machines, on the other hand, have never proven sapient in the past, so there is reason to be skeptical of claims that a new one is. The Turing test is useless, so we need another method. I provided my alternative; what do you suggest as a test?

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#5575: May 17th 2019 at 6:43:23 PM

There is plenty of evidence that other humans are sapient as well.
Where is this evidence?

I provided my alternative, what do you suggest as a test?
No, you proposed a gatekeeper rule that would only work for certain programs. A test should be species-agnostic when applied.

Edited by crazysamaritan on May 17th 2019 at 9:46:18 AM


Total posts: 8,032