The implications of human-like AI

Sark AI Entity from across 100 000 miles Since: Feb, 2011
AI Entity
#1: Mar 2nd 2011 at 6:28:00 PM

This is just a generic discussion about human intelligence and machine intelligence, and where the line between them grows fuzzy.

In some senses, scientists figure that human brains are just vastly complicated, organic computers. Hypothetically, a vastly complicated, gigantic regular computer could replicate human intelligence, although it is still the stuff of science fiction.

There are still a number of differences between the two, though, and both have advantages and disadvantages. Machines are much better at hard computing and number crunching, both in speed and accuracy. They also have the advantage of being able to add computing power very easily, and perhaps most importantly, with proper care a fully capable AI would have an incredibly long lifespan, if not an infinite one. Human brains aren't to be scoffed at though: not only are the benefits of sentience and sapience numerous and fruitful, but brains are more powerful than most currently existing computers, capable of understanding concepts completely alien to computers, and they are also very small.

On the topic of human-like artificial intelligence, are machines with perfectly replicated human intelligence (and all that implies) possible? To be a perfect human mind they would need to be sentient, sapient, creative and, perhaps most importantly, they would need to be able to make mistakes.

In most works of fiction, ultra intelligent A.I.s seem to be more machine-like in their thinking, calculating without emotion and what not, so they are not perfect replicas despite their sentience.

So what are the real world implications of perfectly human AI or even just sentient AI?

Would they be considered to have a soul even though they are entirely manufactured?

Could they be considered human if there was no distinction between their personality and a human personality?

What impact would this have on religion? I doubt that a manufactured AI could achieve a place in any religious afterlife.

/end rant

Point is: a perfectly human artificial intelligence, or even just the regular kind you always see taking over everything in science fiction. Discuss.

Without good, no evil. Without want, no lack. Without desire, no need.
Ultrayellow Unchanging Avatar. Since: Dec, 2010
Unchanging Avatar.
#2: Mar 2nd 2011 at 6:50:51 PM

As long as the template is the same, there's no reason why an AI which could conform to an exactly human standard of sentience couldn't make it into most afterlives. It just has a slightly different body.

Thing is, though, by the time we manage to teach AI certain human skills, it'll be so far ahead of us in others it's not even funny.

edited 2nd Mar '11 7:17:44 PM by Ultrayellow

Except for 4/1/2011. That day lingers in my memory like...metaphor here...I should go.
Chagen46 Dude Looks Like a Lady from I don't really know Since: Jan, 2010
#3: Mar 2nd 2011 at 6:54:38 PM

Could we possibly talk about fiction we've made that includes human-like robots?

One of my series, for example, has sentient and human-like robots in it. Talking about them in that setting might help explain my view a little better.

"Who wants to hear about good stuff when the bottom of the abyss of human failure that you know doesn't exist is so much greater?"-Wraith
storyyeller More like giant cherries from Appleloosa Since: Jan, 2001 Relationship Status: RelationshipOutOfBoundsException: 1
More like giant cherries
#4: Mar 2nd 2011 at 6:55:42 PM

I just wish fictional A.I.s were less human. It's so unrealistic.

Blind Final Fantasy 6 Let's Play
Sark AI Entity from across 100 000 miles Since: Feb, 2011
AI Entity
#5: Mar 2nd 2011 at 7:12:08 PM

^^ If it's related to the topic, I don't see why you couldn't.

Without good, no evil. Without want, no lack. Without desire, no need.
Kzickas Since: Apr, 2009
#6: Mar 2nd 2011 at 7:29:45 PM

Human-like AI will not happen for a very long time. Mostly it's an economics thing: the cost of developing a human-like AI is probably in the billions, while the going price of human intelligence is $10 to $15 per hour for the plain version. People are going to make machines that can do things that people can't do, or that machines can do better. Generally, emotions are the result of an instinctual wish for things that improve human life. A machine will not have any instincts that its creator didn't put there, and generally there's no reason anyone would want their machines to have most human emotions. Self-preservation is useful for a machine, so they might have something you could call fear, although not irrational fear, unless their risk algorithm is broken. Which I guess is more or less the definition of irrational fear. But they wouldn't have it for long, as people could and would update the algorithms once the flaw was discovered.
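The "fear as a broken risk algorithm" idea can be sketched in a few lines. This is a hypothetical toy model (the function names, thresholds, and numbers are all invented for illustration), not a claim about how any real system works:

```python
# Toy self-preservation heuristic: the machine "fears" an action when the
# expected damage outweighs the expected benefit. All names are illustrative.

def expected_loss(p_damage: float, damage_cost: float) -> float:
    """Expected cost of an action damaging the machine."""
    return p_damage * damage_cost

def should_avoid(p_damage: float, damage_cost: float, benefit: float) -> bool:
    """'Rational fear': avoid the action only when expected loss beats benefit."""
    return expected_loss(p_damage, damage_cost) > benefit

# A miscalibrated probability estimate is the machine analogue of irrational
# fear: the structure is fine, the inputs are wrong (and patchable).
print(should_avoid(0.01, 100.0, 5.0))  # expected loss 1.0 < benefit 5.0 -> False
print(should_avoid(0.90, 100.0, 5.0))  # expected loss 90.0 > benefit 5.0 -> True
```

In this framing, "updating the algorithms" just means recalibrating `p_damage`, which is why the fear wouldn't stay irrational for long.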

Sark AI Entity from across 100 000 miles Since: Feb, 2011
AI Entity
#7: Mar 2nd 2011 at 7:39:15 PM

Right, but regardless of whether such machines will or even can happen, I'd like to talk about what would happen if they did.

Without good, no evil. Without want, no lack. Without desire, no need.
storyyeller More like giant cherries from Appleloosa Since: Jan, 2001 Relationship Status: RelationshipOutOfBoundsException: 1
More like giant cherries
#8: Mar 2nd 2011 at 7:55:13 PM

^^ Actually, I'd say it's a technological issue as well.

The reason I'm skeptical of AI that can replicate a human exactly is that it is pretty much useless. There's no reason an AI designed for human tasks would actually have to act exactly like a human.

Blind Final Fantasy 6 Let's Play
Kzickas Since: Apr, 2009
#9: Mar 2nd 2011 at 8:29:56 PM

I certainly didn't mean to imply that we could make a human AI just like that if we tried, only that even if we could, we probably wouldn't.

TrapperZoid Since: Dec, 2009
#10: Mar 2nd 2011 at 8:38:25 PM

It's not useless for A.I.s designed for complex interaction with people. Humans like dealing with other humans. While some tasks like street sweeping or sewer work wouldn't require much in terms of emotional AI, more personal tasks like home care nursing robots would benefit from a more human response.

The trope of having emotions as the one thing an ultra-advanced AI can't replicate has always bugged me. Emotions aren't that hard to simulate. Humans are damn good at projecting emotions onto everything ("Oooh look, my pet turtle is happy to see me!"). It's the really complex intertwining of knowledge gained from a lifetime of building up memories and inferences from our senses that is so hard to replicate.
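The "emotions are cheap to fake" point can be made concrete with a toy sketch. This is a hypothetical few-line mood model (the class name, decay factor, and thresholds are invented for illustration), far simpler than the lifetime-of-memories problem:

```python
# Minimal "apparent emotion": a single mood scalar nudged by events and
# mapped to canned expressions. All names and constants are illustrative.

class SimpleMood:
    def __init__(self) -> None:
        self.valence = 0.0  # -1.0 (distressed) .. +1.0 (pleased)

    def observe(self, event_value: float) -> None:
        # Decay old mood slightly, then nudge it toward the new event.
        self.valence = max(-1.0, min(1.0, 0.8 * self.valence + event_value))

    def express(self) -> str:
        if self.valence > 0.3:
            return "happy"
        if self.valence < -0.3:
            return "upset"
        return "neutral"

bot = SimpleMood()
bot.observe(0.6)   # something pleasant happens
print(bot.express())  # "happy"
```

A care robot with even this much state reads as emotional to most people; the hard part is everything the mood is supposed to be *about*.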

Kzickas Since: Apr, 2009
#11: Mar 2nd 2011 at 8:45:12 PM

True, in that situation the appearance of emotion can be useful. But for the robot to get frustrated with the patients for instance? Not really.

Lanceleoghauni Cyborg Helmsman from Z or R Twice Since: Jan, 2001 Relationship Status: In my bunk
#12: Mar 2nd 2011 at 11:38:04 PM

Being able to simply CONVERSE with an AI is probably one of the most user-friendly and possibly most robust ways to access the system, so it'd make sense that there'd be some functionality like that. I prefer my AIs smarter than humans, but that's just me.

"Coffee! Coffeecoffeecoffee! Coffee! Not as strong as Meth-amphetamine, but it lets you keep your teeth!"
SavageHeathen Pro-Freedom Fanatic from Somewhere Since: Feb, 2011
Pro-Freedom Fanatic
#13: Mar 3rd 2011 at 2:11:57 AM

What Measure Is a Non-Human? is a red herring. An AI is not "unique". As long as you can get a memdump, you can restore an AI from backup. So an AI is always expendable.

edited 3rd Mar '11 2:13:10 AM by SavageHeathen

You exist because we allow it and you will end because we demand it.
Sark AI Entity from across 100 000 miles Since: Feb, 2011
AI Entity
#14: Mar 3rd 2011 at 5:12:52 AM

So I've got another question, what makes humanity better than perfectly human A.I.s or machines that are just human-like?

Once the gaps of sapience and creativity have been crossed, they would basically be better than humans in every way, so would humans really be necessary?

Without good, no evil. Without want, no lack. Without desire, no need.
Usht Lv. 3 Genasi Wizard from an arbitrary view point. Since: Feb, 2011
Lv. 3 Genasi Wizard
#15: Mar 3rd 2011 at 5:35:28 AM

Actually had an idea for a story about the first android with human level intelligence. She (because it's always a she, notice that?) wasn't quite there yet, and with the combination of being able to crunch numbers like mad and an inability to really comprehend abstract ideas like irony (plus being insanely heavy), she was quickly discovered.

Actual human like intelligence is probably not going to happen until we chart everything about the human brain because we have this strange ability to understand ideas and advanced logic over pure numbers, which sets us at odds with how computers think. You'll be able to get machines that answer you in a logical manner according to several preset variables, but to get one that can take in ideas based on morality, well, that's much, much harder.

EDIT: Good question. If computers do manage to reach that boundary, I'd say it comes down to how mobile they are. If they need to be about a hundred connected PS3s to think like us, they'll still need us to maintain them and build more. See, part of being human is that we have a rather compressed memory storage unit called the brain. It seems to handle some types of data better than machines currently can.

edited 3rd Mar '11 5:37:26 AM by Usht

The thing about making witty signature lines is that it first needs to actually be witty.
EthZee Since: Oct, 2010
#16: Mar 3rd 2011 at 6:48:00 AM

Actually had an idea for a story about the first android gynoid with human level intelligence. She (because it's always a she, notice that?)

Fixed that for you.

Also, saying that an AI couldn't do emotions well doesn't say much. A lot of humans can't do emotions well. Could you get an AI to fake emotions, is the question.

storyyeller More like giant cherries from Appleloosa Since: Jan, 2001 Relationship Status: RelationshipOutOfBoundsException: 1
More like giant cherries
#17: Mar 3rd 2011 at 7:41:46 AM

Yes, and pretty easily too.

Especially since people already project emotions and personality onto completely unintelligent objects.

edited 3rd Mar '11 7:42:23 AM by storyyeller

Blind Final Fantasy 6 Let's Play
Yej See ALL the stars! from <0,1i> Since: Mar, 2010
See ALL the stars!
#18: Mar 3rd 2011 at 9:08:43 AM

[up][up][up][up] Nothing. The brain is merely a specifically designed, Sufficiently Advanced computer.

Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
Usht Lv. 3 Genasi Wizard from an arbitrary view point. Since: Feb, 2011
Lv. 3 Genasi Wizard
#19: Mar 3rd 2011 at 9:12:49 AM

Here's the game changing question though: Would humans be able to make something as intelligent as ourselves? Or even smarter?

Think about that for a moment, you'd need to write a program that learns beyond what the human mind can understand, at a faster rate, and can come up with its own ideas to keep progressing.

Are we able to do that?

The thing about making witty signature lines is that it first needs to actually be witty.
Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
DUMB
#20: Mar 3rd 2011 at 9:14:10 AM

It's frightfully easy for people to write computer programs that they don't understand the operation of. You think any one person knows how Windows functions, in every aspect? :/

[1] This facsimile operated in part by synAC.
Usht Lv. 3 Genasi Wizard from an arbitrary view point. Since: Feb, 2011
Lv. 3 Genasi Wizard
#21: Mar 3rd 2011 at 9:19:31 AM

(Yes because they have too much time on their hands but that's beside the point.)

I don't think you understand how coding works. You've got to write it and then debug it. And you'll be debugging it a lot. In order to debug it, you've got to understand where the problems are in it and fix them. To do this, you've got to understand the code. Sure, you can understand a subsection of an entire program that you're helping to write (languages do cover that using things like subclasses, subscripts, abstract scripts, implementations, etc), but you won't be able to write it unless you understand what you're writing.

The thing about making witty signature lines is that it first needs to actually be witty.
Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
DUMB
#22: Mar 3rd 2011 at 9:22:10 AM

I code enough. A lot of debugging in practice is empirical guesswork, which is why there are terms like this.

[1] This facsimile operated in part by synAC.
Yej See ALL the stars! from <0,1i> Since: Mar, 2010
See ALL the stars!
#23: Mar 3rd 2011 at 9:33:34 AM

[up][up] Neural nets. Self-mutating code. For that matter, simply sufficiently complex code. In a sufficiently large and/or complicated program, finding a "+1" instead of a "+2" in the wrong place can take hours.

Nobody knows how all of Windows works. Nobody knows how all of Linux, even just the kernel, works, because both of them are more complicated than a single human can understand in their entirety.
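The "+1 instead of +2" kind of bug can be made concrete (illustrative Python, not from any real codebase): one wrong constant buried in code that still runs cleanly, so only empirical testing finds it.

```python
# A single wrong constant in otherwise-correct code: nothing crashes,
# the results are just silently wrong. Function names are illustrative.

def pair_sums(values):
    """Sum each element with the one right after it (intended behaviour)."""
    return [values[i] + values[i + 1] for i in range(len(values) - 1)]

def pair_sums_buggy(values):
    # Typo: i + 2 instead of i + 1, skipping an element each time.
    return [values[i] + values[i + 2] for i in range(len(values) - 2)]

print(pair_sums([1, 2, 3, 4]))        # [3, 5, 7]
print(pair_sums_buggy([1, 2, 3, 4]))  # [4, 6]
```

Scale that up to millions of lines and it's clear why no one person holds a whole OS in their head.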

Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
storyyeller More like giant cherries from Appleloosa Since: Jan, 2001 Relationship Status: RelationshipOutOfBoundsException: 1
More like giant cherries
#24: Mar 3rd 2011 at 9:43:12 AM

Here's the game changing question though: Would humans be able to make something as intelligent as ourselves? Or even smarter?

Think about that for a moment, you'd need to write a program that learns beyond what the human mind can understand, at a faster rate, and can come up with its own ideas to keep progressing.

Are we able to do that?

First, you have to define intelligence.

Blind Final Fantasy 6 Let's Play
Lanceleoghauni Cyborg Helmsman from Z or R Twice Since: Jan, 2001 Relationship Status: In my bunk
#25: Mar 3rd 2011 at 10:55:47 AM

We just need to make it able to absorb information faster, or at a more efficient rate than we do. It's not that outlandish.

"Coffee! Coffeecoffeecoffee! Coffee! Not as strong as Meth-amphetamine, but it lets you keep your teeth!"

Total posts: 45