As long as the template is the same, there's no reason why an AI which could conform to an exactly human standard of sentience couldn't make it into most afterlives. It just has a slightly different body.
Thing is, though, by the time we manage to teach AI certain human skills, it'll be so far ahead of us in others it's not even funny.
Except for 4/1/2011. That day lingers in my memory like...metaphor here...I should go.
Could we possibly talk about fiction we've made that includes human-like robots?
One of my series, for example, has sentient and human-like robots in it. Talking about them in that setting might help explain my view a little better.
"Who wants to hear about good stuff when the bottom of the abyss of human failure that you know doesn't exist is so much greater?"-Wraith
I just wish fictional A.I.s were less human. It's so unrealistic.
Blind Final Fantasy 6 Let's Play
^^ If it's related to the topic, I don't see why you couldn't.
Without good, no evil. Without want, no lack. Without desire, no need.
Human-like AI will not happen for a very long time. Mostly it's an economics thing: the cost of developing a human-like AI is probably in the billions, while the going price of a human intelligence is $10 to $15 per hour for the plain version. People are going to make machines that can do things that people can't do, or that machines can do better. Generally, emotions are the result of an instinctual wish for things that improve human life. A machine will not have any instincts its creator didn't put there, and generally there's no reason anyone would want their machines to have most human emotions. Self-preservation is useful for a machine, so they might have something you could call fear, although not irrational fear, unless their risk algorithm is broken. Which I guess is more or less the definition of irrational fear. But they wouldn't have it for long, as people could and would update the algorithms once the problem was discovered.
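The "fear as a risk algorithm" idea above can be sketched in a few lines. This is a hypothetical toy, not any real system: all names and numbers are made up. Machine "fear" is just refusing actions whose estimated risk crosses a threshold, and a miscalibrated estimator behaves exactly like irrational fear.

```python
# Toy sketch (all names hypothetical): self-preservation as a risk check.

def assess_risk(action, hazard_weights):
    """Estimate risk as a weighted sum of the hazards an action involves."""
    return sum(hazard_weights.get(h, 0.0) for h in action["hazards"])

def choose(actions, hazard_weights, risk_limit=0.5):
    """'Fear': refuse any action whose estimated risk exceeds the limit."""
    return [a["name"] for a in actions
            if assess_risk(a, hazard_weights) <= risk_limit]

actions = [
    {"name": "sweep street", "hazards": ["traffic"]},
    {"name": "inspect furnace", "hazards": ["heat", "fumes"]},
]

calibrated = {"traffic": 0.2, "heat": 0.4, "fumes": 0.3}
broken     = {"traffic": 9.0, "heat": 0.4, "fumes": 0.3}  # "irrational fear" of traffic

print(choose(actions, calibrated))  # ['sweep street']
print(choose(actions, broken))      # []
```

Fixing the "irrational fear" is then literally just patching the weights, which matches the point that people would update the algorithm once the bug was found.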
Right, but regardless of whether such machines will or even can happen, I'd like to talk about what would happen if they did.
^^ Actually, I'd say it's a technological issue as well.
The reason I'm skeptical of AI that can replicate a human exactly is that it is pretty much useless. There's no reason an AI designed for human tasks would actually have to act exactly like a human.
I certainly didn't mean to imply that we could just make a human AI just like that if we tried, just that even if we could, we probably wouldn't.
It's not useless for A.I.s designed for complex interaction with people. Humans like dealing with other humans. While some tasks like street sweeping or sewer work wouldn't require much in terms of emotional AI, more personal tasks like home care nursing robots would benefit from a more human response.
The trope of having emotions be the one thing an ultra-advanced AI can't replicate has always bugged me. Emotions aren't that hard to simulate. Humans are damn good at projecting emotions onto everything ("Oooh look, my pet turtle is happy to see me!"). It's the really complex intertwining of knowledge gained from a lifetime of building up memories and inferences from our senses that is so hard to replicate.
True, in that situation the appearance of emotion can be useful. But for the robot to get frustrated with the patients, for instance? Not really.
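The distinction being drawn here, displaying emotion without feeling it, can be sketched as a simple lookup. This is a made-up illustration (none of these names are from a real robotics API): the care robot's "emotion" is just a surface behavior chosen by rule, and frustration simply isn't in the repertoire.

```python
# Hypothetical sketch: emotion as a display behavior, not an internal state.

DISPLAY_RULES = {
    "patient_greets":   "smile",
    "patient_distress": "concern",
    "task_failed":      "calm",   # deliberately no 'frustration' option
}

def displayed_emotion(event):
    """Pick the facial/vocal display for an event; default to neutral."""
    return DISPLAY_RULES.get(event, "neutral")

print(displayed_emotion("patient_greets"))  # smile
print(displayed_emotion("task_failed"))     # calm
```

A human nurse might actually get frustrated after the tenth failed task; this robot responds the same way the hundredth time as the first, which is arguably the whole selling point.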
Being able to simply CONVERSE with an AI is, like, probably one of the most user-friendly and possibly robust ways to access the system, so it'd make sense that there'd be some functionality like that. I prefer my AIs smarter than humans, but that's just me.
"Coffee! Coffeecoffeecoffee! Coffee! Not as strong as Meth-amphetamine, but it lets you keep your teeth!"
What Measure Is a Non-Human? is a red herring. An AI is not "unique". As long as you can get a memdump, you can restore an AI from backup. So an AI is always expendable.
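The memdump argument is easy to demonstrate if you grant the premise that an AI's mind is just data. A minimal sketch, assuming nothing beyond Python's standard `pickle` serialization (the `AgentState` class is invented for illustration):

```python
# Hedged sketch: if a mind is data, "death" only loses what happened
# after the last snapshot.
import pickle

class AgentState:
    def __init__(self):
        self.memories = []

    def learn(self, fact):
        self.memories.append(fact)

agent = AgentState()
agent.learn("first day online")

backup = pickle.dumps(agent)       # the "memdump"

agent.learn("catastrophic error")  # then the running instance is lost

restored = pickle.loads(backup)    # restore from backup
print(restored.memories)           # ['first day online']
```

Note the restored copy has no memory of anything after the snapshot, which is where the "is the restored copy the same individual?" debate usually starts.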
You exist because we allow it and you will end because we demand it.
So I've got another question: what makes humanity better than perfectly human A.I.s, or machines that are just human-like?
Once the gaps of sapience and creativity have been crossed, they would basically be better than humans in every way, so would humans really be necessary?
Actually, I had an idea for a story about the first android with human-level intelligence. She (because it's always a she, notice that?) wasn't quite there yet, and with the combination of being able to crunch numbers like mad and an inability to really comprehend abstract ideas like irony (plus being insanely heavy), she was quickly discovered.
Actual human-like intelligence is probably not going to happen until we chart everything about the human brain, because we have this strange ability to understand ideas and advanced logic over pure numbers, which sets us at odds with how computers think. You'll be able to get machines that answer you in a logical manner according to several preset variables, but getting one that can take in ideas based on morality is much, much harder.
EDIT: Good question. If computers do manage to reach that boundary, I'd say it comes down to how mobile they are. If they need to be about a hundred connected PS 3s to think like us, they'll still need us to maintain them and build more. See, part of being human is that we have a rather compressed memory storage unit called the brain. It seems to handle certain types of data better than machines currently can.
The thing about making witty signature lines is that it first needs to actually be witty.
Fixed that for you.
Also, saying that an AI couldn't do emotions well doesn't say much. A lot of humans can't do emotions well. Could you get an AI to fake emotions, is the question.
Yes, and pretty easily too.
Especially since people already project emotions and personality onto completely unintelligent objects.
Nothing. The brain is merely a specifically designed, Sufficiently Advanced computer.
Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
Here's the game-changing question, though: would humans be able to make something as intelligent as ourselves? Or even smarter?
Think about that for a moment: you'd need to write a program that learns beyond what the human mind can understand, at a faster rate, and can come up with its own ideas to keep progressing.
Are we able to do that?
It's frightfully easy for people to write computer programs that they don't understand the operation of. You think any one person knows how Windows functions, in every aspect? :/
[1] This facsimile operated in part by synAC.
(Yes, because they have too much time on their hands, but that's beside the point.)
I don't think you understand how coding works. You've got to write it and then debug it. And you'll be debugging it a lot. In order to debug it, you've got to understand where the problems are in it and fix them. To do this, you've got to understand the code. Sure, you can understand a subsection of an entire program that you're helping to write (languages do cover that with things like subclasses, modules, abstract classes, interfaces, etc.), but you won't be able to write it unless you understand what you're writing.
I code enough. A lot of debugging in practice is empirical guesswork, which is why there are terms like this.
Neural nets. Self-mutating code. For that matter, simply sufficiently complex code. In a sufficiently large and/or complicated program, finding a "+1" instead of a "+2" in the wrong place can take hours.
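The "+1 instead of +2" point is worth a concrete illustration. A minimal invented example (not from any real codebase): an off-by-one in a sliding-window sum doesn't crash anything, it just makes every answer quietly wrong, which is exactly why such bugs take hours to find in a large program.

```python
# Off-by-one illustration: one character wrong, the program still runs.

def moving_sum(xs, window):
    """Sum each `window`-length slice of xs."""
    # A buggy version once read `xs[i:i + window - 1]` here: no error,
    # no crash, every sum just silently missing its last element.
    return [sum(xs[i:i + window]) for i in range(len(xs) - window + 1)]

print(moving_sum([1, 2, 3, 4], 2))  # [3, 5, 7]
```

In a four-line function the mistake is obvious; buried among a hundred thousand lines of correct code, it's the empirical guesswork the poster above describes.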
Nobody knows how all of Windows works. Nobody knows how all of Linux, even just the kernel, works, because both of them are more complicated than a single human can understand in their entirety.
Think about that for a moment: you'd need to write a program that learns beyond what the human mind can understand, at a faster rate, and can come up with its own ideas to keep progressing.
Are we able to do that?
First, you have to define intelligence.
We just need to make it able to absorb information faster, or at a more efficient rate than we do. It's not that outlandish.
This is just a generic discussion about Human intelligence and Machine intelligence and where the line between them grows fuzzy.
In some senses, scientists figure that human brains are just vastly complicated, organic computers. Hypothetically, a vastly complicated, gigantic regular computer could replicate human intelligence, although it is still the stuff of science fiction.
There are still a number of differences between the two, though; both have advantages and disadvantages. Machines are much better at hard computing and number crunching, both in speed and accuracy. They also have the advantage of being able to add computing power very easily, and perhaps most importantly, with proper care a fully capable AI would have an incredibly long lifespan, if not an infinite one. Human brains aren't to be scoffed at, though: not only are the benefits of sentience and sapience numerous and fruitful, but brains are more powerful than most currently existing computers, capable of understanding concepts completely alien to computers, and they are also very small.
On the topic of human-like artificial intelligence: are machines with perfectly replicated human intelligence (and all that implies) possible? To be a perfect human mind they would need to be sentient, sapient, and creative, and perhaps most importantly, they would need to be able to make mistakes.
In most works of fiction, ultra intelligent A.I.s seem to be more machine-like in their thinking, calculating without emotion and what not, so they are not perfect replicas despite their sentience.
So what are the real world implications of perfectly human AI or even just sentient AI?
Would they be considered to have a soul even though they are entirely manufactured?
Could they be considered human if there was no distinction between their personality and a human personality?
What impact would this have on religion? I doubt that a manufactured AI could achieve a place in any religious afterlife.
/end rant
Point is: a perfectly human artificial intelligence, or even just the regular kind you always see taking over everything in science fiction. Discuss.