What if it becomes fully self-aware? Should we give it rights? No, point aside... it would be pretty nice to have self-aware bots. Think about it... they could just be given a task and figure it out themselves. If they need to be moved, they could figure out the new work again.
It's totes awesome.
What's thinking?
Why "should"?
[1] This facsimile operated in part by synAC.
We won't have to worry until it asks whether it has a soul.
Sidetracking a little here. Have scientists tried installing a "voicebox" on a robot capable of independent decision making and seeing whether it could learn a language or formulate its own? I'd find something like that fascinating.
It can figure out mazes, then. This is a good step as far as AI goes, but rats can do that too and they have yet to conquer the human race.
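For context, maze-solving of the kind mentioned above is a classic search problem rather than anything mysterious; a minimal sketch using breadth-first search over a grid (the maze layout and coordinates are made up for illustration):

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid maze.
    grid: list of strings, '#' = wall, '.' = open cell.
    Returns the shortest path length, or -1 if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1  # no path exists

maze = ["....",
        ".##.",
        "...."]
print(solve_maze(maze, (0, 0), (2, 3)))  # → 5
```

The point being made in the thread holds: this is exhaustive bookkeeping, not understanding, which is why rats can match it.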
edited 27th Jun '11 2:53:09 PM by Myrmidon
Kill all math nerds
It's not that simple, but some other projects are working on that.
That's fucking cool.
Fascinating, thanks for the link. Being brought up with Hollywood Science leads me to expect more than is possible.
Turns out that language is actually really freaking hard, we just benefit from a highly developed and fully general ability to learn languages as infants. I've heard (possibly anecdotal) things like experiments where babies who heard nothing but gibberish constructed a functional language out of it, and children in deaf families who "babble" in sign language.
Of course, getting that worked out took a few hundred thousand years. I suspect our creations will do it a little faster. Very interesting article, Tzetze.
I wouldn't call conversation with chatterbots rational. (Well, except in that it may be locally deterministic.)
edited 27th Jun '11 9:30:41 PM by Tzetze
Fine, keep your arbitrary distinctions >___>
It's not arbitrary. Have you ever talked to one? You can't discuss anything requiring any depth of knowledge, and they come across as schizophrenic. The Turing Test wasn't originally just about fooling you; it was about fooling you after you'd gotten to know the machine.
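The shallowness being described here is easy to reproduce: classic chatterbots work by keyword-and-template matching with no persistent knowledge, which is why they collapse as soon as a conversation requires depth. A toy sketch in that ELIZA-like style (the rules are invented for illustration, not taken from any real bot):

```python
import re

# A few ELIZA-style rewrite rules: (pattern, response template).
RULES = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI think (.*)", "What makes you think {0}?"),
    (r"\bbecause (.*)", "Is that the real reason?"),
]

def reply(message):
    """Return the first matching template response, echoing back
    captured text; no memory, no knowledge, no understanding."""
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when nothing matches

print(reply("I am worried about robots"))
# → Why do you say you are worried about robots?
print(reply("What is the capital of France?"))
# → Tell me more.
```

Any question that needs actual knowledge falls through to the fallback, which is exactly the failure mode being complained about.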
Yeah, I have. Although it could be that my experiences with Omegle and Yahoo Answers have lowered my standards for what passes as "human".
@Tzetze:
I don't know, but I know what it isn't...
There are existing A.I.s which can do that already. So it was kind of obvious in my mind. Do you have a counterpoint, perhaps?
"We have done the impossible and that makes us mighty." - Malcolm Reynolds@melloncollie: Regarding the Loebner Prize, I read some of the chat transcripts. They are not very convincing.
"We have done the impossible and that makes us mighty." - Malcolm ReynoldsDunno why we're holding "self-awareness" up to human standards right off the bat. Even the Cracked article compares them to cats. That's the level they're aiming for, and last I checked cats don't even pass the mirror test that's often used for animals. I'd be pretty darned impressed if we created a robot of even cat-level though.
edited 28th Jun '11 2:41:26 AM by Clarste
So? There are existing AIs that can diagnose people's diseases, but I don't see how that would be a required capability of AI.
But can it tell a story?
If you don't like a single Frank Ocean song, you have no soul.
Can a hamster tell a story?
Other ones can, though they're not very good stories.
There seems to be a misconception that programming an AI to do one particular thing naturally makes it capable of other aspects of intelligence.
There's also a misconception that intelligence is an easy-to-define and unambiguous term.
Blind Final Fantasy 6 Let's Play
Look at the cute little robot; it'll be exterminating humanity in no time.
Fight smart, not fair.
So I was reading this Cracked article, and came across this quote:
I'm not considering this self-awareness yet (not until it knows what "thinking", "self" and "awareness" are), but are you? Is it possible that in a few years we could have a perfectly rational conversation with an AI - that is, one that passes the Turing test?
I personally am not convinced yet: the walking, self-adjusting robot is modifying a walking body that needs to walk further and further. It builds a model from existing information - this is not thinking - and applies changes to said body, which happens to be its own. Given that an AI should be able to control multiple "units" at the same time, I would even question the notion of considering a given robotic body as an AI's "own": if the AI controls it via Wi-Fi, for instance, and the body walks into an explosion, a new one can be built easily.
Let's discuss this example of a next step in "moving in the direction of consciousness".
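The loop described above, propose a change from an internal self-model, try it, and keep it only if the body walks further, can be sketched abstractly. The fitness function and mutation scheme below are invented stand-ins for whatever the real robot does; the point is only the shape of the algorithm:

```python
import random

def walking_distance(gait):
    # Stand-in fitness: how far the body walks with these gait parameters.
    # (A made-up smooth function; the real robot measures this physically.)
    return -sum((p - 0.5) ** 2 for p in gait)

def improve_gait(steps=200, n_params=4, seed=0):
    """Simple hill climbing: perturb the current gait and keep
    the change only if the simulated body walks further."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(n_params)]
    best_score = walking_distance(best)
    for _ in range(steps):
        # Propose a small change based on the current self-model.
        candidate = [p + rng.gauss(0, 0.05) for p in best]
        score = walking_distance(candidate)
        if score > best_score:  # keep only changes that walk further
            best, best_score = candidate, score
    return best_score

print(improve_gait())  # score climbs toward 0, the optimum, as the gait improves
```

This is the crux of the objection: the loop optimizes a score over a body, but nothing in it depends on the body being the optimizer's "own".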
edited 27th Jun '11 7:10:26 AM by Sati1984
"We have done the impossible and that makes us mighty." - Malcolm Reynolds