Robot demonstrates self-awareness?


Sati1984 Browncoat from Hungary Since: May, 2010
#1: Jun 27th 2011 at 6:54:05 AM

So I was reading this Cracked article, and came across this quote:

Lipson and his team have created a self-aware robot called Starfish, which taught itself basically everything with no outside assistance — to walk, navigate difficult obstacles and even adjust to injury (when scientists shortened a leg of the robot, it changed its gait to compensate). But it's the method by which it makes these decisions that's so worrying: Starfish doesn't just blindly follow schematics. It judges what actually needs to be done by constructing a conception of itself in its "brain," then makes structural decisions based on what it thinks it is, fundamentally, as a robot. The scientists say it's not exactly conscious yet, in that it is not "thinking about itself thinking," but it is independently moving "in the direction of consciousness, like a cat — that kind of level."

I'm not considering this self-awareness yet (not until it knows what "thinking", "self" and "awareness" are), but are you? Is it possible that in a few years we could have a perfectly rational conversation with an AI - a.k.a. one passing the Turing test?

I personally am not convinced yet: the walking, self-adjusting robot is modifying a walking body that needs to keep walking. It builds an image from existing information - that isn't thinking - and applies changes to that body, which happens to be its own. Given that an AI should be able to control multiple "units" at the same time, I would even question the notion of considering a given robotic body as an AI's "own". If the AI is controlling the body via Wi-Fi, for instance, and it walks into an explosion, a new one can be built easily.
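The self-modeling idea described in the quote (build an internal conception of the body, then replan from that model rather than from fixed schematics) can be sketched roughly like this. This is a purely hypothetical toy, not the actual Starfish code: the "body" is reduced to a single leg-length parameter, and all function names are made up for illustration.

```python
# Toy sketch of self-modeling (hypothetical; NOT the real Starfish robot code).
# The robot keeps an internal model of its own body (here, just one leg
# length), compares the model's predictions against what actually happens,
# updates the model, and replans its gait from the updated model.

def predicted_stride(leg_length: float) -> float:
    """Stride the internal self-model expects for a given leg length."""
    return 2.0 * leg_length

def observed_stride(true_leg_length: float) -> float:
    """What the robot actually measures when it tries to walk."""
    return 2.0 * true_leg_length

def update_self_model(model_leg: float, true_leg: float,
                      lr: float = 0.5, steps: int = 50) -> float:
    """Nudge the self-model toward whatever best explains the observations."""
    for _ in range(steps):
        error = observed_stride(true_leg) - predicted_stride(model_leg)
        model_leg += lr * error / 2.0  # simple corrective step on the parameter
    return model_leg

# The robot starts out believing its leg is 1.0 units long; a scientist
# "shortens" the real leg to 0.7. The self-model converges toward the
# truth, and the planned gait changes with it - no outside reprogramming.
model = update_self_model(model_leg=1.0, true_leg=0.7)
print(round(model, 3))                    # converges close to 0.7
print(round(predicted_stride(model), 3))  # gait replanned from updated model
```

The point of the sketch is the structural one from the quote: decisions flow from the robot's model of itself, and "injury" is handled by revising that model, not by consulting a schematic.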

Let's discuss this example of a next step in "moving in the direction of consciousness".

edited 27th Jun '11 7:10:26 AM by Sati1984

"We have done the impossible and that makes us mighty." - Malcolm Reynolds
Inhopelessguy Since: Apr, 2011
#2: Jun 27th 2011 at 6:56:52 AM

What if it becomes fully self-aware? Should we give it rights? No, point aside... it would be pretty nice to have self-aware bots. Think about it... they could just be given a task and figure it out themselves. If they need to be moved, they can figure out the new work on their own too.

It's totes awesome.

nzm1536 from Poland Since: May, 2011
#3: Jun 27th 2011 at 7:01:05 AM

Honestly, any Cracked article about the future is overly paranoid and exaggerated. Self-aware robots seem like the distant future to me. Let's wait and see.

"Take your (...) hippy dream world, I'll take reality and earning my happiness with my own efforts" - Barkey
Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
#4: Jun 27th 2011 at 2:44:22 PM

It creates an image from existing information - this is not thinking

What's thinking?

Given that an AI should be able to control multiple "units" at the same time, I would even question the notion of considering a given robotic body as an AI's "own".

Why "should"?

[1] This facsimile operated in part by synAC.
JPanzerj Admiral of the Leet from United Kingdom Since: Sep, 2009
#5: Jun 27th 2011 at 2:47:53 PM

We won't have to worry until it asks whether it has a soul.

Sidetracking a little here. Have scientists tried installing a "voicebox" on a robot capable of independent decision making and seeing whether it could learn a language or formulate its own? I'd find something like that fascinating.

Myrmidon The Ant King from In Antartica Since: Nov, 2009
#6: Jun 27th 2011 at 2:52:58 PM

It can figure out mazes, then. This is a good step as far as AI goes, but rats can do that too and they have yet to conquer the human race.

edited 27th Jun '11 2:53:09 PM by Myrmidon

Kill all math nerds
Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
#7: Jun 27th 2011 at 3:02:31 PM

Sidetracking a little here. Have scientists tried installing a "voicebox" on a robot capable of independent decision making and seeing whether it could learn a language or formulate its own?

It's not that simple, but some other projects are working on that.

[1] This facsimile operated in part by synAC.
Quoth Pink's alright, I guess. Since: Apr, 2010
JPanzerj Admiral of the Leet from United Kingdom Since: Sep, 2009
#9: Jun 27th 2011 at 4:42:23 PM

[up][up] Fascinating, thanks for the link. Being brought up with Hollywood Science leads me to expect more than is possible. [lol]

Shinziril Since: Feb, 2011
#10: Jun 27th 2011 at 9:03:27 PM

Turns out that language is actually really freaking hard; we just benefit from a highly developed and fully general ability to learn languages as infants. I've heard (possibly anecdotal) accounts of things like experiments where babies who heard nothing but gibberish constructed a functional language out of it, and of children in deaf families who "babble" in sign language.

Of course, getting that worked out took a few hundred thousand years. I suspect our creations will do it a little faster. Very interesting article, Tzetze.

melloncollie Since: Feb, 2012
#11: Jun 27th 2011 at 9:28:52 PM

Is it possible that in a few years we can have a perfectly rational conversation with an AI - a.k.a. passing the Turing-test?

It's been done

Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
#12: Jun 27th 2011 at 9:30:23 PM

I wouldn't call conversation with chatterbots rational. (Well, except in that it may be locally deterministic.)

edited 27th Jun '11 9:30:41 PM by Tzetze

[1] This facsimile operated in part by synAC.
melloncollie Since: Feb, 2012
#13: Jun 27th 2011 at 9:31:48 PM

Fine, keep your arbitrary distinctions >___>

Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
#14: Jun 27th 2011 at 9:34:37 PM

It's not arbitrary. Have you ever talked to one? You can't talk about anything requiring any depth of knowledge, and they come across as schizophrenic. The Turing test wasn't originally about just fooling you; it was about fooling you after getting to know them.
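The "locally deterministic" complaint from a few posts up can be made concrete with a toy rule-based chatterbot. This is an ELIZA-style illustration invented for this thread, not any real Loebner Prize entrant: each reply depends only on surface patterns in the last message, so anything needing real depth of knowledge gets a canned deflection.

```python
# Toy ELIZA-style chatterbot (purely illustrative, not a real contest bot).
# Replies are chosen by matching surface patterns in the latest message,
# with no memory and no knowledge - which is why such bots feel locally
# deterministic and fall apart once a conversation needs actual depth.

import re

RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why are you {0}?"),
    (re.compile(r"\byou\b", re.I),     "We were talking about you, not me."),
    (re.compile(r"\?$"),               "What do you think?"),
]
FALLBACK = "Tell me more."

def reply(message: str) -> str:
    """Return the first matching canned response for a message."""
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            return template.format(*m.groups())
    return FALLBACK

print(reply("I am worried about robots"))           # echoes the surface pattern
print(reply("Explain how a self-model works?"))     # canned deflection, no knowledge
```

Note that the second question gets a generic deflection: the bot has nothing behind the patterns, which is exactly why a transcript that merely fools a stranger for a few lines falls short of the original test.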

[1] This facsimile operated in part by synAC.
melloncollie Since: Feb, 2012
#15: Jun 27th 2011 at 9:38:25 PM

Yeah, I have. Although it could be that my experiences of Omegle and Yahoo Answers have lowered my standards for what's passable as "human".

Sati1984 Browncoat from Hungary Since: May, 2010
#16: Jun 28th 2011 at 2:20:19 AM

@Tzetze:

What's thinking?

I don't know, but I know what it isn't...

Why "should"?

There are existing A.I.s which can do that already. So it was kind of obvious in my mind. Do you have a counterpoint, perhaps?

"We have done the impossible and that makes us mighty." - Malcolm Reynolds
Sati1984 Browncoat from Hungary Since: May, 2010
#17: Jun 28th 2011 at 2:23:54 AM

@melloncollie: Regarding the Loebner Prize, I read some of the chat transcripts. They are not very convincing.

"We have done the impossible and that makes us mighty." - Malcolm Reynolds
Clarste One Winged Egret Since: Jun, 2009 Relationship Status: Non-Canon
#18: Jun 28th 2011 at 2:41:05 AM

Dunno why we're holding "self-awareness" up to human standards right off the bat. Even the Cracked article compares them to cats; that's the level they're aiming for, and last I checked, cats don't even pass the mirror test that's often used on animals. I'd be pretty darned impressed if we created even a cat-level robot, though.

edited 28th Jun '11 2:41:26 AM by Clarste

Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
#19: Jun 28th 2011 at 7:09:41 AM

There are existing A.I.s which can do that already.

So? There are existing AI that can diagnose people's diseases, but I don't see how that would be a required capability of AI.

[1] This facsimile operated in part by synAC.
Erock Proud Canadian from Toronto Since: Jul, 2009
#20: Jun 28th 2011 at 7:11:44 AM

But can it tell a story?

If you don't like a single Frank Ocean song, you have no soul.
BobbyG vigilantly taxonomish from England Since: Jan, 2001
Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
#22: Jun 28th 2011 at 7:20:38 AM

But can it tell a story?

Other ones can, though they're not very good stories.

There seems to be a misconception that programming an AI to do one particular thing naturally makes it capable of other aspects of intelligence.

[1] This facsimile operated in part by synAC.
storyyeller More like giant cherries from Appleloosa Since: Jan, 2001 Relationship Status: RelationshipOutOfBoundsException: 1
#23: Jun 28th 2011 at 10:31:59 PM

There's also a misconception that intelligence is an easy-to-define and unambiguous term.

Blind Final Fantasy 6 Let's Play
Deboss I see the Awesomeness. from Awesomeville Texas Since: Aug, 2009
#25: Jun 29th 2011 at 3:15:31 AM

Look at the cute little robot, it'll be exterminating humanity in no time.

Fight smart, not fair.

Total posts: 36