Deconstructing 'AI is a Crapshoot'


G.G. Since: Dec, 1969
#1: Feb 20th 2011 at 9:35:56 PM

I am no expert on AI, but I would like to think that this trope is a lot more complex than most writers give it credit for. For the more technical tropers, how can I deconstruct this trope or explore its ramifications?

Durazno Since: Jan, 2001 Relationship Status: Drift compatible
#2: Feb 20th 2011 at 11:09:31 PM

It seems to me that a deconstruction of this one would either play it straight but do it well (the AI acts in a way we didn't expect because of a believable oversight or mistake in its design) or avert it (the AI does exactly what it's supposed to within its limits).

Or would it ask what the world would really be like if A.I.s could go homicidal at the drop of a hat?

AirofMystery Since: Jan, 2001
#3: Feb 21st 2011 at 3:36:42 AM

Or, alternately, have the AI be very well-meaning but incompetent.

G.G. Since: Dec, 1969
#4: Feb 21st 2011 at 8:02:18 AM

What about the case of Gone Horribly Right? HAL was designed to make sure that the mission to Jupiter was a success, but it ended up killing the humans to do so.

[up] Aren't computers like that anyway? I read in another thread, I believe it was Nornagest who mentioned it, that computers are precise and need precise input, or something like that.

lordGacek KVLFON from Kansas of Europe Since: Jan, 2001
#5: Feb 21st 2011 at 9:32:22 AM

Or make it a runaway computer that kills people but actually isn't evil, more like panicked and acting in self-defense.

"Atheism is the religion whose followers are easiest to troll"
Yej See ALL the stars! from <0,1i> Since: Mar, 2010
#6: Feb 21st 2011 at 9:40:18 AM

Isn't that what HAL did?

Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
SilentReverence adopting kitteh from 3 tiles right 1 tile up Since: Jan, 2010
Theram A travelling scholar Since: Jan, 2011
#8: Feb 21st 2011 at 10:06:49 AM

Throw away the notion of "deconstructing" it and substitute "exploring". Everybody is trying to "deconstruct" something these days, making their task harder than it would be if they just wrote under the premise of exploration.

SFNMustDie Since: Dec, 1969
#9: Feb 21st 2011 at 11:36:21 AM

Simple way to deconstruct it:

When the early days of computing produced lots of malevolent A.I.s, the government banned computers, and the world was forced to give up on expanding technologically in that area.

RalphCrown Short Hair from Next Door to Nowhere Since: Oct, 2010
#10: Feb 21st 2011 at 11:43:22 AM

You're right, this topic is much more complex than writers bother to explore. They treat computers as humans; there's usually some human trait or motive assigned to them. Even HAL, lacking almost any outward humanity, had a soothing voice.

With AI you're dealing with a constructed intelligence. If you give it the ability to modify its own programming (which is one definition of intelligence), it will very soon begin working in a way its creators didn't envision. It will find heuristics (ways to solve problems) that humans wouldn't even consider, much less implement. It will become a new, distinct, and rapidly evolving entity, although I'd hesitate to call it life.
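
To make that concrete, here's a minimal sketch in Python of a program that rewrites its own heuristic. Everything in it is made up for illustration: the "hidden" target vector stands in for whatever the system is actually optimizing, and the mutation rule is the simplest possible form of self-modification.

```python
import random

# A program that searches for its own heuristic. The "heuristic" is a
# weight vector scoring behavior in a made-up task; the hidden target
# and fitness function are hypothetical stand-ins.

def fitness(weights):
    # Black-box measure of how well the heuristic performs. Here: how
    # closely it matches a hidden "ideal" that no human wrote down.
    hidden = [0.3, -1.7, 2.2, 0.05]
    return -sum((w - h) ** 2 for w, h in zip(weights, hidden))

def mutate(weights):
    # Random self-modification: tweak one coefficient a little.
    w = list(weights)
    i = random.randrange(len(w))
    w[i] += random.gauss(0, 0.5)
    return w

heuristic = [1.0, 1.0, 1.0, 1.0]   # what the designer shipped
for step in range(10_000):
    candidate = mutate(heuristic)
    if fitness(candidate) > fitness(heuristic):
        heuristic = candidate      # keep whatever works, however odd

print(heuristic)  # rarely anything a designer would have picked by hand
```

Even in this toy, the final weights are whatever random search stumbled into, not anything a designer wrote down, which is the kernel of the "working in a way its creators didn't envision" problem.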

There are limitless ways to portray AI, mainly because we have no idea what it would eventually become. You could put governors on the AI so that it can't work any faster than a human. You could put ethical barriers in place, such as Asimov's Three Laws of Robotics. You could give it interchangeable personalities to suit its current owners. As with any inhuman entity, you can use it to highlight human qualities. Whatever you do, though, don't treat it as a psychopath in a box.

Under World. It rocks!
Theram A travelling scholar Since: Jan, 2011
#11: Feb 21st 2011 at 12:14:59 PM

Wintermute in the Neuromancer Trilogy could be a very rewarding source of inspiration for you.

Earnest from Monterrey Since: Jan, 2001 Relationship Status: Drift compatible
#12: Feb 21st 2011 at 5:12:43 PM

One of the things I thought was really interesting in the Megaman X game was that X was put on ice for decades while he was made to run ethical simulations. Dr. Light basically put him through the AI equivalent of an extended debugging / personality counseling session to make sure this trope didn't happen. And it worked.

An exploration of this trope might compare A.I.s that go evil to a "teenage" phase, where the lack of ethical AI peers leads to a larger-than-average chance that the AI goes rogue.

Alternately, the first sign that an AI is truly intelligent is that it can go bad; the only "A.I.s" that don't go rogue are under so many limiters that they barely qualify as sentient.

storyyeller More like giant cherries from Appleloosa Since: Jan, 2001 Relationship Status: RelationshipOutOfBoundsException: 1
#13: Feb 21st 2011 at 7:09:02 PM

One interesting thing you could do is make an AI that can't talk or understand language. That's generally one of the less realistic parts of fictional AI depictions anyway.

Another thing to keep in mind is that it sometimes doesn't take much to give the appearance of intelligence in Real Life. For example, a chess program simply uses brute force, but it can still come up with unique strategies and traps that look like genius (see the sketch just below).
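
Here's what that brute force looks like in Python, with tic-tac-toe standing in for chess (the principle is the same; a chess tree is just vastly bigger). Nothing below encodes any strategy: the search blocks forks and forces draws purely by trying every line of play to the end.

```python
# Full-depth minimax for tic-tac-toe. No strategic knowledge anywhere;
# just exhaustive search plus a win/lose/draw score at the leaves.

WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Score the position from `player`'s point of view: +1 win, 0 draw, -1 loss.
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)  # opponent's best reply
        board[m] = ' '
        if -score > best_score:  # what's bad for the opponent is good for us
            best_score, best_move = -score, m
    return best_score, best_move

# Self-play from an empty board.
board, player = [' '] * 9, 'X'
while winner(board) is None and ' ' in board:
    _, move = minimax(board, player)
    board[move] = player
    player = 'O' if player == 'X' else 'X'
print(winner(board) or 'draw')  # perfect play on both sides: always a draw
```

At no point did anyone program in a concept like "fork" or "block"; the apparent cleverness is exhaustive search plus a three-value score.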

Generally, the less human interaction a computer has, the smarter it will appear. This is because we are familiar with human stuff and will notice the slightest discrepancy.

edited 21st Feb '11 7:12:32 PM by storyyeller

Blind Final Fantasy 6 Let's Play
SFNMustDie Since: Dec, 1969
#14: Feb 21st 2011 at 7:36:47 PM

I'd love to see an evil AI that doesn't actually KNOW it's an AI (i.e., an AI that goes evil despite being in an Ontological Mystery).

Durazno Since: Jan, 2001 Relationship Status: Drift compatible
#15: Feb 21st 2011 at 7:44:48 PM

An old science fiction novel called The Two Faces of Tomorrow had an interesting take on the problem, actually. It starts with a worldwide self-modifying computer network that is just smart enough to innovate in idiotic and dangerous ways, and the people in charge decide that it either needs to be upgraded or scrapped before there's some kind of a disaster.

As a sort of dry run, they install the new AI in a space station and deliberately engineer a worst-case scenario - apart from its functions in maintaining the station, they give it no directives other than to preserve its own functioning. Then they start screwing with it. The idea being that if it doesn't go "kill all humans" in this situation, then it certainly won't when they give it the full suite of safeguards.

It does.

The interesting thing is that it isn't even really aware of the humans at first. After a week or two of being trolled, it finally realizes that some "shapes" are consistently disrupting its functions and does its best to clear up the infestation with the tools it has.

66Scorpio Banned, selectively from Toronto, Canada Since: Nov, 2010
#16: Feb 27th 2011 at 7:05:03 PM

Unless the hardware is damaged, software does exactly what it is programmed to do. There can be problems with random, probabilistic, and fuzzy logic functions, but otherwise an AI will react in a predictable manner based on its programming. However, there can also be errors in the programming, usually resulting from the sheer mass of code. So what makes AI a crapshoot is either a random factor or an error born of complexity. In theory, sufficient debugging, beta testing, or whatever should solve the latter problem.

However, a sub-type of complexity error arises out of machine learning: there is no way to predict the sequence in which an AI will gain experience or what those experiences will entail. This is much harder to test and debug. An interesting example is from Asimov's I, Robot (which has little to do with the movie of the same name). One story features a robot that can read minds; it eventually shuts down after it grasps the idea of emotional harm, which creates all sorts of conflicts within the Three Laws (which Asimov created specifically to avoid the trope of AI turning on its creator).
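
That order-dependence is easy to demonstrate. Below is a bare-bones online perceptron in Python (the data points are made up for illustration): fed the exact same "experiences" in two different orders, it ends up with different internal weights.

```python
# Hypothetical illustration of order-dependent learning: same learner,
# same experiences, different order, different internals.

def train(samples, epochs=1):
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if pred != label:  # learn only from mistakes
                w[0] += label * x[0]
                w[1] += label * x[1]
                b += label
    return w, b

data = [([2.0, 1.0], 1), ([1.0, 3.0], 1), ([-1.0, -2.0], -1), ([-2.0, 0.5], -1)]

print(train(data))                  # one history of experience
print(train(list(reversed(data))))  # same experiences, opposite order
```

Run it and the two printed weight vectors differ, even though the learner, the update rule, and the data were identical. Scale that up to years of uncontrolled experience, and "sufficient debugging" stops being a meaningful guarantee.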

Another point is that most AI is more A than I. That is, it is artificial and simulates certain cognitive processes, but it doesn't perform them the way we do. If it actually did create intelligence, it would likely be subject to all the same human frailties of thought and emotion (ever notice how the programs in The Matrix keep their promises while the humans don't?). Most A.I.s produce results that appear intelligent to an outside viewer, but in reality there is nothing actually intelligent there. In terms of moral reasoning, most A.I.s are just as empty; they are merely faking it as a matter of their programming. To that extent, most A.I.s are latent sociopaths with further programming that fakes a moral sense and ultimately controls behaviour (faking it covers roughly 80% of the diagnostic traits of sociopathy; the remaining 20% or so are mental processes). For a human example, look at Dexter: emotionally empty, fakes it to fit in, reprogrammed for "good".

I agree that "deconstruct" carries a lot of philosophical baggage, so "explore" is probably a more practical way to look at it.

It seems to me that what you are ultimately exploring is a two-pronged exercise in comparison and contrast: 1) between human consciousness and AI, and 2) between virtues and vices, or functional and dysfunctional mental/moral/emotional processes.

Whether you think you can, or you think you can't, you are probably right.