Artificial Intelligence

JudeDismas Since: Jun, 2012
#1: Jul 10th 2012 at 8:12:02 PM

I realize that true artificial intelligence is centuries away even if it is possible, but I was bored so I thought it might be an interesting thing to discuss.

Being the commoner that I am, most of my understanding of AI comes from Science Fiction, where the general consensus is that the first thing an AI would do is destroy the entire human race. There are exceptions of course, but this is generally true. So one of my questions is: is this necessarily true? Is it inevitable that an AI would come to the conclusion that the human race must die/be enslaved?

My second question is: can a human really plan and/or control an AI? If they're really the incomprehensible intelligences that scientists portray them as, how can we develop one, and if we develop one, how can we make sure it does what it's told?

Deboss I see the Awesomeness. from Awesomeville Texas Since: Aug, 2009
#2: Jul 11th 2012 at 5:15:49 AM

I realize that true artificial intelligence is centuries away even if it is possible

Huh? Where did you get this? There's bots out there that can do science and stuff.

Keep in mind that the general consensus of fiction is what is convenient for the plot, not what is factual. So it's not true that they would naturally decide humanity needs to be destroyed. It would probably take some convincing.

Fight smart, not fair.
Ramidel Since: Jan, 2001
#3: Jul 11th 2012 at 5:52:05 AM

We have no idea what an AI will be like, realistically speaking. Any AI not deliberately engineered by humanity will almost certainly not think like humans do, however, and it may be unrecognizable to us as "intelligence" at all, because it may have no point of congruence with the human condition.

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#4: Jul 11th 2012 at 6:56:09 AM

I will not even attempt to estimate how much time it will take to develop artificial intelligence. I am not convinced by the arguments of the people who are predicting that it will be achieved very soon; but I also know of no real arguments as to why it should be impossible or should take that much time.

The thing is, if one asked me how much time it would take to solve the P versus NP problem (one of the major open problems in theoretical computer science and mathematical logic), I would find myself entirely unable to give an estimate. Could be tomorrow, could be in 500 years. And this is a very precisely defined problem in an area in which I have a bit of professional competence (not a lot, it's not my field of research; but it's close enough that I could theoretically decide to do research in it, if I were so inclined).
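
For anyone who hasn't met the problem: it is short to state, which makes its openness all the more striking. Roughly:

```latex
% P: decision problems a deterministic Turing machine can solve in polynomial time.
% NP: decision problems whose "yes" instances can be verified in polynomial time.
% Known: P is contained in NP. Open: whether that containment is an equality.
\[
  \mathrm{P} \subseteq \mathrm{NP}, \qquad \mathrm{P} \overset{?}{=} \mathrm{NP}
\]
```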

Now, I am not an AI researcher, and the problem of creating a general artificial intelligence is nowhere near as well-defined as the problem of finding out whether P and NP are the same or different; so how am I even supposed to evaluate this sort of thing?

edited 11th Jul '12 6:57:54 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
Inhopelessguy Since: Apr, 2011
#5: Jul 11th 2012 at 7:00:02 AM

…whether P and NP are the same or different; so how am I even supposed to evaluate this sort of thing?

Indeed. Indeed. Obviously, there can be no solution. Yeppers. Totally.

I get what you're saying, man.

/faux-intellect mode

But srsly, AI is pretty much everywhere. Now, getting AI that can function in a general environment is a lot more work. We've created AI that can work perfectly well in a combat situation, but creating completely autonomous machines is the real problem.

And the legal mess...

Oh, NOW that will be a mess. Legislatures are about 2 years behind technology. Parliaments are going to have a tough time passing laws governing them.

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#6: Jul 11th 2012 at 7:05:47 AM

I can solve the Turing Test very easily: in just a few lines of code, I can write a program that perfectly simulates the behaviour of a catatonic human placed in front of a computer :P
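
In the same joking spirit, a minimal sketch of those few lines (Python; the "simulation" is exactly as sophisticated as it sounds):

```python
# A catatonic human at a terminal: reads every question, answers none, forever.
while True:
    input()  # the judge types a question; our "subject" stares silently at the screen
```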

But they seem to know where they are going, the ones who walk away from Omelas.
Inhopelessguy Since: Apr, 2011
#7: Jul 11th 2012 at 7:09:16 AM

Shhh!

Be quiet about me!

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#8: Jul 11th 2012 at 7:50:47 AM

This is not a prediction, because as I said I don't think I have enough data to make any; but one idea that does not strike me as impossible, and that could be interesting, is that human intelligence augmentation turns out to be much simpler than artificial intelligence. If this is the case, artificial intelligence would eventually become a reality; but it would perhaps grow out of limited-purpose systems interfacing with the human brain in order to augment its capabilities, rather than as a monolithic entity. Perhaps, by the time artificial intelligence became a reality, the distinction between "artificial intelligence" and "human" would be entirely erased.

I'm not saying that this is going to be the case, obviously; but it's an amusing possibility.

But they seem to know where they are going, the ones who walk away from Omelas.
Deboss I see the Awesomeness. from Awesomeville Texas Since: Aug, 2009
#9: Jul 11th 2012 at 7:53:27 AM

There's a difference between true AI (a machine that can absorb new information) and a fancy calculator that flags repeated actions which fall under a pre-given condition (most "AI" in current computers).
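
To make the "fancy calculator" side of that distinction concrete, here is a minimal sketch (Python; the rules and names are made up for illustration). Every condition-action pair is fixed in advance, and nothing the machine sees ever changes the rules themselves:

```python
# A fixed condition-action "AI": it matches pre-given conditions, it never learns.
RULES = {
    "enemy_visible": "attack",
    "health_low": "retreat",
}

def act(conditions):
    """Return the pre-programmed action for the first condition that matches."""
    for condition in conditions:
        if condition in RULES:
            return RULES[condition]
    return "idle"

print(act(["health_low"]))  # -> "retreat": the same answer today and forever
```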

Fight smart, not fair.
Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#10: Jul 11th 2012 at 7:57:46 AM

Most limited-purpose AI programs do absorb new information and revise their parameters according to it. So the distinction is not so sharp, perhaps.
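
As a minimal illustration (a toy sketch, not any particular system): even a one-neuron perceptron revises its parameters with every labelled example it gets wrong:

```python
# Online perceptron: the weights are revised by each mistake it observes.
weights = [0.0, 0.0]
bias = 0.0
LR = 0.1  # learning rate

def train(x, label):  # label is +1 or -1
    global bias
    prediction = 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else -1
    if prediction != label:  # absorb the new information
        weights[0] += LR * label * x[0]
        weights[1] += LR * label * x[1]
        bias += LR * label

for x, label in [([1.0, 1.0], 1), ([-1.0, -1.0], -1), ([2.0, 1.0], 1)]:
    train(x, label)
print(weights, bias)  # parameters changed by experience, within a fixed purpose
```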

But they seem to know where they are going, the ones who walk away from Omelas.
SgtRicko Since: Jul, 2009
#11: Jul 11th 2012 at 10:10:12 AM

I think the answer to the problem has already been found by a bunch of sci-fi works, but the writers themselves either misunderstood it or underplayed it (unless it's Asimov's level of AI we're talking here).

Once an AI gains the ability to ask questions or interpret ideas and concepts, it's alive. It might ask odd questions and have a really bizarre understanding of the world around it, but it's alive. Heck, even if the AI has a really poor understanding of the world around it, to the point where it doesn't even acknowledge us, it's still alive. Why? Simple: have you ever seen how an extremely autistic child or a person with a serious case of Asperger's syndrome tends to completely block out the world around them, obsess over a particular subject, have a very poor grasp of everyday social skills, remain completely oblivious to subtext, and sometimes even fail to recognize what is dangerous and what isn't? A.I.s are almost the same, and just like with those kinds of children, a specialized education and upbringing would be required.

Expecting the AI to understand advanced concepts and ideas just because they were "programmed" into it likely won't work, because much like with human beings, you need to learn the context, reason, and sometimes emotion behind an action. If not, it would only understand the text, not the reasoning. For example, something as simple as showering daily and keeping clean for the sake of hygiene is easy enough to understand, but the idea of using scented soaps to make yourself more attractive would definitely be more difficult, especially if you don't have a sense of smell. How could you possibly be expected to understand that the smell of sweat or bodily waste is unpleasant until you experience your nose's negative reaction to it? The same goes for trying to teach an AI why it's bad to kill, yet why you sometimes may have to do harm to protect someone. Unless it understands the value of life and how important it is, it will likely take a very clinical and distant approach to the whole issue.

I believe Star Trek: The Next Generation explained it perfectly with the difference between Data and his evil twin, Lore. Data was smart in terms of science, capable of doing complex mathematical calculations and scientific formulas all while having perfect memory, but on the first day of his activation he had the mind of a child. He had to learn the simplest of social behaviors, and he still sucked at it because he could not feel emotions, nor were they programmed into him. Yet fast forward to the later seasons, and Data has more or less mastered it; not to the point where he could pass as human, but he truly understood the value of his own life and the lives of others around him, as well as the concept of conflict in the name of defense.

His brother Lore, on the other hand, was programmed from the start to know everything he'd need to survive, including emotions. Yet he turned out to be a completely selfish murderer with no regard for anyone, not even other forms of synthetic life. The show's explanation was his emotions, which made him fear death and feel jealousy. But I think the actual issue was that he was born instantly knowing about all sorts of topics, both simple and complex, without going through the growing phase of learning why any of it matters. Whereas Data learned the complexities of life little by little, Lore had it all thrown upon him immediately and never truly understood the nuances of anything.

Now, on to the next question... "Why do so many writers and even some experts believe A.I.s will try to overthrow us?"

The question has two answers, one simple and one complex. First, it's a popular cliché, and no one has to worry about censorship and violence when your target is a robot. Heck, the whole idea of using aliens with extremely foreign or hostile-looking designs is almost identical, since nobody cares whether the alien dies a gruesome death or not. Also, it's common sense to be wary of making something more capable than you are and giving it power, because you never know how it might react.

The second, more complicated answer is: because we humans have already shown that we have a hard time getting along just because of misunderstandings and differences. If we can't get the Taliban to understand that "Hey, women are people too, not objects, and we're here to help you!", then how the hell do you plan to explain that to something that's going to be even MORE obscure and difficult to communicate with? Unless that AI is "raised" in an environment where it is given time to understand the world around it, much like you would raise a child, it's likely going to be very difficult to reason with it on a level it understands, because it doesn't understand the world around it in the first place.

Wow... it's been a while since I've written anything that big!

RTaco Since: Jul, 2009
#12: Jul 11th 2012 at 10:28:08 AM

I doubt we're centuries away. At the rate computer science advances, I'd be surprised if it took more than 100 years.

Arguably, they already exist; there are computers that can learn from experience.

breadloaf Since: Oct, 2010
#13: Jul 11th 2012 at 11:27:18 AM

Well, there's a lot of things about Artificial Intelligence here that appear to be unceremoniously mixed with completely fictional concepts. So I'll just start at the beginning:

While it is true that today all of our Artificial Intelligence tools and programs are classified as "soft AI", I think it's important to know what this means. Specifically, it cannot use new information to change its underlying programming, but it can use new information to modify its overall behaviour. For instance, an AI of today can play a strategy game. It can use new information to modify how it plays the strategy game. However, it cannot use new information to do something new, like play a different game.

Currently we have a vast array of AI tools, such as Particle Swarm Optimization, Simulated Annealing and so on, but the one of most interest for mimicking human behaviour is the Artificial Neural Network (compared to the other tools, the ANN is best at discriminating between different objects or situations). The question, then: can we use software to mimic the human brain and produce an AI whose intelligence equals a human's? We're not sure, and it's key to note that our computers use their processing power incredibly inefficiently compared to what the analogue neurons in human brains can do.
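
Of the tools just named, simulated annealing is perhaps the quickest to sketch (Python; a toy one-dimensional minimisation, with the objective and cooling schedule made up for illustration):

```python
import math
import random

def anneal(objective, x=0.0, temp=10.0, cooling=0.95, steps=1000):
    """Toy simulated annealing: random moves, with uphill steps sometimes accepted."""
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0)
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worsenings with shrinking probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if objective(x) < objective(best):
            best = x
        temp *= cooling  # cool down: the search grows steadily more conservative
    return best

print(anneal(lambda x: (x - 3.0) ** 2))  # should land near 3
```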

As an example of how inefficiently we use digital transistors: the hardware needed to match the processing done by a fruit fly's eye ends up orders of magnitude larger than the eye itself. Yet we have no clue why our processing requires so much more; somewhere along the line we are wasting an enormous amount of processing power.

Now, the final point about artificial intelligence isn't mystical or super awesome. The problem is how fuzzy the line is. Humans will take a long time to accept a machine as being alive, and there's no line between soft AI and hard AI. A neural network that gets progressively smarter, as we develop new techniques to get it to learn information and pick up new ways to do the same thing, will eventually evolve into better and better forms, with greater processing power, new mathematical techniques applied, and more efficient use of processing power. What is the difference between a human brain and an artificial neural network? We can't see one; so, by that logic, are humans even alive?

edited 11th Jul '12 11:28:39 AM by breadloaf

RTaco Since: Jul, 2009
#14: Jul 11th 2012 at 11:35:36 AM

I don't think humans will accept a computer as a person until long, long after a computer is made that deserves the right.

edited 11th Jul '12 11:35:47 AM by RTaco

onyhow Too much adorableness from Land of the headpats Since: Jan, 2001 Relationship Status: Squeeeeeeeeeeeee!
#15: Jul 11th 2012 at 6:26:32 PM

My question would rather be: why does the West generally view AI as dangerous, but places like Japan don't?

edited 11th Jul '12 6:26:45 PM by onyhow

Give me cute or give me...something?
breadloaf Since: Oct, 2010
#16: Jul 11th 2012 at 6:36:38 PM

It's a cultural thing, but I believe it rests on our fictional depictions of AI. The vast majority of the population, and even computer scientists, don't know what AI is, even remotely (because even if you study computers, it's not likely you study AI specifically, just like any other highly specialised field). So you create a mythical representation of what you think it is.

Why does the West fear and kill dragons while the East celebrates and worships them? They're not real, so it doesn't matter. We've fabricated a fictional version of AI around which we wrap our philosophical responses.

Talby Since: Jun, 2009
#17: Jul 11th 2012 at 7:29:07 PM

I don't think we'll ever invent "true" AI. We can make a computer very complex and capable of making lots of calculations, but it's still just a machine that is carrying out a set of instructions entered into it by a programmer.

You might point to programs like Akinator and say that computers can be capable of learning, and that this is a kind of intelligence, but that is just another form of programming, only with a more user-friendly interface.

People seem to think that if we just make a computer that's complex enough, magic will happen and it will become sapient, somehow. That's a nice science fiction concept, but it's not supported by reality.

RTaco Since: Jul, 2009
#18: Jul 11th 2012 at 7:31:54 PM

[up] Just because it's not a matter of increasing processing power doesn't mean it won't happen. Our brains aren't so complex that we could never create software to simulate them.

edited 11th Jul '12 7:32:09 PM by RTaco

breadloaf Since: Oct, 2010
#19: Jul 11th 2012 at 8:10:34 PM

Okay, I guess my point about the ANN was missed.

There's no actual difference between an ANN and a human except for complexity. So if the argument is that no amount of complexity matters, then basically we're saying that humans aren't sentient because "magic doesn't happen".

IraTheSquire Since: Apr, 2010
#20: Jul 11th 2012 at 10:08:36 PM

And let's not forget that neurons are basically complicated transistors.
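
In that spirit, the textbook artificial neuron really is just a thresholded switch; a minimal sketch (Python, with weights picked arbitrarily to wire it as an AND gate):

```python
# McCulloch-Pitts-style neuron: weighted inputs, fires above a threshold.
def neuron(inputs, weights, threshold):
    """Output 1 ("fire") if the weighted sum of the inputs exceeds the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Wired as an AND gate: both inputs must be on for the neuron to fire.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # -> 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # -> 0
```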

BrainSewage from that one place Since: Jan, 2001
#21: Jul 11th 2012 at 10:22:06 PM

I'm nowhere near an expert, but I think about this daily, and I give us about ten years. No more than 25.

By that point, it's very probable that someone will develop a computer that may not be "alive" or even conscious, and may not function anywhere near the way a human does, but will be smart and efficient enough for thousands of employers to justify replacing most of their workforce with them. Then we're fucked.

edited 11th Jul '12 10:23:07 PM by BrainSewage

How dare you disrupt the sanctity of my soliloquy?
Gravyspitter Since: Apr, 2012
#22: Jul 11th 2012 at 10:59:30 PM

It depends on how you look at it. An artificial brain is in the process of being developed as part of the Blue Brain Project (not to make an artificial intelligence, although they've said that's a possibility). What it aims to do is recreate the human brain cell by cell. Their current goals are to simulate first a rat brain, then later a human brain, constructed on the molecular level.

They plan on having the rat brain simulation complete by 2014, and the human one done by 2023. Straight from The Other Wiki, the director of this project said "If we build it correctly it should speak and have an intelligence and behave very much as a human does". Given that this developing simulation currently runs on a supercomputer, it will be a long time before such a simulation runs on anything near the size of a human brain.

Considering that this project's goal is not to develop artificial intelligence, but to map the human brain in order to understand how it works and how certain brain-related issues such as diseases can be stopped, I reckon that if this method were used to develop artificial intelligence, we'd know how to make sure it doesn't do anything horrible.

breadloaf Since: Oct, 2010
#23: Jul 12th 2012 at 1:59:53 AM

Well, like I said, the problem in computer science is figuring out what we are doing that is so darn inefficient. Our processing power indicates that we should be able to do a lot more than we currently can: what we have is far more powerful than a lot of things in nature, yet we can't achieve the same level of information processing.

I don't have a timeline for that, because while it is "easier" to predict increases in processing power, I have no clue how to predict increases in efficiency; those will depend mostly on mathematical advancements, and then on applying those advances.

Deboss I see the Awesomeness. from Awesomeville Texas Since: Aug, 2009
#24: Jul 12th 2012 at 3:44:08 AM

I'm curious why people believe that piling a bunch of code on top of other code is different from what the human mind is. Just because it's built on a squishy substrate doesn't make the human brain more than a computer.

but it's still just a machine that is carrying out a set of instructions entered into it by a programmer.

BS

Fight smart, not fair.
Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#25: Jul 12th 2012 at 4:19:48 AM

Just because it's built on a squishy substrate doesn't make the human brain more than a computer.
For some definition of "computer", which may or may not be the same as that of modern artificial computers.

The Church-Turing conjecture (which says that all forms of effective computation may be simulated by a Turing machine) is generally accepted nowadays; but one should remember that it is a conjecture, and that in itself, it makes no assumptions about the computational overhead of such a simulation.

Still, everything points to the human brain being made of natural objects that behave according to the usual rules; and at least in principle, there is no reason to presume that something simulating some of its properties could not be constructed.

EDIT: An observation: we should not conflate the questions "is it alive?" and "is it intelligent?"

Artificial life is a thing already. We have simulated entities interacting, breeding and evolving inside a computer. They are as alive as any bacterium (although far simpler than even the simplest bacteria, of course).
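
For flavour, a minimal sketch of the kind of loop involved (Python; a made-up toy, not any particular project): digital "organisms" breeding with mutation under selection pressure toward a fixed target genome:

```python
import random

TARGET = [1] * 16  # the "environment" favours organisms matching this genome

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population; each generation, the fittest half breeds.
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(fitness(population[0]), "/ 16")  # typically converges on the target
```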

edited 12th Jul '12 4:25:07 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.

Total posts: 424