When's the singularity?:

 126 Tzetze, Fri, 3rd Jun '11 2:08:43 PM from a converted church in Venice, Italy
DUMB
Why is there only one AI in this scenario?
 127 Barkey, Fri, 3rd Jun '11 2:22:03 PM from Bunker 051 Relationship Status: [TOP SECRET]
War Profiteer
I guess it's assuming that this is the first self-aware AI to ever exist?
The AR-15 is responsible for 95% of all deaths each year. The rest of the deaths are from obesity and drone strikes.
 128 Carciofus, Sat, 4th Jun '11 3:27:06 AM from Alpha Tucanae I
Is that cake frosting?
One thing that I find a little unrealistic in most singularity scenarios is that they seem to assume some sort of near-instantaneous transition from "non self-aware, unintelligent computer program" to "self-improving, supremely intelligent AI".

Since we know that intermediate states are possible (ourselves, for example), what reasons do we even have to think that this might be the case?

EDIT: I suppose that this might make some sense if one assumed that, once you have an intelligent program, you can make it more intelligent just by granting it more processing power — so that, for example, you can just distribute the algorithm over more processors or use a better computer to get a more intelligent AI.

But this assumption seems unrealistic to me. Intelligence, no matter how you want to define it, is probably not just a matter of computational power. For example, I know that IQ tests are only very rough measures of intelligence, but suppose that intelligence really were just computation: then giving a person double the usual time to complete an IQ test should, by some metric, double their score, and so on — with an arbitrarily long time, they should be able to achieve an arbitrarily high score.

I know of no research along these lines, but I really doubt that this is the case.

edited 4th Jun '11 3:45:24 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.

Needs to be more Evil
Two things:

1) Computers are already much better than us at a lot of things: they can carry out complicated "mental" exercises without errors, have a much larger and more accurate memory, can do complex calculations very quickly, can crunch large amounts of statistics, etc. If they can reach our level in general reasoning, then their advantage in other domains means they'll blow us out of the water.

To put it in RPG terms, humans have 10 INT 10 WIS, computers now have 30 INT 1 WIS. Making an AGI means making a computer with 10 WIS, so it would probably have at least 30 INT.

2) Humans don't have access to their own source code, and the process that created them, evolution, is slow, messy and uncertain. If humans are capable of creating an AGI, it means that they have a technical theory of AI sufficiently good to be understood by humans; so a human-level AI would, by definition, be able to understand it too. It can then build on that theory to make the theory better, and make itself better (for example by optimizing its code, improving the way it runs on distributed systems, stripping out useless bits of its code like "do not flood the lab with neurotoxins", etc.).
Point that somewhere else, or I'll reengage the harmonic tachyon modulator.
 130 Carciofus, Sat, 4th Jun '11 5:41:23 AM from Alpha Tucanae I
Is that cake frosting?
Computers are already much better than us at a lot of things: they can carry out complicated "mental" exercises without errors, have a much larger and more accurate memory, can do complex calculations very quickly, can crunch large amounts of statistics, etc. If they can reach our level in general reasoning, then their advantage in other domains means they'll blow us out of the water.
I am not all that sure that this is the case. I don't know about you, but for the sort of stuff I do, super-fast symbolic manipulation is not all that helpful. Usually, the difficult part is understanding what sort of manipulation is useful: once I have that, if the computation is a little on the heavy side I can already code a quick script — but honestly, most of the time it's not all that useful. Being able to do that sort of calculation directly would be handy, sure, but it would not turn me into some sort of superhuman genius.

In any case, if that advantage is all that you want, you can get it more easily than by creating an AI: we already have calculators capable of doing that; all we'd need is some way of interfacing them directly with a human brain — tricky, definitely, but nowhere near as difficult as developing a genuine general AI.

Humans don't have access to their own source code, and the process that created them, evolution, is slow, messy and uncertain. If humans are capable of creating an AGI, it means that they have a technical theory of AI sufficiently good to be understood by humans; so a human-level AI would, by definition, be able to understand it too. It can then build on that theory to make the theory better, and make itself better (for example by optimizing its code, improving the way it runs on distributed systems, stripping out useless bits of its code like "do not flood the lab with neurotoxins", etc.).
The part I bolded is the one I am not convinced about. Knowing something does not necessarily entail knowing how to improve it in such a significant way. Furthermore...

Look, suppose that tomorrow someone comes up with a way of writing a program capable of dog-level intelligence. That would be a major breakthrough, and I mean major as in "give that gal or guy all Turing awards ever" major. But you cannot reasonably expect to give a dog the blueprints of its own brain and have it return you the blueprints of a better brain. Furthermore, you cannot just try to run a dog brain at 2x the speed in order to get a more intelligent dog, and do it again and again until you get one which is comparable (or superior!) to a human — you'd get a dog that thinks very quickly, sure, but that's a different thing.

edited 4th Jun '11 5:48:11 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.

 131 Jinren, Sat, 4th Jun '11 5:58:03 AM from beyond the Wall
Bear in mind that it's not just a matter of having access to the code and being able to improve it. Some models for AI development are actively reliant on the half-finished AI being an integral part of the process, counting on it being explicitly programmed to do this stuff. There are several ways this could work; it's quite possible - I would say more likely - that "thought", however one defines that, would come about after the stage where a dumb expert system is prepped with the ability to select among its components for intelligent behaviour.

I was actually idly musing something along these lines only a couple of days ago - what would be the result of trying to get expert systems to do as much of the work as possible, setting up "farmers" that select for selectivity, to farm other systems that select for intelligence, etc.? An annoying idea to have right after one's free access to a supercomputer expires.

edited 4th Jun '11 6:06:50 AM by Jinren

 132 Carciofus, Sat, 4th Jun '11 6:12:35 AM from Alpha Tucanae I
Is that cake frosting?
OK. But even in that case, what guarantees that our system will be able to improve itself iteratively without reaching any sort of "limit"?

This is a subject about which, as far as I know, no one knows very much so far; but as far as I can see, it is not impossible that such a self-improving AI — if it can be built to begin with — would sooner or later hit some barrier that it cannot surpass by itself.
But they seem to know where they are going, the ones who walk away from Omelas.

Needs to be more Evil
[up][up][up]It's not just ultra-fast symbol manipulation, it's also not making mistakes of carelessness, not being distracted, having a gigantic memory, never needing to take time to look things up, etc. - the kind of stuff that, combined, would make a "human-level" AGI more powerful than a human, unless it was also below human level in other aspects.

I agree cyborgs could possibly have those advantages (I know I wouldn't mind!), I'm just saying why an AGI would likely be superior to humans.

The dog AI: yeah, chances are it wouldn't self-improve much. I guess I mean that by the time we have human-level AGI, it's at least theoretically possible to have that AGI work on designing a better AGI, so reaching superhuman AGI is not very far. And having an AI design that includes it reprogramming parts of itself (also known as "learning skills") is not very far-fetched.

It is possible to have human-level AGI without having superhuman AGI shortly afterwards, but that includes both not having the AGI work on designing a better AGI (even though it would make sense), and having an AGI design that doesn't feature self-modification and self-improvement (and something like that seems likely).

[up]Yup, it's quite possible there will be a limit to how strong a particular AI design can become, but it would be an *amazing* coincidence if that level happened to be human-level intelligence. That's a coincidence that's common in works of fiction, for obvious reasons, but reality doesn't follow the same rules.

edited 4th Jun '11 6:19:11 AM by SlightlyEvilDoctor

Point that somewhere else, or I'll reengage the harmonic tachyon modulator.
 134 Tzetze, Sat, 4th Jun '11 9:13:31 AM from a converted church in Venice, Italy
DUMB
it's also not making mistakes of carelessness, not being distracted, having a gigantic memory, never needing to take time to look things up,

Can you really assume that an AI would have these things? Carelessness happens when an intelligence judges, on wrong or insufficient information, that it doesn't need to think about something very hard. Why would an AI be immune to that? Garbage in, garbage out. Not being distracted? We have that in humans already, in the form of mental disorders like obsessive-compulsive disorder. Distraction allows us to keep from being caught off guard while we're working on something else. As for looking things up, humans can do that already, by being well-read. Yes, a machine might have more memory than we do, but any reasonable knowledge representation model takes non-constant time to get information, so more memory could very well mean slower lookups. These are not negative traits of a general intelligence - in fact, I would say that they are necessary traits.
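
To make the lookup point a bit more concrete, here is a toy Python sketch (my own illustration, not a claim about how any real AI stores knowledge): brute-force recall-by-similarity over n stored "memories" takes time that grows roughly linearly with n, so a bigger memory is not free unless you also build cleverer indexing on top of it.

# Toy illustration: recalling the stored "memory" most similar to a cue
# by scanning everything takes time proportional to the number of memories.
import random
import time

def recall(memories, cue):
    # Brute-force scan: score every memory against the cue, keep the best.
    similarity = lambda m: sum(a * b for a, b in zip(m, cue))
    return max(memories, key=similarity)

for n in (1_000, 10_000, 100_000):
    memories = [[random.random() for _ in range(32)] for _ in range(n)]
    cue = [random.random() for _ in range(32)]
    start = time.perf_counter()
    recall(memories, cue)
    print(n, "memories:", round(time.perf_counter() - start, 4), "seconds")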

edited 4th Jun '11 9:15:04 AM by Tzetze

Needs to be more Evil
I would be surprised if an AI would do things such as forget where it left the keys, forget a semicolon at the end of a line of code, forget to fill out a field in a form, or forget the first name of that cute girl from marketing. I also strongly doubt that if you ask an AI that's counting the flowers on the wall what time it is, it will have to start over because it lost count of where it was at.

The human mind is full of bugs: some of them are cheap heuristics, some of them compensate for other bugs, and some of them are there just because there are certain situations our ancestors never had to deal with (such as, say, multiplying large numbers). There's no reason a designed mind would have all those bugs; some of that stuff is really *easy* to solve from an engineering perspective.

edited 4th Jun '11 9:29:14 AM by SlightlyEvilDoctor

Point that somewhere else, or I'll reengage the harmonic tachyon modulator.
 136 Tzetze, Sat, 4th Jun '11 9:34:01 AM from a converted church in Venice, Italy
DUMB
I would be surprised if an AI would do things such as forget where it left the keys, forget a semicolon at the end of a line of code, forget to fill out a field in a form, or forget the first name of that cute girl from marketing. I also strongly doubt that if you ask an AI that's counting the flowers on the wall what time it is, it will have to start over because it lost count of where it was at.

Why?

(such as say multiplying large numbers)

What's the point of moving that ability from a calculator to the main brain? Saving four milliseconds of communication time?

The human mind is full of bugs,

Such as?
Needs to be more Evil
Here, for starters.
Point that somewhere else, or I'll reengage the harmonic tachyon modulator.
 138 Jinren, Sat, 4th Jun '11 10:22:42 AM from beyond the Wall
While some things are obviously going to be solvable - e.g. hooking into regular computer programs for fast calculations, backing up the state of a subcore before letting it get distracted by outside input, copying said subcore to another mind to give it instant expertise in a limited field - I have the suspicion (not based on any solid knowledge of this subject) that many of those cognitive biases are going to be just as applicable to a machine if it uses any sort of heuristic methods at any point in the chain. It'd be hard to find a way to completely remove "prejudices" from something like a neural network, without destroying the mechanism that makes it useful.

 139 Yej, Sat, 4th Jun '11 10:23:44 AM from <0,1i>
See ALL the stars!
I think you'd certainly cut down on the forgetfulness, carelessness and so on, even if you don't eliminate it entirely. It's there because we're using heuristics to get a reasonable answer quickly, because we don't have time to calculate the real one. The computer would have time to find the real one, since it's probably going to be running noticeably faster than us.
Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
crazy and proud of it
How are we defining singularity?

In my opinion, the first singularity happened when fertilizer could be produced industrially, and we're still trying to cope with the effects of that singularity. The second singularity happened when the Internet caught on, and we haven't fully exploited that one either.

Needs to be more Evil
[up][up][up]Agreed, which is why I talked about "mistakes of carelessness"; an AI would be unlikely to miscalculate how much change to give back, but might have entirely new bugs, such as being bad at noticing sarcasm, or accidentally exterminating mankind.

edited 4th Jun '11 10:29:03 AM by SlightlyEvilDoctor

Point that somewhere else, or I'll reengage the harmonic tachyon modulator.
Nihilist Hippie
How are we defining singularity?

When will artificial general intelligence be invented?

That's how I was defining it.
"Had Mother Nature been a real parent, she would have been in jail for child abuse and murder." -Nick Bostrom
 143 Carciofus, Sat, 4th Jun '11 10:43:59 AM from Alpha Tucanae I
Is that cake frosting?
I think you'd certainly cut down on the forgetfulness, carelessness and so on, even if you don't eliminate it entirely. It's there because we're using heuristics to get a reasonable answer quickly, because we don't have time to calculate the real one. The computer would have time to find the real one, since it's probably going to be running noticeably faster than us.
Eh, solving many of the main Real Life problems exactly would still require exponential time, at least if we assume that P != NP. You can have as much processing power as you want, but you are not going to solve the knapsack problem for a very big input through brute force.

What you can do is use some heuristics that work sort of well, most of the time — for example, for the above-mentioned knapsack problem the greedy algorithm tends to find a decent solution, though not necessarily the optimal one, and it is a very "human" tactic. But of course, you can come up with scenarios in which the greedy algorithm gives you a stupid answer. That's the equivalent, on a tiny scale, of a cognitive bias, I'd think.
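
To make the greedy point concrete, here is a tiny Python sketch (my own toy example, not from anyone in the thread): the value-per-weight greedy heuristic against exhaustive search on a deliberately nasty knapsack instance. The greedy answer is "reasonable" the way a quick human guess is reasonable, and still clearly wrong.

from itertools import combinations

def greedy_knapsack(items, capacity):
    # Greedy heuristic: take items in order of value-per-weight, whenever they fit.
    total_value, remaining = 0, capacity
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight <= remaining:
            total_value += value
            remaining -= weight
    return total_value

def optimal_knapsack(items, capacity):
    # Brute force over all subsets: exponential time, only feasible for tiny inputs.
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, sum(v for v, _ in subset))
    return best

# One small, high-density item lures the greedy method away from the big prize.
items = [(2, 1), (10, 10)]      # (value, weight) pairs
capacity = 10
print(greedy_knapsack(items, capacity))    # prints 2
print(optimal_knapsack(items, capacity))   # prints 10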

But in any case, I think that we are in agreement that the human mind, as it is now, is not the absolute best possible form of general intelligence. What I am not that convinced about is that building from scratch a better one will be as easy as singularity enthusiasts sometimes seem to assume.

Also, in answer to post 133:
It is possible to have human-level AGI without having superhuman AGI shortly afterwards, but that includes both not having the AGI work on designing a better AGI (even though it would make sense), and having an AGI design that doesn't feature self-modification and self-improvement (and something like that seems likely).
The problem is whether a primitive AGI would be able to improve itself to such a degree. The only form of general intelligence that we are aware of is the human one, and humans at large do not seem capable of doing this all that easily — it might be possible for us to improve ourselves, in principle, but you cannot just put an average guy in a room with a lot of neurology textbooks and a workbench and tell him "ok, now work on building a better brain than the one you have". He would fail. I would definitely fail. Perhaps a genius might do it, given enough time, but the first AGI will probably not be genius-level.

Yup, it's quite possible there will be a limit to how strong a particular AI design can become, but it would be an *amazing* coincidence if that level happened to be human-level intelligence. That's a coincidence that's common in works of fiction, for obvious reasons, but reality doesn't follow the same rules.
I was not suggesting that. I was objecting to the oft-mentioned suggestion of "build a primitive, self-improving AGI fragment, and then let it do all the work for you". While automated coding is likely to play a role in the future of computer science, and in the future of artificial intelligence in particular, this idea, at least as it is usually formulated, strikes me as a trick to avoid thinking about the truly difficult problems. You may as well start talking about "emergent behaviour" or "genetic programming" — to mention two other concepts which might play some role in the development of an AI, but which are often used as a catch-all for "I don't know how this problem might be solved, but I hope that the computer will solve it somehow for me".

Before starting talking about how to build a self-improving AGI, we need to be able to create an AI with the planning and reasoning abilities of a dog. As far as I know, we do not have much of a clue about how to do that. There are problems that we need to solve in order to do that, hellishly hard ones.

edited 4th Jun '11 10:49:00 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.

Nihilist Hippie
Before starting talking about how to build a self-improving AGI, we need to be able to create an AI with the planning and reasoning abilities of a dog. As far as I know, we do not have much of a clue about how to do that. There are problems that we need to solve in order to do that, hellishly hard ones.

IMO dog intelligence isn't too far from human intelligence. By the time we have dog-level AI, most of the problems will already have been solved.

edited 4th Jun '11 10:49:42 AM by LoveHappiness

"Had Mother Nature been a real parent, she would have been in jail for child abuse and murder." -Nick Bostrom
 145 Carciofus, Sat, 4th Jun '11 10:49:45 AM from Alpha Tucanae I
Is that cake frosting?
I agree. Although no one really knows much about this, so it's mostly a guess. Also, even if going from "dog" to "human" might not be as hard as going from "computer" to "dog", this does not mean that it is going to be easy, not by any stretch of imagination.

edited 4th Jun '11 10:54:54 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.

Nihilist Hippie
this does not mean that it is going to be easy, not by any stretch of imagination

http://en.wikipedia.org/wiki/Moravec's_paradox

edited 4th Jun '11 10:56:31 AM by LoveHappiness

"Had Mother Nature been a real parent, she would have been in jail for child abuse and murder." -Nick Bostrom
 147 Jinren, Sat, 4th Jun '11 10:55:36 AM from beyond the Wall
You may as well start talking about "emergent behaviour" or "genetic programming"

...these are well-known ideas with many useful applications, and almost certainly essential to the task in question. I really don't see the problem: genetic programming is just adding a layer of abstraction on top of dividing tasks into functions, by first making the computer do the tedious work of sorting through the millions of available possibilities. As long as your goals are properly defined, this really isn't that much hazier than doing it by hand.
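
For what it's worth, here is a minimal Python sketch of that idea (a toy genetic algorithm over bit strings rather than genetic programming proper, which evolves program trees, but the selection loop has the same shape): once the goal is pinned down as an explicit fitness function, the computer does the tedious sorting through possibilities.

import random

TARGET = [1] * 20    # the explicitly defined goal

def fitness(individual):
    # How many bits already match the target.
    return sum(a == b for a, b in zip(individual, TARGET))

def evolve(pop_size=50, generations=200, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break                                   # goal reached
        parents = population[: pop_size // 2]       # selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]               # crossover
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in child]              # mutation
            children.append(child)
        population = children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "of", len(TARGET), "bits correct")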

Before starting talking about how to build a self-improving AGI, we need to be able to create an AI with the planning and reasoning abilities of a dog

By the same logic as above... you have that the wrong way around. You'll definitely need self-improving software to come within orders of magnitude of that level. If you have a doglike AI, the work is already done.

edited 4th Jun '11 10:59:31 AM by Jinren

 148 Carciofus, Sat, 4th Jun '11 11:01:36 AM from Alpha Tucanae I
Is that cake frosting?
...these are well-known ideas with many useful applications, and almost certainly essential to the task in question. I really don't see the problem.
The problem is that they are often mentioned as an excuse to stop thinking about the real problems (I am not talking about the researchers here, I am talking of what seems commonplace in Internet discussions). I agree that they are useful and important ideas, and I agree that they might play a role; but if you want to use them, you should describe in detail how you want to use them to solve the problem at hand, and why they would work well in these circumstances. Otherwise, they are only buzzwords.

By the same logic as above... you have that the wrong way around. You'll definitely need self-improving software to come within orders of magnitude of that level. If you have a doglike AI, the work is already done.
So you want to create a self-improving intelligence, something that as far as I know does not exist in nature with the possible exception of human beings, in order to create a doglike intelligence? Eh, it might work, but I am not all that convinced that it is the easiest way around the problem. I have nothing against self-improving subsystems, and I agree that they might be a part of the solution; but they alone are not going to be all the solution, I think.

edited 4th Jun '11 11:14:12 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.

Nihilist Hippie
This is an interesting video on self-improving AI

"Had Mother Nature been a real parent, she would have been in jail for child abuse and murder." -Nick Bostrom
Needs to be more Evil
Carciofus: ah, ok, rereading the exchange I see we may have been talking slightly past each other ... when you said

One thing that I find a little unrealistic in most singularity scenarios is that they seem to assume some sort of near-instantaneous transition from "non self-aware, unintelligent computer program" to "self-improving, supremely intelligent AI".

... I focused more on the fact that we might get "self-improving supremely intelligent AI", i.e. the "second half" of the scenario, whereas you mostly have doubts about the first half, i.e. the feasibility of having any kind of AI.

Without talking about how likely we are to have AI in the near future, my argument is that once we do, we will probably quickly (say a year or two) have super-intelligent AI that is way above us - if there is a transition, it will be quick; as you say, "near instantaneous".

That doesn't tell us much about how likely having AI at all is, and I don't have much of an opinion about that (there is substantial disagreement among experts in the field).
Point that somewhere else, or I'll reengage the harmonic tachyon modulator.