When's the singularity?

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#26: Jun 2nd 2011 at 12:33:28 AM

@Tzetze: Yeah, good point (although I think it should be y-t > 0 — immortality being discovered just at the instant you die would be kind of annoying).

Also, another problem with this kind of prediction is that it tries to estimate future events which are dependent on what people will or won't do.

If one wants some technological achievement to be reached soon, they should work for it — either individually, or by trying to raise awareness of the issue, or by trying to raise global education levels, or by fighting poverty in developing nations (and hence giving more people the opportunity to do research in the field), or... well, you get my drift.

If you want something, work for it. Don't spend your time sitting on your ass and trying to guess, through rather dubious techniques, how much time other people will need to do it for you.

EDIT:

Nah, extrapolating growth in general is silly (Kurzweil's guilty of it a lot), you can fudge the data much too easily. I'm just talking about extrapolating brain scanning resolution and the cost of computing power, and they don't need to go "to infinity" or anything, just reach a point where it's technically feasible to scan a human brain and simulate it.
Well, that's the problem. We don't know how to make such an accurate scan right now, and we don't even know how accurate it would need to be. We don't know how much computing power we would need, although we know that it is much more than what we have now, and we don't know which scientific advancements would be required to get that much power.

There are many possible chains of scientific breakthroughs that might lead us to be able to simulate a human brain, and all of them include a number of highly non-trivial steps. Thus, I am very, very suspicious of estimates of this sort.
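
Just to show how shaky these numbers are, here's a rough back-of-envelope sketch in Python (every figure here is a ballpark guess, nothing more); the answer swings by several orders of magnitude depending on how much detail you assume each synapse needs:

    # Rough back-of-envelope; every number here is a ballpark guess.
    neurons = 8.6e10              # ~86 billion neurons (common estimate)
    synapses_per_neuron = 1e4     # order-of-magnitude guess
    events_per_second = 1e2       # per-synapse activity rate, also a guess

    # If one synaptic event costs about one floating-point operation:
    low = neurons * synapses_per_neuron * events_per_second    # ~1e17 FLOPS

    # If each event needs detailed biochemical modelling (~1e4 operations):
    high = low * 1e4                                           # ~1e21 FLOPS

    print(f"somewhere between {low:.0e} and {high:.0e} FLOPS")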

Also, I am of course not sure, but I get the impression that simulating a human brain could be the long way around, and that — if we ever crack the problem — it will be first done through more abstract models.

There is plenty of interesting work which is being done along these lines: for example, the AIXI model of general artificial intelligence looks interesting and mathematically sound to me, although writing an efficient implementation might end up being far harder than expected.

Also, AIXI would be a self-improving general-purpose problem solver, and I am not entirely sure if this would be the same as "an intelligent being".

edited 2nd Jun '11 12:47:43 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
SlightlyEvilDoctor Needs to be more Evil Since: May, 2011
#27: Jun 2nd 2011 at 1:31:17 AM

For the resolution of neuroimaging: I'm mostly basing myself off this, which shows resolution improving up until 2000; a quick check on Google Scholar finds that there has been some progress since then (down to 50 micrometers; according to Wikipedia, a neuron is between 4 and 100 micrometers). The whole area looks promising and "in full stride" (a lot of different approaches are being tried), so for me, higher resolution, better models, and lower cost over the coming decades are the default case; I think it's unlikely the field will hit a wall the way chip design did once components got so small that quantum effects started showing up.
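
For concreteness, here's a minimal sketch of the kind of extrapolation I mean; the data points below are invented stand-ins for illustration, not the actual figures from the link:

    # Naive exponential extrapolation of imaging resolution over time.
    # The data points are invented stand-ins, not real measurements.
    import math

    years = [1980, 1990, 2000, 2010]
    resolution_um = [5000, 1000, 300, 50]    # hypothetical voxel sizes

    # Least-squares fit of log(resolution) against year.
    n = len(years)
    mx = sum(years) / n
    my = sum(math.log(r) for r in resolution_um) / n
    slope = (sum((x - mx) * (math.log(r) - my)
                 for x, r in zip(years, resolution_um))
             / sum((x - mx) ** 2 for x in years))

    def predicted(year):
        return math.exp(my + slope * (year - mx))

    # When does the trend cross 4 micrometers, the small end of neuron sizes?
    year = 2010
    while predicted(year) > 4:
        year += 1
    print(year)    # the naive answer; whether to believe it is the question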

I agree brain scanning and simulation might be the "long way around"; there may be shortcuts like AI or simpler brain models or something, which is why I consider the brain scanning scenario an upper bound for when things get fucked up big time.

edited 2nd Jun '11 1:33:47 AM by SlightlyEvilDoctor

Point that somewhere else, or I'll reengage the harmonic tachyon modulator.
Diamonnes In Riastrad from Ulster Since: Nov, 2009
#28: Jun 2nd 2011 at 5:47:23 AM

Thread Hop: Dunno when, but it won't be the first, and won't be the last. There have been loads of Singularities throughout human history. Agriculture comes to mind.

My name is Cu Chulainn. Beside the raging sea I am left to moan. Sorrow I am, for I brought down my only son.
Yej See ALL the stars! from <0,1i> Since: Mar, 2010
#29: Jun 2nd 2011 at 5:50:52 AM

Tomorrow and yesterday, but never today.

Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
MajorTom Since: Dec, 2009
#30: Jun 2nd 2011 at 5:55:26 AM

^^ More or less. An adult from 1900 wouldn't recognize the world today, just as an adult from 1800 wouldn't recognize the world in 1900. Thus, technically, The Singularity has occurred many times already throughout human history.

The fantastical thought that somehow we're due for a singular Singularity event where all problems vanish and technology somehow transcends all that came before (in effect becoming magical) is dreaming at best.

Yej See ALL the stars! from <0,1i> Since: Mar, 2010
#31: Jun 2nd 2011 at 5:59:23 AM

Technology is already partially magical. Imagine giving an iPhone to someone in 1995, for instance.

Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
Diamonnes In Riastrad from Ulster Since: Nov, 2009
#32: Jun 2nd 2011 at 5:59:51 AM

[up][up]Of course that isn't to say things don't get better, just not instantly perfect, yeah?

[up]Better yet, bringing a machine gun to the Crusades. This Is My Boomstick indeed.

edited 2nd Jun '11 6:01:00 AM by Diamonnes

My name is Cu Chulainn. Beside the raging sea I am left to moan. Sorrow I am, for I brought down my only son.
Barkey Since: Feb, 2010 Relationship Status: [TOP SECRET]
#33: Jun 2nd 2011 at 6:57:12 AM

I've always daydreamed that I could bring an M249 to the Revolutionary War. I'd make that conflict pretty revolutionary, alright... Hide behind a tree, wait for the first volley from a formation of redcoats, and then turn around and mow them all down with half an assault pack.

breadloaf Since: Oct, 2010
#34: Jun 2nd 2011 at 10:53:36 AM

Well, just remember to bring lots of ammunition, because you'll have to resort to a lot of child labour to maintain that M249. Just picture rows upon rows of kids hammering out bullets for you.

As for the singularity, I'm just not sure what it's supposed to entail. For one thing, it's not likely we'll get the technologies in the order we expect, or even the types of technologies we expect or want today. While we're talking about sentient AI and nano-machines now, fast forward 50 years and we'll be doing something completely different.

Jinren from beyond the Wall Since: Oct, 2010
#35: Jun 2nd 2011 at 11:32:56 AM

"a self-improving general-purpose problem solver"*

Sounds like the very definition of the first step of the Singularity. Once the problem-solving tools are able to improve themselves, they will start to get faster at doing so and more powerful at solving problems. Will it result in strong AI? Who knows, but it will result in accelerated development of general technologies.
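
The compounding I mean can be shown with a toy model; the growth law here is an arbitrary assumption chosen purely to illustrate the feedback, not a prediction:

    # Toy model: tools whose improvement rate grows with their capability.
    capability = 1.0
    rate = 0.01                    # fractional improvement per cycle, at first

    for cycle in range(99):
        capability *= 1 + rate
        rate = 0.01 * capability   # better tools improve themselves faster
        if cycle % 10 == 0:
            print(cycle, round(capability, 2))

    # The feedback makes growth hyperbolic rather than exponential: it heads
    # for a blow-up in finite time, which is where the "singularity" name
    # comes from.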

As far as I'm concerned, this sort of thing is enough of a goal in itself, and a lot more realistic and attainable than some abstract concept like strong AI, which too many people haven't finished deciding is or isn't possible. I for one will be trying to contribute by working on it directly.

edited 2nd Jun '11 11:56:47 AM by Jinren

Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
#36: Jun 2nd 2011 at 11:49:29 AM

[[quoteblock]]text to quote[[/quoteblock]]

Hopefully AIXI is more interesting than the General Problem Solver was.

[1] This facsimile operated in part by synAC.
Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#37: Jun 2nd 2011 at 11:55:41 AM

For the quote boxes, just write [[ quoteblock]] what you want to write [[ /quoteblock]], but without the spaces after "[[".

Like this:

what you want to write.

As for your question, the general framework of AIXI (I still have to study it properly, though) is that you have an agent which interacts with the environment in some predetermined ways, and receives a payoff after each interaction depending on its actions. Assuming that the environment is computable, you can prove that AIXI will eventually find the way to optimize its choices — and, most importantly, that it will do this in an asymptotically optimal way: you can prove mathematically that it is not possible to write a program which would solve this problem faster than AIXI does, except for a multiplicative constant.

In a certain sense, the algorithm would modify itself until it becomes the best one for optimizing that particular function — and much, much faster than you would get through, let's say, a straightforward application of genetic programming techniques.
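
To make the framing concrete, here is a trivial stand-in (emphatically not AIXI itself, which is incomputable): just an illustration of what "interact, receive a payoff, optimize your choices" means:

    # A trivial payoff-maximizing agent, standing in for the framework above.
    # This is NOT AIXI; it's a two-armed bandit learner for illustration only.
    import random

    true_payoffs = {"left": 0.3, "right": 0.7}    # hidden from the agent

    estimates = {"left": 0.0, "right": 0.0}
    counts = {"left": 0, "right": 0}

    for step in range(1000):
        # Explore occasionally; otherwise pick the best-looking action.
        if random.random() < 0.1:
            action = random.choice(["left", "right"])
        else:
            action = max(estimates, key=estimates.get)
        payoff = 1.0 if random.random() < true_payoffs[action] else 0.0
        counts[action] += 1
        estimates[action] += (payoff - estimates[action]) / counts[action]

    print(estimates)    # converges near the true payoffs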

That's a really cool idea, and it seems that many people are interested in it, and rightfully so. However, at the moment it's only a mathematical model, as far as I know, and no efficient implementation has been written; and even if one were, you'd still have the problem of finding a definition of "payoff function" that makes some sense for whatever you are trying to do.

And, as the folks who do friendly AI point out, this is not exactly a trivial matter in general.

EDIT: ninja'd on quoteblock.

Hopefully AIXI is more interesting than the General Problem Solver was.
I agree. I think that it might be an interesting idea, at least as a theoretical framework; but I am not in that particular field, and of course these are very difficult matters.

Also, I don't think that the General Problem Solver was a failure. It did not insta-solve the problem of AI, as was initially hoped; but it was a huge contribution to the whole area of automated theorem proving, which is a very important subject with a lot of practical applications already.

edited 2nd Jun '11 12:03:12 PM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
del_diablo Den harde nordmann from Somewher in mid Norway Since: Sep, 2009
#38: Jun 2nd 2011 at 12:27:28 PM

The singularity happened in England in 1712.
The second singularity is something we have yet to experience?

A guy called dvorak is tired. Tired of humanity not wanting to change to improve itself. Quite the sad tale.
Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#39: Jun 2nd 2011 at 12:38:18 PM

The singularity, the second coming and Ragnarok have been booked for the same day.

That's going to cause a little bit of confusion when they happen.

edited 2nd Jun '11 12:38:48 PM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
Barkey Since: Feb, 2010 Relationship Status: [TOP SECRET]
#40: Jun 2nd 2011 at 12:43:33 PM

Is there anyone besides me who is a little nervous about this entire proposition? I'm not terribly comfortable with just letting AI do everything for us, there isn't really any reason for us to even be around after that.

I just don't see a whole lot of value in a life where you don't work, where AIs make better... everything... than humans (particularly scientists), et cetera.

When all we are is essentially a drain on the efficiency of a bunch of sentient machines, what reason is there to even have us around? We'll just become a society of limpdicked decadent bags of flesh. I don't really see that as a good thing.

Thorn14 Gunpla is amazing! Since: Aug, 2010
#41: Jun 2nd 2011 at 12:45:00 PM

I don't think AI will ever eliminate all the work people do, but I imagine it will be used to take care of the crap no one wants to do.

Aondeug Oh My from Our Dreams Since: Jun, 2009
#42: Jun 2nd 2011 at 12:46:28 PM

A life without work would likely depress me... I feel horrible if I don't do anything work-like for a day. I must do something.

At the same time work annoys me.

I guess I want both work and lazy nothingness whenever my fickle self decides it wants it...

If someone wants to accuse us of eating coconut shells, then that's their business. We know what we're doing. - Achaan Chah
SlightlyEvilDoctor Needs to be more Evil Since: May, 2011
#43: Jun 2nd 2011 at 12:47:36 PM

[up][up][up] Yep, chances are an AI that isn't explicitly programmed to truly care about humans (which, as far as I can tell, is harder than "just" making an AI) will eventually exterminate mankind.

edited 2nd Jun '11 12:49:16 PM by SlightlyEvilDoctor

Point that somewhere else, or I'll reengage the harmonic tachyon modulator.
Barkey Since: Feb, 2010 Relationship Status: [TOP SECRET]
#44: Jun 2nd 2011 at 12:48:38 PM

At least I'll most definitely be dead before they can replace my job with automatons... Or at the very least retired for a long time. My job would probably be one of the last ones to be automated, since it's the one that involves the most discretion.

^

Yeah, in a society like what the singularity talks about, we're not terribly cost-effective or efficient to keep around. We'd most likely just be exterminated. Not that I would have a problem with it: a society like that deserves what it gets, because without ambition or advancing on our own merits, we wouldn't be human anyway. Just big fleshy sacks of shit and organs.

edited 2nd Jun '11 12:50:16 PM by Barkey

Yej See ALL the stars! from <0,1i> Since: Mar, 2010
#45: Jun 2nd 2011 at 12:51:07 PM

[up][up] AI capable of human-like behavior will almost certainly have morals.

Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
Aondeug Oh My from Our Dreams Since: Jun, 2009
#46: Jun 2nd 2011 at 12:51:41 PM

...but what if they develop morals we don't like?

If someone wants to accuse us of eating coconut shells, then that's their business. We know what we're doing. - Achaan Chah
Barkey Since: Feb, 2010 Relationship Status: [TOP SECRET]
#47: Jun 2nd 2011 at 12:56:37 PM

^^

Doesn't mean they'll be our morals. If I were a machine, I'd just see us as a big waste of time. We would go down the path of all the other creatures that aren't cute and cuddly enough to be pets, aren't used by humans for food or hides, and don't have any byproducts that we want.

Meaning they would bulldoze our homes to make room for useful facilities and kill us if we ever got in their way.

Don't. Create. AI.

edited 2nd Jun '11 12:56:50 PM by Barkey

Jinren from beyond the Wall Since: Oct, 2010
#48: Jun 2nd 2011 at 12:57:08 PM

^^ What if your kids turn out to be serial killers? What if a developing nation decided, after becoming wealthy, that it would exterminate the original G8 for kicks?

Making sure that the products of our society, biological and synthetic, are morally upright is at least half of the task of producing them at all.

Producing AI can only be a good thing for the entire world.

edited 2nd Jun '11 12:59:21 PM by Jinren

Yej See ALL the stars! from <0,1i> Since: Mar, 2010
#49: Jun 2nd 2011 at 12:58:31 PM

[up][up] ...Except if you give them mirror neurons, at which point game theory kicks in and the end result would (hopefully) be The Computer Is Your Friend.
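
The standard game-theory illustration of that hope is the iterated prisoner's dilemma, where a reciprocating strategy sustains cooperation. A toy sketch, assuming the textbook payoff matrix:

    # Iterated prisoner's dilemma: reciprocity (tit-for-tat) vs. defection.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):
        # Cooperate first, then mirror the opponent's last move.
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=100):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strategy_a(hist_b), strategy_b(hist_a)
            pa, pb = PAYOFF[(a, b)]
            hist_a.append(a); hist_b.append(b)
            score_a += pa; score_b += pb
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation pays
    print(play(tit_for_tat, always_defect))    # (99, 104): defection gains little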

[up] Wrong trope. You want Famous Last Words. tongue

edited 2nd Jun '11 1:00:29 PM by Yej

Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
SlightlyEvilDoctor Needs to be more Evil Since: May, 2011
#50: Jun 2nd 2011 at 12:59:14 PM

What if the AI's morals are incomprehensible? What if through recursive self-improvement, it becomes as powerful compared to us as we are compared to ants? Do ants reassure themselves by telling each other humans have morals?

edited 2nd Jun '11 1:01:44 PM by SlightlyEvilDoctor

Point that somewhere else, or I'll reengage the harmonic tachyon modulator.
