Would The Singularity be allowed to happen?


Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#26: Mar 21st 2012 at 11:59:26 PM

[up][up][up]

There is more to intelligence than raw processing power, I think — that's what makes AI design difficult: it's not just a matter of writing a simple program and throwing massive amounts of computation at it, it's a matter of writing a complex program. If I remember correctly, John McCarthy wrote somewhere that in principle, modern computers should have more than enough computational power to house a streamlined human-level intelligence (albeit not to simulate a human brain in detail, of course.) We just don't know how yet.

Ah. You're talking about software issues, rather than hardware issues.

We haven't been able to "significantly" advance the complexity/comprehension of programs yet because we haven't been able to think of ways to do so with our severely slow brains/computers. We can still solve it by using the latter to advance the former.

Besides, writing a complex program isn't really necessary if you can just increase the interactions and number of programs involved to replicate the complexity of the more complex program in question.

You should probably have a Wiki Walk on the Other Wiki to see all the little nuances of how this can be done. See also, the Geth.

Now, supposing that we succeeded in creating an AI, either through understanding intelligence from a formal perspective or through the brute force method of simulating a whole human brain from the neuron level upwards, there are a few obvious improvements that we could certainly make. More speed. More memory, and with better indexing algorithms. Hard-coded mathematical subroutines. Perhaps even memory transfer and acquisition, if we can understand memory well enough.

All of this, and more, is nifty and potentially interesting, I think. But it seems to me none of this is a radical change, not one of the same sort as the transition from non-sapient programs to sapient A.I.s would be.

Well, actually, no. Those would by definition be a radical change, even if it wouldn't exactly be the same sort as a "non-sapient/sapient transition". You also don't seem to know what a post-singularity entity can really do. Think about having multiple layers of thought (or far more than is humanly possible); multiple perspectives in multiple bodies with far better and outright new senses; Electronic Telepathy; hardware changes such as a wider ability to procure information (like advancing/expanding our senses, as I mentioned before, which in and of itself increases complexity and comprehension, since you can't understand something you aren't even able to perceive); larger storage for it (increased mental capacity, or being able to remember more); the ability to easily recall it (perfect-ish memory) and then apply it at astounding speeds (increased processing power); and a whole lot of other weird shit that would make our limited perspectives quite laughable.

This isn't even mentioning what I pointed out above.

Assuming that they even exist, we cannot reach truly higher-than-human modes of consciousness just through self-improvement and design, I think. That would be possible only if we could, at least in principle, understand them; and then they would scarcely be higher-than-human. Pure evolution might perhaps be able to reach them, as it made nonsapient beings into us; but I am not sure of this.

If it can be done (and it has been done with our ancestors, according to you at least, as I don't put too much stock in this whole "sapience cut-off point" thing, which is also why I think anything that can think is a person, even if they're a highly limited person), we can do it. Also, [lol] at the higher-than-human and pure evolution lines.

edited 22nd Mar '12 12:19:53 AM by Ekuran

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
Is that cake frosting?
#27: Mar 22nd 2012 at 12:51:26 AM

We haven't been able to "significantly" advance the complexity/comprehension of programs yet because we haven't been able to think of ways to do so with our severely slow brains/computers. We can still solve it by using the latter to advance the former.
What are you talking about? We certainly have been able to advance the complexity of programs and their comprehension abilities.

All I am saying is that the difference between sapience and non-sapience is not merely a matter of "processing power" — well, not unless you define this term in such a general way that it does not really mean much of anything.

Besides, writing a complex program isn't really necessary if you can just increase the interactions and amount of programs involved to replicate the complexity of the more complex program in question.
You are talking about modularity in programming. It is a useful idea, as the first person to write an assembler already knew; but it is not magic. In order to write a complex program, you first need to know what it is supposed to do, and how. We might be able to do something like that for sapience, eventually; but if there were some hypothetical algorithm which is intrinsically beyond our human minds' abilities, not because of their power but because of their design, then we would have no chance at all to understand and implement it.
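The modularity being discussed here (building complex behavior by wiring together simple programs) can be sketched in a few lines. All of the names below are made up for illustration, and the sketch cuts both ways: chaining the components is trivial, but someone still had to specify what each stage does.

```python
# Toy illustration of modular composition: "complex" behavior built by
# wiring together simple, independently written components.

def tokenize(text):
    # Simple component: split text into lowercase words.
    return text.lower().split()

def count(words):
    # Simple component: tally word frequencies.
    freq = {}
    for w in words:
        freq[w] = freq.get(w, 0) + 1
    return freq

def top(freq, n=1):
    # Simple component: pick the n most frequent words.
    return sorted(freq, key=freq.get, reverse=True)[:n]

def pipeline(*stages):
    # Composition: the "complex program" is just the stages chained in order.
    def run(value):
        for stage in stages:
            value = stage(value)
        return value
    return run

most_common_word = pipeline(tokenize, count, top)
print(most_common_word("the cat and the hat"))  # → ['the']
```

The composition step itself carries no design knowledge; all of it lives in the specification of the individual stages, which is exactly the limitation being argued about.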

Those would by definition be a radical change, even if it wouldn't exactly be the same sort as a "non-sapient/sapient transition".
I simply have a stricter idea of what "radical change" means, I think. None of the things you mention (and obviously, I was quite familiar with all of them, and more beyond. I've read Orion's Arm, you know tongue) strike me as all that radical. Your hypothetical superintelligent being would have massive advantages over me when it comes to reasoning, just like a racing car has massive advantages over me when it comes to running; but ultimately, everything that you mention has analogues in my own experience. You are not telling me anything which is truly unimaginable, which is pretty obvious, seeing as if something were really unimaginable, neither you nor I could imagine it.

I certainly have multiple layers of thought. I have senses, obviously, and I can use instruments to expand their range or abilities. I can communicate with other intelligent beings, and obviously I can think and learn.

If I were augmented to the kind of entity you describe, all of these abilities would be greatly improved and their limitations reduced. But the resulting entity would not be beyond my comprehension, not in the same sense in which I am beyond the comprehension of a chicken.

Also, [lol] at the higher-than-human and pure evolution lines.
The problem is, neither I nor you nor any human can think what is unthinkable to human beings. A "state change" of the same sort as the nonsapience/sapience transition would require, I think, some element which is unthinkable to the "lower consciousness level." Perhaps, assuming that such a thing as a further transition even exists (and I am not convinced that it does, by the way), pure evolution might be able to reach it eventually. Attempts to direct this hypothetical evolution by setting appropriate fitness measures, however, would be entirely useless: if I can think of it, even vaguely, it is not higher-than-human. And yeah, I know that evolution is not linear, and does not necessarily move towards higher consciousness. But at least it has a chance to do so.

edited 22nd Mar '12 12:56:26 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#28: Mar 22nd 2012 at 1:24:41 AM

I don't think "state changes" really exist. There is no actual higher-than-human/lower-than-human or higher/lower consciousness, evolution won't get us there, there is nothing intrinsically beyond a human's (or any other thinking entity's) mind's abilities, and absolutely nothing is unthinkable. I'm just bad at saying what I mean, in which case I'm sorry for misleading you.

All I've been trying to say is that, yes, a superintelligent being would be beyond our "comprehension" in the exact same sense in which we're beyond the "comprehension" of a chicken, since there never was an actual limit on "comprehension" in the first place. So yes, it really is simply a matter of "processing power". In other words, I think the whole non-sapience/sapience/"super"-sapience divide is bullshit.

What are you talking about? We certainly have been able to advance the complexity of programs and their comprehension abilities.

I meant to say that we haven't been able to (significantly) advance their complexity to the degree that they match a human brain's complexity in that regard.

edited 22nd Mar '12 1:31:16 AM by Ekuran

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
Is that cake frosting?
#29: Mar 22nd 2012 at 1:36:51 AM

I am also doubtful that beyond-human state changes exist. I am not dismissing the possibility outright, because if they existed I would be unable to spot them; but I see no reason to think they exist.

But below-human state changes certainly exist. One, for example, is the one between creatures which operate simply through action and reaction (like the sensitive plant) and creatures capable of forming memories and evaluating outcomes.

Another one, I would argue, is the one between creatures which are incapable of explicit symbolic representation (like most animals) and the ones who are capable of doing that, like humans and perhaps, to a lesser degree, apes and crows and so on.

These differences are not a matter of improving on existing abilities. They are entirely new ones.

edited 22nd Mar '12 1:38:55 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#30: Mar 22nd 2012 at 1:54:31 AM

Like creating entirely new senses with advanced technology, right?

Actually, no. Their abilities were indeed increased; you just have to expand your definition of ability, as what increased was specifically their comprehension of everything else. This doesn't make their "state of being" or "consciousness" intrinsically higher or lower than anyone or anything else's, though, as they were just able to absorb and store more information, and to more easily make connections between the information they acquired, such as explicit symbolic representation.

One, for example, is the one between creatures which operate simply through action and reaction (like the sensitive plant) and creatures capable of forming memories and evaluating outcomes.

That is probably the sentient/non-sentient divide, which is a bit more concrete, but I have a few problems with it that aren't relevant to the discussion at hand.

edited 22nd Mar '12 1:56:02 AM by Ekuran

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
Is that cake frosting?
#31: Mar 22nd 2012 at 2:13:24 AM

Like creating entirely senses with advanced technology, right?
No. That's no different from using technology to expand the range of senses and capabilities. You know, like writing and communication technologies are allowing us to have near-instantaneous communication at a range which is far beyond the possibilities of verbal conversation.

Explicit symbolic representation cannot be modeled simply by taking a brain which is incapable of it and giving it more of what it already has. It is not about making more connections between memories, it is about treating the connections themselves as objects of thought. A chicken might perhaps recognize the analogy between one apple, one seed, and so on. But it cannot derive the purely abstract notion of "one", of this I am pretty sure.

But in any case, I agree that we are getting quite a bit off-topic. To return to the singularity, and leaving the issue of the human/animal divide behind us: I do not think that any of the improvements on the human intellect which have been proposed so far alters significantly what I consider essential to human nature.

Which is why, even though I approve of many of Transhumanism's practical objectives (although not its apparent obsession with attempting, and failing, to predict trends instead of, you know, shaping them), I frankly dislike the name and much of the philosophy behind it. From my point of view, the idea of "transcending humankind" is nonsense, and it would be undesirable anyway. I like being human.

What I might be interested in is improving it: becoming a better human, now that's something I can get behind.

edited 22nd Mar '12 2:17:06 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
TenTailsBeast The Ultimate Lifeform from The Culture Since: Feb, 2012
#32: Mar 22nd 2012 at 2:51:58 AM

"A chicken might perhaps recognize the analogy between one apple, one seed, and so on. But it cannot derive the purely abstract notion of "one", of this I am pretty sure."

[1] Newborn chicks are capable of performing simple arithmetic. A basic sense of number is very useful for animals.

I vowed, and so did you: Beyond this wall- we would make it through.
Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
Is that cake frosting?
#33: Mar 22nd 2012 at 3:07:02 AM

They are counting quantities. That's not the same as recognizing the abstract concept of number, I think.

Returning to the question of whether the Singularity would be allowed to happen: I am pretty sure that if it does (and that's a big if), it won't be because of the singularitarian movement as a whole. I find that it is extremely passive — for the most part, it seems to me that it does not try to do things. Mostly, it seems to be about sitting around and speculating idly.

But they seem to know where they are going, the ones who walk away from Omelas.
Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#35: Mar 22nd 2012 at 3:21:37 AM

No. That's no different from using technology to expand the range of senses and capabilities. You know, like writing and communication technologies are allowing us to have near-instantaneous communication at a range which is far beyond the possibilities of verbal conversation.

You're right, that's why I said "Actually, no."

Explicit symbolic representation cannot be modeled simply by taking a brain which is incapable of it and giving it more of what it already has. It is not about making more connections between memories, it is about treating the connections themselves as objects of thought. A chicken might perhaps recognize the analogy between one apple, one seed, and so on. But it cannot derive the purely abstract notion of "one", of this I am pretty sure.

There you go again, thinking that anything that can think is incapable of thinking something. What, pray tell, actually limits us besides our inefficient brains? Nothing, of course, besides your abstract notions of comprehension.

There is nothing to stop anything that thinks from "treating the connections themselves" as objects of thought besides "inherent qualities" such as an inability to comprehend something, and there is no actual proof that those exist.

I do not think that any of the improvements on the human intellect which have been proposed so far alters significantly what I consider essential to human nature.

You have assumed there was such a thing as human (or any other type of being's) nature in the first place, which is folly, in my opinion.

Which is why, even though I approve of many of Transhumanism's practical objectives (although not its apparent obsession with attempting -and failing- to predict trends instead of, you know, shaping them), I frankly dislike the name and much of the philosophy behind it. From my point of view, the idea of "transcending humankind" is nonsense, and it would be undesirable anyway. I like being human.

What I might be interested in is improving it: becoming a better human, now that's something I can get behind of.

There is no actual "transcendence" because there is nothing to objectively define humankind, or anyotherkind for that matter. There is no "improvement", or "degradation", only change.

edited 22nd Mar '12 3:25:00 AM by Ekuran

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
Is that cake frosting?
#36: Mar 22nd 2012 at 3:51:56 AM

What, pray tell, actually limits us besides our inefficient brains?
Perhaps the design of our brains?

You can improve the efficiency of a train as much as you want, but you won't get an airplane. You'll get a very fast and ecological train; but it won't get off the tracks and start flying around (not beyond maglev, in any case.)

Real Life is not based on Tim Taylor Technology.

You have assumed there was such a thing as human (or any other type of being) nature in the first place, which is quite folly in my opinion.
To me it seems that it is folly to assume the opposite; but yeah, we are getting sidetracked here.

edited 22nd Mar '12 3:54:29 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#37: Mar 22nd 2012 at 4:12:25 AM

Perhaps the design of our brains?

You can improve the efficiency of a train as much as you want, but you won't get an airplane. You'll get a very fast and ecological train; but it won't get off the tracks and start flying around (not beyond maglev, in any case.)

There is nothing in the "design" of our brains that intrinsically limits them in thought, besides their inefficiency. Anything you bring up to counter this observation (such as the physical qualities of a train and an airplane, kind of like our brains) has no objective proof, and would in fact be doing me a favor, as those are purely physical limitations, like the actual "design" (or qualities) of our brains.

To me it seems that it is folly to assume the opposite.

I somehow doubt you've objectively defined what it "means to be human", and I doubt anyone ever will, 'cause it probably doesn't exist.

edited 22nd Mar '12 4:14:42 AM by Ekuran

TenTailsBeast The Ultimate Lifeform from The Culture Since: Feb, 2012
#38: Mar 22nd 2012 at 4:24:41 AM

"You can improve the efficiency of a train as much as you want, but you won't get an airplane."

Nope, you get something better. [1]

"A vactrain (or vacuum tube train) is a proposed, as-yet-unbuilt design for future high-speed railroad transportation. This would entail building maglev lines through evacuated (air-less) or partly evacuated tubes or tunnels. Though the technology is currently being investigated for development of regional networks, advocates have suggested establishing vactrains for transcontinental routes to form a global network. The lack of air resistance could permit vactrains to use little power and to move at extremely high speeds, up to 4000–5000 mph (6400–8000 km/h), or 5–6 times the speed of sound at sea level and standard conditions, according to the Discovery Channel's Extreme Engineering program "Transatlantic Tunnel".

Theoretically, vactrain tunnels could be built deep enough to pass under oceans, thus permitting very rapid intercontinental travel. Vactrains could also use gravity to assist their acceleration. If such trains went as fast as predicted, the trip between London and New York would take less than an hour, effectively supplanting aircraft as the world's fastest mode of public transportation."
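As an aside, the quoted figures are easy to sanity-check. Assuming the standard sea-level speed of sound of roughly 761 mph and the exact conversion of 1 mph = 1.609344 km/h, a short script confirms both the km/h range and the "5–6 times the speed of sound" claim:

```python
# Sanity-check of the quoted vactrain speed figures.
MPH_TO_KMH = 1.609344        # exact conversion factor
SPEED_OF_SOUND_MPH = 761.2   # ~340.3 m/s at sea level, standard conditions

for mph in (4000, 5000):
    kmh = mph * MPH_TO_KMH
    mach = mph / SPEED_OF_SOUND_MPH
    print(f"{mph} mph = {kmh:.0f} km/h = Mach {mach:.1f}")

# 4000 mph = 6437 km/h = Mach 5.3
# 5000 mph = 8047 km/h = Mach 6.6
```

So the quoted "6400–8000 km/h" and "5–6 times the speed of sound" are mutually consistent, give or take rounding.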

I vowed, and so did you: Beyond this wall- we would make it through.
Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
Is that cake frosting?
#39: Mar 22nd 2012 at 4:26:59 AM

So, in order to bring "objective proof", I should present you with a thought that no human mind could possibly conceive or understand? Because you know, I see one problem with that plan... tongue

But in any case, I am not committing myself to the necessity of the existence of such limits. It may well be that the human mind, or a sufficiently streamlined and enlarged version thereof, is the most complete thinking machine possible. What I am saying is that it is not certain that it is so.

they're purely physical limitations, like the actual "design" (or qualities) of our brains.
Well, if you put it like this, everything is purely a physical limitation, up to and including the difference between you and an orange. I am not saying that there is something magical that blocks the development of true superminds; I am saying that the space of all minds that a human mind could think of, or of the minds that a mind designed by a human mind could think of, or so on is not necessarily the space of all possible minds.

Look, the human brain evolved in response to certain specific evolutionary pressures. It proved itself surprisingly versatile; but still, it is essentially a tool for finding the best bananas and boning the hottest monkeys.

Thinking that it can be the start of a chain of improved designs which will eventually be able to achieve anything that a material mind could possibly achieve does not seem to me all that different from thinking that by improving enough on the design of the wings of a butterfly we can obtain a spaceship for Mars.

EDIT:

[up] Better, perhaps, but still not a plane. You have a point, however, that it is not true that a plane is intrinsically better than a train; and similarly, it is not clear to me for what reason the space of all possible minds could even be linearly ordered.

edited 22nd Mar '12 4:29:44 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
TenTailsBeast The Ultimate Lifeform from The Culture Since: Feb, 2012
#40: Mar 22nd 2012 at 4:55:38 AM

"Thinking that it can be the start of a chain of improved designs which will eventually be able to achieve anything that a material mind could possibly achieve does not seem to me all that different from thinking that by improving enough on the design of the wings of a butterfly we can obtain a spaceship for Mars."

It seems reasonable to me, though, that we will come to comprehend the principles of operation underlying intelligence. Once we understand how our brain does it, we'll likely be able to improve upon it. One thing to take into account is that many "smart" activities are actually simple for computers (which makes sense from an evolutionary perspective); once we get to this point, the number and performance of these kinds of activities by computers will improve astronomically. Moreover, a "human-level" AI would almost by default have vastly superhuman cognitive powers, including holographic memory/total recall, supercomputer calculation abilities, hyperspeed thought, direct knowledge sharing, etc. It's very likely they will be capable of considerably different forms of thought. No matter how you shake it, Strong AI would have a different mode of consciousness and would surpass every genius and savant that ever lived. If that isn't radical enough, what is? Another thing: copyable sapient software? Think of the radical economic ramifications.

edited 22nd Mar '12 5:03:36 AM by TenTailsBeast

I vowed, and so did you: Beyond this wall- we would make it through.
Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
Is that cake frosting?
#41: Mar 22nd 2012 at 5:13:11 AM

I certainly agree that a human-level AI, or a similarly augmented human, would have a great amount of advantages over a standard early 21st century human.

But they seem to know where they are going, the ones who walk away from Omelas.
TenTailsBeast The Ultimate Lifeform from The Culture Since: Feb, 2012
#42: Mar 22nd 2012 at 5:33:22 AM

I think sapience is not really so much a matter of "phase-shift" as it is a sort of combinatorial explosion. If that makes sense.
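For what it's worth, the combinatorial-explosion framing is easy to make concrete: with n primitive concepts, the number of possible pairwise associations grows quadratically and the number of possible combinations grows exponentially, so a modest gain in capacity buys an enormous gain in representable structure. A toy illustration (not a claim about actual neuroscience):

```python
import math

# How the space of combinations grows as the number of
# primitive concepts n increases.
for n in (10, 20, 40):
    pairs = math.comb(n, 2)   # possible pairwise associations
    subsets = 2 ** n          # possible subsets of concepts
    print(f"n={n}: {pairs} pairs, {subsets} subsets")

# n=10: 45 pairs, 1024 subsets
# n=20: 190 pairs, 1048576 subsets
# n=40: 780 pairs, 1099511627776 subsets
```

Doubling n from 20 to 40 multiplies the subset count by about a million, which is the sense in which a quantitative change can look like a qualitative one.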

I vowed, and so did you: Beyond this wall- we would make it through.
RTaco Since: Jul, 2009
#43: Mar 22nd 2012 at 8:38:15 AM

I'd just like to throw in that there's not really a line between sapience and non-sapience. Looking at the intelligence of apes (and other smart animals, like crows), there's not really anything unique to us; we're just a more extreme example on the scale of brainpower.

edited 22nd Mar '12 8:38:47 AM by RTaco

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
Is that cake frosting?
#44: Mar 22nd 2012 at 9:11:34 AM

I am open to the possibility that some nonhuman animals may have some access to some of the components of sapience. Crows and apes certainly have some capability for symbolic manipulation, for example.

Still, it seems to me that symbolic manipulation requires some special algorithms — it's not something you can get by taking a design for a non-sapient brain and just increasing it in power and efficiency.

But they seem to know where they are going, the ones who walk away from Omelas.
Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#45: Mar 22nd 2012 at 11:57:48 AM

So, in order to bring "objective proof", I should present you with a thought that no human mind could possibly conceive or understand? Because you know, I see one problem with that plan... tongue

But in any case, I am not committing myself to the necessity of the existence of such limits. It may well be that the human mind, or a sufficiently streamlined and enlarged version thereof, is the most complete thinking machine possible. What I am saying is that it is not certain that it is so.

The question was rhetorical. It was meant to show how you can't actually prove any of your assumptions. It also means I can't objectively say that anything that can think can think of anything, but I at least seem to know that.

You, on the other hand, do seem certain that there are different "levels" of thought (at least for anything up to human complexity).

Well, if you put it like this, everything is purely a physical limitation, up to and including the difference between you and an orange. I am not saying that there is something magical that blocks the development of true superminds; I am saying that the space of all minds that a human mind could think of, or of the minds that a mind designed by a human mind could think of, or so on is not necessarily the space of all possible minds.

Look, the human brain evolved in response to certain specific evolutionary pressures. It proved itself surprisingly versatile; but still, it is essentially a tool for finding the best bananas and boning the hottest monkeys.

Thinking that it can be the start of a chain of improved designs which will eventually be able to achieve anything that a material mind could possibly achieve does not seem to me all that different from thinking that by improving enough on the design of the wings of a butterfly we can obtain a spaceship for Mars.

There are physical qualities to a brain, and those can be changed, improved. The brain (shockingly) is what allows you to think. The orange (also shockingly) lacks a brain, and thus seems unable to think. Thus, everything is indeed purely a physical limitation.

Now, unless you come up with a counterargument that thoughts themselves (which are mental, and thus don't have any physical qualities; physical qualities seem to be the only things that determine mental qualities such as the speed of thought, memory, and even comprehension) can somehow be "intrinsically higher or lower thoughts" (which you can't actually prove exist besides saying "just cause"), I think you've lost this debate.

[up]Oh, but it is. It has to be.

Well, not just those qualities (there are other ones needed, like the ability to store memory), but yes, the physical qualities of a brain do indeed determine all the qualities of a mind/thoughts; there are no "special" algorithms that determine the "complexity of a thought" that would allow for "sapience", as all singular thoughts are equal, which my long-winded explanation shows.

Sure, some problems are intractable (problems that can technically be solved if given enough time, which we don't actually have enough of as it takes too long to be useful, i.e. after we're long gone), and I'm guessing the capability for symbolic manipulation is one such problem for most animals, but you haven't actually proved that they inherently don't have the capability for symbolic manipulation besides, again, saying the equivalent of "just cause".
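The intractability point can be made concrete with a rough back-of-the-envelope script: at a billion operations per second, a cost that grows exponentially in the problem size n becomes practically unsolvable around n = 60, while a polynomial (here, cubic) cost stays negligible. The numbers are purely illustrative, not tied to any specific problem.

```python
# Rough time estimates at 10**9 operations/second for costs that
# grow polynomially vs. exponentially in the input size n.
OPS_PER_SEC = 10 ** 9
SECONDS_PER_YEAR = 3.15e7

for n in (30, 60, 90):
    poly = n ** 3 / OPS_PER_SEC   # tractable: cubic cost, fractions of a second
    expo = 2 ** n / OPS_PER_SEC   # intractable: exponential cost
    print(f"n={n}: cubic {poly:.2e} s, exponential {expo / SECONDS_PER_YEAR:.2e} years")
```

At n = 60 the exponential case already runs to decades, and at n = 90 to far longer than the age of the universe, which is what "technically solvable but useless after we're long gone" amounts to.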

Game. Set. Match.

edited 22nd Mar '12 12:10:48 PM by Ekuran

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
Is that cake frosting?
#46: Mar 22nd 2012 at 12:23:30 PM

Game. Set. Match.
Look, I am not offended or anything — I've been genuinely enjoying this conversation, honestly — but do you have any idea of how silly it is of you to score your own "points" in an internet conversation, or even to treat it like something that can be "won" or "lost"?

Furthermore, it seems to me that you are deliberately misunderstanding what I am talking about, or at least not bothering to address my arguments at all; and the topic has strayed quite a bit already anyway. If we want to keep discussing this issue, perhaps we should make a new thread for this?

edited 22nd Mar '12 12:24:39 PM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#47: Mar 22nd 2012 at 12:41:41 PM

This has mostly been about trying to prove you wrong, not necessarily that I was right. In fact, I'm mostly just trying to see if I can change your mind, just to see if I can. The debate is kind of fun, though, and I'm sorry if I did actually offend you.

And we should probably make a new thread, since this is a bit off-topic.

edited 22nd Mar '12 12:45:43 PM by Ekuran

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
Is that cake frosting?
#48: Mar 22nd 2012 at 12:47:49 PM

No worries, I was not offended. This debate has been fun, and yeah, we could perhaps continue it in another thread if we want.

But they seem to know where they are going, the ones who walk away from Omelas.