All I am saying is that the difference between sapience and non-sapience is not merely a matter of "processing power" — well, not unless you define this term in such a general way that it does not really mean much of anything.
I certainly have multiple layers of thought. I have senses, obviously, and I can use instruments to expand their range or abilities. I can communicate with other intelligent beings, and obviously I can think and learn.
If I were augmented to the kind of entity you describe, all of these abilities would be greatly improved and their limitations reduced. But the resulting entity would not be beyond my comprehension, not in the same sense in which I am beyond the comprehension of a chicken.
edited 22nd Mar '12 12:56:26 AM by Carciofus
But they seem to know where they are going, the ones who walk away from Omelas.
I don't think "state changes" really exist. There is no actual higher-than-human or lower-than-human consciousness, evolution won't get us there, there is nothing intrinsically beyond a human (or any other thinking entity's) mind's abilities, and absolutely nothing is unthinkable. I'm just bad at saying what I mean, in which case I'm sorry for misleading you.
All I've been trying to say is that, yes, a superintelligent being would be beyond our "comprehension" in the exact same sense in which we're beyond the "comprehension" of a chicken, since there never was an actual limit on "comprehension" in the first place. So yes, it really is simply a matter of "processing power". In other words, I think the whole non-sapience/sapience/"super"-sapience divide is bullshit.
I meant to say that we haven't been able to (significantly) advance their complexity to the degree that they match a human brain's complexity in that regard.
edited 22nd Mar '12 1:31:16 AM by Ekuran
I am also doubtful that beyond-human state changes exist. I am not dismissing the possibility outright, because if they existed I would be unable to spot them; but I see no reason to think they exist.
But below-human state changes certainly exist. One, for example, is the one between creatures which operate simply through action and reaction (like the sensitive plant) and creatures capable of forming memories and evaluating outcomes.
Another one, I would argue, is the one between creatures which are incapable of explicit symbolic representation (like most animals) and the ones who are capable of doing that, like humans and perhaps, to a lesser degree, apes and crows and so on.
These differences are not a matter of improving on existing abilities. They are entirely new ones.
edited 22nd Mar '12 1:38:55 AM by Carciofus
Like creating entirely new senses with advanced technology, right?
Actually, no. Their abilities were indeed increased; you just have to broaden your definition of ability, since what improved was specifically their comprehension of everything else. This doesn't make their "state of being" or "consciousness" intrinsically higher or lower than anyone or anything else's, though: they were simply able to absorb and store more information, and to make connections between the information they acquired (such as explicit symbolic representation) more easily.
That is probably the sentient/non-sentient divide, which is a bit more concrete, but I have a few problems with it that aren't relevant to the discussion at hand.
edited 22nd Mar '12 1:56:02 AM by Ekuran
Explicit symbolic representation cannot be modeled simply by taking a brain which is incapable of it and giving it more of what it already has. It is not about making more connections between memories, it is about treating the connections themselves as objects of thought. A chicken might perhaps recognize the analogy between one apple, one seed, and so on. But it cannot derive the purely abstract notion of "one", of this I am pretty sure.
But in any case, I agree that we are getting quite a bit off-topic. To return to the singularity, and leaving the issue of the human/animal divide behind us: I do not think that any of the improvements on the human intellect which have been proposed so far significantly alters what I consider essential to human nature.
Which is why, even though I approve of many of Transhumanism's practical objectives (although not its apparent obsession with attempting, and failing, to predict trends instead of, you know, shaping them), I frankly dislike the name and much of the philosophy behind it. From my point of view, the idea of "transcending humankind" is nonsense, and it would be undesirable anyway. I like being human.
What I might be interested in is improving it: becoming a better human, now that's something I can get behind.
edited 22nd Mar '12 2:17:06 AM by Carciofus
"A chicken might perhaps recognize the analogy between one apple, one seed, and so on. But it cannot derive the purely abstract notion of "one", of this I am pretty sure."
[1] Newborn chicks are capable of performing simple arithmetic. A basic sense of number is very useful for animals.
I vowed, and so did you: Beyond this wall- we would make it through.
They are counting quantities. That's not the same as recognizing the abstract concept of number, I think.
Returning to the question of whether the Singularity would be allowed to happen: I am pretty sure that if it does (and that's a big if), it won't be because of the singularitarian movement as a whole. I find that it is extremely passive — for the most part, it seems to me that it does not try to do things. Mostly, it seems to be about sitting around and speculating idly.
Reasons to take transhumanism seriously. A critical viewpoint.
You're right, that's why I said "Actually, no."
There you go again, assuming that something that can think could be incapable of thinking certain things. What, pray tell, actually limits us besides our inefficient brains? Nothing, of course, besides your abstract notions of comprehension.
There is nothing to stop anything that thinks from "treating the connections themselves" as objects of thought, besides "inherent qualities" such as the inability to comprehend something, for which there is no actual proof of existence.
You have assumed there was such a thing as human (or any other type of being's) nature in the first place, which is folly, in my opinion.
"What I might be interested in is improving it: becoming a better human, now that's something I can get behind."
There is no actual "transcendence" because there is nothing to objectively define humankind, or anyotherkind for that matter. There is no "improvement", or "degradation", only change.
edited 22nd Mar '12 3:25:00 AM by Ekuran
You can improve the efficiency of a train as much as you want, but you won't get an airplane. You'll get a very fast and ecological train; but it won't get off the tracks and start flying around (not beyond maglev, in any case.)
Real Life is not based on Tim Taylor Technology.
edited 22nd Mar '12 3:54:29 AM by Carciofus
"You can improve the efficiency of a train as much as you want, but you won't get an airplane. You'll get a very fast and ecological train; but it won't get off the tracks and start flying around (not beyond maglev, in any case.)"
There is nothing in the "design" of our brains that intrinsically limits them in thought, besides their inefficiency. Anything you bring up to counter this observation (such as the physical qualities of a train and an airplane, much like those of our brains) has no objective proof behind it, and would in fact be doing me a favor, as those are purely physical limitations, like the actual "design" (or qualities) of our brains.
I somehow doubt you've objectively defined what it "means to be human", and I doubt anyone ever will, because it probably doesn't exist.
edited 22nd Mar '12 4:14:42 AM by Ekuran
"You can improve the efficiency of a train as much as you want, but you won't get an airplane."
Nope, you get a better train. [1]
"A vactrain (or vacuum tube train) is a proposed, as-yet-unbuilt design for future high-speed railroad transportation. This would entail building maglev lines through evacuated (air-less) or partly evacuated tubes or tunnels. Though the technology is currently being investigated for development of regional networks, advocates have suggested establishing vactrains for transcontinental routes to form a global network. The lack of air resistance could permit vactrains to use little power and to move at extremely high speeds, up to 4000–5000 mph (6400–8000 km/h), or 5–6 times the speed of sound at sea level and standard conditions, according to the Discovery Channel's Extreme Engineering program "Transatlantic Tunnel".
Theoretically, vactrain tunnels could be built deep enough to pass under oceans, thus permitting very rapid intercontinental travel. Vactrains could also use gravity to assist their acceleration. If such trains went as fast as predicted, the trip between London and New York would take less than an hour, effectively supplanting aircraft as the world's fastest mode of public transportation."
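The quoted figures are easy to sanity-check with some rough arithmetic (the London-New York distance below is my own approximation, not from the quote):

```python
# Rough sanity check on the quoted vactrain figures.
# Assumed great-circle distance London-New York: ~3,460 miles (my estimate).
distance_miles = 3460
top_speed_mph = 5000           # upper end of the quoted 4000-5000 mph range

hours = distance_miles / top_speed_mph
print(round(hours * 60))       # ~42 minutes, consistent with "less than an hour"

# The quote pegs 4000-5000 mph at "5-6 times the speed of sound";
# the speed of sound at sea level is about 767 mph.
print(round(4000 / 767, 1), round(5000 / 767, 1))  # ~5.2 and ~6.5
```

So the "less than an hour" claim is at least internally consistent with the quoted top speed.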
So, in order to bring "objective proof", I should present you with a thought that no human mind could possibly conceive or understand? Because you know, I see one problem with that plan...
But in any case, I am not committing myself to the necessity of the existence of such limits. It may well be that the human mind, or a sufficiently streamlined and enlarged version thereof, is the most complete thinking machine possible. What I am saying is that it is not certain that it is so.
Look, the human brain evolved in response to certain specific evolutionary pressures. It proved itself surprisingly versatile; but still, it is essentially a tool for finding the best bananas and boning the hottest monkeys.*
Thinking that it can be the start of a chain of improved designs which will eventually be able to achieve anything that a material mind could possibly achieve does not seem to me all that different from thinking that by improving enough on the design of the wings of a butterfly we can obtain a spaceship for Mars.
EDIT:
Better, perhaps, but still not a plane. You have a point, however, that it is not true that a plane is intrinsically better than a train; and similarly, it is not clear to me for what reason the space of all possible minds could even be linearly ordered.
edited 22nd Mar '12 4:29:44 AM by Carciofus
"Thinking that it can be the start of a chain of improved designs which will eventually be able to achieve anything that a material mind could possibly achieve does not seem to me all that different from thinking that by improving enough on the design of the wings of a butterfly we can obtain a spaceship for Mars."
It seems reasonable to me, though, that we will come to comprehend the principles of operation underlying intelligence. Once we understand how our brain does it, we'll likely be able to improve upon it. One thing to take into account is that many "smart" activities are actually simple for computers (which makes sense from an evolutionary perspective); once we get to this point, the number and performance of these kinds of activities by computers will improve astronomically. Moreover, a "human-level" AI would almost by default have vastly superhuman cognitive powers, including holographic memory/total recall, supercomputer calculation abilities, hyperspeed thought, direct knowledge sharing, etc. It's very likely they will be capable of considerably different forms of thought. No matter how you shake it, Strong AI would have a different mode of consciousness and would surpass every genius and savant that ever lived. If this isn't enough to be radical, what is? Another thing: copyable sapient software? Think of the radical economic ramifications.
edited 22nd Mar '12 5:03:36 AM by TenTailsBeast
I certainly agree that a human-level AI, or a similarly augmented human, would have a great number of advantages over a standard early 21st century human.
I think sapience is not really so much a matter of "phase-shift" as it is a sort of combinatorial explosion. If that makes sense.
I'd just like to throw in that there's not really a line between sapience and non-sapience. Looking at the intelligence of apes (and other smart animals, like crows), there's not really anything unique to us; we're just a more extreme example on the scale of brainpower.
edited 22nd Mar '12 8:38:47 AM by RTaco
I am open to the possibility that some nonhuman animals may have some access to some of the components of sapience. Crows and apes certainly have some capability for symbolic manipulation, for example.
Still, it seems to me that symbolic manipulation requires some special algorithms — it's not something you can get by taking a design for a non-sapient brain and just increasing it in power and efficiency.
"But in any case, I am not committing myself to the necessity of the existence of such limits. It may well be that the human mind, or a sufficiently streamlined and enlarged version thereof, is the most complete thinking machine possible. What I am saying is that it is not certain that it is so."
The question was rhetorical. It was meant to show how you can't actually prove any of your assumptions. It also means I can't objectively say that anything that can think can think of anything, but I at least seem to know that.
You, on the other hand, do seem certain that there are different "levels" of thought (at least anything up to human complexity).
"Look, the human brain evolved in response to certain specific evolutionary pressures. It proved itself surprisingly versatile; but still, it is essentially a tool for finding the best bananas and boning the hottest monkeys."
"Thinking that it can be the start of a chain of improved designs which will eventually be able to achieve anything that a material mind could possibly achieve does not seem to me all that different from thinking that by improving enough on the design of the wings of a butterfly we can obtain a spaceship for Mars."
There are physical qualities to a brain, and those can be changed, improved. The brain (shockingly) is what allows you to think. The orange (also shockingly) lacks a brain, and thus seems unable to think. Thus, everything is indeed purely a physical limitation.
Now, unless you come up with a counterargument that thoughts themselves (which are mental, and thus don't have any physical qualities; physical qualities seem to be the only things that determine mental qualities such as the speed of thought, memory, and even comprehension) can somehow be "intrinsically higher or lower thoughts" (which you can't actually prove exist beyond saying "just because"), I think you've lost this debate.
Oh, but it is. It has to be.
Well, not just those qualities (there are others needed, like the ability to store memories), but yes, the physical qualities of a brain do indeed determine all the qualities of a mind and its thoughts. There are no "special" algorithms determining the "complexity of a thought" that would allow "sapience", as all singular thoughts are equal, which my long-winded explanation shows.
Sure, some problems are intractable (problems that can technically be solved given enough time, but which take too long for the solution to be useful, i.e. we'd be long gone), and I'm guessing the capability for symbolic manipulation is one such problem for most animals. But you haven't actually proved that they inherently lack the capability for symbolic manipulation besides, again, saying the equivalent of "just because".
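The sense of "intractable" used above can be illustrated with a toy sketch (the subset-sum example and the numbers are my own, not from the thread): a brute-force search that is technically correct, but whose running time doubles with every extra element.

```python
# Brute-force subset-sum: checks every one of the 2^n subsets,
# so it "works", but the work explodes exponentially with n.
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Return True if some subset of nums sums to target."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return True
    return False

print(subset_sum_bruteforce([3, 9, 8, 4, 5, 7], 15))  # True (e.g. 3 + 4 + 8)

# At 20 elements there are 2**20 (about a million) subsets to try;
# at 100 elements there are 2**100, far beyond any realistic compute budget.
print(2**100)
```

That is the distinction being leaned on: solvable in principle, but not on any timescale that matters.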
Game. Set. Match.
edited 22nd Mar '12 12:10:48 PM by Ekuran
Furthermore, it seems to me that you are deliberately misunderstanding what I am talking about, or at least not bothering to address my arguments at all; and the topic has strayed quite a bit already anyway. If we want to keep discussing this issue, perhaps we should make a new thread for this?
edited 22nd Mar '12 12:24:39 PM by Carciofus
This has mostly been about trying to prove you wrong, not necessarily that I was right. In fact, I'm mostly just trying to see if I can change your mind, just to see if I can. The debate is kind of fun, though, and I'm sorry if I actually offended you.
And we should probably make a new thread, since this is a bit off-topic.
edited 22nd Mar '12 12:45:43 PM by Ekuran
No worries, I was not offended. This debate has been fun, and yeah, we could perhaps continue it in another thread if we want.
Ah. You're talking about software issues, rather than hardware issues.
We haven't been able to "significantly" advance the complexity/comprehension of programs yet because we haven't been able to think of ways to do so with our severely slow brains/computers. We can still solve it by using the latter to advance the former.
Besides, writing a complex program isn't really necessary if you can just increase the number of programs involved and the interactions between them, to replicate the complexity of the more complex program in question.
You should probably have a Wiki Walk on the Other Wiki to see all the little nuances of how this can be done. See also, the Geth.
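The idea of many simple interacting programs replicating a more complex one has a classic toy illustration (my own example, not from the thread): Conway's Game of Life, where every cell follows the same trivial local rule, yet the aggregate supports structured, moving patterns and is even Turing-complete.

```python
# Each "program" (cell) follows the same trivial rule; anything complex
# emerges purely from the number of cells and their interactions.
from collections import Counter

def life_step(live):
    """One Game of Life step; `live` is a set of (x, y) live cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):   # after 4 steps the glider reappears, shifted by (1, 1)
    state = life_step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

No single cell "knows" what a glider is; the coherent moving pattern lives entirely in the interactions.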
All of this, and more, is nifty and potentially interesting, I think. But it seems to me none of this is a radical change, not one of the same sort as the transition from non-sapient programs to sapient A.I.s would be.
Well, actually, no. Those would by definition be a radical change, even if not exactly of the same sort as a "non-sapient/sapient transition". You also don't seem to know what a post-singularity entity could really do. Think about having multiple layers of thought (or far more than is humanly possible), or multiple perspectives in multiple bodies with far better and outright new senses; Electronic Telepathy; hardware changes such as a wider ability to procure information (like advancing/expanding our senses as I mentioned before, which in and of itself increases complexity and comprehension, as you can't understand something if you aren't even able to perceive it); a larger storage for that information (increased mental capacity, or being able to remember more); the ability to recall it easily (perfect-ish memory); the ability to apply it at astounding speeds (increased processing power); and a whole lot of other weird shit that would make our limited perspectives quite laughable.
This isn't even mentioning what I pointed out above.
If it can be done (and it has been done with our ancestors, according to you at least; I don't put too much stock in this whole "sapience cut-off point" thing, which is also why I think anything that can think is a person, even if a highly limited one), we can do it. Also, see the higher-than-human and pure evolution lines above.
edited 22nd Mar '12 12:19:53 AM by Ekuran