Would The Singularity be allowed to happen?

nnokwoodeye Since: Jan, 2001
#1: Mar 19th 2012 at 6:45:17 AM

I was thinking about this whole internet censorship thing and a thought occurred to me: if governments and corporations can't even handle the information revolution, how would they react to a truly game-changing technology?

For example, if someone were to invent an efficient renewable power source, would the government just let the oil companies go bankrupt? Aren't they "too big to fail"? And what about an immortality serum? Shouldn't the "fruit of the tree of life" be made illegal for violating God's will?

Is it possible that those kinds of technologies have already been discovered but the governments of the world are keeping them secret because they don't want the current social structure to change?

Am I too paranoid?

edited 19th Mar '12 6:48:31 AM by nnokwoodeye

BestOf FABRICATI DIEM, PVNC! from Finland Since: Oct, 2010 Relationship Status: Falling within your bell curve
#2: Mar 19th 2012 at 7:53:57 AM

To answer your last question, yes, that would seem to be the case.

A game-changing technology would quite likely provide so much utility that stopping it in a capitalist context is simply impossible.

The problem we have is that most inventions aren't the kind that break someone's game instantly; instead, they're small steps in a given direction that must accumulate into a viable product, and baby-step technologies like these can be blocked if someone buys the patent and just locks it up in a vault or something.

So it's possible that "big oil" could be buying patents for green tech and preventing them from being utilised to advance the cause of ending our dependence on oil.

If such a thing as an immortality serum were invented, whether or not it gets used would depend on where it's invented; but if the research that results in the invention is published, it becomes impossible to stop. If it's invented in the USA or Saudi Arabia, religious interest groups might be able to block it; but if the research that enables it is public, someone in Scandinavia will just make the serum and alter it sufficiently to avoid charges of patent infringement.

Quod gratis asseritur, gratis negatur.
AceofSpades Since: Apr, 2009 Relationship Status: Showing feelings of an almost human nature
#3: Mar 19th 2012 at 8:02:10 AM

I agree that you are being ridiculously paranoid.

I also don't think that the supposedly revolutionary technologies people say are being hidden actually are. From a business perspective, it just doesn't make sense. Being the first to develop and market a new technology can make a man and his company incredibly fucking wealthy. Big Oil companies are just as likely, far as I can tell, to develop new energy technology, because hey, first in the door generally means first to make a profit once it's made useable and affordable to the public. There is a reason they're making electric cars, after all. People want that kind of tech in their vehicles. Also, much of our technological development has come from government programs; NASA alone has gifted us with a lot of little things we don't even think about.

The other stuff is such pie-in-the-sky fantasy that I don't really see it as having much merit in this particular conversation. Suffice to say, I highly doubt there is any grand conspiracy to keep us from advancing. There's profit in it, and the only reason to keep things secret is so that you, and not someone else, can profit from the idea. And hell, if someone did market an endless energy source, you can bet that patent would be up for sale so quick that the inventor would be rich for life and their descendants would never want for anything.

Qeise Professional Smartass from sqrt(-inf)/0 Since: Jan, 2011 Relationship Status: Waiting for you *wink*
#4: Mar 19th 2012 at 10:22:12 AM

At least the immortality drug wouldn't go unused. Or are you telling me if you were one of the people hiding it you wouldn't make and use it for yourself?

Laws are made to be broken. You're next, thermodynamics.
TamH70 Since: Nov, 2011 Relationship Status: Faithful to 2D
#5: Mar 19th 2012 at 1:04:16 PM

Hmm. Just because you are paranoid does not mean that there is not some bastard out to get you. Poor people would not get a sniff of anything close to the Singularity. That's for rich folks.

As for the big oil companies killing tech that competes with them? Ever heard of this film? http://en.wikipedia.org/wiki/Who_Killed_the_Electric_Car%3F

Big Oil has form.

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#6: Mar 19th 2012 at 1:18:48 PM

Corporations have long been more of an obstacle against innovation than anything else. I used to dabble in the field of data compression (might take it up again later, it was fun) and the amount of nonsense that researchers have to go through in order to avoid patents (like the one on arithmetic coding, for example) is truly staggering.

And according to what I hear, the field of cryptography has it even worse...

In any case, I agree with Best Of. The problem is not that companies might purposefully prevent the development of a truly game-changing innovation, the problem is that they put obstacles to the development of the technologies which might allow such development.

This said, I think that the very concept of the singularity is very badly defined. On one hand, if I may get mathematical for a moment, exponential growths have no vertical asymptotes. Even assuming for the sake of discussion that technological progress is exponential or even more-than-exponential (and for the record, I don't really believe that), this does not imply the existence of any "threshold development" which would insta-change our world beyond all understanding.
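The mathematical point here, that an exponential has no vertical asymptote while a literal finite-time "singularity" would require one, can be sketched in a few lines. This is only an illustration of the distinction, with arbitrary constants (growth rate k = 1, blow-up time T = 10) chosen for the example:

```python
import math

# Exponential growth: x(t) = e^(k*t). It is finite at every finite t,
# so there is no "threshold" moment where it becomes infinite.
def exponential(t, k=1.0):
    return math.exp(k * t)

# By contrast, hyperbolic growth x(t) = 1/(T - t) has a genuine
# vertical asymptote at t = T, a literal finite-time singularity.
def hyperbolic(t, T=10.0):
    return 1.0 / (T - t)

for t in [5.0, 9.0, 9.9, 9.99]:
    print(t, exponential(t), hyperbolic(t))
# exponential(t) grows fast but stays finite for any finite t;
# hyperbolic(t) diverges as t approaches T = 10.
```

However fast an exponential grows, picking any finite time always yields a finite value; only a growth law with a pole, like the hyperbolic one, "insta-changes" at a threshold.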

And on the other, singularity-like thinking seems to assume that once a certain degree of technological development is reached, everything will be all right and we will all live in Happy Fairy Candy Land. I find this hard to believe. I mean, compared to a human being from the Paleolithic, I live in Happy Fairy Candy Land. I have a ludicrously comfortable dwelling. I have access to all the food I need, and more, as I have had for all my life so far and as I will for the foreseeable future. No predators except, perhaps, other human beings pose any threat to me, and I can reasonably expect to live until my 70s, at least. But still, I think that we can all agree that our world — and even the part of it that we access as first-world people — is far from perfect yet.

Why would this be any different from a hypothetical post-"singularity" entity?

edited 19th Mar '12 1:20:53 PM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
AceofSpades Since: Apr, 2009 Relationship Status: Showing feelings of an almost human nature
#7: Mar 19th 2012 at 1:20:08 PM

And yet, the electric car is a thing that's being developed. First adopters and all that, they get the monies.

Besides, if someone did develop an efficient renewable resource, it'd be hard to stop them from developing it. And, well, it's kind of like the coal industry: oil would just slowly become obsolete as an energy source. (I still see this as a number of years off, personally, but definitely in need of more political support.) As for the government "letting it go bankrupt": uh, in the long run there wouldn't be much the government could do about an inefficient energy resource being replaced by a newer, better one. Coal and such slowly went out of favor; a new resource would basically cause the same thing. It's merely a matter of how fast it does so.

TenTailsBeast The Ultimate Lifeform from The Culture Since: Feb, 2012
#8: Mar 19th 2012 at 1:22:26 PM

[1] Article on this very subject. Yesterday.

edited 19th Mar '12 1:22:49 PM by TenTailsBeast

I vowed, and so did you: Beyond this wall- we would make it through.
Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#9: Mar 20th 2012 at 11:49:55 AM

[up][up][up]Perfection is a joke, and it's quite insulting to say that The Singularity will lead to an absolutely perfect "Happy Fairy Candy Land", as the people behind it aspire to constantly improve themselves and the world in general. There would still be problems (like that whole death-of-the-universe thing), if only in far fewer forms and at lower intensity.

As for the whole comparison to the Paleolithic human being, I'm not buying it. You're both still human and vastly limited, while the post-"singularity" entity is not. Yes, they will probably have limits too, but the whole point of being one is that they wouldn't be as limited as us.

And I'm guessing most people define the singularity as either the moment an AI makes a better one, or when it has equaled/surpassed humans in general.

edited 20th Mar '12 11:54:53 AM by Ekuran

AceofSpades Since: Apr, 2009 Relationship Status: Showing feelings of an almost human nature
#10: Mar 20th 2012 at 12:06:28 PM

And we're less limited than the Paleolithic human, for the most part. Far more knowledge about how the world works, far more access to food, technology the Paleolithic guy would likely see as magic, and we've mastered flight to the point that we have limited space faring technology. I think the comparison is apt here.

Also, that's not how I've seen the Singularity defined. What I've heard has to do with humans surpassing their current limits, rather than an AI surpassing humans.

edited 20th Mar '12 12:06:50 PM by AceofSpades

Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#11: Mar 20th 2012 at 12:18:21 PM

Still human. Yeah, those things are awesome (and, in fact, let us live far better lives, which is also awesome), but the comparison is still moot if we literally/physically haven't changed (besides some minor differences in height/hygiene/life-span/etc).

I was mostly using my personal definition of it, but I think an AI surpassing humans will lead to humans surpassing their current limits.

edited 20th Mar '12 12:19:47 PM by Ekuran

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#12: Mar 20th 2012 at 12:23:46 PM

How do you define "surpassing"?

Intellectual faculties are not an ordered set, any more than physical faculties are. From many points of view, computers have already surpassed humans, and by much — even a cheap laptop far surpasses anybody when it comes to mental calculation, memory, reaction time, or playing certain games.

And on the other hand, our oh-so-powerful brains would have more than a little difficulty in replicating the musical pattern-recognition abilities of a tiny little songbird.

I suppose that if an AI is able to pass the Turing Test, then it will have definitely surpassed humans in everything; but that's much more than is required, I think — humans would utterly fail the Songbird Test, but still there is little doubt that we are more intelligent than them overall.

EDIT:

it's quite insulting to say that The Singularity will lead to an absolutely perfect "Happy Fairy Candy Land"
It may be so. But still, it seems to me that this is what many Singularity-enthusiasts seem to assume (apart from the Friendly AI crowd, which has the common sense to notice the possible dangers).

And as for "more than human", I don't buy it. A human being with a far faster mind and better memory than me would be a more clever human that I am, but he or she would not be ontologically different from me — exactly as I am not ontologically different from a severely brain-damaged individual.

I am in favor of intelligence augmentation — heck, I am even in favor of uploading, assuming that it is possible, as my understanding of the concept of soul is entirely compatible with it. But I do not see them as "transcending humankind", I see them (assuming that they are even possible, and I am not committing to this) as improving humankind.

edited 20th Mar '12 12:36:45 PM by Carciofus

Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#13: Mar 20th 2012 at 2:17:44 PM

I tend to use processing power, as although there are many more parts to it, this strikes me as the most important.


While it does seem like they've surpassed the human mind (and in some ways, they have), don't be fooled. A human mind processes far more information than any other computer or brain known to man; we just don't notice it, and are really inefficient at using it consciously in most cases. Computers tend to have the opposite problem, because they don't think about anything but the problem at hand, as they're not general AI. It's basically a case of efficiency vs. raw power, and the latter is just more useful.

Of course, having both is what we're trying to do, so it doesn't really matter in the end.


The Turing Test isn't that useful in determining whether an AI is sentient/sapient or not. We already have computers that can mimic humans, but they're more gimmicky than anything substantial in AI development (thus far, at least).


You're listening to the pipe dreams of the Vocal Minority my friend. Try not to stereotype us next time.


Never said they were "more than human".

You should take this into consideration, though. A human is essentially an ape with a far faster mind and better memory. A post-singularity entity would be the same to a human (and probably to a far larger degree). While it's true that all three (in my opinion) are people (am slightly annoyed that some "people" think that term only applies to humans), I somehow doubt they will act just like us.

On the other hand, this doesn't make them "more" or "inherently better" (that's the same bullshit sentiment as thinking we want some crappy little utopian fairy land), just mentally augmented. Or "different", if you prefer.

edited 20th Mar '12 2:24:24 PM by Ekuran

Pykrete NOT THE BEES from Viridian Forest Since: Sep, 2009
#14: Mar 20th 2012 at 2:36:21 PM

OP: The reason businesses have been having trouble adjusting to recent advances is that the specific advances made have been in easy distribution and (relatively) anonymous and masked transaction, which makes it difficult for them to corner consumers into doing what they want. The kind of technology Singularity types wank over, however, would be so intrusive that I'm not worried at all about business failing to adapt to it; I'm worried about them doing it too well.

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#15: Mar 20th 2012 at 2:41:35 PM

Cleverbot does not really mimic a human being. It is a fun gimmick, but it is all-too-evident that it has no clue of what it's talking about. Actually, I kind of think that natural language processing is a harder problem than general AI — it involves a lot of idiosyncrasies about the peculiar ways in which human minds generate and process language, after all.

You're listening to the pipe dreams of the Vocal Minority my friend. Try not to stereotype us next time.
Point taken, although at times it does not really look like the minority from here. But fair is fair: since I expect you guys not to stereotype theists (although in principle, nothing prevents the existence of a singularitarian theist — heck, I could even qualify myself, if by "singularitarian" one means "thinks that AI could be possible and could have huge benefits" rather than "thinks that AI is inevitable and could have huge benefits"), I will try to do the same to y'all smile

A human is essentially an ape with a far faster mind and better memory.
I am not sure. It seems to me that sapience is really a kind of a state transition — that seems like a hard cut-off point to me. Give a monkey a faster mind and a better memory, and you'll certainly have an unusually quick-witted ape; but I am not sure if it would start questioning the purpose of its own existence and so on. But on the other hand, if you give a human a faster mind and a better memory, you'll get an unusually quick-witted human, not something that could think thoughts that no human being could possibly think.

edited 20th Mar '12 2:47:37 PM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
feotakahari Fuzzy Orange Doomsayer from Looking out at the city Since: Sep, 2009
#16: Mar 20th 2012 at 2:52:14 PM

^ TV Tropes is a bit of a bubble—there are really smart Tropers and slightly smart Tropers, but people of average intelligence tend not to hang out here. In public school, I met a lot of people who could not comprehend some of my thought processes (and I don't think that was just a matter of me being bad at explaining things, since the most intelligent students usually could understand.) It would seem arrogant to place myself as the cutoff point—it's more probable that something more intelligent than me would similarly have thoughts I couldn't understand.

Edit: Actually, maybe I should take that back—maybe it's just that I'm used to thinking in different frameworks because I've read so many stories by authors with different mindsets. The ideas that most confused other students tended to be products of mindsets from stories they'd never read, rather than logical outgrowths of their own mindsets (although it's still worth noting that the smarter students adapted much more easily to thinking in such terms, even if they hadn't read the same authors I had.)

edited 20th Mar '12 2:56:24 PM by feotakahari

That's Feo . . . He's a disgusting, mysoginistic, paedophilic asshat who moonlights as a shitty writer—Something Awful
Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#17: Mar 20th 2012 at 3:06:42 PM

but people of average intelligence tend not to hang out here.
Um, hello. tongue

No, really. As far as I can tell, my thought processes are no faster than average, and my memory is actually rather worse than average. I perhaps have a slight edge in that I can focus pretty well when I want to, and I have a passable intuition on a few specific subjects; but that's about it.

It would seem arrogant to place myself as the cutoff point
I would put the cutoff point a bit further back, actually. It's not a matter of speed, it is a matter of what thoughts one can think to begin with. In theory, a stereotypical dimwitted middle school student is perfectly capable of understanding Kant or Wittgenstein — they would probably not want to spend the effort to do so, but it's not beyond them. However, give a monkey a billion years of subjective time, and it still won't be able to make heads or tails of the Tractatus: the kind of question that it addresses is just something that a monkey is genuinely not capable of considering.

edited 20th Mar '12 3:17:54 PM by Carciofus

Pykrete NOT THE BEES from Viridian Forest Since: Sep, 2009
#18: Mar 20th 2012 at 3:14:27 PM

Yeah, we're not above average. We're specialized in mass media, but we're pretty darn average.

Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#19: Mar 20th 2012 at 3:39:35 PM

[up][up][up][up]I still don't put much trust in the Turing Test; it's too limited. You're right about Cleverbot, though.


Glad we agree.


I worded that wrong. Let me rephrase that: "A human is essentially an ape with a far faster mind/better memory/more processing power/other things that we think go into intelligence/sapience/etc." It's basically the same with a post-singularity entity, although that still doesn't make them "more" than us.

edited 20th Mar '12 3:40:51 PM by Ekuran

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#20: Mar 20th 2012 at 3:50:47 PM

other things that we think go into intelligence/sapience/etc.
This is the part that worries me. Assuming that it is possible to build a human-equivalent AI (and I think this likely, although I am not entirely sure and I do not put much trust at all in temporal predictions), I can agree that nothing would in principle prevent the creation of human-equivalent beings with faster processing speed and more memory.

But it seems to me that that would not really be a "phase transition" of the same sort as the one between non-sapient beings and sapient beings. There is nothing in principle that such an entity could think that I could not think, although it would take me far more time to do so.

Perhaps it is in principle possible that there might exist entities that stand in the same relation to us as we stand with respect to apes; but I see no reason to think that we, or our successors, could be able to design and construct such an entity. I am somewhat confident that we could eventually design and construct an "accelerated human"; but that would be in the same relationship to this hypothetical super-entity as an "accelerated monkey" would be to us.

edited 20th Mar '12 3:52:18 PM by Carciofus

Ekuran Since: Feb, 2010 Relationship Status: watch?v=dQw4w9WgXcQ
#21: Mar 20th 2012 at 5:02:02 PM

Ah. You have a problem with our ability to create an entity whose comprehension is beyond ours.

I think comprehension most likely does lie within our ability to process information (correctly), but I may be wrong. In fact, I think a multitude of factors go into it, like our senses (which can be augmented, or even expanded to include new senses), memory, mental capacity (how much information we can store within our brains/computers/etc.), and a bunch of other attributes I don't feel like writing down all day. I still think most of those fall under processing power as subsets, or aren't as relevant as it is.

In any event, I think the aspect(s) of intelligence that was (were) enhanced in the transition our ancestors made into sentient/sapient beings can be artificially induced and magnified within us and other animals (such as other apes), even if we don't know what they are. It's not like it hasn't been done before (hint: look in the mirror).

Personally, I think any being with the ability to think can ponder and understand anything if given enough time and the mental capacity to do so.

edited 20th Mar '12 5:06:02 PM by Ekuran

feotakahari Fuzzy Orange Doomsayer from Looking out at the city Since: Sep, 2009
#22: Mar 20th 2012 at 6:44:18 PM

As a side note: Carc, if you're of average intelligence, you're of a very skewed sort of intelligence. You're one of the people whose basic assumptions about reality would not be comprehensible to most of my classmates. (I'm uncertain whether or not you'd understand their basic assumptions about reality—I assumed that you would, and from there that you must be smart if you can understand more than one set of assumptions about reality, but now that I think about it, I haven't followed your posts closely enough to have proof one way or another.)

edited 20th Mar '12 6:45:12 PM by feotakahari

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#23: Mar 21st 2012 at 12:08:46 AM

[up][up]There is more to intelligence than raw processing power, I think — that's what makes AI design difficult: it's not just a matter of writing a simple program and throwing massive amounts of computation at it, it's a matter of writing a complex program. If I remember correctly, John McCarthy wrote somewhere that in principle, modern computers should have more than enough computational power to house a streamlined human-level intelligence (albeit not to simulate a human brain in detail, of course). We just don't know how yet.

Now, supposing that we succeeded in creating an AI, either through understanding intelligence from a formal perspective or through the brute force method of simulating a whole human brain from the neuron level upwards, there are a few obvious improvements that we could certainly make. More speed. More memory, and with better indexing algorithms. Hard-coded mathematical subroutines. Perhaps even memory transfer and acquisition, if we can understand memory well enough.

All of this, and more, is nifty and potentially interesting, I think. But it seems to me none of this is a radical change, not one of the same sort as the transition from non-sapient programs to sapient A.I.s would be.

Assuming that they even exist, I think we cannot reach truly higher-than-human modes of consciousness just through self-improvement and design. That would be possible only if we could, at least in principle, understand them; and then they would scarcely be higher-than-human tongue. Pure evolution might perhaps be able to reach them, as it made nonsapient beings into us; but I am not sure of this.

[up]

You're one of the people whose basic assumptions about reality would not be comprehensible to most of my classmates.
I think that it's more that they would find them uninteresting, and would not want to spend the effort to understand them. Which is just fine, I mean, there are plenty of things that they probably think about that I don't. I remember a person telling me that they "just cannot understand math", and then seguing into a (to me) incomprehensible discussion about the soccer championship. What is more likely, that they are intrinsically unable to understand basic topology and I am intrinsically unable to understand sport championships, or that they don't care about topology and I don't care about championships?

It seems to me that anything that a healthy human mind can think, any other healthy human mind can understand given enough time and effort. It may take more or less time, and we may run into concentration problems; but the difference between two human minds seems to me more akin to the difference between two Turing-complete machines of somewhat different design than the difference between a Turing-complete machine and a non Turing-complete one.

"Understanding more than one set of assumptions about reality" is not some sort of magical ability that only a few rare humans have, I think. It's something that we should use more, perhaps; but as a raw ability, it is already entirely present in any random kid who likes to play pretend.

edited 21st Mar '12 1:31:30 AM by Carciofus

Sable Since: Aug, 2011
#24: Mar 21st 2012 at 11:42:13 AM

[up][up]I think you're getting a bit ahead of yourself if you think everyone here is "above average". That doesn't mean there are no really clever folks here, but they're no more frequent than on most other websites. Thanks for your time; back to Singularity talk:

I don't think it can be prevented, if it happens. Exploited by a few for their own benefit before hitting the mainstream? Probable, but killed outright, no. What bothers me, though, is the idea that it will necessarily come from AI. Aren't there other fields where a critical discovery could shift paradigms around? The medical field, for example?

Edit: Right. I vented some other shit on you. My apologies.

edited 21st Mar '12 4:26:21 PM by Sable

Boredman hnnnng from TEKSIZ, MERKA (Before Recorded History) Relationship Status: YOU'RE TEARING ME APART LISA
#25: Mar 21st 2012 at 4:18:11 PM

Sable, I think we could do without smug condescension.

OT: As others said here, I think that the idea of a government suppressing new technology is unrealistic and impractical.

