2045: The Year Man Becomes Immortal

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#1: Feb 10th 2011 at 6:29:11 AM

This is a story that appeared on Time.com that I got off of Yahoo's news feed. I'm spinning it off from the Culture Wars thread because it deserves its own topic.

The Singularity has apparently gotten enough attention from the mainstream press to appear in a major article. Despite already being familiar with the issues involved, it's still thrilling to read.

Basically, the exponential growth of technology is projected to cross the threshold into strong AI by around 2030, with machines vastly exceeding human intelligence by 2045.
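For a rough sense of the scale, here's a minimal back-of-the-envelope sketch (my own illustration; the 2-year doubling time is an assumption, not a figure from the article):

```python
# Illustrative only: assumes computing power per dollar doubles every ~2 years.
DOUBLING_TIME_YEARS = 2.0

def growth_factor(start_year: int, end_year: int) -> float:
    """Projected multiplier in computing power between two years."""
    return 2 ** ((end_year - start_year) / DOUBLING_TIME_YEARS)

for target in (2030, 2045):
    print(f"2011 -> {target}: roughly {growth_factor(2011, target):,.0f}x")
# 2011 -> 2030: roughly 724x
# 2011 -> 2045: roughly 131,072x
```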

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Ettina Since: Apr, 2009
#2: Feb 10th 2011 at 6:42:13 AM

Yeah, right. And we're supposed to have a moon base and flying cars by now.

If I'm asking for advice on a story idea, don't tell me it can't be done.
Kayeka from Amsterdam (4 Score & 7 Years Ago)
#3: Feb 10th 2011 at 6:44:14 AM

[up] Pretty much what I was about to say.

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#4: Feb 10th 2011 at 6:46:49 AM

Did you actually read the article? Here's a nice quote: "Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of, and a thousand times more powerful than the computer that he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like?"
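Just to make "flip that forward 40 years" concrete, here's a quick sketch that naively applies the same per-40-year factors from the quote a second time (purely my own illustration, not a calculation from the article):

```python
# Naive projection: repeat the quoted 40-year improvement factors once more.
# Hypothetical illustration only; not from the article.
factors_per_40_years = {
    "size":  1e-6,  # a millionth the size
    "price": 1e-6,  # a millionth the price
    "power": 1e3,   # a thousand times more powerful
}

for quantity, factor in factors_per_40_years.items():
    print(f"{quantity}: another {factor:g}x over the next 40 years, "
          f"{factor ** 2:g}x relative to the original MIT machine")
```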

edited 10th Feb '11 7:00:27 AM by Fighteer

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
lordGacek KVLFON from Kansas of Europe Since: Jan, 2001
KVLFON
#5: Feb 10th 2011 at 6:53:53 AM

Ah, that's Kurzweil again? I expected it'd be about the Y-Man spreading the Gospel. Such rowdy boys.

"Atheism is the religion whose followers are easiest to troll"
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#6: Feb 10th 2011 at 6:56:11 AM

The article mentions both the Singularity University and the Singularity Institute but oddly omits Yudkowsky from its list of associated notable personages. Also, apparently they have a name: singularitarians. That's quite a mouthful.

Given the strong likelihood that I'll be alive in 2045, I cannot wait to see how this particular prediction turns out.

Edit, because I can't resist: Embrace our new robot overlords!

edited 10th Feb '11 7:01:10 AM by Fighteer

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Meeble likes the cheeses. from the ruins of Granseal Since: Aug, 2009
likes the cheeses.
#7: Feb 10th 2011 at 7:12:36 AM

I shudder to think what will happen when some of the things in that article come close enough to being a reality that our politicians decide to make it their next big thing to scream about.

If there's one thing that humans are good at, it's letting people who don't really know what they're talking about scare the crap out of them.

Visit my contributor page to assist with the "I Like The Cheeses" project!
ChurchillSalmon Since: Dec, 2010
#8: Feb 10th 2011 at 7:15:51 AM

Related.

Not entirely fair but I just couldn't resist.

TBH, I agree with [up]: if there's anything critically devastating to AI, it's politicizing the technology. Someone takes the Luddite stance, someone else takes the "AI for us so we can dominate" stance, someone wants to implement communism, and someone else has a brilliant idea for perfect capitalism. Not to mention the consequences for the political system itself: the risk of either saturating democracy with silicon voters or making A.I.s non-citizens is another possible problem. Ultimately it all comes down to status games; if there's a lot of money and power in it for anyone (and there is), the field of AI will utterly eclipse the shitstorm that is climate research.

edited 10th Feb '11 7:21:55 AM by ChurchillSalmon

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#9: Feb 10th 2011 at 7:19:55 AM

Well, one thing is for sure. If it does come down to a hyperintelligent AI turning against us and starting a Robot War, humanity is doomed. It won't be like Terminator where a plucky band of resistance fighters struggles against the Machine Army. It will be more like The Matrix, except without the idiotic "human beings plugged into power plants" thing. Which is why the Singularity Institute has dedicated its mission to producing a Friendly AI - one with ingrained ethical constraints that derive from logic.

If politicians go into crazy mode over this, the article mentions that one possible result would be to push AI research underground, making an apocalyptic scenario more likely rather than less, as the ethicists and rationalists would find it much harder to work.

[up] [lol] Of course, anyone who's passed high school geometry knows you can't extrapolate a line from one data point, or a trend from two — especially if one is zero. Also, the trend line is ignoring the fact that it was not only zero yesterday, but the day before that, ad infinitum. Otherwise the poor girl had negative husbands.

In response to your edit, I don't think the issue of "robot suffrage" is likely to come up. While a sizeable chunk of AI research involves modeling the human brain (according to the article, they've got an accurate model of a rat brain running, with 10K neurons), there's no inherent reason to believe that our future AI companions will have anything resembling human desires or priorities. For one thing, there's no point in stopping the curve at merely human intelligence, nor is that likely to be possible even if we tried.

When it comes down to the question of a million human voters versus one hyperintelligent AI, the question would be more one of whether we get any say, not whether it does.

edited 10th Feb '11 7:25:16 AM by Fighteer

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
ChurchillSalmon Since: Dec, 2010
#10: Feb 10th 2011 at 7:26:50 AM

Why would the AI start a robot war instead of going all Godwin on us and making itself the machine messiah? If Eliezer can succeed at the AI-box experiment with thousands of dollars on the line, there's little reason (outside pop culture, where intelligence never applies to emotions) not to expect a real superintelligence to simply create the most virulent meme ever.

Ah, but the issue is not technology, it's politics. I see no reason not to upload human minds, and that eliminates the categorical differences between humans and AI. I don't trust people to understand exactly what makes a person before those issues arise, and our fascination with democracy simply breaks down with AI; but it's so ingrained in our thinking that I expect a lot of problems even if self-improving A.I.s don't get too involved.

edited 10th Feb '11 7:29:56 AM by ChurchillSalmon

Meeble likes the cheeses. from the ruins of Granseal Since: Aug, 2009
likes the cheeses.
#11: Feb 10th 2011 at 7:27:15 AM

I can especially see resistance coming from some of the hyper-religious lawmakers, as the elimination of death would have a fair chance of making religion pretty much irrelevant.

edited 10th Feb '11 7:35:38 AM by Meeble

Visit my contributor page to assist with the "I Like The Cheeses" project!
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#12: Feb 10th 2011 at 7:28:59 AM

[up][up] I'm more talking about us deciding to rid ourselves of the "monstrosity" we've created, which is unlikely to go well. "SMASH THE COMPUTER!" and all that jazz.

You are of course correct that a transhuman AI would notionally be able to push all our buttons merely by talking, never mind taking up some theoretical arms against us.

[up] The article mentions that, somewhat in passing, but it's easy to pick up on if you read carefully. Poor fundamentalist bastards, they've got quite a shock coming.

Edit: Oops, edited the wrong post! Sorry...

edited 10th Feb '11 7:30:22 AM by Fighteer

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Bur Chaotic Neutral from Flyover Country Since: Dec, 2009 Relationship Status: Not war
#13: Feb 10th 2011 at 7:30:02 AM

Do not presume to know the delicate, sensitive complexities of the robot mind!

Aaah, this is one of those reasons I want to live forever. It'd be so fun to observe how far people got.

i. hear. a. sound.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14: Feb 10th 2011 at 7:38:23 AM

I should add that it's quite likely that our notional transhuman AI would be capable of quickly and efficiently developing solutions to things like climate change, disease, human longevity, and the like. Whether we actually listen is a different story — what if the AI predicts that we'll all die in 50 years unless we cut the human population by half? What do we sacrifice of ourselves as humans if this turns out to be true? Do we take the risk of not listening and having 100% of us die?

In fact, I'm curious to hear some responses to this question. What if saving the human race requires some drastically amoral measure like killing off half the population (presumably allowing the other half to achieve Crystal Spires and Togas or whatever)?

edited 10th Feb '11 7:38:58 AM by Fighteer

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Greenmantle V from Greater Wessex, Britannia Since: Feb, 2010 Relationship Status: Hiding
V
#15: Feb 10th 2011 at 7:42:47 AM

@ Fighteer:

Poor fundamentalist bastards, they've got quite a shock coming.

...and so will the BNP-style Nationalists!

What if saving the human race requires some drastically amoral measure like killing off half the population (presumably allowing the other half to achieve Utopia or whatever)?

It'll be done. Reluctantly, and with much complaint (and probably bloodshed). Whether the computer has got it right, though, is another matter...

Keep Rolling On
Meeble likes the cheeses. from the ruins of Granseal Since: Aug, 2009
likes the cheeses.
#16: Feb 10th 2011 at 7:47:28 AM

Poor fundamentalist bastards, they've got quite a shock coming.

The problem with fundamentalists is that they tend to react violently when pushed from a position of power and status to near irrelevancy.

To answer the remove 50% of the human race question, if it was truly the only solution to the problem I would probably agree to it, though I would volunteer to be one of those that get wiped out.

That seems like the kind of decision people should only make if they're willing to put their own necks on the line.

edited 10th Feb '11 7:53:17 AM by Meeble

Visit my contributor page to assist with the "I Like The Cheeses" project!
ChurchillSalmon Since: Dec, 2010
#17: Feb 10th 2011 at 7:48:07 AM

In the least convenient world, yes, we should kill half of us. But it would descend into endless bickering over who is allowed to live and who has to die, with someone always offering a better solution that strangely seems to include themselves among the survivors (and those who would sacrifice themselves are the ones we would probably prefer to keep alive), until we run out of time or implement a ridiculous solution where the people who can best game the system are rewarded.

As for whether we should listen, I'm almost absolutely positive that an intelligence without the built-in fail of humans should be given consideration above anyone else, if we know and agree with its goals. A paperclip maximizer should be killed with fire as soon as possible, but if it is provably friendly and superhuman I would go as far as to give it all authority over humans. After all, if it's provably friendly, it won't abuse it. In reality it seems far more likely that it would simply serve some lost cause, though.

Deboss I see the Awesomeness. from Awesomeville Texas Since: Aug, 2009
I see the Awesomeness.
#18: Feb 10th 2011 at 7:48:31 AM

I generally prefer not to extrapolate technological understanding from past understanding. Science isn't something that works on a linear basis. Nor is claiming that machining refinements count as advancing technology going to give you accurate predictions.

Fight smart, not fair.
SilentStranger Failed Comic Artist from Sweden Since: Jun, 2010
Failed Comic Artist
#19: Feb 10th 2011 at 7:50:48 AM

Ohhhhhh, we're getting closer to the world of Blade Runner!

I don't know why they let me out, I guess they needed a spare bed
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#20: Feb 10th 2011 at 7:51:08 AM

[up][up] Deboss, actually, technology grows on an exponential curve, not a linear one. The trend line has been consistent across nearly every area of human technology since the early 1900s. There's no current reason to expect that it will not continue to apply, and every reason to prepare for that eventuality. Otherwise you come out looking like a case of Arbitrary Skepticism.
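To show how far apart linear and exponential assumptions end up, here's a rough sketch (my own numbers, not the article's): take a quantity that genuinely doubles every two years and compare it against a straight line fitted through its first two observations.

```python
# Rough illustration: a quantity that doubles every 2 years versus a straight
# line fitted through its values at year 0 and year 2 (1 and 2, slope 0.5/yr).
def exponential(years_elapsed: float, doubling_time: float = 2.0) -> float:
    return 2 ** (years_elapsed / doubling_time)

def linear(years_elapsed: float) -> float:
    return 1 + 0.5 * years_elapsed

for t in (10, 50, 100):
    print(f"after {t:>3} years: linear ~{linear(t):.0f}, "
          f"exponential ~{exponential(t):,.0f}")
# after  10 years: linear ~6, exponential ~32
# after  50 years: linear ~26, exponential ~33,554,432
# after 100 years: linear ~51, exponential ~1,125,899,906,842,624
```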

edited 10th Feb '11 7:51:14 AM by Fighteer

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
storyyeller More like giant cherries from Appleloosa Since: Jan, 2001 Relationship Status: RelationshipOutOfBoundsException: 1
More like giant cherries
#21: Feb 10th 2011 at 8:01:01 AM

How do you quantify something like technological progress anyway?

Blind Final Fantasy 6 Let's Play
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#22: Feb 10th 2011 at 8:05:56 AM

The article goes into some detail, but basically it's built around Moore's Law, transistor density, and clock speeds.

Big quote follows:

Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting - you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.

As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.

Kurzweil then ran the numbers on a whole bunch of other key technological indexes - the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond - the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
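Put as arithmetic, the "doubles about every two years ... all the way back to 1900" framing implies a staggering cumulative factor. A quick hedged sketch (the 2-year figure is the quote's rule of thumb; everything else here is mine):

```python
# How many doublings does "doubling every ~2 years since 1900" imply by 2011,
# and what total improvement factor? Illustration only.
DOUBLING_TIME_YEARS = 2.0

doublings = (2011 - 1900) / DOUBLING_TIME_YEARS
print(f"doublings since 1900: {doublings:.1f}")           # 55.5
print(f"total improvement factor: {2 ** doublings:.2e}")  # ~5.10e+16
```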

edited 10th Feb '11 8:07:03 AM by Fighteer

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Deboss I see the Awesomeness. from Awesomeville Texas Since: Aug, 2009
I see the Awesomeness.
#23: Feb 10th 2011 at 8:07:48 AM

If it weren't built around the exploitation of a fixed set of laws, it might be different. There's no reason to think understanding will stop growing, but there is reason to think there's a plateau, because it's built around physics. Plotting based on trends, rather than trying to understand the rules governing the curve, is going to get something broken.

Fight smart, not fair.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#24: Feb 10th 2011 at 8:08:39 AM

Deboss, if anything, they're getting broken upwards. We already have tech on the horizon that will blow through current models of peak transistor density.

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
QQQQQ from Canada Since: Jul, 2011
#25: Feb 10th 2011 at 8:10:23 AM

Ja, so much potential for humanity and technology..! waii

