2045: The Year Man Becomes Immortal


Yej See ALL the stars! from <0,1i> Since: Mar, 2010
#76: Feb 10th 2011 at 12:00:52 PM

Hold on, though, aren't computers made by people?
Not usually, no. They might be assembled by people, but the actual chips are manufactured by robots, AFAIK.

Also, the JVM might be treated as non-deterministic because in a multi-thread environment, you can't predict the scheduler.
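The scheduler point can be seen in a few lines of Java. This is a minimal sketch (class and names are illustrative): two threads bump a shared counter without synchronization, and because the scheduler interleaves the non-atomic read-add-write steps unpredictably, the final value varies from run to run.

```java
// Two threads increment a shared counter without synchronization.
// The result is scheduler-dependent: updates get lost wherever the
// JVM interleaves the threads mid-increment.
public class SchedulerDemo {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // not atomic: read, add, write
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        // Usually less than 200000; the exact value differs between runs.
        System.out.println(counter);
    }
}
```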

edited 10th Feb '11 12:02:35 PM by Yej

Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
Iaculus Pronounced YAK-you-luss from England Since: May, 2010
#77: Feb 10th 2011 at 12:08:36 PM

[up]I was referring to the design process.

What's precedent ever done for us?
Yej See ALL the stars! from <0,1i> Since: Mar, 2010
#78: Feb 10th 2011 at 12:09:53 PM

Probably heavily automated, if not handled entirely by various algorithms.

Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
#79: Feb 10th 2011 at 12:10:16 PM

[up][up]To a certain extent, yes, but there are very sophisticated tools that approach weak AI that help the chip developers do their jobs; there's no way a human brain could encompass the quantum-level complexity of an entire CPU architecture.

edited 10th Feb '11 12:10:22 PM by Fighteer

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
breadloaf Since: Oct, 2010
#80: Feb 10th 2011 at 12:21:56 PM

The JVM is non-deterministic because you can't guarantee its results. Ever run stability testing on that piece of junk?

The design process of computers, computer systems, and software is run by people. The singularity is about approaching a time when we heavily use AI to design computer systems; eventually, stronger AI would design the systems for us.

SilentStranger Trivia Depository from Parts Unknown (4 Score & 7 Years Ago)
#81: Feb 10th 2011 at 12:23:56 PM

Ever read Transmetropolitan? I think that's a fairly accurate prediction of what a hi-tech future would be like.

Tongpu Since: Jan, 2001
#82: Feb 10th 2011 at 12:33:06 PM

What do we sacrifice of ourselves as humans if this turns out to be true?
The poor and the minorities will be sacrificed.

Deboss I see the Awesomeness. from Awesomeville Texas Since: Aug, 2009
#83: Feb 10th 2011 at 1:00:08 PM

If we're still talking about the military's use of programming by the time this gets posted: the F-35 uses C++.

Fight smart, not fair.
Arilou Taller than Zim from Quasispace Since: Jan, 2001
#84: Feb 10th 2011 at 1:11:42 PM

Most actual computer scientists I've talked with are kind of sceptical of the entire idea (mainly of whether hard AI is even possible in a form that would be of any use whatsoever).

"No, the Singularity will not happen. Computation is hard." -Happy Ent
Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
#85: Feb 10th 2011 at 1:57:44 PM

Yeah, that's an important thought. I'm not sure what immediately practical uses of human-level AI would be that couldn't just be performed by a human. Getting them killed, maybe.

This facsimile operated in part by synAC.
storyyeller More like giant cherries from Appleloosa Since: Jan, 2001 Relationship Status: RelationshipOutOfBoundsException: 1
#86: Feb 10th 2011 at 2:02:50 PM

Another question is how you would define human-level AI anyway.

Blind Final Fantasy 6 Let's Play
Arilou Taller than Zim from Quasispace Since: Jan, 2001
#87: Feb 10th 2011 at 2:21:48 PM

I mean, in a sense we're already producing sentient computing units every day.

They're called "children".

"No, the Singularity will not happen. Computation is hard." -Happy Ent
Deboss I see the Awesomeness. from Awesomeville Texas Since: Aug, 2009
#88: Feb 10th 2011 at 2:42:52 PM

If a strong AI would be that much of a load, or that irritating, I recommend against pursuing it further.

Fight smart, not fair.
SomeSortOfTroper Since: Jan, 2001
#89: Feb 10th 2011 at 3:47:55 PM

the quantum-level complexity of an entire CPU architecture.

The quantum level complexity...?? What the-??

No, look, I was going to be on-topic, I promise, but I caught sight of the time. I have a host of questions about the field of singularitarianism (though sometimes I confuse it with transhumanism); some are probably answerable and some are complicated, etc. However, there is one I'd like to leave on:

  • Define "breakdown of our ability to judge what would happen", and explain why, exactly, it is linked to exponentials.

This seems to get confused and changed around, and for something held with such certainty, it gets fuzzy about its axioms and about seriously defining its terms.

Pykrete NOT THE BEES from Viridian Forest Since: Sep, 2009
#90: Feb 10th 2011 at 3:57:05 PM

Nuclear power plants, the electric grid, and traffic control have been fine for the decades that computers have run them. All our passenger flights are computer-run, too. I think you guys take "computer error kills a billion people" totally out of context. It can't happen, because any mission-critical system is built with fail-safes. Nobody cares if Windows locks up, because you don't die. People do care if an autopilot system in an aircraft locks up, so it is designed with fail-safes.

Let me put it like this: you can trust "humans" to be somehow more reliable and get 10,000 deaths a year, or you can rely on a computer for 0 deaths a year, with a problem cropping up once every 20 years that kills 1,000 people.

I never said that it was a bad thing to have computers doing a large chunk of the work. Just that having human operators sanity-checking things and able to intervene manually if absolutely necessary would be a rather silly thing not to have.

It can't happen because any mission-critical system is built with fail-safes.

Please never work in QA for anything mission-critical. Even failsafes need failsafes, and eventually one of those failsafes should probably be a dude with an ax.
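The fail-safe-with-fallback pattern being argued over here can be sketched in a few lines of Java. This is purely illustrative (the class, method names, and timeout are made up, not from any real avionics system): run the primary controller with a deadline, and if it hangs or throws, fall back to a known-safe output.

```java
import java.util.concurrent.*;

// Illustrative fail-safe: a primary controller runs under a timeout,
// and a safe default takes over if it does not respond in time.
public class Failsafe {
    static int primaryController() {
        // Pretend this computes a control output; imagine it may hang.
        return 42;
    }

    static int safeFallback() {
        return 0; // a known-safe output, e.g. "hold level flight"
    }

    static int controlStep(ExecutorService exec) {
        Future<Integer> f = exec.submit(Failsafe::primaryController);
        try {
            return f.get(1, TimeUnit.SECONDS); // deadline on the primary
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            f.cancel(true);
            return safeFallback(); // the fail-safe path
        }
    }

    public static void main(String[] args) {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        System.out.println(controlStep(exec));
        exec.shutdownNow();
    }
}
```

Of course, this only moves the problem: the timeout logic itself can be wrong, which is exactly the "failsafes need failsafes" point.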

Tangent128 from Virginia Since: Jan, 2001 Relationship Status: Gonna take a lot to drag me away from you
#91: Feb 10th 2011 at 7:50:57 PM

I've found, just from programming simple toy projects, that edge cases pile up fast. And in programs, most errors have compounding effects.
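Even a toy example shows how edge cases accumulate. A minimal Java sketch (names are made up for illustration): a naive average function that looks fine on small inputs but quietly overflows on large ones and crashes on empty ones.

```java
public class EdgeCases {
    // Naive average: correct-looking for small cases.
    static int average(int[] xs) {
        int sum = 0;
        for (int x : xs) sum += x; // silently overflows near Integer.MAX_VALUE
        return sum / xs.length;    // throws ArithmeticException on empty input
    }

    public static void main(String[] args) {
        System.out.println(average(new int[]{2, 4, 6})); // 4
        System.out.println(average(new int[]{Integer.MAX_VALUE,
                                             Integer.MAX_VALUE})); // -1 (overflow)
        // average(new int[]{}); // would crash: division by zero
    }
}
```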

Computational power is not sufficient (though likely necessary) for strong AI. Ascribe a proposed AI program any motives you like- friendly, hostile, paperclip, whatever. Give it all the CPU cycles you want, and you still have no guarantee that it can implement its will on the world- it needs an accurate understanding of how the world works for that, which requires good input data.

Do you highlight everything looking for secret messages?
petrie911 Since: Aug, 2009
#92: Feb 10th 2011 at 8:42:06 PM

I'm not really sold on the whole singularity idea. There really are physical limits to what can be done. Not to mention that, from the description Lord Gacek posted earlier, the singularitarians sound like arrogant pricks. But perhaps the article was being uncharitable.

Also, singularitarians is an absolutely terrible name. The internal rhyme just completely kills any weight behind it. Singularitans would have been better.

And speaking of Gacek's post...

The atmosphere was a curious blend of Davos and UFO convention.

I read that as Davros. Which gives the whole subject a rather interesting bent.

Belief or disbelief rests with you.
jewelleddragon Also known as Katz from Pasadena, CA Since: Apr, 2009
#93: Feb 10th 2011 at 8:49:43 PM

tl;dr

Everyone knows that Kurzweil has been predicting the same stuff for decades, right? He wrote a book around 1990 and another one around 2000 that were predicting the same things he's predicting now.

He's got a great grasp on technology, a tenuous grasp on psychology, and a nonexistent grasp on philosophy.

storyyeller More like giant cherries from Appleloosa Since: Jan, 2001 Relationship Status: RelationshipOutOfBoundsException: 1
#94: Feb 10th 2011 at 9:09:05 PM

^^^ One reason I am skeptical of strong AI is that there is no real reason why people would even seek to replicate a human mind in the first place. You don't have to be human-like to act intelligently.

Blind Final Fantasy 6 Let's Play
Barkey Since: Feb, 2010 Relationship Status: [TOP SECRET]
#95: Feb 10th 2011 at 9:42:27 PM

^

I think there doesn't need to be a reason, people would just do it as an achievement in and of itself.

Myrmidon The Ant King from In Antartica Since: Nov, 2009
#96: Feb 11th 2011 at 7:36:07 AM

Man I wish I could make a meaningful comment other than "it sounds nice and I'm hopeful, but I'm not staking my bets on it".

Kill all math nerds
Ukonkivi Over 10,000 dead.:< Since: Aug, 2009
#97: Feb 11th 2011 at 7:42:40 AM

I'll be lucky if it comes before I die. Extremely lucky.

Genkidama for Japan, even if you don't have money, you can help!
breadloaf Since: Oct, 2010
#98: Feb 11th 2011 at 11:02:02 AM

Please never work in QA for anything mission-critical. Even failsafes need failsafes, and eventually one of those failsafes should probably be a dude with an ax.

Well, aside from the fact that I'd never voluntarily work in QA, I think you're missing the point. We're not running some simple system where a "dude with an axe" can just shut it down, just in case. What are you going to do during the landing of a space shuttle coming in from orbit? The autopilot fails, but don't worry, the dude with the axe will solve it! No: if the computer fails and manual control is not good enough, you die. Systems will only get more complex in the future, until humans are incapable of doing anything by hand.

I don't know if we can get sentient strong AI, but we can get our weak-AI tools to do a heck of a lot of things in the next 50 years. Even if they never become sentient, the work they're capable of accomplishing is enormous.

willyolio Since: Jan, 2001
#99: Feb 12th 2011 at 10:44:51 PM

I kind of wish it would become affordable before I die, but I don't think that's likely. On the other hand, the idea of immortality seems almost guaranteed at some point in the future, barring some humanity-destroying disaster.


Total posts: 99