I was referring to the design process.
What's precedent ever done for us?
Probably heavily automated, if not entirely automated by various algorithms.
Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
To a certain extent, yes, but there are very sophisticated tools that approach weak AI that help the chip developers do their jobs; there's no way a human brain could encompass the quantum-level complexity of an entire CPU architecture.
edited 10th Feb '11 12:10:22 PM by Fighteer
"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
The JVM is non-deterministic because you can't guarantee its results. Ever run stability testing for that piece of junk?
The design process of computers, computer systems, and software is run by people. The singularity is meant to be a point where we heavily use AI to design computer systems; eventually, stronger AI would design the systems for us.
Ever read Transmetropolitan? I think that's a fairly accurate prediction of what a hi-tech future would be like.
If we're still talking about the military using programming by the time this gets posted, the F-35 uses C++.
Fight smart, not fair.
Most actual computer scientists I've talked with are kind of sceptical of the entire idea (mainly that hard AI is even possible in a form that would be of any use whatsoever).
"No, the Singularity will not happen. Computation is hard." -Happy Ent
Yeah, that's an important thought. I'm not sure what the immediately practical uses of human-level AI would be that couldn't just be performed by a human. Getting them killed, maybe.
[1] This facsimile operated in part by synAC.
Another question is how you would define human-level AI anyway.
Blind Final Fantasy 6 Let's Play
I mean, in a sense we're already producing sentient computing units every day.
They're called "children".
If a strong AI would be that much of a load/irritating, I recommend against further pursuit.
The quantum-level complexity...?? What the-??
No, look, I was going to be on-topic, I promise, but I caught sight of the time. I have a host of questions about the field of singularitarianism (though sometimes I confuse it with transhumanism); some are probably answerable and some are complicated, etc. However, there is one I'd like to leave on:
- Define "breakdown of our ability to judge what would happen" and explain why, actually, it is linked to exponentials.
This seems to get confused and changed around, and for something held as such a certainty, it seems fuzzy on its axioms and on seriously defining things.
Let me put it like this: you can trust "humans" to be somehow more reliable and result in 10,000 deaths a year, or you can rely on a computer for 0 deaths a year, and then once every 20 years a problem crops up that kills 1,000 people.
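The trade-off above can be made concrete with a quick expected-value check. The figures are the post's hypotheticals, not real data:

```java
// Quick expected-value comparison of the post's hypothetical numbers:
// humans cause 10,000 deaths/year; the computer causes 1,000 deaths
// every 20 years (i.e. 50/year, amortized).
public class ExpectedDeaths {
    static double humanRate() { return 10_000.0; }           // deaths per year
    static double computerRate() { return 1_000.0 / 20.0; }  // 50 deaths per year, amortized

    public static void main(String[] args) {
        System.out.println(computerRate());                // 50.0
        System.out.println(humanRate() / computerRate());  // 200.0: humans ~200x deadlier
    }
}
```

In expectation, the "unreliable" computer still comes out roughly 200 times safer per year, even counting the rare catastrophic failure.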
I never said that it was a bad thing to have computers doing a large chunk of the work. Just that having human operators sanity-checking things and able to intervene manually if absolutely necessary would be a rather silly thing not to have.
Please never work in QA for anything mission-critical. Even failsafes need failsafes, and eventually one of those failsafes should probably be a dude with an ax.
I've found, just from programming simple toy projects, that edge cases pile up fast. And in programs, most errors have compounding effects.
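A toy illustration of the edge-case point (my example, not from the thread): a midpoint helper that is correct on small inputs but silently wrong on large ones, because the intermediate sum overflows a 32-bit int.

```java
// Edge cases pile up: midpointNaive looks correct and passes casual
// testing, but overflows when lo + hi exceeds Integer.MAX_VALUE.
public class EdgeCaseDemo {
    // Buggy: the intermediate sum (lo + hi) can overflow.
    static int midpointNaive(int lo, int hi) {
        return (lo + hi) / 2;
    }

    // Safe: the subtraction keeps the intermediate value in range.
    static int midpointSafe(int lo, int hi) {
        return lo + (hi - lo) / 2;
    }

    public static void main(String[] args) {
        System.out.println(midpointNaive(0, 10));                        // 5: looks fine
        System.out.println(midpointSafe(2_000_000_000, 2_100_000_000));  // 2050000000
        System.out.println(midpointNaive(2_000_000_000, 2_100_000_000)); // negative: overflow
    }
}
```

This exact bug sat in mainstream binary-search implementations for years, which is the "compounding effects" problem in miniature.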
Computational power is not sufficient (though likely necessary) for strong AI. Ascribe a proposed AI program any motives you like- friendly, hostile, paperclip, whatever. Give it all the CPU cycles you want, and you still have no guarantee that it can implement its will on the world- it needs an accurate understanding of how the world works for that, which requires good input data.
Do you highlight everything looking for secret messages?
I'm not really sold on the whole singularity idea. There really are physical limits to what can be done. Not to mention that, from the description Lord Gacek posted earlier, the singularitarians sound like arrogant pricks. But perhaps the article was being uncharitable.
Also, singularitarians is an absolutely terrible name. The internal rhyme just completely kills any weight behind it. Singularitans would have been better.
And speaking of Gacek's post...
The atmosphere was a curious blend of Davos and UFO convention.
I read that as Davros. Which gives the whole subject a rather interesting bent.
Belief or disbelief rests with you.
tl;dr
Everyone knows that Kurzweil has been predicting the same stuff for decades, right? He wrote a book around 1990 and another one around 2000 that were predicting the same things he's predicting now.
He's got a great grasp on technology, a tenuous grasp on psychology, and a nonexistent grasp on philosophy.
^^^ One reason I am skeptical of strong AI is that there is no real reason why people would even seek to replicate a human mind in the first place. You don't have to be human-like to act intelligently.
^
I think there doesn't need to be a reason, people would just do it as an achievement in and of itself.
Man I wish I could make a meaningful comment other than "it sounds nice and I'm hopeful, but I'm not staking my bets on it".
Kill all math nerds
I'll be lucky if it comes before I die. Extremely lucky.
Genkidama for Japan, even if you don't have money, you can help![1]
Well, aside from the fact that I'd never voluntarily work in QA, I think you're missing the point. We're not running some simple system where a "dude with an axe" can just shut it down just in case. What are you going to do in a landing of a space shuttle coming in from orbit? The autopilot fails, but don't worry, the dude with the axe will solve it! No: if the computer fails and manual control is not good enough, you die. Systems will only get more complex in the future, until humans are incapable of doing anything by hand.
I don't know if we can get sentient strong AI, but we can get our weak AI tools to do a heck of a lot of things in the next 50 years. Even if they never become sentient, the work they are capable of accomplishing is very great.
I kind of wish it would become affordable before I die, but I don't think that's likely. On the other hand, the idea of immortality seems almost guaranteed at some point in the future, barring some humanity-destroying disaster.
Also, the JVM might be treated as non-deterministic because, in a multi-threaded environment, you can't predict the scheduler.
edited 10th Feb '11 12:02:35 PM by Yej
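A minimal sketch of that scheduler point (my example, not from the thread): two threads append to a shared list, and while the *contents* of the list are deterministic, the *interleaving* depends entirely on how the JVM and OS schedule the threads, so it can differ from run to run.

```java
import java.util.*;

// Two worker threads append to a shared, synchronized list. The set of
// entries is always the same; the order they arrive in is up to the
// scheduler, which is why the output "looks" non-deterministic.
public class SchedulerDemo {
    // Runs two worker threads and returns the interleaved log.
    static List<String> run() throws InterruptedException {
        List<String> log = Collections.synchronizedList(new ArrayList<>());
        Runnable worker = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 0; i < 3; i++) {
                log.add(name + "-" + i);
            }
        };
        Thread a = new Thread(worker, "A");
        Thread b = new Thread(worker, "B");
        a.start();
        b.start();
        a.join();
        b.join();
        return log;
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> log = run();
        System.out.println(log); // interleaving varies between runs
        List<String> sorted = new ArrayList<>(log);
        Collections.sort(sorted);
        System.out.println(sorted); // always [A-0, A-1, A-2, B-0, B-1, B-2]
    }
}
```

So the language semantics are deterministic, but any program whose result depends on thread ordering inherits the scheduler's unpredictability.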