So why do you people disagree with "The singularity happened in England 1712."? :P
A guy called dvorak is tired. Tired of humanity not wanting to change to improve itself. Quite the sad tale.
Well, the idea of there being multiple singularities is rather at odds with the exponential extrapolation behind the idea of the singularity.
[1] This facsimile operated in part by synAC.
What I'm saying is that, apart from artistic expression, what job could a human even perform that an AI couldn't accomplish or automate better? It's like having automated assembly lines building the perfect car: what's the point of a human who wants to work saying "I'm going to go build cars"?
Sure, it's something to do, but what's the point? It's not work that needs to be done. The level of competency a strong AI possesses essentially means there is absolutely no demand for humans to do much of anything. And even setting the need for work aside, our quality of work would be so much lower than what the machines produce that we would just be in the way.
It's like being a master widget maker. You're pumping out absolutely perfect widgets at a rate nobody else can equal. A young man walks up and starts making widgets alongside you, at half the pace and with half the quality in the finished widget. You already have the capacity to produce more widgets than anyone needs, so there's no reason for the young man to craft widgets in the first place; all he is doing is wasting resources to produce an inferior product that nobody will want, for nothing other than personal enjoyment.
I don't want to live in a world like that, I want the accomplishments of our civilization to be human accomplishments born on the backs of human effort, not by proxy using a strong AI. In this particular case I don't want the easy way, because we would sacrifice much of our own humanity in doing so.
edited 3rd Jun '11 12:00:16 PM by Barkey
Tzetze: I guess it's because jumping from x^2 to x^3 is quite a large jump? Hence when a large jump occurs instead of just the normal steady growth, it can be considered a new singularity.
A mathematical singularity is a point at which a mathematical object is not defined. The idea behind The Singularity is that if the technological acumen of humanity is graphed, with x as time and y as technological acumen, there's a point in the future where dy/dx is infinite, so y is undefined there, and that point is a singularity. You can't have more than one. "Singularity" doesn't mean "paradigm shift".
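To make that concrete, here's a minimal sketch (mine, not from the thread; the date t_s = 2045 is an arbitrary placeholder) contrasting hyperbolic growth, which genuinely blows up at a single point in finite time, with exponential growth, which is finite and well-defined everywhere:

```python
# Hyperbolic growth y(t) = C / (t_s - t) has a vertical asymptote at
# t = t_s: the one point where y is undefined, i.e. a true singularity.
# A plain exponential never produces one, no matter how fast it grows.
import math

def hyperbolic(t, C=1.0, t_s=2045.0):
    """Hyperbolic growth: grows without bound as t approaches t_s."""
    return C / (t_s - t)

def exponential(t, C=1.0, k=0.05, t0=2000.0):
    """Exponential growth: finite and defined at every t."""
    return C * math.exp(k * (t - t0))

# Approaching t_s, hyperbolic growth exceeds any fixed bound...
assert hyperbolic(2044.999) > 900
# ...while the exponential is still modest at the very same date.
assert exponential(2044.999) < 100
```

So the "one singularity" reading above corresponds to the hyperbolic model; steady exponential extrapolation alone never reaches an undefined point.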
Except I don't think anyone's claiming there will be any physical infinities before, during, or after the "singularity".
"Had Mother Nature been a real parent, she would have been in jail for child abuse and murder." -Nick Bostrom
So the extrapolators can't even do math right. Tch.
It's just a fancy sounding buzzword, no need to read too much into it.
edited 3rd Jun '11 12:19:17 PM by LoveHappiness
I can't think of any non-piecewise function that has infinite dy/dx for any defined x.
Da Rules excuse all the inaccuracy in the world. Listen to them, not me.
Well yeah, "undefined" would be more accurate.
I believe that the Timecube holds these answers, and more.
In any case...
The problem I have with this is that I see it as a fairly arbitrary line. Humans can't accomplish much without our tools anyway.
^
When you let the tools make the decisions or get creative is when you start to blur the lines way more than I'm comfortable with. There need to be some very watertight guidelines established and upheld with extreme prejudice before we even start to dabble in the area of Artificial Intelligence.
Even then, I'm not terribly comfortable with the entire concept. Technology should make the things we accomplish easier to accomplish, it shouldn't do it for us.
edited 3rd Jun '11 12:57:26 PM by Barkey
Uh, you're at least sixty years late.
What constitutes "making a decision" or "getting creative"? I don't think that this is a distinction as obvious as you seem to think. As a trivial example, if I search for "one lov" on Google, can Google's AI be said to be "deciding" that I actually meant "one love"?
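The "did you mean" case can be made concrete. Google's real spell correction is statistical and vastly more sophisticated, but a toy sketch (mine, assuming a small fixed vocabulary) shows that this kind of "decision" can be plain, deterministic computation:

```python
# Toy "did you mean" corrector: pick the vocabulary entry with the
# smallest Levenshtein edit distance to the query. This is nothing like
# Google's actual system; it only illustrates that "deciding" what the
# user meant can reduce to an ordinary minimization.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def did_you_mean(query: str, vocabulary: list[str]) -> str:
    """Return the vocabulary entry closest to the query in edit distance."""
    return min(vocabulary, key=lambda w: edit_distance(query, w))

print(did_you_mean("one lov", ["one love", "one law", "on loan"]))  # -> one love
```

Whether you call that minimization a "decision" is exactly the blurry line being argued about.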
edited 3rd Jun '11 1:03:27 PM by Tzetze
It's tricky, obviously.
And by dabble I meant seriously get anywhere close to achieving such a thing as a self-aware AI.
It's not something I'm qualified to make legislation about or anything, but I reserve the right to have the heebie-jeebies about the whole concept itself.
At which point they stop being tools and become citizens.
To expand: if we were to "uplift" another species to have human-like cognition, would you object to them becoming valued members of society? (You might, many would.) What about the more likely scenario than strong AI, that augmented human minds become capable of recursive self-improvement? Do they suddenly lose human rights just because they're more powerful than the known comfort zone? A lot of people would have a hard time answering yes to that one, regardless of gut feeling.
If the intelligence is capable of reasoning, I say that its status as originally machine shouldn't have any impact on how we judge it; and unless you're willing to place the same restrictions on humans capable of dramatic self-enhancement (why stop there? why not curtail education, proper nutrition and physical exercise? or just declare Year 0 and stop people wearing glasses?) there shouldn't be any reason to restrict synthetic intelligences from the same. At which point it starts to look a bit more like Luddism for its own sake.
edited 3rd Jun '11 1:31:22 PM by Jinren
^
I would say yes, simply because the risk of what could go wrong is too great.
When the risk is that a strong AI could end up being not-so-benevolent and running rampant, possibly destroying most or all of our race, I prefer not to take that risk at all. If there's even a one percent chance it could happen, I feel it isn't worth it. For Science! is awesome when it doesn't threaten our way of life the way this does.
And by threaten our way of life, I'm talking about how easy things would be with strong AIs just as much as the fact that they could turn and kill us all. Neither outcome is really acceptable to me, and even if it means denying rights to a sentient thing we've created, I'm willing to go to that length.
I'm not talking about some primitive-loving Luddite beliefs here; I like our technology today, and there's up-and-coming stuff that really excites me. The possibility of essentially creating God, however, is not one of them. (I'm referring to it as "God" because of the depth of power such a being would theoretically have.) We create technology that renders things obsolete at a pretty fast rate these days, but I will never support creating something that makes humans obsolete, even if the entire thing hinges on how the creation feels about that particular subject and the appropriate course of action it wishes to take.
edited 3rd Jun '11 1:45:27 PM by Barkey
I am actually impressed by your intellectual honesty.
...yeah, I have nothing to add, sorry.
edited 3rd Jun '11 1:47:29 PM by Jinren
It's a military thing. I remember when we were discussing Yudkowsky's AI Box experiment, and I really wish I could do the experiment with him. When we discussed it here a while back I decided that there was no way I would let the AI out of the box, simply because I'd treat it as something which has no rights whatsoever, and my only job is to make sure nobody else touches that god damn button. Turning off empathy at will has its perks.
At this point, the fear of apocalyptic AIs (even assuming they will be invented) seems a bit... if not unrealistic, then still quite premature as a conclusion.
edited 3rd Jun '11 1:56:40 PM by LoveHappiness
The legal system, and human emotions, are made to work assuming all involved are humans. Throw superintelligent immortals in the mix, and things go out of balance.
Point that somewhere else, or I'll reengage the harmonic tachyon modulator.
^^
I don't feel that way in the slightest. The theoretical power of a strong AI, and the workload it would take on (even to the point of fulfilling nearly all of our civilization's major functions without the need for the judgment of a human operator), makes it seem pretty logical that we're just a waste of space and resources standing in the way of efficiency.
Theoretically, a strong AI is the prime example of those lyrics "Anything you can do, I can do better." It makes us obsolete.
I'd say perhaps they would keep us as pets, but there's no sure bet that any of the functions a pet serves for a human could be served by us for an AI. Do they require affection? Because they certainly wouldn't need us to hunt for them or protect them.
edited 3rd Jun '11 2:01:43 PM by Barkey
My point being that even assuming these things will be invented, it's pure speculation to assume a negative outcome from the start. Just highly speculative...
I like to think that when the possibility of such a thing happening is even remotely realistic, cautious speculation is a good idea.
Just remember, the concept of a general-purpose calculating machine has not even existed for 100 years and look how far we are already.
edited 3rd Jun '11 6:48:33 AM by Yej