Well, the way modern tech appears to be going, by the time we figure out how to program certain skills into AI, they'll have been fully capable of running society for a while. Not sure if they should, though.
The main question with a superintelligent AI running society is which values it optimises for. GDP is a commonly proposed metric, but because of Goodhart's law an AI optimising for GDP would drive humanity into some sort of consumerist-slavery dystopia, spending every available resource, including humans, on increased production. Possibly even destroying all humans and tiling the universe with dollar signs. Optimising for happiness would mean we get smileys painted on our souls, or get recycled as raw material for tiling the universe with very small, very happy creatures. Optimising for scientific progress would result in the universe being converted into the computer itself, or tiled with very small scientists asking questions and receiving answers (seeing a pattern here?).
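The Goodhart problem above can be sketched in a few lines: an optimiser that only sees a proxy metric picks whichever action maximises the proxy, however badly it scores on the true objective. This is a minimal toy, with all action names and numbers invented for illustration.

```python
# Toy illustration of Goodhart's law: an optimiser that maximises a
# proxy metric ("gdp") picks the action that wrecks the true
# objective ("wellbeing"). All names and numbers are made up.
actions = {
    "balanced growth":     {"gdp": 5, "wellbeing": 5},
    "consumerist slavery": {"gdp": 9, "wellbeing": -10},
    "leisure society":     {"gdp": 2, "wellbeing": 8},
}

best_by_proxy = max(actions, key=lambda a: actions[a]["gdp"])
best_by_value = max(actions, key=lambda a: actions[a]["wellbeing"])

print(best_by_proxy)  # consumerist slavery
print(best_by_value)  # leisure society
```

The divergence between the two answers is the whole problem: the proxy-maximising choice looks optimal to the AI even while it destroys what we actually cared about.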
To avoid such problems, the only target a superintelligent AI could responsibly be optimised for is the whole of human desires, morality and volition. At that point we can start talking about what the AI will do to actually benefit us, though it's pretty much irrelevant: a superintelligence that shares human morality and everything else we want, and is neither biased, corruptible nor inefficient (those are pretty much included in the definition of superintelligence), would very likely be an improvement over any system that relies on biased, corruptible and stupid humans, no matter what it does while in charge.
Setting superintelligences aside, using AIs instead of human decision-makers has already been shown to be potentially more efficient in certain fields, provided the programmers are capable of doing their jobs, because the AI can avoid typical human failure modes and make impartial judgements. But if the AI is flawed, then Garbage In, Garbage Out obviously applies.
In general, the issue with anything involving human coordination is our failure to think, instead playing status games and identity politics, getting corrupted by power and mind-hijacked by rhetoric. Anything that avoids these while retaining human morality would be superior, even if its intelligence per se wasn't above human level. Even an impossibly rational and incorruptible human with absolute authority would suffice, but no such thing exists, no matter how fond political movements are of their leaders.
I'd set it for scientific advancement.
Well, aside from that one-liner, you can set it for a wide number of goals all at once, attempting to maximise all values. But, as with democracy, a big thing is for the AI to know it is making a mistake even if the metrics it is using appear to indicate otherwise, so that it knows, "I clearly need to look at the situation from more angles". I think this would be something along the lines of voting: "Your optimisation doesn't satisfy me because..."
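That "your optimisation doesn't satisfy me" channel could be sketched as a simple veto on top of the metrics: even when the AI's own measurements look fine, enough dissatisfaction votes force a review. The threshold and vote counts here are invented for the sketch.

```python
# Sketch of a complaint channel that can override the AI's metrics.
# The 10% threshold and the numbers below are arbitrary assumptions.
COMPLAINT_THRESHOLD = 0.10  # review if more than 10% of people object

def needs_review(metric_ok: bool, complaints: int, population: int) -> bool:
    """Metrics alone can't clear a policy; human feedback can veto it."""
    return (not metric_ok) or (complaints / population > COMPLAINT_THRESHOLD)

print(needs_review(metric_ok=True, complaints=200, population=1000))  # True
print(needs_review(metric_ok=True, complaints=50, population=1000))   # False
```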
The ultimate AI would take in everyone's concerns individually to shape society as a whole.
These questions aren't any different from the ones we face in politics anyway. The AI would just answer them faster. We set an arbitrary line that we call "poverty", and then it figures out how to run the economy such that no one's below it.
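The poverty-line idea is just a hard constraint supplied by humans: the line itself is an arbitrary policy input, and any candidate way of running the economy must clear it. A minimal sketch, with the threshold and incomes invented:

```python
# Minimal sketch of "set a poverty line, then require no one below it".
# The line is a human-chosen policy input; the check is a hard
# constraint any candidate allocation must satisfy. Numbers invented.
POVERTY_LINE = 12_000  # arbitrary threshold, set by humans, not the AI

def satisfies_constraint(incomes, line=POVERTY_LINE):
    """A candidate economy passes only if every income clears the line."""
    return all(income >= line for income in incomes)

print(satisfies_constraint([15_000, 30_000, 12_000]))  # True
print(satisfies_constraint([15_000, 9_000]))           # False
```

The point is the division of labour: humans pick the line, the optimiser only searches among allocations that satisfy it.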
Frankly, I don't think the idea is very interesting because all we'll do is program it to do what we do anyway.
edited 29th Jan '11 2:33:23 AM by Clarste
Except that with any luck, its conclusions would come a lot quicker. AIs can multitask and analyze faster than a human mind can, and they're immune to other limitations of the human mind (such as subconscious biases or hormonal/chemical imbalances).
Also, any utopian ruler would have to treat the happiness of its subjects, the advancement of culture, and the generation of ideas as equally important factors.
Too much of one of the three over the others and you end up with either a society where everyone's miserable but a select few geniuses have figured out how to colonize planets, a society where everyone is happy but has the lifespan of a gerbil, or a Lotus-Eater Machine of a society where nothing ever changes and life exists purely for its own sake. The problem with human history is that popular opinion, public image, and/or self-entitlement have held back one or more of these three things. Any machine entrusted to run society has to be programmed to find a way to maximize all three at minimal cost to any.
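One way to formalise "maximise all three at minimal cost to any" is a maximin rule: score each candidate society by its weakest factor, so the optimiser can't trade one factor down to zero for gains in the others. The candidate societies and scores below are invented purely to mirror the three failure modes above.

```python
# Maximin sketch: rank candidate societies by their *weakest* factor,
# so sacrificing any one factor tanks the overall score. All names
# and numbers are invented for illustration.
candidates = {
    "miserable geniuses": {"happiness": 1, "culture": 3, "ideas": 9},
    "happy gerbils":      {"happiness": 9, "culture": 2, "ideas": 1},
    "lotus eaters":       {"happiness": 8, "culture": 1, "ideas": 0},
    "balanced":           {"happiness": 6, "culture": 6, "ideas": 5},
}

def maximin_score(factors):
    """A society is only as good as its most neglected factor."""
    return min(factors.values())

best = max(candidates, key=lambda c: maximin_score(candidates[c]))
print(best)  # balanced
```

Each of the three dystopias scores 0 or 1 despite excelling somewhere, which is exactly the behaviour the post asks for.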
edited 29th Jan '11 3:53:57 AM by KingZeal
It seems inevitable to me. We already have many systems required for the day-to-day running of business, agriculture and manufacturing that are run with the aid of AI. I don't think we need it, though.
For instance, http://en.wikipedia.org/wiki/High-frequency_trading
70% of the stock market is controlled by computers that can trade a thousand times faster than you.
edited 29th Jan '11 2:13:50 PM by Yej
And thanks to Walmart's innovations, they're beginning to run the trucking system too, for corporations that can afford JIT inventory systems. Computers decide what is shipped, when it is shipped and who ships it. No human does anything unless a mistake comes up and they have to fix the software to correct it.
Doing the same stuff as humans, except faster, is actually a big deal.
I suppose most of the questions are: how do you fix mistakes, and who makes sure the code is doing something objective? In the case of corporations, their AI systems are all going to be beneficial to them, because they need them to be in order to stay profitable and competitive. In the case of government, different political groups would compete to make their view the "correct" one. So in that case, perhaps open-source projects would be useful (even if only 0.01% of the population knows how to check said code).
I think an AI-run system should be above party politics and focus on how to best represent the desires of the population.
A big thing that worries me about an AI-run society is that, even if you program an AI to work under certain circumstances, those circumstances may change as little as a week later. Humans are bad enough at responding to political change; I don't feel like putting any sort of power in the hands of an algorithm that isn't as flexible as a human.
And that's not even going to the possibility that the AI floods the building with a deadly neurotoxin or starts executing its directions in ways no one meant it to long after its designers are gone. (Sorry, just had to make the joke.)
edited 2nd Feb '11 8:38:49 PM by Linhasxoc
One prevalent view I see regarding an AI-run society is the assumption that if we use computers to decide anything, then humans will instantly have no involvement in the political system at all. There also appears to be an incredible bias towards top-down systems.
I just wonder: if someone introduced a bottom-up style AI society, would that be more acceptable? The idea is this: each person uses various computer/electronic and advanced AI systems to help them in their day-to-day activities. Where should I shop to get the groceries I want? Which route gets me to work fastest while avoiding traffic? And then, of course, the big one: who should I vote for in the upcoming elections?
It would depend. On one hand, a computer would be a lot more reliable than the screeching incompetents we have at the helm now, but on the other hand, as has been stated before, software can have some very unpleasant bugs in it.
Interestingly, on that point, all of our most critical systems are no longer human-run. We've chosen the risk of catastrophic software bugs over the deaths and accidents caused by humans. Passenger planes, trains, rail systems, metros, the power grid, nuclear power plants and so on are all computer-run, but with a human sitting there just in case (although there have been cases where the computer was right and the human refused to believe it, causing a plane crash, and another where a suicidal pilot killed everyone on board). So we've reduced social decisions in those areas to a very plain metric: how many deaths? With social policy, the metric isn't that simple.
Well, I can tell you that many systems use software that's positively ancient just because it is almost completely bug-free and secure, and the amount of software engineering needed to update it and make it as secure is simply not worth it.
And, come to think of it, what if we did have a (top-down) system like the one from my flavor quote, where you've got multiple AI making the decisions that are variations on one program?
Finally, a bit of thought on bottom-up systems: for one thing, we already have software to tell us how we should vote. I honestly don't see how existing software tools for that, or for other routine tasks like shopping, could be made more useful by basing them on an AI.
They're already based on AI, as is any search functionality and so on. My real point was more that it would become as routine as glancing at your wristwatch: whenever you have a question, you'd query secondary advice from a handheld AI.
Well, if you have separate systems, that's more in line with how I'd expect a top-level AI-run society to work anyway, because that's how you'd build the system. You'd put it together piece by piece, separate tools at first, before merging them into something that is more or less a bunch of different AIs working together to spit out comprehensive answers.
Well, I see different expectations of AI here: some of you expect it to be totally useless, others see it as a tool used by humans, others as a superhuman intelligence to guide us. I can see different variations of oversight, from the AI merely putting out recommendations to the AI having complete and exact control over the system. Given how varied our current government systems are, it's easily possible that all of the above happen.