Seems like too broad a question. What would a government run by humans be like?
[1] This facsimile operated in part by synAC.
I think you're underestimating their complexity. I won't go on a killing spree just because a few neurons die. Analogously, a bug probably couldn't cause a large change in the behavior of a piece of software as complex as an AI.
edited 25th Jan '11 8:12:28 PM by Tzetze
Well, let me clarify: I wanted a discussion on how you think an AI-run society would be. That is, what do you think are the most likely first scenarios should we have sentient AI? What do you think we would use the AI to do? What would the AI do? What would it do differently? How would we integrate it into our government system?
I'm not asking for realism, I'm merely asking for what it looks like in your mind, what it sparks your imagination to see.
I just don't see how the discussion is worthwhile. No one knows anything about the subject, and it's a hopelessly vague and open-ended question anyway.
Blind Final Fantasy 6 Let's Play
Actually, what I wanted to see was this. I'm tinkering with a story idea about societies developing AI and proposing it for use in government policy.
But what I wanted to see is how people, like the ones on this board, with basically the knowledge they have right now, would think of this and react to it. You're not going to get any more knowledgeable about AI before it arrives; if we introduced it tomorrow, how would you vote on the issue?
That would depend on the nature of the AI. Which we don't know, because strong AI doesn't exist.
Ruthlessly pursue advances in technology.
Fight smart, not fair.
I imagine the reason people would try living in a computocracy is that an AI could be vastly more intelligent and faster than a human ruler, could implement its decisions immediately via a robot army, is incorruptible and never needs to be rotated out of government when it gets too comfy, and has a dedication and selflessness no human could achieve.
Of course, that's the hope. The reality might be different, and it certainly is in most stories that have humans ruled by computers. We humes squirm at the idea, it doesn't feel right, and a monstrous robot despot makes for a great archenemy.
"Well, let me clarify: I wanted a discussion on how you think an AI-run society would be. That is, what do you think are the most likely first scenarios should we have sentient AI? What do you think we would use the AI to do? What would the AI do? What would it do differently? How would we integrate it into our government system?"
1) I'm pretty sure threads like this are supposed to go in World Building.
2) It would probably be initially developed by the military (for aircraft control, ship control, logistics coordination, air-defense coordination, artillery targeting, whatever) or by Google as an aid for search engines and keeping track of personal information.
It would most likely be used the same way we already use computers only more so. Keeping track of and organizing information, mostly. It would also, of course, be under the direct control of a sysadmin, and the nominal control of whoever's in charge (manager of the office it's in, CO of a military unit, whatever.)
@betaalpha: Eh heh heh.
Pros:
- vastly more intelligent and faster than a human ruler
- could implement its decisions immediately via a robot army
- is incorruptible and never needs to be rotated out of government when it gets too comfy
- has a dedication and selflessness no human could achieve
Cons:
- vastly more intelligent and faster than a human ruler
- could implement its decisions immediately via a robot army
- is incorruptible and never needs to be rotated out of government when it gets too comfy
- has a dedication and selflessness no human could achieve
As for the thing itself, I'd say my view is more or less like Wanderhome's, though I'm not so sure about the sysadmin having direct control (at least in practice). Save for it taking over (which would require the Everything Is Online trope, and likely more), I don't see us, in the foreseeable future, giving up power to let the machine lead.
I'm waiting for what will come out of this thread, by the way.
Assuming the creators of the AI aren't complete putzes, they'll build in as many controls as possible, like requiring the approval of designated people (be they admins, employees with access clearance, whatever) for certain actions. Say, the AI can come up with a budget, but it has to be approved by the CFO.
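The approval control described above could be sketched as a simple gate: the AI may propose anything, but gated actions only run after a human signature. This is a minimal, hypothetical sketch; all names (GovernanceAI, ApprovalRequired, the "CFO" role) are made up for illustration.

```python
class ApprovalRequired(Exception):
    """Raised when a gated action has no human sign-off yet."""
    pass

class GovernanceAI:
    def __init__(self, approvers):
        self.approvers = approvers   # gated action -> designated human, e.g. {"budget": "CFO"}
        self.approved = set()        # actions a human has signed off on

    def approve(self, action, signer):
        # Only the designated person may sign off; anyone else is ignored.
        if self.approvers.get(action) == signer:
            self.approved.add(action)

    def execute(self, action):
        # A gated action never runs without its human signature.
        if action in self.approvers and action not in self.approved:
            raise ApprovalRequired(f"{action} needs sign-off from {self.approvers[action]}")
        return f"executed {action}"

ai = GovernanceAI({"budget": "CFO"})
try:
    ai.execute("budget")             # blocked: no sign-off yet
except ApprovalRequired:
    pass
ai.approve("budget", "CFO")
print(ai.execute("budget"))          # executed budget
```

The point of the design is that the veto lives outside the AI's own code path: the `approved` set is only written by `approve`, which a human calls.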
- That would be like asking about a government run by Fairies
- Neither exists
- Fairies don't exist
As far as a society run by A.I. goes... I don't see it. Maybe a society run by a monarch who uses an A.I. to implement his authority, but a straight computocracy would never be accepted by the people.
My name is Cu Chulainn. Beside the raging sea I am left to moan. Sorrow I am, for I brought down my only son.
If we're talking about human-level AI, it wouldn't run any differently than a human-run government.
If we're talking about a post-human-level AI, then it runs however the computer wants it to run. This is hopefully in our interest, but doesn't have to be. However, "acceptance by the people" is a non-issue. The Computer Is Your Friend, literally.
From what I can tell, what if it decides, given a program to create an efficient or functioning society, that people are the problem? To me, even if it were made, it would need an entire human overwatch that could steer it toward their interests.
Well he's talking about WWII when the Chinese bomb pearl harbor and they commuted suicide by running their planes into the ship.
I won't describe it right, but a computer will follow its programming without any real thought. However complicated the programming (written, after all, by a fallible person), the issue remains that there isn't self-conscious thought, only a remarkable illusion thereof.
And the more complicated the code, with its millions upon millions of lines, the more likely a conflict will come up that leads to a weird solution.
To me, 2001 didn't portray technology as evil, only limited. HAL was given two orders, to simplify: 1) never lie to the crew, and 2) don't tell the crew their real mission. To it, the only solution that satisfied both requirements was to eliminate the crew. Even the golden law of robotics was portrayed as limited.
It will follow its program even if the program doesn't make sense or is broken. Video-game AI is a poor example, but consider the Patriot anti-missile programming. Every time it fired, its aim would drift slightly off, because of recoil or something, but it was never told to adjust its aim. This led to a situation where it fired at a SCUD missile and missed, and it basically reached a dead end, since as far as it knew it was supposed to have hit it.
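The aiming story above is really a point about missing feedback: a program whose loop never measures its own miss will accumulate error forever, while one extra correction step keeps it bounded. A toy sketch, with made-up numbers (the real Patriot incident reportedly involved other factors, such as clock drift, so treat this purely as an illustration of open-loop vs. corrected control):

```python
def fire_open_loop(shots, drift_per_shot=0.5):
    # No feedback: recoil nudges the aim each shot and nothing corrects it.
    aim_error = 0.0
    errors = []
    for _ in range(shots):
        errors.append(aim_error)      # error at the moment of firing
        aim_error += drift_per_shot   # recoil pushes the aim off, uncorrected
    return errors

def fire_closed_loop(shots, drift_per_shot=0.5):
    # Same recoil, but one added step: observe the miss and re-zero the aim.
    aim_error = 0.0
    errors = []
    for _ in range(shots):
        errors.append(aim_error)
        aim_error += drift_per_shot   # same recoil drift
        aim_error = 0.0               # feedback step the open-loop version lacks
    return errors

print(fire_open_loop(4))    # [0.0, 0.5, 1.0, 1.5] -- error grows unchecked
print(fire_closed_loop(4))  # [0.0, 0.0, 0.0, 0.0] -- error never accumulates
```

The program "follows its program" faithfully in both cases; only the second was ever told that missing is something to react to.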
Logic bombs affect computers more, in my opinion, because they can't just accept or ignore them; they have to somehow find a solution or a workaround.
But I don't know shit about programming, so ignore what I say. :V

Okay, so I wanted to get a sense of opinions on two things on the topic of a society run by sentient artificial intelligence.
a) Would you agree/disagree with such a concept and why?
b) What do you believe it would look like? (Since I imagine this would be related to whether you agree/disagree that humans should be led by sentient artificial intelligence)