AI Run Society


Yej (4 Score & 7 Years Ago) Relationship Status: They can't hide forever. We've got satellites.
#26: Jan 26th 2011 at 10:25:59 AM

A computer will follow its programming without any real thought.
Look at me still talking when there's science to do...

saladofstones :V from Happy Place Since: Jan, 2011
#27: Jan 26th 2011 at 10:29:50 AM

I saw a lot about algorithms, which proves my point, it seems.

"the creativity, expertise, and the recognition of importance is still dependent on human judgment. The main problem remains the same: how to codify a complex frame of reference."

Okay, so it basically proves my point if I understand it right.

Actually I don't know if I understand it at all. :V

Some of the more reasonable comments hold that the article is overstating what actually happened, so I'm neutral now.

edited 26th Jan '11 10:32:24 AM by saladofstones

Well he's talking about WWII when the Chinese bomb pearl harbor and they commuted suicide by running their planes into the ship.
breadloaf Since: Oct, 2010
#29: Jan 26th 2011 at 10:34:26 AM

No no, I want perceptions of people with or without the requisite knowledge on the issue. Democracies don't run as technocracies; everybody votes on every issue whether or not they have a clue about the subject matter. So your description of what you think is useful, even if it's inaccurate.

So when I speak of an AI-run society, you picture a single computer system controlling absolutely everything?

saladofstones :V from Happy Place Since: Jan, 2011
#30: Jan 26th 2011 at 10:35:32 AM

That depends. I don't see what multiple AIs could really accomplish if they are otherwise the same, and if they have different definitions of "good", that would cause problems, wouldn't it?

AI-run, to me, means that an advanced AI dictates how society should be managed in the same way as the boss of a company.

edited 26th Jan '11 10:36:08 AM by saladofstones

Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
#31: Jan 26th 2011 at 10:36:21 AM

It would cause problems in the same way regular representative democracy causes problems (sides not agreeing, etc.), I suppose.

edited 26th Jan '11 10:36:35 AM by Tzetze

[1] This facsimile operated in part by synAC.
breadloaf Since: Oct, 2010
#32: Jan 26th 2011 at 10:37:53 AM

Well, I don't want to pollute the discussion too much with my own expertise in the subject matter but let me give you a hypothetical scenario:

Say the USA put to a referendum whether or not it should use an AI to determine interest-rate changes at the Federal Reserve.

Also on the ballot: a state referendum on whether to have an AI run the power grid.

So I ask you: what would convince you to vote one way or the other on the issue?

saladofstones :V from Happy Place Since: Jan, 2011
#33: Jan 26th 2011 at 10:38:02 AM

The problem with an AI is that you have to tell it everything it needs to do. I was once told, in rather large detail, all the things you would need to tell an AI to do just to open a door.

I don't see the advantage to it because, assuming it couldn't learn, or couldn't learn in a rational way, it would be static and would need some human influence telling it about new issues and how to view them. I think.

@breadloaf: If the AI's decisions had oversight, if it could be proved it would do a better job than a person, and if it couldn't be manipulated; just like in any other case.

edited 26th Jan '11 10:39:15 AM by saladofstones

breadloaf Since: Oct, 2010
#34: Jan 26th 2011 at 10:38:41 AM

That's current AI, yes; we're presuming one more advanced, capable of learning to some degree.

EDIT: Okay so basically, you would want a human to make the final decision. How would you expect the AI to be used? Or how would you want the public to be able to see/interact with the AI?

edited 26th Jan '11 10:39:27 AM by breadloaf

Yej (4 Score & 7 Years Ago) Relationship Status: They can't hide forever. We've got satellites.
#35: Jan 26th 2011 at 10:42:12 AM

saladofstones, AI has progressed to the point that, in some limited circumstances, you can simply give it information and let it draw its own conclusions, as in the example of someone feeding a machine pendulum data and getting the laws of mechanics out.
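The pendulum example sounds like symbolic regression (work along the lines of Schmidt and Lipson's Eureqa, which recovered pendulum laws from raw motion data). A toy sketch of the idea, with a deliberately tiny, made-up candidate list, might look like:

```python
import math

# Toy "law discovery": given measured (length, period) pairs for a pendulum,
# search a small space of candidate formulas and keep the one with least error.
# The candidate list and data here are illustrative assumptions, not a real system.

G = 9.81  # gravitational acceleration, m/s^2

def make_data():
    # Synthetic "measurements": the true pendulum law T = 2*pi*sqrt(L/g)
    return [(L, 2 * math.pi * math.sqrt(L / G)) for L in (0.25, 0.5, 1.0, 2.0)]

CANDIDATES = {
    "T = L":              lambda L: L,
    "T = 2*pi*L":         lambda L: 2 * math.pi * L,
    "T = 2*pi*sqrt(L/g)": lambda L: 2 * math.pi * math.sqrt(L / G),
    "T = sqrt(L)":        lambda L: math.sqrt(L),
}

def discover(data):
    # Pick the candidate formula with the smallest squared error on the data.
    def error(f):
        return sum((f(L) - T) ** 2 for L, T in data)
    return min(CANDIDATES, key=lambda name: error(CANDIDATES[name]))

if __name__ == "__main__":
    print(discover(make_data()))  # the correct pendulum law wins
```

A real system searches a much larger expression space with genetic programming rather than a fixed list, but the "give it data, get a law back" shape is the same.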

Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
#36: Jan 26th 2011 at 10:43:51 AM

The problem with an AI is that you have to tell it everything it needs to do. I was once told, in rather large detail, all the things you would need to tell an AI to do just to open a door.

I don't see the advantage to it because, assuming it couldn't learn, or couldn't learn in a rational way, it would be static and would need some human influence telling it about new issues and how to view them. I think.

I assumed that we were talking about AI in a more science-fictional/theoretic sense.

I really doubt you could program anything approaching consciousness simply by exhausting the space of all possible situations, matching each with an action, because there are pretty well infinite possible situations.

saladofstones :V from Happy Place Since: Jan, 2011
#37: Jan 26th 2011 at 11:12:58 AM

The AI would run the complex calculations on what should be done faster than any human could. Some type of human oversight would, ideally, verify that its logic isn't flawed; it might be up to another AI to decide how it should be done, and yet another AI to actually manage doing it.

I figure the more oversight you can have with this, the fewer things can go horribly wrong, like in 2001, where there wasn't anyone to say, "There is something wrong with this AI."

I'd imagine that since redundancy is often used, you would have many AIs running the same basic calculations, but I'm not sure how you could reasonably create a system for them to decide which output to use.
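One standard answer to "which one to use" is majority voting among the redundant units (triple modular redundancy, as used in avionics). A minimal sketch, with hypothetical controllers standing in for the AIs:

```python
from collections import Counter

# Sketch of redundancy plus voting: run N independent controllers on the same
# input and accept the majority answer. The controllers here are toy stand-ins.

def majority_vote(outputs):
    """Return the strict-majority output; raise if there is no clear majority."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority -- escalate to human oversight")
    return winner

# Three redundant controllers; the third has a (simulated) fault.
controllers = [lambda x: x + 1, lambda x: x + 1, lambda x: x + 2]
print(majority_vote([c(10) for c in controllers]))  # 11
```

The design choice salad is worried about is exactly the failure branch: when the units disagree with no majority, something outside the voting loop (a human, in this thread's framing) has to break the tie.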

If we make the AIs too human, as well, I don't see how that is good either.

edited 26th Jan '11 11:13:32 AM by saladofstones

Wanderhome The Joke-Master Since: Apr, 2009 Relationship Status: Healthy, deeply-felt respect for this here Shotgun
#38: Jan 26th 2011 at 11:23:12 AM

[up] Which is why Breadloaf's scenario is highly unlikely. No one with an ounce of self-preservation and a complementary ounce of sense would be stupid enough to actually hand the reins of societal control over to a machine.

edited 26th Jan '11 11:23:40 AM by Wanderhome

saladofstones :V from Happy Place Since: Jan, 2011
#39: Jan 26th 2011 at 11:25:26 AM

It would be built up gradually, like anything else. The boiling-frog example, in this case; not sure if Tzetze can tell me what it's actually called.

We already use computers to do a lot of things.

My worry is that if something happens to the computers, which we would become reliant on in this system, I don't think society could regulate itself.

breadloaf Since: Oct, 2010
#41: Jan 26th 2011 at 11:33:29 AM

Well, I was giving you multiple scenarios on how it would work, and I feel a lot of people here believe that if we use AI to run things, we want it merely to spit out an answer, with a human then pressing the button if he thinks the answer is good.

Shichibukai Permanently Banned from Banland Since: Oct, 2011
#42: Jan 26th 2011 at 11:56:35 AM

Can we make the AI truly politically impartial and objective?

Most people are culturally and politically biased. How would the AI decide what is important? Based on a set of algorithms and ideals set by its programmers?

I suppose you could mould the AI to follow the letter of a constitution first and foremost.

What about a council of AIs? They would be AIs of different dispositions, representing different lifestyles and different viewpoints. All decisions to change the law would be subject to the approval of these AIs.

edited 26th Jan '11 11:57:30 AM by Shichibukai

Requiem ~ September 2010 - October 2011 [Banned 4 Life]
Yej (4 Score & 7 Years Ago) Relationship Status: They can't hide forever. We've got satellites.
#43: Jan 26th 2011 at 12:39:50 PM

Can we make the AI truly politically impartial and objective?
Well, yes. There's nothing stopping an AI from deducing the answers to problems through pure logic alone.

For a sufficiently advanced AI, it could just read every news outlet simultaneously and analyze from there.

saladofstones :V from Happy Place Since: Jan, 2011
#44: Jan 26th 2011 at 4:06:02 PM

That is assuming the information from news outlets is truthful, and even if it were, that it is complete.

Well he's talking about WWII when the Chinese bomb pearl harbor and they commuted suicide by running their planes into the ship.
Ultrayellow Unchanging Avatar. Since: Dec, 2010
#45: Jan 26th 2011 at 4:31:04 PM

I would have no problem with AIs running the power grid and financial markets, as long as they were carefully monitored and objectively better. Now, I think the issue in most governmental situations is what the best code of ethics for an AI would be. My thoughts? It would have to be taught that the end does not justify the means, or we'd get fairly horrible conclusions fairly quickly. With that assumption in place, "protect and advance the interests of humanity, both the species and all the individuals making it up" would be my major concern.

Except for 4/1/2011. That day lingers in my memory like...metaphor here...I should go.
lordGacek Since: Jan, 2001
#46: Jan 26th 2011 at 4:32:12 PM

You know, I'll just stick this here: I once had an idea for a story about a Zeroth Law Rebellion, only the perfect system the machines try to implement is, by some glitch (not so superhumanly smart after all?) or a hacker's mischief, communism. cool

...for this thread, we may take it to mean that I'm a bit afraid of any supposedly "perfect" system, including those created by AIs.

Yej (4 Score & 7 Years Ago) Relationship Status: They can't hide forever. We've got satellites.
#47: Jan 27th 2011 at 8:08:24 AM

[up][up][up] Except it's a superintelligent AI, so it knows that no single source is going to be flawlessly reliable.

[up] Paranoia?

saladofstones :V from Happy Place Since: Jan, 2011
#48: Jan 27th 2011 at 8:10:26 AM

@Ultra: It would have to be Machiavellian to be good at running the state, since Machiavelli never said the end justifies the means, only that ethics wouldn't be a requirement for a decision to be a good one.

@yej: I'm saying that news outlets alone wouldn't provide a complete picture. It would probably have satellite or other surveillance methods to find things out.

edited 27th Jan '11 8:11:20 AM by saladofstones

DeMarquis (4 Score & 7 Years Ago)
#49: Jan 27th 2011 at 8:19:02 AM

You could use an AI to simply study a complex system (the power grid, the economy, global climate), forecast future states, and make recommendations on how to improve the system (or prevent developing problems), depending on specific parameters given by humans (i.e., the goal states we might try to reach), but not actually implement anything. That way we could be told the benefits and costs of a course of action, but make the decision ourselves.
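That advisory-only arrangement has a simple shape: the model forecasts the outcome of each candidate action against a human-supplied goal state and reports a ranking, but never acts. A minimal sketch; the toy "simulator" and the action names are assumptions for illustration only:

```python
# Advisory-only loop: forecast, rank, report. The human reads the ranking
# and makes the actual decision; nothing here executes an action.

def forecast(state, action):
    # Stand-in for a real simulator: each action shifts the system state
    # (say, grid load) by a fixed amount. Pure invention for the example.
    effects = {"do nothing": 0.0, "add capacity": -0.3, "raise prices": 0.1}
    return state + effects[action]

def recommend(state, goal, actions):
    """Rank candidate actions by how close their forecast outcome is to the goal."""
    return sorted(actions, key=lambda a: abs(forecast(state, a) - goal))

# Current state 1.2, humans want 1.0: the report ranks options, decides nothing.
print(recommend(1.2, 1.0, ["do nothing", "add capacity", "raise prices"]))
```

The point of the design is the boundary: `recommend` returns information about benefits and costs, and implementation stays entirely on the human side of the line.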

I'm done trying to sound smart. "Clear" is the new smart.
saladofstones :V from Happy Place Since: Jan, 2011
#50: Jan 27th 2011 at 8:34:32 AM

The example here is that ultimately, regardless of oversight, the AI makes the decisions.

What was the Asimov story that had the military-super-AI in which the General admitted that, in the end, he just tossed a coin to decide?

edited 27th Jan '11 8:35:00 AM by saladofstones


Total posts: 67