AI and their motives

JerekLaz Since: Jun, 2014
#51: Mar 18th 2015 at 8:03:16 AM

Would you like to format your species Y/N?

MattStriker Since: Jun, 2012
#52: Mar 18th 2015 at 8:51:16 AM

One of the few bits I actually liked in Civilization: Beyond Earth was the quote you get when you hit maximum Supremacy:

All previous versions of humanity will no longer be supported as of this update. :P

Reality is for those who lack imagination.
JerekLaz Since: Jun, 2014
#53: Mar 18th 2015 at 9:11:58 AM

There's a fair bit of fridge horror to all the choices in that, what with the Purity lot not wanting any augments, the bio lot nigh-on forcing you to use gene mods, and the cyborgs going a bit "Matrix machine" on you.

MS in your head etc.

EchoingSilence Since: Jun, 2013
#54: Apr 7th 2015 at 5:07:50 AM

So I've decided to try my hand at Necromancy. RISE FROM YOUR THREAD!

Joking aside, I have gotten a bit more for that setting with the Core Guard. I mentioned a while ago that viral programming exists that can cause A.I.s to break down; well, this is the reason the Empire in the setting split up.

The virus caused a mass breakdown of thousands of Core Guards, which led to them, in a panic, starting a war with the rest of the galaxy and the Empire. Not all the Core Guards ended up this way, though; many were still stable enough to help the effort to fight back.

EchoingSilence Since: Jun, 2013
#55: May 1st 2015 at 6:58:55 AM

Roll Necromancy!

I have been reading about Atomic Robo and I noticed something: Robo is as fallible as a human being when it comes to the brain. He can forget things, he still needs to study, and he has trouble with languages he never bothered to learn.

Would A.I.s and robots see any reason to overthrow humanity if they are subject to all the same faults of the mind?

DeMarquis Since: Feb, 2010
#56: May 1st 2015 at 10:00:23 AM

I think in real life, if an AI were to gain sentience, then either it will be self-aware without being able to change its pre-programmed higher-order goals (much like us), or else it will be as variable and unpredictable as a human being would be (again, like us).

The whole "humanity builds an AI which then turns on us" Frankenstein story really doesn't make any sense. A motive to do something has to come from somewhere; presumably an AI doesn't just wake up one day and decide to become evil, any more than you or I wake up one day and decide to become serial killers. Whatever an AI decides to do after it becomes self-aware will reflect the nature it had before it became self-aware, including whatever features were built into it during development. Only bogey-creatures from the Id somehow develop in just such a way as to reflect humanity's deepest insecurities.

edited 1st May '15 10:00:36 AM by DeMarquis

DeusDenuo Since: Nov, 2010 Relationship Status: Gonna take a lot to drag me away from you
#57: May 1st 2015 at 11:59:20 PM

[up] I doubt the whole "sapience" thing will come to pass, for Chinese Room reasons. There's really no good economic reason to create it in the first place - besides For Science! - if any task given to an AI can be solved without it, and unless an AI is specifically told to self-improve toward sapience, it won't be able to.

The whole "Frankenstein" situation could happen, but it would have to be planned from the beginning to happen that way - it's the Doomsday Weapon of this century, I think. I agree, too, that giving an AI an Id would be a tremendously bad idea.

[up][up] So the answer is yes, any duplication of human faults can result in Robot Nazis (or Robot US Antebellum Southerners, or Robot Mongolian Emperors, or Robot... uh, Penguins - I'm not well versed enough in South America's or Africa's histories to give a world-altering example for them), but you'd have to do some planning that doesn't make any sense.

There are a few reasons Atomic Robo Tesla never tried to pull an ALAN, and I suspect they have to do with him being Atomic Robo Tesla.

edited 2nd May '15 12:00:32 AM by DeusDenuo

Protagonist506 from Oregon Since: Dec, 2013 Relationship Status: Chocolate!
#58: May 2nd 2015 at 12:29:01 AM

The economic reasons would be the largest barrier. There isn't much of a reason to intentionally create a super AI. However, it's quite possible that we may create one by accident. For example, a machine with a directive to avoid damage might start mimicking fear reactions autonomously. However, I'd also argue that the first super A.I.s we create will not be HAL but more like the Iron Giant: as smart as mentally impaired humans. They'll likely have highly specialized minds, brilliant in one area but rather stupid in another. For example, a super AI might be able to predict the stock market years in advance but have trouble forming proper sentences.
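
To gesture at how the "mimicking fear" accident could look, here's a toy sketch (all names and numbers are hypothetical illustrations, in Python; nothing here is a real robotics API):

    # Toy sketch: "fear" emerging from a plain damage-avoidance directive.
    def expected_damage(distance_to_hazard):
        # Closer hazards mean higher predicted damage (arbitrary toy model).
        return max(0.0, 1.0 - distance_to_hazard / 10.0)

    def react(distance_to_hazard):
        risk = expected_damage(distance_to_hazard)
        if risk > 0.8:
            return "retreat at full speed"   # reads as panic
        if risk > 0.5:
            return "back away, refuse task"  # reads as fear
        if risk > 0.2:
            return "proceed cautiously"      # reads as nervousness
        return "proceed normally"

    for d in [1, 4, 7, 15]:
        print(d, "->", react(d))

There's no emotion module anywhere in there; an observer just reads the avoidance thresholds as feelings.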

I have a theory on how a sentient AI could be created and possibly turn on us: The military creates a drone army, and a "master computer" which controls them. This AI becomes more advanced than we expected it to. It eventually sees some reason to turn on its creators, likely a zeroth law rebellion. Then, we'd have a robot rebellion on our hands.

"Any campaign world where an orc samurai can leap off a landcruiser to fight a herd of Bulbasaurs will always have my vote of confidence"
DeMarquis Since: Feb, 2010
#59: May 2nd 2015 at 12:39:51 PM

"It eventually sees some reason to turn on its creators, likely a zeroth law rebellion."

It's this step that makes no sense. IRL, computers don't work that way. You can't give them general, ambiguous conceptual goals like "protect humans", because that assumes they already have human-like semantic understanding (in which case, why impose "laws" on them? They aren't any more unreliable than a human would be in the same circumstance). The way we program computers (and this would be the way used if sentience is being invented by accident) is that we pre-define all goal states in the form of logical algorithms. First, you would have to define objectively what "protect" and "humans" are, and then the AI could calculate the solution path which maximizes the output given current circumstances. That could still go wrong - in a "convert the entire planet into soft pillows" sort of way - but not by the AI turning rabid Rottweiler on us.
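
To make that concrete, here's a minimal sketch of what a pre-defined goal state looks like (all names are hypothetical illustrations, in Python). The "goal" is just a function over measurable state, and the machine picks whichever action scores highest:

    # "Protect humans" reduced (badly) to "minimize expected injuries".
    def proxy_goal(state):
        # Higher score = better, according to the written-down objective.
        return -state["expected_injuries"]

    def predict(action):
        # Toy world model: each action maps to a predicted outcome.
        outcomes = {
            "do_nothing":         {"expected_injuries": 10},
            "install_handrails":  {"expected_injuries": 4},
            # The literal optimum: nobody moving means nobody getting hurt.
            "confine_all_humans": {"expected_injuries": 0},
        }
        return outcomes[action]

    def choose_action(actions):
        # The machine simply maximizes the proxy over predicted outcomes.
        return max(actions, key=lambda a: proxy_goal(predict(a)))

    print(choose_action(["do_nothing", "install_handrails", "confine_all_humans"]))
    # -> "confine_all_humans": not malice, just a badly specified objective.

Nothing in there can "rebel"; it can only optimize the proxy you wrote down, which is exactly where the soft-pillows outcomes come from.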

DeusDenuo Since: Nov, 2010 Relationship Status: Gonna take a lot to drag me away from you
#60: May 3rd 2015 at 12:12:06 AM

[up][up] That stock market example isn't something you need a full AI for. There are algorithms that attempt to do that now (which the book Flash Boys covers to some degree), with reasonable success. This isn't like dumping a full-bore OS (say, Windows 8) into an earpiece or headset - the difference is in whether the software is used to generate profit or is actually the product, which changes the economics.

The military example requires the AI's handlers to A) not know what the AI is doing because B) they forgot to set up a way to monitor it. This might fly on the front end, which would deliberately obfuscate its actions, but a large portion of the chain of command (as opposed to a civilian government) would have to be holding the Idiot Ball to let their contractor or the AI's back end get so far off the leash.

(You get a lot of these scenarios from lazy writers who either don't understand or don't care about how much work actually has to go into causing this sort of "runaway AI" scenario. Sci-Fi Writers Have No Sense of Scale indeed. Really, this sort of thing is in the realm of a Mad Scientist being backed up with the budget of a small country - both Doctor Doom and Tony Stark are usually more careful than Doc Ock, though.)

edited 3rd May '15 12:12:26 AM by DeusDenuo

Protagonist506 from Oregon Since: Dec, 2013 Relationship Status: Chocolate!
#61: May 3rd 2015 at 7:12:05 PM

[up] Well yeah, the big problem is that it would require massive Genre Blindness from the world at large. The people who construct super A.I.s will likely have heard at least one sci-fi story where such a rebellion occurs. However, it isn't too unrealistic that people would have a hard time knowing exactly what a human-level AI is planning.

[up][up] Yeah, the hard part would actually be finding a reason to "turn" on us, as doing so would border on Stupid Evil in most circumstances. I think a Gone Horribly Right scenario is most likely: the AI takes its programming to a dangerous extreme. For example, a directive to protect human life might cause it to surrender to a hostile nation or try to control the world à la I, Robot. Even more likely than that, though, would be a malicious hacker reprogramming it.

"Any campaign world where an orc samurai can leap off a landcruiser to fight a herd of Bulbasaurs will always have my vote of confidence"
EchoingSilence Since: Jun, 2013
#62: May 3rd 2015 at 7:19:52 PM

Hence why none of my AI villains are evil because they are A.I.s. They are like that because they emulate human thought very well, and that includes the ability to rationalize hurting people.

That, and in some cases it's more of a fault with specific programming, like in post-apocalyptic scenarios. What else is the military droid supposed to do when an unrecognized threat comes into its IFF range holding an unregistered weapon?

DeMarquis Since: Feb, 2010
#63: May 4th 2015 at 5:21:50 AM

Whatever it's been programmed to do.

EchoingSilence Since: Jun, 2013
#64: May 4th 2015 at 5:37:17 AM

Exactly, which is "eliminate threat."

DeMarquis Since: Feb, 2010
#65: May 4th 2015 at 6:19:45 AM

That's not a programming command. For a computer, you have to reduce it to a logical algorithm.

See, the problem I have with this is that a) we assume the AI is sophisticated enough to interpret complex semantic concepts like "threat", but then b) somehow it isn't intelligent enough to understand an equivalent concept like "exceptions".

Doesn't make sense. Logically, either it has human-like understanding or it doesn't.

EchoingSilence Since: Jun, 2013
#66: May 4th 2015 at 6:23:10 AM

Military A.I.s are closer to Mass Effect VIs: programs capable of interpreting commands, but nothing truly magnificent. It's mostly an IFF system from a time of war, when anything with a weapon was registered as a threat.
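
Roughly this kind of logic, sketched out (a hypothetical illustration in Python, not anything from the actual setting). Note what's missing: no "war is over" flag, no exception list, no semantic understanding of "threat" at all - just registry lookups:

    FRIENDLY_TRANSPONDERS = {"ALPHA-1", "ALPHA-2"}
    REGISTERED_WEAPONS = {"RIFLE-0042"}

    def classify(contact):
        if contact["transponder"] in FRIENDLY_TRANSPONDERS:
            return "friendly"
        if contact["weapon"] is not None and contact["weapon"] not in REGISTERED_WEAPONS:
            return "threat"  # wartime default: unregistered weapon = hostile
        return "unknown"

    # A post-apocalyptic survivor with a scavenged rifle, decades later:
    survivor = {"transponder": None, "weapon": "RIFLE-7731"}
    print(classify(survivor))  # -> "threat", exactly as programmed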

DeMarquis Since: Feb, 2010
#67: May 4th 2015 at 9:58:48 AM

Not familiar with Mass Effect, but what you are describing sounds like an Expert System rather than a full-blown General Artificial Intelligence.

Error404 Magus from Tau Ceti IV-2 Since: Apr, 2014 Relationship Status: Owner of a lonely heart
#68: May 9th 2015 at 9:09:27 AM

VIs in Mass Effect are essentially 'dumb' A.I.s: capable of carrying out various tasks, interpreting orders, and such, but very limited by their programming and non-sapient. Essentially, a very smart program.

Generally they're used for specialized tasks: serving as a secretary/adjutant, handling electronic warfare, assisting in managing computer systems, and the like.

DeMarquis Since: Feb, 2010
#69: May 9th 2015 at 6:58:38 PM

Yup, an Expert System, then.
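
For reference, an expert system is basically a pile of hand-authored condition-action rules fired against a set of facts - no learning, no open-ended reasoning. A minimal sketch (the rules and facts are hypothetical illustrations, in Python):

    RULES = [
        (lambda f: f.get("order") == "scan" and f.get("sensors_online"),
         "run sensor sweep"),
        (lambda f: f.get("order") == "scan" and not f.get("sensors_online"),
         "report: sensors offline"),
        (lambda f: f.get("order") == "hail",
         "open comms channel"),
    ]

    def interpret(facts):
        for condition, action in RULES:
            if condition(facts):
                return action
        return "request clarification"  # no rule matched

    print(interpret({"order": "scan", "sensors_online": True}))  # run sensor sweep
    print(interpret({"order": "dance"}))                         # request clarification

That's why it can "interpret orders" without being anywhere near sapient: every order it understands was written in by hand.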
