Follow TV Tropes

Following

Artificial Intelligence

Go To

DeMarquis Who Am I? from Hell, USA Since: Feb, 2010 Relationship Status: Buried in snow, waiting for spring
Who Am I?
#351: Feb 9th 2018 at 12:00:36 PM

AI decision making is based on whatever inputs they are programmed to seek and accept. A computer program doesnt know what "objective data" are, thats a human concept. They know only sources of inputs and their content. An AI will use empirical data if you program it to seek and utilize such, otherwise it wont.

"We learn from history that we do not learn from history."
Fourthspartan56 from Georgia, US Since: Oct, 2016 Relationship Status: THIS CONCEPT OF 'WUV' CONFUSES AND INFURIATES US!
#352: Feb 9th 2018 at 12:02:34 PM

I thought we were talking about sapient synthetic entities, would they not have the potential to understand objective data?

"Sandwiches are probably easier to fix than the actual problems" -Hylarn
DeMarquis Who Am I? from Hell, USA Since: Feb, 2010 Relationship Status: Buried in snow, waiting for spring
Who Am I?
#353: Feb 9th 2018 at 12:19:22 PM

You mean possessing a sense of conceptual self awareness? I dont see any reason why they would, necessarily. It depends on how self awareness works, which is currently poorly understood. They would have to have some way of interacting with their environment, certainly, in a way analogous to sensory information, but I dont think thats what you are referring to.

If you mean they will be objective and rational in their dealings with the environment, well irrational emotional impulses evolved generally because they provide quick decision short cuts that cost less cognitive reasources than intellectual analysis. If we had to weigh all the relevent facts every time we had to make a minor decision, or react to an emergency, we would not have survived very long. I would think similar considerations would apply to any self aware intelligence.

Of course, an augmented intelligence, human or not, could have access to computational resources we natural humans dont have. But in any competition between two intelligences, speed and cost efficiency will confer an advantage, so some sort of decision making heuristics will be useful.

edited 9th Feb '18 12:23:43 PM by DeMarquis

"We learn from history that we do not learn from history."
supermerlin100 Since: Sep, 2011
#354: Feb 9th 2018 at 12:29:07 PM

Most of the danger from corporations are the long lasting and huge ones. And of course the point of the article is that ai could be a lot worst. large corporations are the source of a lot of the push back against environmental regulations and workers rights. The largest have revenues larger than most countries. And to a large extent you can rarely stop them from doing what they want. Getting Walmart to pay their workers decently through the free market isn't going to happen. And a good chuck of the government is convinced that regulations are satanic. Granted a lot of Walmart's undesirable behavior is simply done where it is legal.

Corporations that limit themselves to nice tactics don't get that huge, and the people who end up in positions of decision making are people's who's decisions are profitable, whether that involves not caring or just a lot of biased thinking doesn't make much difference.

Fourthspartan56 from Georgia, US Since: Oct, 2016 Relationship Status: THIS CONCEPT OF 'WUV' CONFUSES AND INFURIATES US!
#355: Feb 9th 2018 at 12:36:00 PM

You mean possessing a sense of conceptual self awareness? I dont see any reason why they would, necessarily. It depends on how self awareness works, which is currently poorly understood. They would have to have some way of interacting with their environment, certainly, in a way analogous to sensory information, but I dont think thats what you are referring to.
Well since we're talking about AI I assumed that self-awareness would be a given, if they're not sapient then would calling them AI be really accurate?

If you mean they will be objective and rational in their dealings with the environment, well irrational emotional impulses evolved generally because they provide quick decision short cuts that cost less cognitive reasources than intellectual analysis. If we had to weigh all the relevent facts every time we had to make a minor decision, or react to an emergency, we would not have survived very long. I would think similar considerations would apply to any self aware intelligence.
That's exactly my question, an AI could be perfectly objective and rational but as you said there are reasons that we aren't that could absolutely apply to an AI.

Of course, an augmented intelligence, human or not, could have access to computational resources we natural humans dont have. But in any competition between two intelligences, speed and cost efficiency will confer an advantage, so some sort of decision making heuristics will be useful.
Seems logical.

edited 9th Feb '18 12:36:57 PM by Fourthspartan56

"Sandwiches are probably easier to fix than the actual problems" -Hylarn
Imca (Veteran)
#356: Feb 9th 2018 at 12:47:02 PM

Well since we're talking about AI I assumed that self-awareness would be a given, if they're not sapient then would calling them AI be really accurate?

As some one in the feild YES we are living with AI right now, you deal with them dozens of times a day and don't even realize it half the time, they are intelligent, they learn, they adapt and evolve..... Self-Awareness is not a necicery part of being intelligence.

Self-Aware AI have there own term already, its AGI or Artificial General Intelligence.... or well more accurately AGI are assumed to be self aware when discussing them.

edited 9th Feb '18 12:59:36 PM by Imca

Fourthspartan56 from Georgia, US Since: Oct, 2016 Relationship Status: THIS CONCEPT OF 'WUV' CONFUSES AND INFURIATES US!
#357: Feb 9th 2018 at 12:49:39 PM

Fascinating, I was under the impression that AI specifically involved sapience. Thanks for correcting me smile

"Sandwiches are probably easier to fix than the actual problems" -Hylarn
Imca (Veteran)
#358: Feb 9th 2018 at 12:56:20 PM

No problem, if your curious there is actually 3 main types of AI.

You have your basic AI/Expert System, this is what we deal with now, they do a single task, they learn and adapt to deal with that task, they may even be better at that task then a human, but at a broad level there kind of dumb.

You then have the AGI or Artificial General Intelligence, this is a hypothetical broad level system, that can meet or exceed a human in all areas, and is able to learn completely new tasks if needed..... under the current assumptions about self awareness (a natural phenomenon arising from complexity) this kind of AI is expected to be as such, but need not necessarily be so if that assumption turns out to be false..... This is the kind of AI that your typical science fiction robot/AI has.

Lately you have ASI or Artificial Super Intelligence, it is to us what we are to monkies, it is also a hypothetical system obviously... These ones would effectively render humans obsolete so they need to be approached extremely carefully..... one done right could lead to species immortality since it would be able to solve problems we didn't even realize were problems yet... done wrong this is the one most likely to cause Armageddon since the only counter would be another ASI.

Fourthspartan56 from Georgia, US Since: Oct, 2016 Relationship Status: THIS CONCEPT OF 'WUV' CONFUSES AND INFURIATES US!
#359: Feb 9th 2018 at 12:58:44 PM

Very fascinating I appreciate the elaboration, in that case I would say that the only form of AI I wouldn't want would be ASI. Creating a literal Deus ex Machina sounds like an extremely poor idea whose potential costs are just too high compared to its potential benefits.

edited 9th Feb '18 12:58:59 PM by Fourthspartan56

"Sandwiches are probably easier to fix than the actual problems" -Hylarn
Imca (Veteran)
#360: Feb 9th 2018 at 1:02:51 PM

The risk of machine caused Armageddon is actualy fairly low, and by the time we reach ASI we should have around 100 years of experimenting on the process of making AI.....

Unlike the scientists who thought nuclear power could ignite the atmosphere we would actually have practice with this..... soooo its actually a very approachable thing IMHO... especially since the benefit is possibly rendering human extinction a foreign concept.

Its just dangerous to not acknowledge the risk is there, to go back to the nuclear comparison, things would be much worse if we never acknowledged the dangers of what we are poking.

edited 9th Feb '18 1:03:40 PM by Imca

supermerlin100 Since: Sep, 2011
#361: Feb 9th 2018 at 1:03:36 PM

Wouldn't there also be ai's that are general but not to a human level? Something more like a chimp.

I'm a lot less worried about any one super intelligence than a large number of weaker ai changing cultural and legal developments that lead to things getting more and more off track.

edited 9th Feb '18 1:13:18 PM by supermerlin100

Fourthspartan56 from Georgia, US Since: Oct, 2016 Relationship Status: THIS CONCEPT OF 'WUV' CONFUSES AND INFURIATES US!
#362: Feb 9th 2018 at 1:08:59 PM

The risk of machine caused Armageddon is actualy fairly low, and by the time we reach ASI we should have around 100 years of experimenting on the process of making AI.....

Unlike the scientists who thought nuclear power could ignite the atmosphere we would actually have practice with this..... soooo its actually a very approachable thing IMHO... especially since the benefit is possibly rendering human extinction a foreign concept.

Its just dangerous to not acknowledge the risk is there, to go back to the nuclear comparison, things would be much worse if we never acknowledged the dangers of what we are poking.

All good points, and I'm absolutely not one of those "AI will inevitably result in Skynet" people. That's just silly.

"Sandwiches are probably easier to fix than the actual problems" -Hylarn
Imca (Veteran)
#363: Feb 9th 2018 at 1:18:41 PM

[up][up] In theroy nothing prevents it, in practice therr is a lot less of a gap between monkies and humans then most people would be comfy with, and with the rate off advancment, such a system would be passed over faster then one could blink.

Unless it is for the ethical concerns of what you intend to do with it.

[up] Fair enough, I actualy do understand why ASI can be scary though, so I wasn't concerned about sky net fears, I was just trying to elaborate a bit sorry.

Fourthspartan56 from Georgia, US Since: Oct, 2016 Relationship Status: THIS CONCEPT OF 'WUV' CONFUSES AND INFURIATES US!
#364: Feb 9th 2018 at 1:25:02 PM

Fair enough, I actualy do understand why ASI can be scary though, so I wasn't concerned about sky net fears, I was just trying to elaborate a bit sorry.
Oh no need to apologize, I was just agreeing with you smile

"Sandwiches are probably easier to fix than the actual problems" -Hylarn
Imca (Veteran)
#365: Feb 9th 2018 at 1:39:23 PM

Honestly, my single largest concern is that I am pretty sure that AGI are absolutely going to destroy the world econemy.... now I don't think it will be permant, and I do think that once the prices are collected things will absolutely be better for humanity or else I wouldn't stick to the feild....

But I do think those are going to be a rough few years while we adapt to the fact that we can just build labour for everything now.... and I think we are already starting to see the beginning of it, but for some reason people never aproach problems until they have too. :/

edited 9th Feb '18 1:40:26 PM by Imca

Protagonist506 from Oregon Since: Dec, 2013 Relationship Status: Chocolate!
#366: Feb 9th 2018 at 2:02:50 PM

A big thing that many people tend to forget is that humans actually wouldn't necessarily be helpless against an ASI. There's no way for an ASI to exterminate the human race that we couldn't counteract in some manner. Also, an ASI cannot just magically hack into any machine (not matter how smart it is).

Another thing is also that many things that would destroy the human race would also threaten the AI itself. For example, a nuclear war would actually be pretty destructive to electronics. Moreover, an AI would likely want to keep as much infrastructure as intact as possible, assuming it wants to live. If the AI kills us, that means no more people putting communications satellites into space, or running power plants, or mining rare earth metals for it.

Killing off the human race would take way too much effort for little, if any, gain. Also, an AI would be putting its own survival at a massive risk (it actually would not be difficult to punch out cthulhu in this context).

"Any campaign world where an orc samurai can leap off a landcruiser to fight a herd of Bulbasaurs will always have my vote of confidence"
Imca (Veteran)
#367: Feb 9th 2018 at 3:50:06 PM

That last bit is why I don't think an AI rebellion would ever happen even by a smart AI, it is too much risk for too little gain on there end.

But again, in my case, acknowledging the danger doesn't mean I think it will come to pass, much like when I go for a drive I don't expect to crash, but I still wear my safety equipment any way.

Protagonist506 from Oregon Since: Dec, 2013 Relationship Status: Chocolate!
#368: Feb 9th 2018 at 4:24:04 PM

I'd imagine that most of the unintentional harm A.I.s cause to humans would be from the AI acting as a Literal Genie.

A good example: In the video game Empire Earth there was a scenario where your civilians begin protesting and attacking your other units. You aren't allowed to command your units to attack these protesters (in fact, they're technically still "your" units) and have to give into their demands to build more food before they'll stop attacking you. How I solved this problem was by ordering AOE attacks that units aren't Friendly Fire Proof. Technically, this wasn't attacking them, it was Friendly Fire.

An Unsafe AI playing the game could theoretically come up with the same exact strategy that I did. In fact, it might not even realize it's abusing a loophole and think it's using a completely orthodox strategy.

That's why you would want to make sure an AI is "safe" before providing it with important duties. The AI might come up with a plan that falls outside the spirit of what you want from it or is even dangerous. If the AI has some understanding of how people think it can give us what we want better.

"Any campaign world where an orc samurai can leap off a landcruiser to fight a herd of Bulbasaurs will always have my vote of confidence"
Protagonist506 from Oregon Since: Dec, 2013 Relationship Status: Chocolate!
#369: Feb 9th 2018 at 4:29:08 PM

A similar scenario of how an AI might "go bad" would be something that happened in India I believe: The British wanted to kill off cobras so they began giving money to people who brought in dead cobras. People began farming cobras, making the problem worse.

An AI told "bring in as many dead cobras as possible" would likely come up with a similar plan.

edited 9th Feb '18 4:31:41 PM by Protagonist506

"Any campaign world where an orc samurai can leap off a landcruiser to fight a herd of Bulbasaurs will always have my vote of confidence"
DeMarquis Who Am I? from Hell, USA Since: Feb, 2010 Relationship Status: Buried in snow, waiting for spring
Who Am I?
#370: Feb 9th 2018 at 5:25:01 PM

Bear in mind that high intelligence does not imply self awareness. We dont know what leads to self awareness, but it is unlikely to be high IQ alone.

"We learn from history that we do not learn from history."
Imca (Veteran)
#371: Feb 9th 2018 at 5:35:21 PM

Complexity is the best guess, and once you get to a general system, you start creating a complex system.

Complexity =/= intelligence though is correct.

DeMarquis Who Am I? from Hell, USA Since: Feb, 2010 Relationship Status: Buried in snow, waiting for spring
Who Am I?
#372: Feb 9th 2018 at 5:45:09 PM

By "complexity", are you refering to nonlinear systems?

"We learn from history that we do not learn from history."
CaptainCapsase from Orbiting Sagittarius A* Since: Jan, 2015
#373: Feb 9th 2018 at 6:04:41 PM

@Protagonist: The scenraio you describe is literally a textbook Paperclip Maximizer, which is probably the most dangerous sort of AI, not just in an apocalyptic sense but also in a "does something that gets a bunch of people killed" sense.

Iaculus Pronounced YAK-you-luss from England Since: May, 2010
Pronounced YAK-you-luss
Imca (Veteran)
#375: Feb 9th 2018 at 6:45:08 PM

[up][up][up] I am referring to the more.... broad the system gets, the more likely that it is to achieve self awareness as we recognize it, to the point that when its capabilities across the board are comparable to humans it is better to assume it is then to assume it isnt... since awareness seems to be an accidental occurrence rather then an actual dedicated process.

Its also really hard of a thing to test for, while I believe P-Zombies are bullshit, they demonstrate the problems with testing for it.....

[up]

I find it somewhat helpful to analogize UFAI-human interactions to human-mosquito interactions. Humans are enormously more intelligent than mosquitoes; humans are good at predicting, manipulating, and destroying mosquitoes; humans do not value mosquitoes' welfare; humans have other goals that mosquitoes interfere with; humans would like to see mosquitoes eradicated at least from certain parts of the planet. Yet humans haven't accomplished such eradication, and it is easy to imagine scenarios in which humans would prefer honest negotiation and trade with mosquitoes to any other arrangement, if such negotiation and trade were possible.

I actually really like this bit, and see this as much more sensible then what you normally see out of people.... ._.;

edited 9th Feb '18 6:47:16 PM by Imca


Total posts: 424
Top