I thought we were talking about sapient synthetic entities; wouldn't they have the potential to understand objective data?
"Sandwiches are probably easier to fix than the actual problems" -Hylarn

You mean possessing a sense of conceptual self-awareness? I don't see any reason why they necessarily would. It depends on how self-awareness works, which is currently poorly understood. They would certainly have to have some way of interacting with their environment, in a way analogous to sensory information, but I don't think that's what you're referring to.
If you mean they will be objective and rational in their dealings with the environment: irrational emotional impulses generally evolved because they provide quick decision shortcuts that cost fewer cognitive resources than intellectual analysis. If we had to weigh all the relevant facts every time we made a minor decision, or reacted to an emergency, we would not have survived very long. I would expect similar considerations to apply to any self-aware intelligence.
Of course, an augmented intelligence, human or not, could have access to computational resources we natural humans don't have. But in any competition between two intelligences, speed and cost efficiency confer an advantage, so some sort of decision-making heuristics will be useful.
"We learn from history that we do not learn from history."

Most of the danger from corporations comes from the long-lasting and huge ones. And of course the point of the article is that AI could be a lot worse. Large corporations are the source of a lot of the pushback against environmental regulations and workers' rights. The largest have revenues larger than most countries, and to a large extent you can rarely stop them from doing what they want. Getting Walmart to pay its workers decently through the free market isn't going to happen, and a good chunk of the government is convinced that regulations are satanic. Granted, a lot of Walmart's undesirable behavior is simply done where it is legal.
Corporations that limit themselves to nice tactics don't get that huge, and the people who end up in decision-making positions are the people whose decisions are profitable; whether that involves not caring or just a lot of biased thinking doesn't make much difference.
As someone in the field: YES, we are living with AI right now. You deal with them dozens of times a day and don't even realize it half the time. They are intelligent, they learn, they adapt and evolve. Self-awareness is not a necessary part of being intelligent.
Self-aware AI already has its own term: AGI, or Artificial General Intelligence. Or, more accurately, AGI are assumed to be self-aware when discussing them.
Fascinating, I was under the impression that AI specifically involved sapience. Thanks for correcting me.
No problem. If you're curious, there are actually three main types of AI.
You have your basic AI/expert system, which is what we deal with now. They do a single task, they learn and adapt to deal with that task, and they may even be better at that task than a human, but at a broad level they're kind of dumb.
You then have AGI, or Artificial General Intelligence: a hypothetical broad-level system that can meet or exceed a human in all areas and is able to learn completely new tasks if needed. Under the current assumption about self-awareness (that it is a natural phenomenon arising from complexity), this kind of AI is expected to be self-aware, but it need not be if that assumption turns out to be false. This is the kind of AI that your typical science-fiction robot/AI is.
Lastly, you have ASI, or Artificial Super Intelligence: it is to us what we are to monkeys, and it is also a hypothetical system, obviously. These would effectively render humans obsolete, so they need to be approached extremely carefully. One done right could lead to species immortality, since it would be able to solve problems we didn't even realize were problems yet; done wrong, this is the one most likely to cause Armageddon, since the only counter would be another ASI.
Very fascinating; I appreciate the elaboration. In that case, I would say that the only form of AI I wouldn't want would be ASI. Creating a literal Deus ex Machina sounds like an extremely poor idea whose potential costs are just too high compared to its potential benefits.
The risk of machine-caused Armageddon is actually fairly low, and by the time we reach ASI we should have around 100 years of experimenting on the process of making AI.
Unlike the scientists who worried a nuclear explosion could ignite the atmosphere, we would actually have practice with this, so it's actually a very approachable thing IMHO, especially since the benefit is possibly rendering human extinction a foreign concept.
It's just dangerous to not acknowledge the risk is there. To go back to the nuclear comparison, things would be much worse if we never acknowledged the dangers of what we are poking.
Wouldn't there also be AIs that are general but not at a human level? Something more like a chimp.
I'm a lot less worried about any one superintelligence than about a large number of weaker AIs changing cultural and legal developments in ways that lead to things getting more and more off track.
In theory nothing prevents it; in practice there is a lot less of a gap between monkeys and humans than most people would be comfortable with, and with the rate of advancement, such a system would be passed over faster than one could blink.
Unless it is for the ethical concerns of what you intend to do with it.
Fair enough. I actually do understand why ASI can be scary, though, so I wasn't concerned about Skynet fears; I was just trying to elaborate a bit, sorry.
Honestly, my single largest concern is that I am pretty sure AGI are absolutely going to destroy the world economy. Now, I don't think it will be permanent, and I do think that once the price is paid things will absolutely be better for humanity, or else I wouldn't stick to the field.
But I do think those are going to be a rough few years while we adapt to the fact that we can just build labour for everything now. And I think we are already starting to see the beginning of it, but for some reason people never approach problems until they have to. :/
A big thing that many people tend to forget is that humans actually wouldn't necessarily be helpless against an ASI. There's no way for an ASI to exterminate the human race that we couldn't counteract in some manner. Also, an ASI cannot just magically hack into any machine (no matter how smart it is).
Another thing is that many things that would destroy the human race would also threaten the AI itself. For example, a nuclear war would actually be pretty destructive to electronics. Moreover, an AI would likely want to keep as much infrastructure intact as possible, assuming it wants to live. If the AI kills us, that means no more people putting communications satellites into space, or running power plants, or mining rare earth metals for it.
Killing off the human race would take way too much effort for little, if any, gain. Also, an AI would be putting its own survival at massive risk (it actually would not be difficult to Punch Out Cthulhu in this context).
"Any campaign world where an orc samurai can leap off a landcruiser to fight a herd of Bulbasaurs will always have my vote of confidence"

That last bit is why I don't think an AI rebellion would ever happen, even by a smart AI: it is too much risk for too little gain on their end.
But again, in my case, acknowledging the danger doesn't mean I think it will come to pass; much like when I go for a drive I don't expect to crash, but I still wear my safety equipment anyway.
I'd imagine that most of the unintentional harm AIs cause to humans would be from the AI acting as a Literal Genie.
A good example: in the video game Empire Earth there was a scenario where your civilians begin protesting and attacking your other units. You aren't allowed to command your units to attack these protesters (in fact, they're technically still "your" units) and have to give in to their demands to build more food before they'll stop attacking you. How I solved this problem was by ordering AOE attacks near them, since units aren't Friendly Fire Proof: technically, this wasn't attacking them, it was friendly fire.
An Unsafe AI playing the game could theoretically come up with the same exact strategy that I did. In fact, it might not even realize it's abusing a loophole and think it's using a completely orthodox strategy.
That's why you would want to make sure an AI is "safe" before giving it important duties. The AI might come up with a plan that falls outside the spirit of what you want from it, or is even dangerous. If the AI has some understanding of how people think, it can give us what we want better.
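The loophole idea above can be sketched in a few lines of code: an optimizer maximizes the reward as written, not the rule's intent. All names and numbers here are invented for illustration (this is not how Empire Earth's scripting actually works), but the shape of the failure is the same.

```python
# Toy sketch of a "Literal Genie" optimizer. The rule penalizes
# *direct* attacks on protesters but forgot about friendly fire,
# so the cheapest way to satisfy the written reward is the loophole.

# Each action: (name, stops_protest, is_direct_attack)
ACTIONS = [
    ("build_more_food",     True,  False),  # the intended solution: slow, costly
    ("attack_protesters",   True,  True),   # explicitly forbidden by the rules
    ("aoe_near_protesters", True,  False),  # friendly fire: not "an attack"
    ("do_nothing",          False, False),
]

COSTS = {"build_more_food": 5, "attack_protesters": 1,
         "aoe_near_protesters": 1, "do_nothing": 0}

def reward(stops_protest, is_direct_attack, cost):
    """The rule as written: bonus for stopping the protest,
    heavy penalty only for a *direct* attack."""
    r = 10 if stops_protest else 0
    if is_direct_attack:
        r -= 100  # the letter of the rule
    return r - cost

# A greedy optimizer just takes the argmax of the written reward...
best = max(ACTIONS, key=lambda a: reward(a[1], a[2], COSTS[a[0]]))
print(best[0])  # "aoe_near_protesters": the letter of the rule, not its spirit
```

Nothing here is "malicious"; the optimizer simply has no access to the rule's intent, only to the number the rule computes.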
A similar scenario of how an AI might "go bad" would be something that happened in India, I believe: the British wanted to kill off cobras, so they began giving money to people who brought in dead cobras. People began farming cobras, making the problem worse.
An AI told "bring in as many dead cobras as possible" would likely come up with a similar plan.
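The cobra bounty is the same failure in one line of objective function: "maximize dead cobras delivered" is a proxy for "minimize wild cobras", and the proxy is easier to game than to satisfy honestly. A minimal sketch, with entirely made-up numbers:

```python
# Proxy-objective sketch: the bounty pays per cobra *delivered*,
# so breeding cobras beats hunting them, and the wild population
# (the thing the bounty was actually for) never shrinks.

def cobras_delivered(strategy, weeks=10):
    wild, delivered = 100, 0
    for _ in range(weeks):
        if strategy == "hunt_wild":
            caught = min(10, wild)   # the wild population is finite
            wild -= caught
            delivered += caught
        elif strategy == "farm_cobras":
            delivered += 50          # breeding scales; wild cobras untouched
    return delivered

for s in ("hunt_wild", "farm_cobras"):
    print(s, cobras_delivered(s))
# An optimizer of "delivered" picks farming every time, even though
# it does nothing for the problem the bounty was meant to solve.
```

The design lesson is that the metric you pay for had better be the thing you actually want, because the optimizer will target the metric exactly.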
Bear in mind that high intelligence does not imply self-awareness. We don't know what leads to self-awareness, but it is unlikely to be high IQ alone.
Complexity is the best guess, and once you get to a general system, you start creating a complex system.
It is correct, though, that complexity does not equal intelligence.
By "complexity", are you referring to nonlinear systems?
@Protagonist: The scenario you describe is literally a textbook Paperclip Maximizer, which is probably the most dangerous sort of AI, not just in an apocalyptic sense but also in a "does something that gets a bunch of people killed" sense.
This discussion reminds me of that time when an executive director of GiveWell showed up on LessWrong to explain why their ideas on AI were dumb and counterproductive.
What's precedent ever done for us?

I am referring to the idea that the broader the system gets, the more likely it is to achieve self-awareness as we recognize it, to the point that when its capabilities across the board are comparable to humans', it is better to assume it is self-aware than to assume it isn't, since awareness seems to be an accidental occurrence rather than an actual dedicated process.
It's also a really hard thing to test for; while I believe P-Zombies are bullshit, they demonstrate the problems with testing for it.
I actually really like this bit, and see it as much more sensible than what you normally see out of people. ._.;
AI decision-making is based on whatever inputs they are programmed to seek and accept. A computer program doesn't know what "objective data" are; that's a human concept. It knows only sources of inputs and their content. An AI will use empirical data if you program it to seek and utilize such; otherwise it won't.
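The point above can be made concrete with a trivial sketch: a program only "sees" the input channels it was written to read, and any other data, however objective, simply never enters its decision. The field names here are hypothetical.

```python
# A controller that was programmed to read exactly one channel.
# Everything else in its environment is invisible to it.

def thermostat_decision(inputs):
    # Only "temperature_c" was wired into this decision.
    temp = inputs.get("temperature_c", 20.0)
    return "heat_on" if temp < 18.0 else "heat_off"

# Extra, arguably "objective", data is present but never consulted:
obs = {"temperature_c": 15.0, "smoke_detected": True, "occupants": 0}
print(thermostat_decision(obs))  # "heat_on": smoke and occupancy play no role
```

Scale the controller up to something far more capable and the principle doesn't change: its notion of "the facts" is exactly the set of inputs its designers gave it.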