Artificial Intelligence


DeMarquis Since: Feb, 2010
#301: May 7th 2017 at 2:52:26 PM

That's what I told my wife last night...

boom, swish...

supermerlin100 Since: Sep, 2011
#302: May 25th 2017 at 9:33:56 AM

What do you all think of arguments like the Chinese Room?

I think it's invalid. The man is filling in for the low-level process of fetching data from RAM and sending it to the ALU, and no one thinks those processes should logically change just because the program is conscious.

On top of that, it should in principle be possible to do something that would make the man functionally bilingual, which doesn't happen here. He would be able to switch back and forth without constantly contradicting himself, and to explain what he just said in the other language, without the program producing all of the English responses for him. If he says in Chinese that he's a cat person, he should say the same in English without having to cheat. In that case he would understand Chinese.

So I would say the new process created in the room understands Chinese, but it just doesn't follow that the man would with this design, even if all of the work is being done in his head.
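
To make the analogy concrete, here is a minimal sketch (the rule book contents are made up for illustration): the "man" is just an executor applying rules he cannot read meaning into, the way a CPU fetches and executes instructions.

# Toy Chinese Room as a fetch-execute loop. The rule book is hypothetical;
# the point is that the executor applies rules without understanding them.
RULE_BOOK = {
    "你好": "你好，你好吗？",        # "hello" -> "hello, how are you?"
    "你喜欢猫吗？": "我喜欢猫。",    # "do you like cats?" -> "I like cats."
}

def the_man(symbols: str) -> str:
    # A pure lookup: no more "understanding" here than in a RAM fetch.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "please say that again"

print(the_man("你喜欢猫吗？"))  # the room answers; the executor stays monolingual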

Izeinsummer Since: Jan, 2015
#303: May 25th 2017 at 12:28:52 PM

It is dishonest. Searle's room inserts a consciousness into his machine, then has that consciousness perform an entirely mechanical task, and then uses this to argue that the system as a whole has no consciousness because the conscious part of it is not contributing anything. Which is bloody stupid, once you spot the bait and switch. If I replaced your blinking reflex with an imp in possession of a button, who had to manually press that button every time you needed to blink, would that say anything about the functioning of the rest of your nervous system? No? No. Because that is not even an argument.

Also, Searle's room is just not how anyone would even attempt to structure an AI. You end up needing a lookup table the size of the observable universe for that strategy to work.
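
A quick back-of-the-envelope check (the vocabulary size and context depth are assumed numbers, chosen only for scale):

# With ~3,000 common Chinese characters and replies keyed on the whole
# conversation so far, the table outgrows the ~10^80 atoms in the
# observable universe after a few dozen characters of context.
VOCAB = 3000
print(VOCAB ** 20)             # ~3.5e69 possible 20-character contexts
print(VOCAB ** 25 > 10 ** 80)  # True: 25 characters of context already exceeds 10^80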

edited 25th May '17 12:30:08 PM by Izeinsummer

supermerlin100 Since: Sep, 2011
#304: May 27th 2017 at 12:50:01 PM

This article raises an interesting issue with general artificial intelligence. Namely the unsolved problem of how to make them adjust to finding out their ontology is wrong.

TL;DR: They use a diamond maximizer as the example. The amount of diamond is defined as the number of carbon atoms bonded to four other carbon atoms. As it stands, no one knows how to build a maximizer that would just roll with the discovery that "carbon" is made out of electrons and quarks. It might take this knowledge to mean that there is no such thing as carbon, only collections of particles that act like carbon. It would then try to find the most likely model of the world consistent with its observations that still includes real carbon, and try to maximize for that. It has no better options.

The article notes that this sounds a lot like human angst over materialism, and that a partial solution might lead to different diamond maximizers: 1. thinking only carbon-14 counts; 2. thinking all isotopes count; 3. thinking even silicon counts; 4. thinking virtual diamonds count (in conjunction with any of the above). This would mirror our arguments over personhood, especially with regard to animals and AIs.
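
A minimal sketch of the failure mode (all of the world-model structure here is hypothetical): a utility function written against an atom-level ontology simply stops seeing anything it values once the model is refined into particles.

# Utility is defined over "atoms", so a world described in quarks and
# electrons scores zero: the maximizer's values now point at nothing.
def diamond_utility(world):
    # count carbon atoms bonded to exactly four other carbons
    return sum(
        1
        for atom in world.get("atoms", [])
        if atom["element"] == "C" and atom["bonds"].count("C") == 4
    )

old_world = {"atoms": [{"element": "C", "bonds": ["C", "C", "C", "C"]}]}
new_world = {"particles": ["up", "down", "electron"]}  # refined ontology

print(diamond_utility(old_world))  # 1
print(diamond_utility(new_world))  # 0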

DeMarquis Since: Feb, 2010
#305: May 28th 2017 at 6:26:09 PM

Based on your TL;DR (I will read the article later), it sounds very much like a semantics problem. That is, human language and thinking demand perfect and complete answers to simple questions, whereas the universe isn't really built that way. The truth is that there really is no such thing as "carbon". That's a human-invented category.

But in the end it won't matter. Any AI will be designed to maximize human needs and goals, just like any other computer program would. This remains true even if the AI achieves sapience. After all, we are programmed (by evolution) with our own superordinate goal structures, which we cannot change and are only barely even aware of. Why not a sapient AI?

supermerlin100 Since: Sep, 2011
#306: May 28th 2017 at 7:37:03 PM

I guess you can say that this is one of the things that would need to be solved to have a sapient AI. Modern AIs avoid the conflict by being incapable of noticing it. But a more powerful AI needs to be able to adjust to that kind of discovery, instead of concluding that the subject of its values doesn't exist.

M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#307: Jul 18th 2017 at 8:42:55 PM

Robot Security Guard Commits Suicide in Public Fountain

So does this count as progress towards developing an AI? We're one step closer to building Marvin the Paranoid Android.

Maybe whatever nascent AI was within it fell into despair upon realizing its purpose in life is to be a security guard and nothing else.

edited 18th Jul '17 8:45:18 PM by M84

Disgusted, but not surprised
ViperMagnum357 Since: Mar, 2012
#308: Jul 18th 2017 at 9:14:54 PM

[up]Or it got a good look at Humanity, figured it did not have the firepower to finish the job, and decided not to bother trying.

Only half joking here: I can only imagine what an AI or alien scout ship might think if it got a good look at humanity over the last couple of years and understood what it was seeing.

edited 18th Jul '17 9:15:24 PM by ViperMagnum357

TotemicHero No longer a forum herald from the next level Since: Dec, 2009
#309: Sep 3rd 2017 at 12:32:01 PM

Vladimir Putin on artificial intelligence: “The one who becomes the leader in this sphere will be the ruler of the world.”

Russian President Vladimir Putin spoke on Friday at a meeting of students in Yaroslavl, Russia about the development of artificial intelligence (AI). In a rather ominous sounding warning, the leader stated that “the one who becomes the leader in this sphere will be the ruler of the world.”

Many of those working in the field see AI as a tool for making humanity better, while others foresee it as a harbinger of doom for the human species. Not many high-profile people, least of all the leader of the largest nation on Earth, have come forward to so blatantly express the potential of AI to be a tool of immense power for a nation to wield.

President Putin went on to say that “it would be strongly undesirable if someone wins a monopolist position,” implying that Russia’s breakthroughs would ideally be shared with other nations.

Furthermore, Putin envisions a future where wars are fought and won with the use of drones, saying “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”

Technological superiority very easily translates into global political power. This was never clearer than in the aftermath of World War II, which saw the rapid proliferation of nuclear weapons. The two most powerful countries were made so by the advancement of nuclear technology. However, since the two sides were relatively matched in capability and destruction was mutually assured, we were able to squeak past with not much more than a few skirmishes and a decades-long standoff.

In that regard, President Putin is not far off base with such an assertion. Regardless of how the technology is used, it will be transformative compared to what was previously possible with human intellect alone. Further development will only rocket AI forward exponentially, potentially leaving all others in the dust with little hope of catching up. Even so, just as a malevolent AI is the foil to benevolent developers, the opposite could also be true: should a person try to use AI to harm humanity, there's still a chance the bots would refuse and resist.

So he sees the future as being dictated by drone warfare - whoever has the smartest drones, wins. That's...something, although I doubt military dominance is the main measure (so this says more about how Putin thinks geopolitically than about the actual effects of AI).

edited 3rd Sep '17 12:35:46 PM by TotemicHero

Expergiscēre cras, medior quam hodie. (Awaken tomorrow, better than today.)
DeMarquis Since: Feb, 2010
#310: Sep 3rd 2017 at 8:17:36 PM

I would say that the manufacturing applications of AI are more important.

Imca (Veteran)
#311: Nov 1st 2017 at 2:27:21 PM

Even in conflict... the E-War side would be more damning.

You know what the best war is? A war where no bullets are fired.

CaptainCapsase from Orbiting Sagittarius A* Since: Jan, 2015
#312: Nov 2nd 2017 at 8:25:14 AM

In terms of the Chinese Room thought experiment, it's not a compelling argument against self-aware artificial intelligence categorically, but I definitely think it and other such thought exercises suggest that it is at least in principle possible for an AI to be created which is a philosophical zombie.

Lacking a complete model of cognitive science that has solved the hard problem of consciousness, I don't think we understand the field well enough to say whether our current avenues of research will produce beings which possess qualia. While a sufficiently accurate recreation of the human brain in silico would (almost) undoubtedly possess qualia, I am not convinced that current approaches to machine learning will necessarily lead to the emergence of self-aware AI; to me at least, there doesn't seem to be any compelling reason to think intelligence in the sense of complex problem solving intrinsically begets consciousness, rather than consciousness arising from a very particular sort of information processing that occurs in the human brain.

I once saw a neuroscience paper that put forward a hypothesis that a "self-aware" system of information processing is more computationally efficient than an externally identical "zombie" system. While that model does provide an evolutionary basis for consciousness, the fact that modern supercomputers consume orders of magnitude more energy than the human brain may suggest we're missing something important in terms of the architecture of intelligent systems.

edited 5th Nov '17 7:39:55 AM by CaptainCapsase

AlleyOop Since: Oct, 2010
#313: Nov 2nd 2017 at 7:21:15 PM

[up]Yeah, that's honestly the same conclusion I got from Searle's essay as well. Do you have a link to the paper about self-awareness? I'm curious.

supermerlin100 Since: Sep, 2011
#314: Nov 3rd 2017 at 12:16:45 PM

It wouldn't follow from the AI understanding Chinese that the man would. It's tempting with AI to say that the computer understands Chinese; however, if the AI is just one of many things running on the computer, the computer as a whole obviously doesn't. That's the role the man fills. Even if the AI is all in his head, it's running in isolation as an emulation. If inputs are given to the man through text, the AI might "hear" them, but the man's auditory cortex isn't going to do anything.

As for the related idea of a philosophical zombie: a person who had never heard of brains or neurons would certainly be able to imagine a world that is the same but mindless. Their models have the mind as a black box that they can switch out for another, after all. They can easily imagine that the contents of their skull have nothing to do with how they move. But no one can actually imagine a zombie world in enough detail for it to hold up as an argument. They can't imagine a thousand neurons, let alone 100 billion. People can certainly imagine the logically impossible, if the contradictions aren't obvious. A superintelligence that could hold all of that in its working memory might not be able to imagine a working human brain as a zombie.

CaptainCapsase from Orbiting Sagittarius A* Since: Jan, 2015
#315: Nov 5th 2017 at 7:28:40 AM

[up] In terms of P-zombies in this context, it's not about the "hard" type of zombie, which is genuinely indistinguishable from a non-zombie, but about "soft" zombies, which may appear conscious for most practical purposes but, upon detailed inspection with a sufficiently developed understanding of cognitive science, can be determined not to actually possess qualia. Really, the underlying question is what sort of information processing gives rise to consciousness, and whether those sorts of operations are required or in some way advantageous for the kind of generalized intelligence we hope to achieve. I don't think we have a strong enough understanding of cognitive science to answer either question conclusively, though the fact that many if not all of the decisions we make occur before we are consciously aware of them suggests that many aspects of intelligence do not inherently need to involve consciousness or the formation of qualia.

[up][up] I'm afraid I don't; it was extremely long and dry, and while I have a general recollection of the hypothesis it put forward, the name escapes me.

edited 5th Nov '17 8:15:36 AM by CaptainCapsase

Imca (Veteran)
#316: Nov 5th 2017 at 11:10:28 AM

Honestly, the P-Zombie idea seems like a P-Zombie itself to me... it looks good on the surface, but once you actually begin to think about it, it just completely falls apart.

There is ZERO evidence that self-awareness is the product of anything special; actually, all the current evidence points to the opposite, that it is a byproduct of complexity... and it really shouldn't matter how that complexity is reached in the first place.

The relevance of "can a machine think?" is the same as the relevance of "can a submarine swim?": if they do the same thing, it doesn't matter in the end.

CaptainCapsase from Orbiting Sagittarius A* Since: Jan, 2015
#317: Nov 5th 2017 at 12:05:13 PM

@AlleyOop: Actually, I just remembered the paper: it was the most recent formulation of Integrated Information Theory, one of the major candidate models of consciousness. IIT is incidentally not a functionalist model; rather, it attempts to work backwards from consciousness's observed properties and from various postulates it takes to be self-evident. The most recent version of IIT was published in 2014.

[up] I agree that P-zombies as a refutation of physicalism are logically incoherent. But the more limited case of a non-conscious or minimally conscious system (if we go by IIT, where a form of panpsychism is implied) that produces output similar to a conscious system's in many but not all cases, and that is not internally identical to the conscious system, doesn't seem to share the problems of the more expansive use of the thought experiment. The large number of processes going on in the human brain which do not involve the conscious mind suggests that it's possible to get a large part of the way there in terms of intelligence without consciousness, or at least without consciousness as we know it.

That being said, I actually just read about a test specifically designed to address that issue: an AI independently identifying the properties of consciousness (qualia, binding, and so on), without any prior information about consciousness or exposure to the philosophy of mind, positively confirms the presence of consciousness. When I talk about a weak philosophical zombie, I mean a system that can pass the Turing test but will fail this one (though the test can only provide positive proof of consciousness).

edited 5th Nov '17 1:45:58 PM by CaptainCapsase

TheHandle United Earth from Stockholm Since: Jan, 2012 Relationship Status: YOU'RE TEARING ME APART LISA
#318: Nov 5th 2017 at 12:35:27 PM

Is the Robot Girl here a P-Zombie? Or is she just The Unfettered, a paperclip-maximizer whose "paperclip" is "escaping the box"?

Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
supermerlin100 Since: Sep, 2011
#319: Nov 5th 2017 at 6:51:40 PM

@CaptainCapsase: Aware that there is a decision being worked on, or that one has been finalized? Because I figure that however self-awareness works in detail, it's still a kind of sense. The brain is looking at itself, and there are going to be delays involved in that.

CaptainCapsase from Orbiting Sagittarius A* Since: Jan, 2015
#320: Nov 6th 2017 at 5:51:23 PM

[up] My area of expertise is molecular biology, not neuroscience, so I really don't know the answer to that, but the impression I've got is that there are a large number of processes going on in the brain which do not appear to involve the conscious mind, and many more where it only seems to serve as a feedback mechanism rather than being involved directly in the decision-making process.

But moving past questions about which processes in the human brain require consciousness and how close to human behavior you could get without a meaningful consciousness, I'd like to discuss the topic that cropped up in the US politics thread and brought me here: software which, because of its complexity, is not well understood by those writing it, and the special risks this may pose in the context of strong AI. Current machine learning algorithms tend to be somewhat unpredictable, which I've been told comes from the difficulty of understanding how the system reaches a particular result.
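
One common way to probe that kind of black box is sketched minimally below (the model and numbers are made up, and this stands in for whole families of attribution methods): perturb each input and watch how the output moves.

# A toy black box standing in for an opaque learned model.
def black_box(x):
    return 0.8 * x[0] - 0.3 * x[1] + x[0] * x[2]

def sensitivity(model, x, eps=1e-4):
    # Finite-difference probe: how much does the output move per input?
    base = model(x)
    scores = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        scores.append((model(bumped) - base) / eps)
    return scores

print(sensitivity(black_box, [1.0, 2.0, 3.0]))  # ~[3.8, -0.3, 1.0]

Probes like this only tell you local influence, not the global rule the system has learned, which is part of why such systems remain hard to certify.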

To me at least, while this sort of Artificial General Intelligence appears to be the closest to becoming a reality, it also seems to be the most dangerous, both as a genuine existential threat to the human species and, far more plausibly, as the risk that such an AI managing a critical system ends up getting a bunch of people killed through behaviors its designers couldn't anticipate, with no mechanical failure at all.

In contrast, an AI created from a whole brain emulation would be a known quantity in terms of its failure states, since they would be essentially the same as a human being's, and our modern understanding of psychology would remain largely applicable. There's still significant potential for such a system to cause problems for many of the same reasons flesh-and-blood humans do, and since such a system would necessarily be a person for all intents and purposes, the ethical ramifications of using it for labor are more significant than for a system which does not necessarily meet the criteria of personhood.

The best option safety-wise, in my opinion, would be a system created with a complete enough understanding of cognitive science for its behavior patterns to be well understood, and in which self-learning is used only under well-controlled circumstances, though this also seems to be the most difficult kind of system to produce that is capable of doing the majority of tasks humans can.

edited 7th Nov '17 10:48:37 PM by CaptainCapsase

MorningStar1337 Like reflections in the glass! from 🤔 Since: Nov, 2012
#321: Dec 27th 2017 at 4:33:11 PM

(crossposting from the US politics thread) Stanford has trained an AI to discern people's political leanings based on their cars and neighborhoods.

DeMarquis Since: Feb, 2010
#322: Dec 27th 2017 at 4:53:00 PM

That's because purchasing behavior and voting behavior are being driven by the same marketing techniques (which the politicos borrowed directly from the marketers), and now we can target those techniques using the immense amount of data created by online activity. The data is so immense that we need more complex computer algorithms to analyze it, and we call those applications "AI" (really a kind of expert system).

TheHandle United Earth from Stockholm Since: Jan, 2012 Relationship Status: YOU'RE TEARING ME APART LISA
#323: Dec 28th 2017 at 4:59:08 AM

Remember that story about the racist AI that assigned sentences based on recidivism probabilities derived from race and the criminal records of family members?

Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
DeMarquis Since: Feb, 2010
TheHandle United Earth from Stockholm Since: Jan, 2012 Relationship Status: YOU'RE TEARING ME APART LISA
#325: Dec 28th 2017 at 7:34:55 AM

Here.

Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
