Artificial Intelligence


Euodiachloris Since: Oct, 2010
#176: Mar 23rd 2017 at 5:37:34 PM

[up]Humans have a habit of underestimating other creatures and ourselves. That's always something to keep in mind. smile

That said, at no point have I suggested that any given woodlouse can do wonders with its tiny me-ness. Hamlet trying to thrash out his existential crisis it ain't ever going to manage. But, deciding that demonstrations of personality-driven behavioural preferences and understanding threat-to-self sufficiently well to demonstrate stress mean "just automatic behaviours that indicate nothing"? Nah. Not buying.

Just because insects seem to not notice their heads getting chopped off doesn't mean much: it's not like their brains are the only nodes in their systems capable of triggering all-out bliss-endocrine-bombard mode so they can die for the sake of e.g. feeding the next generation without screwing up. Heck, our brains can do us a bliss-mode to help us either fight through the nasty and survive to crumple when safer... or just die blissed/shocked. tongue

Heck, distributed nervous systems aren't just for invertebrates, either: so squid, mantis shrimp, et al aren't the exceptions they seem. Chickens can survive for weeks without most of their brains, if you stick a feeding tube in there: reptiles, amphibians and archosaur-descendants — doing the loose-knit ganglia thing, yo! Now, think parrots and crows. Sure, big brains in interesting craniums... But, I'm willing to put money down on a fair amount of executive function outsourcing to parts unknown going on, too. wink

And, I mentioned hives. Once we work out how to, you know, mirror test a hive, we might get a surprise. But, they perceive things so damned differently, I wouldn't know where to begin to create a credible "reflection" in sound/smell/pressure/IR...

And, it all starts with the basics of the basics: me and not-me. Just a mere flicker of it: not enough for much, but there. Even bacteria and plants can do it on some level.

But, get complex (not big, just complex) enough... And, that flicker becomes pulling faces in mirrors, if you have enough facial muscles to pull and the visual capacity to process what a reflection is.

We're really bad at thinking about and recognising really weird ways of doing this whole thinking thing, let alone recognising awareness. :/

edited 24th Mar '17 1:54:16 AM by Euodiachloris

DeMarquis (4 Score & 7 Years Ago)
#177: Mar 23rd 2017 at 7:26:08 PM

Obviously, we need to define "self-awareness." Most people use the term to indicate more than awareness of oneself as a physical object in relation to other objects. Even the mirror-recognition test merely shows that an animal can recognize itself as a physical form. Self-awareness more usually refers to a concept of the self as a thinking, experiencing being: an "awareness of one's awareness," as it were. The term for this is meta-cognition, although I would go a little further than the author of that article: I think self-awareness is simply the internal experience of knowing that one is experiencing and thinking. In other words, self-awareness is a quale. More specifically, self-awareness is an internally experienced representation of one's own cognitive structures (see here).

Obviously, a certain level of neural complexity is required to sustain this, but neural complexity isn't the same thing as "intelligence" (whatever we think intelligence is). Intelligence and self-awareness are both outputs of high neural complexity.

Among animals, the only species that might possess neural complexity equal to or greater than ours are the dolphins. In terms of human cognitive development, self-awareness seems to grow incrementally over time. Self-awareness as I just described it seems incomplete before the age of 8 or so; its completion takes the form of being able to understand that other people have a separate set of internal experiences, different from one's own.

It's a fascinating and complex topic.

edited 23rd Mar '17 7:27:28 PM by DeMarquis

I'm done trying to sound smart. "Clear" is the new smart.
supermerlin100 Since: Sep, 2011
#178: Mar 23rd 2017 at 8:04:17 PM

Euodiachloris, I agree with DeMarquis that you are using the phrase way more broadly than usual. I don't know offhand how many neurons a woodlouse has, but I would think it would be easier to make something at that scale that pretty much just does things automatically.

Euodiachloris Since: Oct, 2010
#179: Mar 24th 2017 at 1:38:04 AM

[up]Well, I'm basically saying what he did: it's a spectrum.

On the one end, basic chemical differentiation between self and others. On the other, really complex chemical differentiation between self and others, with metacognition resulting from it on top. They're not actually separate things, because thought is... ultimately chemicals: neurochemistry, which includes a lot of hormonal interaction.

And, AI have something similar going on. Each device that connects to the internet? Has protocols differentiating self from other devices. And, really huge networks have a very complex web of interconnecting protocols. Self-aware? On the basic level, they already are: and, have been for years. On the highly complex, metacognitive one? Dunno. Hive? Probably. Hive-metacognition? Dunno.
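That protocol-level "me/not-me" can be sketched very simply. This is a toy illustration only, not how any particular network stack implements it; the addresses and field names are made up for the example:

```python
# Toy sketch of protocol-level "self vs not-self": a device deciding whether
# an incoming frame is addressed to itself. Addresses are hypothetical.

MY_MAC = "aa:bb:cc:dd:ee:01"          # this device's (made-up) hardware address
BROADCAST = "ff:ff:ff:ff:ff:ff"       # "everyone", which includes "me"

def is_for_me(frame: dict) -> bool:
    """Accept frames addressed to this device or to broadcast; ignore the rest."""
    dest = frame["dest_mac"]
    return dest == MY_MAC or dest == BROADCAST

frames = [
    {"dest_mac": MY_MAC, "payload": "hello"},            # addressed to "me"
    {"dest_mac": "aa:bb:cc:dd:ee:02", "payload": "x"},   # "not-me": dropped
    {"dest_mac": BROADCAST, "payload": "anyone there?"}, # broadcast: kept
]
accepted = [f["payload"] for f in frames if is_for_me(f)]
```

The point is only that even the dumbest networked device already embodies a constant self/other distinction, however mechanical.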

Emergence: not clear-cut. And, the basic building blocks generally don't look all that impressive in isolation. In short: awareness starts from something very basic, but where we get "sufficiently complex for introspection" — depends on how you wish to define introspection. Or "sufficiently complex", for that matter. But, it doesn't stop some organism defined as "too simplistic" quietly having thoughts about itself you don't recognise because it can't or won't share them with another species.

edited 24th Mar '17 1:58:58 AM by Euodiachloris

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#180: Mar 24th 2017 at 5:42:31 AM

Intelligence and self-awareness are both outputs of high neural complexity.
This forms the basis of my position; without knowing how either intelligence in general works, or how self-awareness in general works, I'm frightened that the process of making more and more complex "neural" programs will result in a self-aware machine with goals orthogonal to shared human values.

That's why I disagreed that "any AI worth its salt would be designed with an emotion emulation program inside." I think the emotion emulation program, among other things, needs to be created before the machine is "sufficiently complex", since that's just about the only thing we can agree causes human-level reasoning ability.

Link to TRS threads in project mode here.
Euodiachloris Since: Oct, 2010
#181: Mar 24th 2017 at 6:09:09 AM

[up]Most living creatures don't walk around with an agenda to kill everything else. Even we don't (even though we're generally quite accidentally rather good at it).

Why would AI be any different? And, if they are, so what? New species arise all the time. And, even the deadly ones have limits.

It's no different just because they don't have cell membranes.

edited 24th Mar '17 6:11:07 AM by Euodiachloris

supermerlin100 Since: Sep, 2011
#182: Mar 24th 2017 at 7:01:33 AM

Accidentally good at, or just doesn't care that much, are bad enough. Humans are only sort of inclined to keep species like chimps around, and have committed genocide against other humans over perceived superiority. Now you would be introducing something way smarter than us, with increasingly high control over what was our infrastructure, and likely way more goal-oriented.

The usual concern is more that we won't be considered worth keeping, not that the AI will actually hate us. And of course even if it does keep humans around those fundamental differences, might make that stay less than pleasant.

For instance, it might be that, for the question the AI is asking, a 1984 or Brave New World scenario is the correct answer, even if those scenarios are highly unethical.

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#183: Mar 24th 2017 at 9:13:01 AM

Most living creatures don't walk around with an agenda to kill everything else.

Why would AI be any different?

[E]ven the deadly ones have limits.

It's no different just because they don't have cell membranes.

Most living creatures do actually run around with "eliminate competition and promote personal survival" as their primary agenda, which is effectively no different from "kill everything Other". The typical creature, frankly speaking, isn't Friendly. They are, however, generally limited.

I think AI will have unbounded utility functions, while the bodies of living creatures have (mostly) bounded ones. Only a certain amount of food can increase productivity; everything past that goes to waste. Only a certain threshold of water is used; everything else goes to waste. This isn't entirely true: experiments with mice have shown that direct stimulation of sexual reward will let a mouse starve itself to death, so that desire is effectively unbounded, but it normally takes much more effort to satisfy than eating, so in daily life it behaves like a bounded utility. Machines aren't generally programmed with bounded utilities, which makes a "utility maximizer" extremely deadly.
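The bounded/unbounded contrast can be made concrete with a toy sketch. The functions and the saturation threshold below are invented for illustration; nothing here claims to model a real agent:

```python
# Toy contrast between a bounded (saturating) and an unbounded utility function.
# The threshold value of 10.0 is an arbitrary illustrative choice.

def bounded_utility(food: float, threshold: float = 10.0) -> float:
    """Saturating: food past the threshold adds no further utility."""
    return min(food, threshold)

def unbounded_utility(paperclips: float) -> float:
    """Strictly increasing forever: more is always better."""
    return paperclips

# A bounded agent gains nothing by consuming past saturation...
assert bounded_utility(10) == bounded_utility(1_000)
# ...while an unbounded maximizer always prefers "more", with no stopping point.
assert unbounded_utility(1_000) > unbounded_utility(10)
```

A maximizer of the second kind never reaches a state where acquiring more stops being an improvement, which is the whole worry about the paperclip scenario.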

Sure, but we're talking about machines whose limits are unknown. Not just deadly; it isn't knowable how deadly they are until it becomes too late. It doesn't matter if their limit is just 10x normal human ability when it greatly exceeds human ability.

I assume the difference is in their limitations, not their desires. The typical "paperclip maximizer" as a thought experiment is not trying to kill humans, it just found a new source of iron and has proceeded to extract it.

Euodiachloris Since: Oct, 2010
#184: Mar 24th 2017 at 9:17:28 AM

[up]Or, none of the above. We can't get a good handle on hive psychology, and we've been using bees for thousands of years.

We're probably going to be way off beam about AI network psychology.

This whole "robots will kill us all *panic-panic-panic*" thing is a whole lot less pressing than anthropogenic climate change doing the current biosphere in even without AI in the mix.

Burying ourselves one way or the other? Big deal. Earth has seen it all before.

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#185: Mar 24th 2017 at 10:51:29 AM

Humanity wiping itself out before creating AGI is definitely on the table, but while we can generate anthropogenic climate-change models (and they get more detailed as we learn more about climate science), we cannot accurately gauge our progress toward creating AGI. The timescale between "we've figured out how to make a working AGI with self-improvement capabilities" and "humans have created Multivac" is essentially nil. We cannot meaningfully predict what a computer can do once an intelligence explosion has occurred. We might be only a year or two away from it, or we may be millennia away.

But when I mention that AGI can be a threat, you counter by saying "Most living creatures don't walk around with an agenda to kill everything else", and then your very next post points out how humanity has very nearly wiped out the current biosphere. That's proof that humans are not Friendly. Without the proper safeguards, the very first AGI will be a threat greater than anthropogenic climate change. I currently predict we will create AGI before the required safeguards, because I believe we are way off-base about AI network psychology.

supermerlin100 Since: Sep, 2011
#186: Mar 24th 2017 at 11:58:34 AM

We can deal with both of those problems; there are people better suited to one or the other. Being more careful about having all of the potential problems with AI worked out ahead of time, instead of waiting till it's at our doorstep like we've been doing with climate change, doesn't even mean slowing down on dealing with that other problem.

"The Earth will keep on turning" And then there's this fucking meme. Yeah if humanity went extinct that would not be the end of all value. Other animals count for something. But it would be a huge freaking loss. Elephants and chimps might be people morally speaking. And their species might have value, separate from just the individuals. Meaning that they shouldn't be used as a pure means, but they're not in a position to do half of the things we've done. Elephants are never going to raise their species let alone any others quality of life to what we've manage. Humans are irresponsible with the climate but we care more about that stuff than anything else on Earth does.

Norman Borlaug saved a billion people from starvation. There are also people like Martin Luther King Jr. And that's not getting into entire facets of culture that are either unique to us or at least much more diverse in us: comedy, social commentary, science, literature, philosophy and art.

Euodiachloris Since: Oct, 2010
#187: Mar 24th 2017 at 1:25:07 PM

[up]We think we care more than anybody else on Earth, mainly because we mostly communicate with other humans: we've only just found out about the various squid, elephant and dolphin however-complex-or-simple languages. And we can't go back in time to find out whether, over several hundred million years, some species of saurian did some top-notch thinking and talking, even if they never got beyond wood-and-bone for tools (if they even bothered) for whatever reason, before they went extinct like we almost did several times before we really got this pottery idea going.

There's a huge amount we don't know about both current and past life on Earth. We're not the most especially special thing to have ever evolved: we're one species among millions that have been and gone. And, no: none of them deliberately had "drive other species extinct" as an agenda, although many did exactly that before they, too, got outdated and couldn't adapt.

If we leave AI behind to keep chugging, even if they accidentally extinct us? Yay: we left something what can think! Bonus!

DeMarquis (4 Score & 7 Years Ago)
#188: Mar 24th 2017 at 1:56:16 PM

I never understood how one could go about creating a sapient AI "by accident". That's like saying we might have created Deep Blue by accident, or that we were just tinkering with levers and gasoline and suddenly had an automobile. It vastly underestimates the complexity involved. If we ever get an AGI, it will be the result of improving previous, less complex operating systems. Every computational device we have ever designed had some set of goals built into it. Those goals don't go away just because the thing becomes self-aware. Superordinate goals are programmed right into the core program: there is nothing to design without them. "Unbounded utility functions" make no sense with respect to a designed intelligence.

Euodiachloris Since: Oct, 2010
#189: Mar 24th 2017 at 2:11:07 PM

[up]Yup. And, I don't think we remembered to program "kill all humans" into the first modems. [lol]

M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#190: Mar 24th 2017 at 2:13:30 PM

Reminds me of this one SMBC strip where a sapient robot is horrified that its creators just assumed that it would want to Kill All Humans...and decides that it and its fellow robots have to Kill All Humans because humans are fucking paranoid nutjobs.

Disgusted, but not surprised
crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#191: Mar 24th 2017 at 5:50:12 PM

we were just tinkering with levers and gasoline and suddenly had an automobile
More like "while we were tinkering with self-driving cars, we end up with K.I.T.T." or "while improving the personalized Search and Advertising algorithms of Google, we invent Multivac". It is absolutely something the designer of a driverless car or search engine would be happy with, building complexity on top of complexity.
Every computational device we have ever designed had some set of goals built into it.
And? That's rather the point; whatever AGI we design will have a set of goals that it wants to complete, whether those goals are "navigate roads" or "answer everyone's questions". Unintended effects of those goals are what I'm concerned about.
Reminds me of this one SMBC strip...
This one? smile
"Unbounded utility functions" make no sense with respect to a designed intelligence.
Would you like me to rephrase, or are you saying that every designed program has an internal "stop button"?

supermerlin100 Since: Sep, 2011
#192: Mar 24th 2017 at 6:34:38 PM

That's a lot of baseless speculation there. Honestly, I'm surprised you didn't bring up aliens. Though they'd have a similar problem to the AI.

Humans have done that a lot. And we've killed far more species out of sheer indifference.

"If we leave AI behind to keep chugging, even if they accidentally extinct us? Yay: we left something what can think! Bonus! "

This strongly suggests that we got their values wrong. Seriously, making sure this kind of thing doesn't happen should be high on any priority list.

Aszur A nice butterfly from Pagliacci's Since: Apr, 2014 Relationship Status: Don't hug me; I'm scared
#193: Mar 24th 2017 at 7:08:23 PM

Ehhh, I think the closest we will get to AI is just computers that are pretty good at predicting, because of big-data interpretation and the capacity to gather immense amounts of info and use it to see patterns no human could possibly hope to see with the certainty of a computer.

If from there we go to "What of ethics? Would it kill all humans?", the answer would be "it depends on the ethics of the computer: whether it learned them, or whether they were pre-programmed", cuz there is not really a mathematical answer to "is it ethical to kill".

It has always been the prerogative of children and half-wits to point out that the emperor has no clothes
supermerlin100 Since: Sep, 2011
#194: Mar 24th 2017 at 8:20:36 PM

I'm pretty sure it would be a stretch to describe some of the AIs we already have that way.

The problem is how do you program this stuff that we barely understand.

M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#195: Mar 24th 2017 at 10:40:19 PM

[up]x4

No, this one. tongue

Aszur A nice butterfly from Pagliacci's Since: Apr, 2014 Relationship Status: Don't hug me; I'm scared
#196: Mar 25th 2017 at 12:11:53 PM

We know how they work. It's simply that they do something no human ever could: process ungodly amounts of information at the same time, which gives them very accurate predictive capabilities in general, though they can also make some pretty glaring errors.

supermerlin100 Since: Sep, 2011
#197: Mar 25th 2017 at 4:24:16 PM

I was referring to ethics. But as far as computers go, that's only slightly better than saying "something something electrons". Especially since there are known problems whose brute-force cost explodes with each added case/variable. So while being able to do more elementary calculations per second helps, not needing as many is far better.

edited 25th Mar '17 4:27:14 PM by supermerlin100

Aszur A nice butterfly from Pagliacci's Since: Apr, 2014 Relationship Status: Don't hug me; I'm scared
#198: Mar 25th 2017 at 4:31:49 PM

That sort of thing would still be programmed. Parameters like those could be left to the computer to determine (à la "in order to maximize happiness, sacrificing some is better"), which is what they would most likely do if they were left to interpret the data by themselves and enforce new parameters based on that, as opposed to the Asimovian rules, where they'd have a definite lock there: no human death at all, period.

One way or another, the computer soon to be acting as a sort of AI would have a predictable set of parameters for ethical behavior that way.

Euodiachloris Since: Oct, 2010
#199: Mar 25th 2017 at 6:32:10 PM

I'm also of the opinion that, no matter how functionally intelligent devices and networks get, they're most likely going to stick with the basic "me-not me" of handshaking protocols as far as awareness goes for a very long time, even by computer standards.

An extreme version of asymmetric intelligence, in a way. They might develop high tier awareness, but their sense of self is unlikely to shake out to be anything close to ours, if they manage it. And, very likely not dangerous (except by accident).

Aszur A nice butterfly from Pagliacci's Since: Apr, 2014 Relationship Status: Don't hug me; I'm scared
#200: Mar 25th 2017 at 6:49:11 PM

Calling it now: if AI is invented, it will come accidentally, from people trying to make sex robots.

