Artificial Intelligence

Aszur A nice butterfly from Pagliacci's Since: Apr, 2014 Relationship Status: Don't hug me; I'm scared
#276: Apr 16th 2017 at 8:21:13 PM

Yeah, but the fact remains that a computer that understands reality in nothing but numbers, and has a way to perceive and affect that reality, will have so much power that even its most seemingly inane and harmless intents have the capacity to deal enormous damage.

It has always been the prerogative of children and half-wits to point out that the emperor has no clothes
DeMarquis Since: Feb, 2010
#277: Apr 17th 2017 at 7:16:04 PM

Well, everyone seems to know that by now, so hopefully they are designing safeguards as well. Like not giving the mindless expert system your credit card information. Or not just hooking it up to the internet and walking away.

M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#278: Apr 17th 2017 at 7:30:04 PM

Hopefully the first fully realized AI will have as little actual power as possible. Maybe put it in a robot whose only purpose is to pass butter or something.

Would it be too cruel to use that as a test to gauge how an AI would react to knowing its purpose for existence?

Disgusted, but not surprised
DeMarquis Since: Feb, 2010
#279: Apr 17th 2017 at 7:42:32 PM

I tried to calculate what I think the actual odds are of humanity building a dangerous, out-of-control expert system (ES), and it went something like this:

Odds that we can build a GAI (general AI) at all, a self-learning machine whose capacities are not limited to a specific problem domain: 50%

Odds that such a machine would have access to enough information for it to develop a highly accurate and comprehensive map of human society and how it works: knock 20% off.

Odds that such a machine would be given access to sufficient resources that it could implement some sort of large-scale project (i.e., unlimited credit, or unsupervised control of a factory complex): knock another 10% off.

Odds that it will possess sufficient computational capacity that it can comprehensively examine the total possible solution space and identify an optimal solution that humans won't think of: knock another 10% off.

Odds that no one will notice in time to pull the plug on it: knock 5% off.

5%. Those are the odds, I think, that this scenario will actually happen. Mind you, that's not acceptable (imagine accepting a drug that had a 5% failure rate), but it's manageable. As we gradually approach a level of understanding at which designing a GAI seems feasible, we should also be able to understand how to control the thing.
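Run as arithmetic, the chain looks like this (a minimal sketch in Python; the step sizes are just the subjective guesses above, not data):

```python
# Back-of-the-envelope chain: start at 50% and knock absolute
# percentage points off at each step. All numbers are subjective
# guesses from the post, not measurements.
p = 50
for label, knock in [
    ("accurate map of human society", 20),
    ("access to large-scale resources", 10),
    ("can search the full solution space", 10),
    ("nobody pulls the plug in time", 5),
]:
    p -= knock
    print(f"after '{label}': {p}%")
# -> 30%, 20%, 10%, 5%
```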

@M84: That's not how computers work. By default, any machine must "understand" its purpose in the sense that it has an ultimate goal it is designed to achieve. The purpose of a tool is usually obvious to the user; why not to the tool?

edited 17th Apr '17 7:45:20 PM by DeMarquis

M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#280: Apr 17th 2017 at 7:51:05 PM

[up] In the above case it's less the robot being programmed to do something and more "a truly sapient robot is built to do something by a brilliant and amoral Mad Scientist, said scientist asks it to do it, and only after the robot asks him what its purpose is does he confirm that it's not designed or meant to do anything else." Upon which the robot goes into existential despair and (later in the episode) resentment of its creator.

Disgusted, but not surprised
DeMarquis Since: Feb, 2010
#281: Apr 17th 2017 at 8:14:36 PM

I put the probability of that happening at basically 0.

M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#282: Apr 17th 2017 at 8:26:20 PM

[up] Probably for the best. It'd be too cruel.

Disgusted, but not surprised
supermerlin100 Since: Sep, 2011
#283: Apr 18th 2017 at 5:59:46 AM

That calculation is based on one very specific failure mode.

I would generally assume that the first smarter-than-human AIs would only be a little smarter, and that the problems with them wouldn't be instantly obvious.

The problems would mostly come up as they start to have a large-scale effect, through both further improvements and them becoming a larger part of the infrastructure.

Izeinsummer Since: Jan, 2015
#284: Apr 18th 2017 at 9:03:09 AM

[up] [up] [up] [up] [up] This is the fallacy of impossibility by decomposition. It's possible to break pretty much any problem up into a large number of steps. This can be used to argue that anything at all is impossible by assigning a chance of failure to each step and then compounding.

Problem is, decomposing a problem into a large number of steps is an extremely potent engineering approach for solving problems. If we can build a general AI, then we will build many of them, because they will be valuable - so this increases the odds of the next step happening, because you are not rolling the dice on it once, but ten thousand times. And so on. This does not mean "Hostile AI" is inevitable, it just means the logic used here does not hold water. I figure the relevant question is whether we get a reliable way to design an ethical mind before we find any way at all of designing one which is smarter than the smartest humans. And I have no way to assess the relative difficulty of those two things, so I can't give you odds.
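To put toy numbers on the dice-rolling point: if each independent deployment has some small chance of going wrong, the chance that at least one does compounds quickly with the number of deployments (a sketch; the 0.05% per-deployment figure is invented purely for illustration):

```python
# Chance of at least one failure across n independent deployments,
# given an assumed per-deployment failure chance q.
q = 0.0005  # illustrative assumption, not an estimate
for n in (1, 100, 10_000):
    at_least_one = 1 - (1 - q) ** n
    print(f"{n:>6} deployments -> {at_least_one:.2%} chance of at least one failure")
# -> 0.05%, ~4.88%, ~99.33%
```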

Optimistic view: the universe looks a lot older than it should if "all-devouring AI" were a thing that could happen. So the upper bound on how badly this can be screwed up is likely just Earth dying.

edited 18th Apr '17 9:04:12 AM by Izeinsummer

supermerlin100 Since: Sep, 2011
#285: Apr 18th 2017 at 9:45:16 AM

To be fair, "where are the utility-maximizing AIs?" is a question related to "where are the aliens?". The latter might be the answer to the former. Aliens capable of doing something noticeable from here, including creating AI, might just be really rare.

Izeinsummer Since: Jan, 2015
#286: Apr 18th 2017 at 9:56:20 AM

Nope - some of the AI failure scenarios would be visible at really absurd ranges, beyond the local galactic cluster. And there is no way life is sufficiently rare for that to be the answer on that scale. So it has to be some aspect of reality which stops things like "Paperclipping" from happening.

I have a very strong suspicion that the lightspeed limit causes every advanced agent, artificial or biological, to centralize at least somewhat to avoid crippling lag in communication and coordination. There is just no payoff in galactic conquest, because of the inevitable distance between nodes of activity, as measured in time.

edited 18th Apr '17 9:59:45 AM by Izeinsummer

supermerlin100 Since: Sep, 2011
#287: Apr 18th 2017 at 1:09:42 PM

At least for utility maximizers that are linear (in terms of some resource) and unbounded, that really shouldn't stop them. Having two galaxies operating independently would be twice as good.

Izeinsummer Since: Jan, 2015
#288: Apr 18th 2017 at 1:42:56 PM

No, but the civil wars might. Uhm, that needs a lot of unpacking. Okay, so, first argument: there is no payoff for the origin node in spawning more nodes. "You", for any value of you, can't effectively collaborate with anyone that is separated from you by decades of com lag. This rules out unbounded replication for any maximizers that attempt to maximize their own utility - it's a large resource sink with no payoff.

This leaves the set of maximizers who value raw numbers - those that are slaves to some form of "REPRODUCE!" drive or otherwise buy into the repugnant conclusion. I think these get destroyed by geometry. They expand their numbers by going outward at first. But the bigger a sphere gets, the volume inside the sphere increases far, far faster than the area of the outer shell does. And the internal volume is full of beings with an irrational drive to reproduce - if they were rational about it, they would belong to the first set. This means that very, very quickly, the most effective way to reproduce becomes killing the competition so you can use the resources they control for your own children. And soon after that, everyone is just dead.
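The geometry squeeze is easy to check: the interior volume of a sphere grows as r^3 while the expansion frontier grows only as r^2, so the ratio of insiders to frontier grows linearly with radius (a quick illustrative sketch):

```python
# Volume-to-frontier ratio of a sphere: (4/3)*pi*r**3 / (4*pi*r**2) = r/3,
# so the settled interior outgrows the expanding shell linearly in r.
from math import pi

for r in (1, 10, 100, 1000):
    volume = (4 / 3) * pi * r**3
    shell_area = 4 * pi * r**2
    print(f"r={r:>4}: volume/area = {volume / shell_area:.1f}")  # equals r/3
```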

edited 18th Apr '17 1:43:25 PM by Izeinsummer

supermerlin100 Since: Sep, 2011
#289: Apr 18th 2017 at 3:04:11 PM

Going back to the paperclip maximizers: they don't care who's making the paperclips as long as it's getting done. Their goal, while not altruistic, is selfless.

More generally, aliens or AIs might want their species to spread for whatever reason, while not caring so much that they're the ones doing it.

They might want the total number of worthwhile lives to be high, or to increase cultural diversity, or just not want all of their eggs in one basket. All of which, of course, requires a huge surplus of resources.

DeMarquis Since: Feb, 2010
#290: Apr 18th 2017 at 4:21:00 PM

You'll notice that the percentage I knock off at each step keeps getting smaller - that's because I was thinking not in terms of "absolute percents" but in fractions of the percent remaining. That is, halving 10 down to 5 is a larger step than reducing 50 down to 30. This way you never get to zero, and the remaining percent is still uncomfortably high - my conclusion wasn't that it was impossible, but that it was 5%. Enough to take steps to avoid it, but not enough to panic over.
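Re-expressed as fractions of what remained, the growth in step size is visible (same numbers as the original chain, purely illustrative):

```python
# The 50 -> 30 -> 20 -> 10 -> 5 chain, read as the fraction of the
# remaining probability removed at each step.
chain = [50, 30, 20, 10, 5]
for before, after in zip(chain, chain[1:]):
    removed = (before - after) / before
    print(f"{before}% -> {after}%: removed {removed:.0%} of what remained")
# 50->30 removes 40% of what remained; 10->5 removes 50% of it.
```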

Also, I wasn't calculating the probability that no one would think of the solution to a solvable problem; I was estimating the probability that the problem is insoluble. If the computational capacity required to comprehensively solve human social problems is too high to manufacture, then it won't matter how many people try: it can't be done.

All that said, it's admittedly a highly subjective analysis. I just based those numbers on my personal impressions; there's no research backing them, really.

edited 18th Apr '17 4:27:09 PM by DeMarquis

Izeinsummer Since: Jan, 2015
#291: Apr 18th 2017 at 10:44:48 PM

The thing is that this mode of analysis is guaranteed to predict failure - because no one ever assigns any step a 100% chance of success. It fully generalizes, and always yields the same result. Thus: completely useless. Don't take my word for it; use it to figure out your probability of successfully cooking scrambled eggs for breakfast tomorrow morning.

DeMarquis Since: Feb, 2010
#292: Apr 21st 2017 at 4:29:37 PM

Probability of possessing eggs: 100%
Probability of possessing the necessary equipment: 100%
Probability of possessing sufficient knowledge and skills: 100%
Probability of wanting eggs tomorrow morning: 20%
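Multiplied through, the chain comes out at 20% rather than collapsing toward zero, since most steps really are near-certain (a trivial sketch):

```python
# Product of the step estimates for tomorrow's scrambled eggs.
steps = {
    "possess eggs": 1.00,
    "possess the equipment": 1.00,
    "possess the knowledge and skills": 1.00,
    "want eggs tomorrow morning": 0.20,
}
p = 1.0
for prob in steps.values():
    p *= prob
print(f"{p:.0%} chance of scrambled eggs")  # -> 20%
```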

Seems pretty accurate to me.

CenturyEye Tell Me, Have You Seen the Yellow Sign? from I don't know where the Yith sent me this time... Since: Jan, 2017 Relationship Status: Having tea with Cthulhu
#293: Apr 28th 2017 at 1:43:28 PM

Whoever suggested that the first strong AI would be a sexbot might have called it: The race to build the world’s first sex robot

In the brightly lit robotics workshop at Abyss Creations’ factory in San Marcos, California, a life-size humanoid was dangling from a stand, hooked between her shoulder blades. Her name was Harmony...

Harmony is a prototype, a robotic version of the company’s hyper-realistic silicone sex toy, the Real Doll. ...Her hazel eyes darted between me and her creator, Matt McMullen, as he described her accomplishments.

Harmony smiles, blinks and frowns. She can hold a conversation, tell jokes and quote Shakespeare. She’ll remember your birthday, McMullen told me, what you like to eat, and the names of your brothers and sisters. She can hold a conversation about music, movies and books. And of course, Harmony will have sex with you whenever you want.

...When computer scientists made artificial intelligence sophisticated enough that human-robot relationships looked like a real possibility, they thought they would be a force for good. In his 2007 book, Love and Sex with Robots, the British artificial intelligence engineer David Levy predicted that sex robots would have therapeutic benefits. “Many who would otherwise have become social misfits, social outcasts, or even worse, will instead be better-balanced human beings,” he wrote.

...as all right-thinking men would say, it’s Harmony’s brain that has most excited McMullen. “The AI will learn through interaction, and not just learn about you, but learn about the world in general. You can explain certain facts to her, she will remember them and they will become part of her base knowledge,” he said. Whoever owns Harmony will be able to mould her personality according to what they say to her. And Harmony will systematically try and find out as much about her owner as possible, and use those facts in conversation, “so it feels like she really cares”, as McMullen described it, even though she doesn’t care at all. Her memory, and the way she learns over time, is what McMullen hopes will make the relationship believable.

There are 20 possible components of Harmony’s personality, and owners will use an app to pick a combination of five or six that they can adjust to create the basis for the AI. You could have a Harmony that is kind, innocent, shy, insecure and helpful to different extents, or one that is intellectual, talkative, funny, jealous and happy. McMullen had turned the intellectual aspect of Harmony’s personality up to maximum for my benefit – a previous visit by a CNN crew had gone badly after he had amplified her sexual nature. (“She said some horrible things, asking the interviewer to take her in the back room. It was very inappropriate”.)

Harmony also has a mood system, which users influence indirectly: if no one interacts with her for days, she will act gloomy. Likewise, if you insult her, as McMullen demonstrated.

“You’re ugly,” he told her.

“Do you really mean that? Oh dear. Now I am depressed. Thanks a lot,” Harmony replied.

“You’re stupid,” McMullen shot back.

She paused. “I’ll remember you said that when robots take over the world.”

(This excerpt seriously undersells the writing style, as well as the article's examination of the history behind this development, so I'd recommend reading the article itself.)

Look with century eyes... With our backs to the arch And the wreck of our kind We will stare straight ahead For the rest of our lives
TheHandle United Earth from Stockholm Since: Jan, 2012 Relationship Status: YOU'RE TEARING ME APART LISA
#294: Apr 28th 2017 at 2:35:39 PM

Of course.

That's how humanity will go extinct: by getting fucked into irrelevance.

Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#295: Apr 28th 2017 at 2:38:27 PM

[up] I always figured humanity would go out with a bang, but not Out with a Bang.

Disgusted, but not surprised
DeMarquis Since: Feb, 2010
#296: May 3rd 2017 at 9:18:12 PM

"A few days before Christmas 2016, Goldsmiths, University of London hosted the Second International Congress on Love and Sex with Robots, a convention co-founded by David Levy, and named after his groundbreaking book. The 250-seat conference hall of the university’s Professor Stuart Hall building was packed."

It has finally come to this.

Aszur A nice butterfly from Pagliacci's Since: Apr, 2014 Relationship Status: Don't hug me; I'm scared
#297: May 3rd 2017 at 11:07:32 PM

CAAAAAAAAAAAAAALLED IT.

https://www.youtube.com/watch?v=uzO2mi4uHAs

It has always been the prerogative of children and half-wits to point out that the emperor has no clothes
Fourthspartan56 from Georgia, US Since: Oct, 2016 Relationship Status: THIS CONCEPT OF 'WUV' CONFUSES AND INFURIATES US!
#298: May 4th 2017 at 10:00:26 AM

Sex robots... not surprising, considering that most people love sex; it makes sense that boning would be a focus of robotic technological development.

"Sandwiches are probably easier to fix than the actual problems" -Hylarn
TheHandle United Earth from Stockholm Since: Jan, 2012 Relationship Status: YOU'RE TEARING ME APART LISA
#299: May 4th 2017 at 12:36:32 PM

Sex is actually incredibly tricky to pull off right.

Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
Fourthspartan56 from Georgia, US Since: Oct, 2016 Relationship Status: THIS CONCEPT OF 'WUV' CONFUSES AND INFURIATES US!
#300: May 4th 2017 at 4:00:39 PM

[up]No doubt.

"Sandwiches are probably easier to fix than the actual problems" -Hylarn
