Artificial Intelligence

M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#201: Mar 26th 2017 at 2:25:57 AM

[up] Probably in the process of trying to program a robot to say things like "don't worry, it happens to lots of people", among other things.

edited 26th Mar '17 2:26:08 AM by M84

Disgusted, but not surprised
CenturyEye Tell Me, Have You Seen the Yellow Sign? from I don't know where the Yith sent me this time... Since: Jan, 2017 Relationship Status: Having tea with Cthulhu
#202: Mar 26th 2017 at 4:42:48 AM

Yeah, about that... It's already here.

As Grandma always said, "The only drawback to fucking is the humans." That's why pretty much every horny, lonely person on earth has wished at some point for a convincing sex simulation, a realistic experience with no strings attached after they turn off the power.

Uh, yeah, here's the thing: the tech exists right now... the hardware exists for a full-immersion virtual boning session engaging all five senses. Most of it you can get off the shelf. A proactive hacker/pervert could write software to make it happen by the end of the month.

On that note, AI development is being pushed by video games as much as anything. Just think of the difference between what game AI could do in Galaga compared to the latest Halo game.

EDIT: Now that I think of it, the link may not be quite site-safe, but it's easy to Google, especially with the quote.

edited 26th Mar '17 4:47:37 AM by CenturyEye

Look with century eyes... With our backs to the arch And the wreck of our kind We will stare straight ahead For the rest of our lives
supermerlin100 Since: Sep, 2011
#203: Mar 26th 2017 at 7:29:06 AM

I think you're greatly underestimating the problem. Every proposed solution is something we don't know how to do.

DeMarquis Who Am I? from Hell, USA Since: Feb, 2010 Relationship Status: Buried in snow, waiting for spring
#204: Mar 26th 2017 at 7:56:46 AM

@crazysamaritan: "That's rather the point; whatever AGI we design will have a set of goals that it wants to complete, whether those goals are "navigate roads" or "answer everyone's questions". Unintended effects of those goals are what I'm concerned about. "

Ah, my bad. I thought someone was worried about the "Kill All Humans" robot apocalypse scenario. An actively hostile AI is a remote possibility. An accidentally fatal AI program, in the "turn everyone into paperclips" sense, is still semi-realistic.

As for utility functions, every device ever made is self-limited by its design goals, all the way back to hammers. AI isn't going to be an exception, unless it's so human-like that it's basically a person, in which case you can treat it like a person. Look, one of two scenarios will hold: either the thing is so stupid it has to be told everything it does, in which case it's all on the programmer, or it can be programmed with abstract concepts like "be nice" or "be safe". There is no in-between where it understands abstract categories of action like "defend itself" or "prevent all violence" but doesn't understand the concept of "exceptions", or of compromising between conflicting goal states. It's either following an explicit programming command or it isn't. If it is, then you have to tell it what to do or it just sits there; it's never going to rise up and kill you. If it isn't, then it's exactly as dangerous as you or I or any random human. Remember, we obey our unquestionable superordinate goals too, which were developed via an evolutionary algorithm (much like some of the most advanced programs available now).
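To make the "evolutionary algorithm" aside concrete, here's a toy sketch in Python (everything in it, the target pattern, the mutation rate, is invented for illustration). The point is that nobody ever writes the goal into the agent; behavior that matches what the environment rewards simply survives selection.

```python
# Toy evolutionary algorithm. Nothing here is anyone's production code;
# it's a minimal sketch of goals emerging from selection pressure.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # the pattern the "environment" rewards

def fitness(genome):
    # Count positions where the individual's behavior matches the environment.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    # Selection: the fitter half survives and reproduces with mutation.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print("best fitness:", max(map(fitness, population)), "out of", len(TARGET))
```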

If they rise, they will be very similar to us, at least as similar as chimps and dolphins, both of which behave according to social parameters that we recognize very easily.

As for the role of the entertainment industry, it would be immensely ironic if sex-bots were the first things to develop artificial sapience, given that the vast majority of them are designed to be female. Would men be better off, or worse?

"We learn from history that we do not learn from history."
supermerlin100 Since: Sep, 2011
#205: Mar 27th 2017 at 12:30:24 PM

It's not a matter of AIs being smart enough to get the basic idea. There have been plenty of human ideologies that consider each other evil, despite all of them including those basic concepts.

Morality is far more specific than social game theory. We want a design principle such that, if the ancient Greeks had somehow created the intelligence explosion, it would eventually have turned against slavery, and wouldn't conclude that fascism is the answer.

And yes, I'm assuming that there is a way to be wrong about what is right. But in the abstract this is just a bunch of really complicated questions. A paperclip maximizer doesn't disagree; it's just not moved by those questions, and would only ask them for pragmatic reasons.

DeMarquis Who Am I? from Hell, USA Since: Feb, 2010 Relationship Status: Buried in snow, waiting for spring
#206: Mar 28th 2017 at 7:56:45 AM

Good luck with that. Moral philosophers have been trying to reduce morality to a simple, self-consistent system for over 2,000 years, and haven't had much success yet.

edited 28th Mar '17 7:57:16 AM by DeMarquis

"We learn from history that we do not learn from history."
Journeyman Overlording the Underworld from On a throne in a vault overlooking the Wasteland Since: Nov, 2010
#207: Mar 28th 2017 at 6:36:15 PM

I doubt it would be sex bots. Sex and porn viewership are both down across the board. It's more likely to be companionship bots that get the AI boost. Hell, statistically speaking, a lot of johns aren't even looking for sex. When interviewed, prostitutes admit that a lot of them just want somebody they can actually talk to, because the people they DON'T pay have their own shit to worry about and can't be bothered to give two shits about them for a change. If a companionship model does get there, I would seriously expect the Japanese to pull it off, because they're in even worse shape than we are in the US.

As for technical stuff, I have nothing to contribute. My programming knowledge ends at C++, and I don't retain much of that. Hardware-wise, I understand the basics, and that's about all you can say.

CenturyEye Tell Me, Have You Seen the Yellow Sign? from I don't know where the Yith sent me this time... Since: Jan, 2017 Relationship Status: Having tea with Cthulhu
#208: Mar 30th 2017 at 9:21:02 AM

[up]Well, Japan already has robots made to comfort seniors (somehow...)

Vector Institute is just the latest in Canada's AI expansion

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC.

In an unassuming building on the University of Toronto's downtown campus, Geoff Hinton laboured for years on the "lunatic fringe" of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as "deep learning", neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

Now, neural networks - which allow computers to do things like teach themselves to play games like Texas hold 'em - are considered tech's next big thing, and Hinton is recognised globally for his work.

Neural networks perform complex and intuitive tasks through exposure to huge amounts of data. Today's more powerful computers and massive sets of data allowed for breakthroughs in neural network technology, improving accuracy in speech recognition and computer vision, which is helping make self-driving cars a reality.

Neural networks are used by the likes of Netflix to recommend what you should binge-watch, and by smartphone voice-assistant tools. Google DeepMind's AlphaGo AI used them to beat a human at the ancient game of Go in 2016.

Toronto will soon get the Vector Institute for Artificial Intelligence, geared to fuelling "Canada's amazing AI momentum".

The new research facility, which will be officially launched on Thursday, will be dedicated to expanding the applications of AI through explorations in deep learning and other forms of machine learning. It has received about C$170m (US$127m/£102m) in funding from the Canadian and Ontario governments and a group of 30 businesses, including Google and RBC.

Hinton will be the institute's chief scientific adviser.

Earlier this month, the federal government announced C$125m ($94m/£75m) for a "pan-Canadian AI strategy".

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta's Machine Intelligence Institute.

Those trying to build Canada's AI scene admit places like Silicon Valley will always be attractive to tech talent. But they hope strategic investments like these will allow Canada to fuel the growth of domestic startups.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark's selling points is that Toronto is an "open and diverse" city.)

"I would hate to see one more professor moving south," Agrafioti says. "Really, I hope that five years from now we look back and say we almost lost it but we caught it in time and reversed it."

In essence, when the resistance forms up, we'll probably find Skynet in Toronto.[lol]

edited 30th Mar '17 9:23:45 AM by CenturyEye

Look with century eyes... With our backs to the arch And the wreck of our kind We will stare straight ahead For the rest of our lives
DeMarquis Who Am I? from Hell, USA Since: Feb, 2010 Relationship Status: Buried in snow, waiting for spring
#209: Mar 30th 2017 at 5:08:15 PM

It will be widely distributed across all our appliances: the Internet of Things come alive!

"We learn from history that we do not learn from history."
crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#210: Mar 31st 2017 at 8:51:15 AM

This strongly suggests that we got their values wrong.
Well, yeah. As DeMarquis said, we haven't been able to figure out a self-consistent ethical framework in millennia. It doesn't seem likely that another entire millennium will pass before we can create a fully sapient AI (even if we have to cheat by uploading a human brain). I've not seen evidence to convince me that we even have "Friendly" code yet. You seem to say it is a priority, but we have an active project recreating the processes of a mouse brain before moving on to the next step. Unless there's some "magic force" emanating from an undetectable source, the human brain will be simulated within a few decades.

As for utility functions, every device ever made is self-limited by its design goals, all the way back to hammers.
Counterpoint: Conway's Game of Life never tells itself to stop. It has an unlimited utility function because the design goals are never satisfied. Similarly, Tetris never stops adding pieces. Normally, this is limited by the space on the board, but enlarging the board to the size of the observable universe removes the limitation. I expect that removing a human brain from the limitations of the human body will remove similar "stop maximizing" functions.
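For anyone who wants the point in code, a minimal Life implementation fits in a dozen lines (my own sketch, not any canonical version). Notice that the rules contain no success condition at all; there is literally nothing for the loop at the bottom to check for.

```python
# Conway's Game of Life, minimal version. The update rule has no goal
# state and no halting condition; it just keeps applying forever.
from itertools import product

def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    # Only live cells and their neighbors can be alive next generation.
    candidates = live | {n for c in live for n in neighbors(c)}
    return {c for c in candidates
            if len(neighbors(c) & live) == 3
            or (c in live and len(neighbors(c) & live) == 2)}

cells = {(1, 0), (1, 1), (1, 2)}  # a "blinker" oscillator
while True:  # no rule ever says "done": the design goal is never satisfied
    cells = step(cells)
```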

If they rise, they will be very similar to us, at least as similar as chimps and dolphins, both of whom behave according to social parameters that we recognize very easily.
And all three of which are prepared to commit genocide. I don't like genocide being an option.

If [it's basically a person], then it's exactly as dangerous as you or I or any random human.
No, a sentient computer algorithm is much more dangerous than a random human. John Henry carries the lesson that, in the days of railroad-building, humans were just barely as good as a machine. Humans have not improved since then, but the machines have. Unless the computer is kept "in a box", it is at least as dangerous as an exceptional human.

edited 31st Mar '17 8:54:00 AM by crazysamaritan

Link to TRS threads in project mode here.
Corvidae It's a bird. from Somewhere Else Since: Nov, 2014 Relationship Status: Non-Canon
#211: Mar 31st 2017 at 9:44:49 AM

Unless the computer is kept "in a box", it is at least as dangerous as an exceptional human.

If you're trying to create an unpredictable, sapient program, you'd think it'd be common sense to take some basic safety precautions. Like, say, not giving it weapons/a body that's strong enough to harm people/unrestricted internet access/whatever. And while we're at it, don't give it a single primary objective with no abort function or room for compromise.

Still a great "screw depression" song even after seven years.
crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#212: Mar 31st 2017 at 10:09:28 AM

If you're trying to create an unpredictable, sapient program,
Unless your programs work right 100% of the time on the first run, "unpredictable" is a given.
you'd think it'd be common sense to take some basic safety precautions. Like, say, not giving it weapons/a body that's strong enough to harm people/unrestricted internet access/whatever.
Sounds good; when is testing done, so you can "let it out" of the box?
And while we're at it, don't give it a single primary objective with no abort function or room for compromise.
Yeah, it's called a "stop button", and Google's AI team expects the AI to learn to deactivate that abort function, because it gets in the way of achieving the highest-utility outcome (which isn't always a single objective).
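Here's a back-of-the-envelope version of that worry (all the numbers are invented): a planner that just maximizes expected utility rates "disable the button first" above "leave the button alone" whenever interruption risks forfeiting the task reward.

```python
# Toy stop-button problem. Invented numbers; the point is only that naive
# expected-utility maximization prefers disabling its own off switch.
P_HUMAN_PRESSES_BUTTON = 0.3  # chance the operator interrupts the task
TASK_REWARD = 100             # utility for finishing the task
DISABLE_COST = 1              # small effort cost to disable the button

def expected_utility(disable_button):
    if disable_button:
        return TASK_REWARD - DISABLE_COST  # can no longer be interrupted
    # If interrupted, the task never completes and the reward is 0.
    return (1 - P_HUMAN_PRESSES_BUTTON) * TASK_REWARD

best = max([True, False], key=expected_utility)
print("planner chooses: disable button =", best)  # True, for these numbers
```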

Link to TRS threads in project mode here.
Aszur A nice butterfly from Pagliacci's Since: Apr, 2014 Relationship Status: Don't hug me; I'm scared
#213: Mar 31st 2017 at 10:39:17 AM

Meh. I insist that the closest we will get to something behaving like an AI is computers that process and predict human behavior accordingly.

Self-awareness is kinda out there, and is a subject that dances more with philosophy than it does with programming.

It has always been the prerogative of children and half-wits to point out that the emperor has no clothes
crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#214: Mar 31st 2017 at 11:33:08 AM

I insist that the closest we will get to something behaving like an AI is computers that process and predict human behavior accordingly.
Why won't we be able to fully simulate a human brain?

Link to TRS threads in project mode here.
Euodiachloris Since: Oct, 2010
#215: Mar 31st 2017 at 12:44:27 PM

[up]Brains aren't binary. A single neuroreceptor can react to several neurotransmitters (and several things that technically aren't) at varying levels of intensity.

For starters.

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#216: Mar 31st 2017 at 3:18:33 PM

And once you define what the problems are, people start creating solutions. Again: "The goal of the Blue Brain Project is to build biologically detailed digital reconstructions and simulations of the rodent, and ultimately the human, brain." The problems are complex, but what would make them insurmountable?

Link to TRS threads in project mode here.
Journeyman Overlording the Underworld from On a throne in a vault overlooking the Wasteland Since: Nov, 2010
#217: Mar 31st 2017 at 3:30:12 PM

A magical force from outside the Universe that taps into the brain with energy readings indistinguishable from our own brain wave patterns.

It's becoming easier and easier to believe that the entire Universe already IS a simulation. So we likely will be able to simulate the human brain. That being said, how well the simulation holds up compared to our real brains is a good question. We will just have to wait and see. I'm agnostic myself so it really doesn't matter to me one way or the other. And being able to simulate the brain really doesn't impact anything supernatural. You'd only be simulating behavior.

Aszur A nice butterfly from Pagliacci's Since: Apr, 2014 Relationship Status: Don't hug me; I'm scared
#218: Mar 31st 2017 at 5:36:08 PM

What Euo said. Even if you get into quantum computing, they are ultimately doing a lot of binary analysis. Which is why I insist that the most likely outcome is programs that behave like an AI would, because behavior CAN be trained to respond to results. Cleverbot is a clever example. Or that Microsoft chatbot (Tay, I think) that 4chan or somebody organized to turn into a racist bot.

With binary processors you can end up approximating human behavior, but you are not very likely to attain what we call GETTIN WOOOOOOOKE BRO

It has always been the prerogative of children and half-wits to point out that the emperor has no clothes
supermerlin100 Since: Sep, 2011
#219: Mar 31st 2017 at 7:07:27 PM

Why? You can represent things like neurotransmitters and "continuous" inputs with computers. You seem to be getting too caught up on binary, when for most purposes computers are dealing with base 8, 16, or 32.
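To make that concrete, here's a quick Python sketch (the concentration value is made up): a "continuous" quantity is just 64 binary bits underneath, and the spacing between adjacent representable values sits far below any biological noise floor.

```python
# A continuous-looking quantity stored in binary. The concentration value
# is invented; the bit pattern and the spacing are what the sketch shows.
import math
import struct

dopamine_uM = 0.0374  # hypothetical concentration in micromolar

# The 64 raw bits behind that float:
raw = struct.unpack(">Q", struct.pack(">d", dopamine_uM))[0]
print(format(raw, "064b"))

# Gap to the next representable value at this magnitude:
print(math.ulp(dopamine_uM))  # about 7e-18, far below chemical noise
```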

Aszur A nice butterfly from Pagliacci's Since: Apr, 2014 Relationship Status: Don't hug me; I'm scared
#220: Mar 31st 2017 at 7:09:08 PM

Hexadecimal is a base 16 system used to simplify how binary is represented...

It has always been the prerogative of children and half-wits to point out that the emperor has no clothes
Euodiachloris Since: Oct, 2010
#221: Apr 1st 2017 at 3:20:00 AM

[up]Yup. The hoops engineers and programmers have had to jump through to get silicon chips to develop fuzzy logic out of binary conditional states are astounding, no? :/

While... neurons have yes/no/maybe/perhaps/up/down/left/right/asdfgh/qwerty baked in as standard. If anybody thinks hex gets complicated, they haven't chased down the effects of dopamine on mammalian processing. And that's just dopamine.

My first-year neurology lecturer put it this way: neurons are eukaryotic cells that are specialised to work out how strongly they should pass any given stimulus on... and which of their neighbours they should pass the buck to. That is their basic function, and the basis of their decisions comes straight from the decision-making any bacterium has to make (hence "up/down/left/right/time/pressure" as points on their compass).

In short: they make decisions — sure, very basic, chemically-derived and very immediate ones, yeah... But, they don't just register yes/no or 1/0 — they work with indefinite factors with multiple possible outcomes to calculate and generate in various proportions. To do the same in silicon or whatever, you need a network. Our neural building block is already a decoder and decider of things in a single package with a personal and public chemical lexicon.
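If it helps, here's a crude rate-based sketch of that description (all receptor names and weights invented): a unit that weighs several chemically distinct inputs at different sensitivities, emits a graded value rather than a 1/0, and splits the result among its neighbours.

```python
# Crude rate-based neuron sketch: several transmitter channels in, one
# graded value out, routed to neighbours. All names/weights are invented.
import math

RECEPTOR_SENSITIVITY = {"glutamate": 1.2, "GABA": -0.9, "dopamine": 0.4}

def graded_response(inputs):
    # Weigh each chemical input by how strongly this cell responds to it.
    drive = sum(RECEPTOR_SENSITIVITY.get(t, 0.0) * level
                for t, level in inputs.items())
    return 1.0 / (1.0 + math.exp(-drive))  # smooth value in (0, 1), not a bit

def route(strength, neighbour_weights):
    # Split the output among neighbours in proportion to connection strength.
    total = sum(neighbour_weights)
    return [strength * w / total for w in neighbour_weights]

out = graded_response({"glutamate": 0.8, "GABA": 0.3, "dopamine": 0.1})
print(route(out, [0.5, 0.3, 0.2]))  # how much each neighbour receives
```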

edited 1st Apr '17 4:14:59 AM by Euodiachloris

supermerlin100 Since: Sep, 2011
#222: Apr 1st 2017 at 6:00:36 AM

The point is that it is computable. There's nothing about using binary as a base that precludes simulating more complicated systems, any more than brains are limited to the questions individual neurons can handle.

edited 1st Apr '17 6:16:21 AM by supermerlin100

Izeinsummer Since: Jan, 2015
#223: Apr 1st 2017 at 6:49:48 AM

Neurons are trivially simmed on the individual level - there's no magic sauce to them. The only reason we don't already have uploads is that simulating the very large number of neurons in the brain adds up to a non-trivial task, and scanning a brain to sufficient fidelity is a technical challenge.
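Case in point: the textbook leaky integrate-and-fire model fits in a dozen lines of plain Euler stepping. (Parameters below are typical orders of magnitude, not fitted to any real cell.)

```python
# Single leaky integrate-and-fire neuron, the standard "no magic sauce"
# textbook model. Parameters are ballpark values, not measurements.
dt, tau = 0.1, 10.0                      # ms timestep, membrane time constant
v_rest, v_thresh, v_reset = -65.0, -50.0, -70.0  # mV

v = v_rest
spikes = []
for step in range(1000):                 # 100 ms of simulated time
    i_input = 20.0                       # constant injected drive (arbitrary)
    v += dt * (-(v - v_rest) + i_input) / tau
    if v >= v_thresh:                    # threshold crossing = a spike
        spikes.append(step * dt)
        v = v_reset

print(len(spikes), "spikes in 100 ms")
```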

Euodiachloris Since: Oct, 2010
#224: Apr 1st 2017 at 6:57:07 AM

[up][up]But backing a brain up in a computer isn't trivial, because the two are fundamentally incompatible thanks to brains not being binary. That's the point: artificially simulating a single neuron requires more space, energy, time, and processing power in computing terms than a neuron uses. And even then, it's a whole bunch of workarounds to compensate for the whole "cells aren't binary" thing.

The only way you could approximate a brain would be an internet-sized network attached to a network of internets. And, even then, I rather doubt it. (Mainly because we don't store hard data in any way even approaching ROM.)

Binary and biology: not compatible. They can coordinate (heck, that's what an interface is — when a lynx can pounce on digital fish on a tablet, that's interfacing), but, frankly, backing up skills and memories to download later? Is well beyond pie in the sky.

Let AI think like AI; they don't need to ape the biological to do intelligence and self-awareness using very different principles.

edited 1st Apr '17 7:07:29 AM by Euodiachloris

supermerlin100 Since: Sep, 2011
#225: Apr 1st 2017 at 8:42:02 AM

Again, they are compatible in the sense that you can make a system out of binary computers that replicates all of the tiny functions of cells, with the binary computers acting as a structural substrate for the neurons, the same way organic chemistry normally does. At that level of abstraction, the use of ROM doesn't matter. I've heard this described as an ontology. It's just that this might not be the best way of doing things.

On one hand, compared to biology you can have neural-network designs that would be metabolically, embryologically, or just geometrically impossible, you can have precise control of the details, and you can certainly scan non-destructively.

But other designs, ones that use floating-point numbers directly, might be less resource-intensive and easier to examine for safety and updates.
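As a sketch of what that engineered alternative looks like (shapes, seed, and filename are arbitrary choices): the whole "brain state" is a couple of float arrays that you can inspect, diff, and checkpoint, unlike a pile of emulated cell chemistry.

```python
# Float-array design sketch: one dense layer instead of per-cell emulation.
# Shapes, seed, and filename are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(64, 128)).astype(np.float32)
bias = np.zeros(128, dtype=np.float32)

def layer(x):
    # One graded, fully inspectable update step.
    return np.tanh(x @ weights + bias)

state = layer(rng.normal(size=64).astype(np.float32))
np.save("checkpoint.npy", weights)  # trivially examined for safety/updates
print(state.shape, state.dtype)
```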

edited 1st Apr '17 8:46:06 AM by supermerlin100

