Yeah, about that: it's already here.
On that note, AI is being pushed forward by video games as much as anything. Just think of the difference between what AI could do in Galaga compared to what it does in the latest Halo game.
EDIT: Now that I think of it, the link may not be quite site-safe, but it's easy to google, especially with the quote.
edited 26th Mar '17 4:47:37 AM by CenturyEye
Look with century eyes... With our backs to the arch And the wreck of our kind We will stare straight ahead For the rest of our lives

I think you're greatly underestimating the problem. Every proposed solution is something we don't know how to do.
@crazysamaritan: "That's rather the point; whatever AGI we design will have a set of goals that it wants to complete, whether those goals are "navigate roads" or "answer everyone's questions". Unintended effects of those goals are what I'm concerned about. "
Ah, my bad. I thought someone was worried about the "Kill All Humans" robot apocalypse scenario. An actively hostile AI is a remote possibility. An accidentally fatal AI program, in the "turn everyone into paperclips" sense, is still semi-realistic.
As for utility functions, every device ever made is self-limited by its design goals, all the way back to hammers. AI isn't going to be an exception, unless it's so human-like that it's basically a person, in which case you can treat it like a person. Look, one of two scenarios will hold: either the thing is so stupid it has to be told everything it does, in which case it's all on the programmer, or it can be programmed with abstract concepts like "be nice" or "be safe". There is no in-between where it understands abstract categories of actions like "defend itself" or "prevent all violence" but doesn't understand the concept of exceptions, or of compromising between conflicting goal states. It's either following an explicit programming command or it isn't. If it is, then you have to tell it what to do or it just sits there; it's never going to rise up and kill you. If it isn't, then it's exactly as dangerous as you or I or any random human. Remember, we obey our unquestionable superordinate goals too, which were developed via an evolutionary algorithm (much like some of the most advanced programs available now).
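As an aside, the "evolutionary algorithm" mentioned above is easy to illustrate. This is a toy sketch, not anything from the thread: the function name `evolve` and all the parameter values are invented for the example. It breeds bitstrings toward a trivial goal using the same select-and-mutate loop, minus the biology, that the post is alluding to.

```python
import random

def evolve(goal_len=20, pop_size=30, generations=100, mutation_rate=0.05):
    """Toy evolutionary algorithm: evolve bitstrings toward all-ones.

    Fitness is simply the number of 1s; selection keeps the fitter half
    unchanged, and offspring are mutated copies of the survivors.
    """
    pop = [[random.randint(0, 1) for _ in range(goal_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)        # fitness = number of 1s
        survivors = pop[: pop_size // 2]
        children = [[bit ^ (random.random() < mutation_rate) for bit in parent]
                    for parent in survivors]   # mutated copies
        pop = survivors + children
    return max(pop, key=sum)

best = evolve()
print(sum(best))   # close to goal_len after enough generations
```

No individual rule in the loop mentions "make a string of all ones"; the goal emerges from selection pressure, which is the sense in which evolved goals aren't explicitly programmed.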
If they rise, they will be very similar to us, at least as similar as chimps and dolphins, both of whom behave according to social parameters that we recognize very easily.
As for the role of the entertainment industry, it would be immensely ironic if sex-bots, the vast majority of which are designed to be female, were the first things to develop artificial sapience. Would men be better off, or worse?
"We learn from history that we do not learn from history."It's not a matter of ai (plural) being smart enough to get the basic idea. There have been plenty of ideologies among humans that are mutually evil, despite including all of those basic concepts.
Morality is far more specific than social game theory. We want a design principle such that, if the ancient Greeks had somehow created the intelligence explosion, it would eventually have turned against slavery, and wouldn't conclude that fascism is the answer.
And yes, I'm assuming that there is a way to be wrong about what is right. But in the abstract this is just a bunch of really complicated questions. A paperclip maximizer doesn't disagree; it's just not moved by those questions, and would only ask them for pragmatic reasons.
Good luck with that. Moral philosophers have been trying to reduce morality to a simple self-consistent system for over 2,000 years, and haven't had much success yet.
edited 28th Mar '17 7:57:16 AM by DeMarquis
"We learn from history that we do not learn from history."I doubt it would be sex bots. Sex and porn viewership are both down across the board. It's more likely to be companionship bots that get the AI boost. Hell, statistically speaking, a lot of Johns aren't looking for sex. When interviewed, prostitutes admit a lot of them just want somebody they can actually talk to because the people they DON'T pay have their own shit to worry about and can't be bothered to give two shits about them for a change. I would seriously expect it to be the Japanese who pull it off if it is a companionship model that goes there, because they're in even worse shape than us US people.
As for technical stuff, I have nothing to contribute. My programming knowledge ends at C++ and I don't retain much of that. Hardware-wise, I understand the basics, and that's about all you can say.
Well, Japan already has robots made to comfort seniors (somehow...)
Vector Institute is just the latest in Canada's AI expansion
In an unassuming building on the University of Toronto's downtown campus, Geoff Hinton laboured for years on the "lunatic fringe" of academia and artificial intelligence, pursuing research in an area of AI called neural networks.
Also known as "deep learning", neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.
Now, neural networks - which allow computers to do things like teach themselves to play games like Texas hold 'em - are considered tech's next big thing, and Hinton is recognised globally for his work.
Neural networks perform complex and intuitive tasks through exposure to huge amounts of data. Today's more powerful computers and massive sets of data allowed for breakthroughs in neural network technology, improving accuracy in speech recognition and computer vision, which is helping make self-driving cars a reality.
Neural networks are used by the likes of Netflix to recommend what you should binge watch, and by smartphones with voice assistance tools. Google DeepMind's AlphaGo used them to beat a human at the ancient game of Go in 2016.
Toronto will soon get the Vector Institute for Artificial Intelligence, geared to fuelling "Canada's amazing AI momentum".
The new research facility, which will be officially launched on Thursday, will be dedicated to expanding the applications of AI through explorations in deep learning and other forms of machine learning. It has received about C$170m (US$127m/£102m) in funding from the Canadian and Ontario governments and a group of 30 businesses, including Google and RBC.
Hinton will be the institute's chief scientific adviser.
Earlier this month, the federal government announced C$125m ($94m/£75m) for a "pan-Canadian AI strategy".
RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta's Machine Intelligence Institute.
Those trying to build Canada's AI scene admit places like Silicon Valley will always be attractive to tech talent. But they hope strategic investments like these will allow Canada to fuel the growth of domestic startups.
Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark's selling points is that Toronto is an "open and diverse" city.)
"I would hate to see one more professor moving south," Agrafioti says. "Really, I hope that five years from now we look back and say we almost lost it but we caught it in time and reversed it."
edited 30th Mar '17 9:23:45 AM by CenturyEye
It will be widely distributed across all our appliances: the internet of things comes alive!
"We learn from history that we do not learn from history."edited 31st Mar '17 8:54:00 AM by crazysamaritan
Link to TRS threads in project mode here.

If you're trying to create an unpredictable, sapient program, you'd think it'd be common sense to take some basic safety precautions. Like, say, not giving it weapons/a body that's strong enough to harm people/unrestricted internet access/whatever. And while we're at it, don't give it a single primary objective with no abort function or room for compromise.
Still a great "screw depression" song even after seven years.

Meh, I insist that the closest we will get to something behaving like an AI is computers that process and predict human behavior accordingly.
Self-awareness is kinda out there, and is a subject that dances more with philosophy than it does with programming.
It has always been the prerogative of children and half-wits to point out that the emperor has no clothes

Brains aren't binary. A single neuroreceptor can react to several neurotransmitters (and several things that technically aren't) in varying levels of intensity.
For starters.
And once you define what the problems are, people start creating solutions. Again: the goal of the Blue Brain Project is to build biologically detailed digital reconstructions and simulations of the rodent, and ultimately the human, brain. The problems are complex, but what would make them insurmountable?
A magical force from outside the Universe that taps into the brain with energy readings indistinguishable from our own brain wave patterns.
It's becoming easier and easier to believe that the entire Universe already IS a simulation. So we likely will be able to simulate the human brain. That being said, how well the simulation holds up compared to our real brains is a good question. We will just have to wait and see. I'm agnostic myself so it really doesn't matter to me one way or the other. And being able to simulate the brain really doesn't impact anything supernatural. You'd only be simulating behavior.
What Euo said. Even if you get into quantum computing, they are ultimately doing a lot of binary analysis. Which is why I insist that what is most likely is that we will be able to make programs that behave like an AI would behave, because behavior CAN be taught to respond to results. Cleverbot is a clever example. Or Tay, that Microsoft program that 4chan or something organized to turn into a racist bot.
With binary processors you can end up narrowing down human behavior, but you are not very likely to attain what we call GETTIN WOOOOOOOKE BRO
Why? You can represent things like neurotransmitters and 'continuous' inputs with computers. You seem to be getting too caught up with binary, when for most purposes computers are dealing with base 8, 16, or 32.
Hexadecimal is a base 16 system used to simplify how binary is represented...
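To make that concrete: each hex digit is just shorthand for exactly four bits, so the value underneath never stops being binary. A quick Python illustration:

```python
n = 0b1101_0111          # a binary literal
print(hex(n))            # prints "0xd7": each hex digit packs four bits
print(bin(0xD7))         # prints "0b11010111": same value, same bits
assert 0b11010111 == 0xD7 == 215   # three notations, one number
```

Octal (base 8) works the same way with three bits per digit; these are notations for humans, not different kinds of hardware.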
Yup. The hoops engineers and programmers have had to jump through to get silicon chips to develop fuzzy logic out of binary conditional states are astounding, no? :/
While... neurons have yes/no/maybe/perhaps/up/down/left/right/asdfgh/qwerty baked in as standard. If anybody thinks Hex gets complicated, they haven't chased down the effects of dopamine on mammalian processing. And, just dopamine.
My first year neurology lecturer put it this way: neurons are eukaryotic cells that are specialised to work out how strongly they should pass any given stimulus on... And, which of their neighbours they should pass the buck to. That is their basic function and the basis of their decisions comes straight from the decision-making any bacteria has to make (hence "up/down/left/right/time/pressure" as points on their compass).
In short: they make decisions — sure, very basic, chemically-derived and very immediate ones, yeah... But, they don't just register yes/no or 1/0 — they work with indefinite factors with multiple possible outcomes to calculate and generate in various proportions. To do the same in silicon or whatever, you need a network. Our neural building block is already a decoder and decider of things in a single package with a personal and public chemical lexicon.
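That "weigh the stimulus, emit a graded response, pick which neighbours get it" picture can be caricatured in a few lines. This is purely illustrative: the function name and the numbers are made up, and a logistic curve stands in for the messy chemistry, but it shows a unit whose output is graded rather than a bare 1/0.

```python
import math

def graded_neuron(inputs, weights, bias=0.0):
    """Caricature of a neuron: weigh the incoming signals, then emit a
    graded response between 0 and 1 rather than a bare yes/no."""
    drive = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-drive))   # logistic squashing

# Three "transmitter" inputs: one excitatory, one strongly inhibitory,
# one weakly excitatory (all values invented for the example).
out = graded_neuron([0.9, 0.2, 0.4], [1.5, -2.0, 0.5])
print(out)   # strictly between 0 and 1, not a binary flag
```

The point of the caricature is the continuum: nudge any input and the output shifts smoothly, which is the behavior the binary-vs-brain argument above is circling around.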
edited 1st Apr '17 4:14:59 AM by Euodiachloris
The point is that it is computable. There's nothing about using binary as a base that precludes handling more complicated questions via simulation, any more than brains are limited to the questions individual neurons can handle.
edited 1st Apr '17 6:16:21 AM by supermerlin100
Neurons are trivially simmed at the individual level - there's no magic sauce to them. The only reason we don't already have uploads is that simulating the very large number of neurons in the brain adds up to a non-trivial task, and scanning the brain to sufficient fidelity is a technical challenge.
But backing a brain up in a computer isn't trivial, because they're fundamentally incompatible thanks to brains not being binary. That's the point: to artificially simulate a single neuron requires more space, energy consumption, time and processing power in computing terms than a neuron uses. And even then, it's a whole bunch of work-arounds to compensate for the whole "cells aren't binary" thing.
The only way you could approximate a brain would be an internet-sized network attached to a network of internets. And, even then, I rather doubt it. (Mainly because we don't store hard data in any way even approaching ROM.)
Binary and biology: not compatible. They can coordinate (heck, that's what an interface is — when a lynx can pounce on digital fish on a tablet, that's interfacing), but, frankly, backing up skills and memories to download later? Is well beyond pie in the sky.
Let AI think like AI — they don't need to ape the biological to do intelligence and self awareness using very different principles.
edited 1st Apr '17 7:07:29 AM by Euodiachloris
Again, they are compatible in the sense that you can make a system out of binary computers that replicates all of the tiny functions of cells, with the binary computers acting as a structural substrate for the neurons, the same way organic chemistry normally does. At that level of abstraction the use of ROM doesn't matter. I've heard this described as an ontology. It's just that this might not be the best way of doing things.
On one hand, compared to biology you can have neural-network designs that would be metabolically, embryologically or just geometrically impossible, you can have precise control of the details, and you can certainly scan non-destructively.
On the other hand, designs that use floating-point numbers might be less resource-intensive, and easier to examine for safety and updates.
edited 1st Apr '17 8:46:06 AM by supermerlin100
Probably in the process of trying to program a robot to say things like "don't worry it happens to lots of people" among other things.
edited 26th Mar '17 2:26:08 AM by M84
Disgusted, but not surprised