

Artificial Intelligence


Euodiachloris Since: Oct, 2010
#226: Apr 1st 2017 at 9:23:19 AM

[up]Wrong. It's canal systems and telegraph networks all over again, I swear. -_-'

Modelling the behaviour of a cell in a program comes nowhere near replicating it in hardware. Which is what you'd need to do to get anywhere near as efficient as a cell when it comes to processing.

Chips work differently; let them get more efficient using the rules that apply to them before you ask them to store an effing model of a brain. Because how we process info and how they do varies markedly on the microscopic physical and chemical levels.

Processing models: beware taking examples from machines to describe how we think, and vice versa (the aforementioned canals and telegraph systems were, at one point, used as early cognitive models to describe thought: much error ensued).

edited 1st Apr '17 9:44:15 AM by Euodiachloris

TheHandle United Earth from Stockholm Since: Jan, 2012 Relationship Status: YOU'RE TEARING ME APART LISA
#227: Apr 1st 2017 at 9:28:58 AM

So, are we paperclips yet? Or just unemployed?

Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.
Izeinsummer Since: Jun, 2013
#228: Apr 1st 2017 at 11:09:06 AM

Oh the unemployment thing doesn't need any sort of breakthroughs. Most jobs are really very straightforward to automate given decent machine vision. Which we have already. It's just a question of a gazillion hours of coder time now to bring about the automated economy.

TheHandle United Earth from Stockholm Since: Jan, 2012 Relationship Status: YOU'RE TEARING ME APART LISA
#229: Apr 1st 2017 at 11:59:47 AM

What then?!

CenturyEye Tell Me, Have You Seen the Yellow Sign? from I don't know where the Yith sent me this time... Since: Jan, 2017 Relationship Status: Having tea with Cthulhu
#230: Apr 1st 2017 at 1:17:17 PM

We gather together in troupes and juggle beach balls on our noses for the rich. The best acts get the food. Or, in Germany & Canada, you probably live in proto-The Culture.

Serious Mode: The Atlantic tries to explore a world without work—with one outcome resembling the household economies common during the transition from the agrarian to the industrial era. But what happens with automation is more an issue of politics than tech.

edited 1st Apr '17 1:20:19 PM by CenturyEye

Look with century eyes... With our backs to the arch And the wreck of our kind We will stare straight ahead For the rest of our lives
Journeyman Overlording the Underworld from On a throne in a vault overlooking the Wasteland Since: Nov, 2010
#231: Apr 1st 2017 at 7:40:22 PM

You can't fully automate most thinking jobs yet. You can offload the hard calculation parts to computers, but you still need humans to tell them where to go. The big thing with automation is the manual labor type of jobs. It really is politics. IF we get a government that focuses on retraining people, and getting our roads, electrical, and internet systems up to snuff to serve EVERYBODY, we can get the majority of millennials ready for that kind of economy. Hell, a lot of Baby Boomers could be retrained for it too, there's just not as much call for it since they're phasing out within the next ten to fifteen years and going into retirement.

At that point we go for Automation-enabled Socialism where the survival stuff is provided for us by machines so we can focus on the higher level stuff like design and future-resistance engineering. Space travel, that kind of thing. Money will still be a thing in some form, but it'll mostly be for the excess non-survival stuff. Better tasting food for those who care. Hobby stuff for those who want to engage in it. The only stuff provided outright should be enough food, shelter, and medicine to keep you alive, and job training/computer access to find work. Maybe some low level entertainment stuff like karaoke bars for spending time with friends. Otherwise, you work for the rest.

It won't be possible for a long time, 'til we cut out the excess pollution and let nature find her new equilibrium. At that point, we need to find some nonviolent way to pull the reins out of corporate hands and hold onto them ourselves until the corporate overlords stop charging us for survival stuff. As the Declaration of Independence says, we have the UNALIENABLE RIGHT to Life, Liberty, and the PURSUIT of Happiness. If you need to pay for what keeps you alive, that's a direct violation of your RIGHT to life. As for happiness . . . if you can be happy with just enough to survive and no frills to life, go for it. Otherwise, go get a bloody computer job and earn it.

supermerlin100 Since: Sep, 2011
#232: Apr 1st 2017 at 7:40:30 PM

@Euodiachloris: you're not reading what I wrote. You're talking about effectiveness; I am talking about possibility. This line of discussion got started by someone claiming it was impossible, and furthermore that true computer intelligence was impossible as a result.

Euodiachloris Since: Oct, 2010
#233: Apr 2nd 2017 at 2:03:08 AM

[up]And, I'm saying that because the fundamentals are very dissimilar, getting a computer to model a whole brain accurately is impossible. It's a lovely plot device in sci-fi, but that's all it is.

However, AI getting true intelligence? Feasible. If we don't keep hammering computers to copy our processing.

Think of them as another order of animal (or well beyond: homebrew aliens). Like, say, squid. We don't ask squid to use their frontal lobes like we do, do we?

edited 2nd Apr '17 2:06:21 AM by Euodiachloris

supermerlin100 Since: Sep, 2011
#234: Apr 2nd 2017 at 7:14:49 AM

Not impossible, just impractical.

M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#235: Apr 2nd 2017 at 7:17:19 AM

I can't help but think that accurately modeling an AI after a human brain would require doing things that I'm pretty sure are crimes against humanity.

edited 2nd Apr '17 7:17:33 AM by M84

Disgusted, but not surprised
crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#237: Apr 2nd 2017 at 12:26:29 PM

"A magical force from outside the Universe that taps into the brain with energy readings indistinguishable from our own brain wave patterns."

"It's becoming easier and easier to believe that the entire Universe already IS a simulation. So we likely will be able to simulate the human brain. That being said, how well the simulation holds up compared to our real brains is a good question. We will just have to wait and see. I'm agnostic myself so it really doesn't matter to me one way or the other. And being able to simulate the brain really doesn't impact anything supernatural. You'd only be simulating behavior."

It looks like you're referencing the Philosopher's Zombie thought experiment, but it isn't clear if you're saying that we can't create a zombie because the "soul" is a required component of brain activity, or if you're saying that a zombie doesn't count as a human brain.
"Modelling the behaviour of a cell in a program comes nowhere near replicating it in hardware."
So that's not the goal. Really, the goal is to make AGI based on ever-improving computing technology. My stance is that it is likely to appear during our lifetimes, and the absolute worst way to build AGI is a fully functioning simulation of the human brain. If the brain is purely materialistic, then the barriers to do so are 1) mapping the interactions of the brain cells, which requires 2) measurements of the physical activity within the brain, and 3) the size of the simulation. Each problem is something that can be defined with today's scientific understanding, and therefore solved, even if we cannot solve the Chinese Room question of "what makes a thing self-aware?"
"getting a computer to model a whole brain accurately is impossible."
Accurate whole brain simulations are already complete and available for you to download to your PC. It's just a worm, but it simulates every nerve in the creature, even the ones outside of the area we call the brain. The only remaining work on that brain simulation is making the code/hardware more efficient.
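To make "simulates every nerve" concrete, here is a toy leaky integrate-and-fire network, the kind of abstraction such whole-organism models build on. Everything below (the constants, the three-neuron chain) is invented for illustration and is enormously cruder than any real worm simulation.

```python
# Toy leaky integrate-and-fire network: a drastically simplified sketch of
# what "simulating every nerve" can mean. All constants and the 3-neuron
# chain below are invented for illustration.

DT = 1.0        # timestep (ms)
TAU = 10.0      # membrane time constant (ms)
V_REST = 0.0    # resting potential
V_THRESH = 1.0  # spike threshold
V_RESET = 0.0   # post-spike reset potential

def step(voltages, weights, external):
    """Advance every neuron one timestep; return (new voltages, spiking indices)."""
    spikes = [v >= V_THRESH for v in voltages]
    new_v = []
    for i, v in enumerate(voltages):
        if spikes[i]:
            v = V_RESET  # this neuron fired: reset its membrane potential
        # synaptic input: weighted sum over presynaptic neurons that just fired
        syn = sum(weights[j][i] for j in range(len(voltages)) if spikes[j])
        # leaky integration toward rest, plus synaptic and external drive
        v += DT / TAU * (V_REST - v) + syn + external[i]
        new_v.append(v)
    return new_v, [i for i, s in enumerate(spikes) if s]

# A three-neuron chain: 0 excites 1, 1 excites 2; only neuron 0 gets input.
weights = [[0.0, 1.1, 0.0],
           [0.0, 0.0, 1.1],
           [0.0, 0.0, 0.0]]
v = [0.0, 0.0, 0.0]
fired = []  # (timestep, neuron) pairs
for t in range(100):
    v, spiked = step(v, weights, [0.15, 0.0, 0.0])
    fired.extend((t, i) for i in spiked)
# activity propagates down the chain: all three neurons end up firing
```

The point of the sketch is the scaling argument: each neuron is a few arithmetic operations per timestep, so the cost grows with the number of neurons and synapses, which is why a few hundred nerves fit on a PC while a human brain does not.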
"I can't help but think that accurately modeling an AI after a human brain would require doing things that I'm pretty sure are crimes against humanity."
Copying a specific human brain with destructive techniques would probably not pass an ethics board, but that's why we start on the non-human brains. If we can refine the MRI scanner to grant enough details to accurately measure all physical activity within the brain (without making it more dangerous), what would be the crime?

edited 2nd Apr '17 12:27:46 PM by crazysamaritan

Link to TRS threads in project mode here.
CenturyEye Tell Me, Have You Seen the Yellow Sign? from I don't know where the Yith sent me this time... Since: Jan, 2017 Relationship Status: Having tea with Cthulhu
#238: Apr 2nd 2017 at 12:48:30 PM

Thoughts on the brain in a line: "Here is this mass of jelly that you can hold in the palm of your hand. And it can contemplate the vastness of interstellar space."
There's that, plus the author of A History of Knowledge decided that we can detect an AI (I assume he meant strong AI) when the thing tells a joke and asks if it is funny. (This was from the nineties).

Nowadays, I suppose we'll identify an AI by "sapience," but we may end up moving the goalposts so often that we don't notice one until we trip over it, because sapience has a definition about as useful as saying that space is big.
I wouldn't expect an AI to be anything like us. Unless we go into matrioshka brain territory, I'd expect them to act more like trained hamsters that happen to build things/drive cars/etc. There's really no reason to build a human-like AI when there's a) the old-fashioned way to get more people, and b) the question of what purpose it would serve. Once said mad scientist has her ego satisfied, the ethics board, media, and all her neighbors come to her door asking unpleasant questions, like: how is she going to ensure the quality of life for the sapient AI, and would having it follow her orders violate anti-slavery laws, etc.

edited 2nd Apr '17 12:49:57 PM by CenturyEye

Corvidae It's a bird. from Somewhere Else Since: Nov, 2014 Relationship Status: Non-Canon
#239: Apr 2nd 2017 at 1:18:16 PM

"Copying a specific human brain with destructive techniques would probably not pass an ethics board, but that's why we start on the non-human brains. If we can refine the MRI scanner to grant enough details to accurately measure all physical activity within the brain (without making it more dangerous), what would be the crime?"

The original brain will be fine, but the copy might not necessarily enjoy being an AI, even if they remember volunteering. (Assuming that you copy the full contents including memories etc. Even a "blank slate" would bring up some ethical concerns for many people though.)

Yes yes, this is all pretty farfetched and so on, but it's interesting anyway, imo.

Still a great "screw depression" song even after seven years.
Journeyman Overlording the Underworld from On a throne in a vault overlooking the Wasteland Since: Nov, 2010
#240: Apr 2nd 2017 at 3:41:27 PM

[up][up][up]I'm saying ultimately we won't know whether we can do it or not until we actually do it. It's entirely possible we'll map everything accurately, flip the switch, and get either a vegetable or something that acts inhuman. It's also possible we'll do it right and get what a perfectly normal human being would be if they were jammed into a computer. And even that doesn't preclude the possibility of God, an Afterlife, or any of the rest. It just proves we can simulate a human being. Nothing more or less.

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#241: Apr 2nd 2017 at 4:51:17 PM

"I'm saying ultimately we won't know whether we can do it or not until we actually do it."
That's not really true. We have already collected evidence that we can do it. This is a problem where we have figured out what questions we need to ask, and we are finding answers to those questions. We don't have every answer yet (we've also never landed a probe on 162173 Ryugu before, but the plan is still to bring soil samples back to Japan). It would take extraordinary circumstances to prevent us from accomplishing the mission at this point. One such extraordinary circumstance would be proof that human-level consciousness requires influence from supernatural forces.
"It just proves we can simulate a human being. Nothing more or less."
It also proves that we can create a human-level intelligence in a computer, which ~Aszur said was impossible.

Robrecht Your friendly neighbourhood Regent from The Netherlands Since: Jan, 2001 Relationship Status: They can't hide forever. We've got satellites.
#242: Apr 2nd 2017 at 5:10:17 PM

I always love it (read: get annoyed) when people think that mapping the neural pathways in an animal brain (be it human or some other animal) is all that useful for building a functional AI (of any level of sapience or cognition). 'Cause, you know, it's not. At all. Sure, a brain is a network of nodes along which electrical impulses travel, and that's superficially similar to a computer (chip). And an MRI scan can give a 100% accurate picture of the physical structure of the neurons along which electrical impulses travel.

But the chemistry of a biological brain is just as important in routing and modulating those impulses as the way those pathways are connected. And the truth is that we barely know anything about the full range of effects that the many chemicals (not just neurotransmitters) in a biological brain have on the cognitive process.

Angry gets shit done.
DeMarquis (4 Score & 7 Years Ago)
#243: Apr 2nd 2017 at 5:12:45 PM

It doesn't prove that; it provides a theoretical basis for it. But like Euo said, such a device would be orders of magnitude less efficient than the brain it was simulating. There are also significant bandwidth problems. So: theoretically possible, but impractical. Just aiming for a new, unique sapient AI is still on the table, however.

I'm done trying to sound smart. "Clear" is the new smart.
M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#244: Apr 2nd 2017 at 9:15:07 PM

While this is in the realm of fiction, I find the way AI is done with the character of Bladewolf in Metal Gear Rising: Revengeance interesting. His brain is modeled after a human brain so well that it faces the same limitations an actual person has. He has to learn like a human being and lacks perfect memory. He also has even worse facial recognition than most people.

edited 2nd Apr '17 9:17:41 PM by M84

DeMarquis (4 Score & 7 Years Ago)
#245: Apr 3rd 2017 at 8:39:04 AM

Seems plausible to me. Every theoretical framework for advanced AI that I have seen presumes that it is built on some form of machine learning. The main obstacle right now isn't the learning function itself, but the application of a learned solution outside of a narrowly restricted problem domain. This is the same problem we have with human children, where it's called "generalization" - in other words, can little Jane take the skills she learned in math class and apply them to shopping? Can our expert chess-playing algorithm take those skills and apply them to politics? No, and it's a much trickier problem than anyone suspected.
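The narrow-domain problem above can be shown with a deliberately silly sketch: a model that fits its training data almost perfectly has still learned nothing that transfers outside its training range. All the numbers, and the choice of a linear model, are invented for illustration.

```python
# A deliberately silly illustration of the generalization problem: a model
# that looks nearly perfect on its training range has learned nothing that
# transfers outside it. All numbers here are invented.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# "Training set": y = x^2 sampled only on the narrow interval [10, 11],
# where a straight line happens to approximate it very well.
train_x = [10 + i / 10 for i in range(11)]
train_y = [x * x for x in train_x]
a, b = fit_line(train_x, train_y)

# Inside the training range the fit is almost perfect...
in_domain_err = max(abs((a * x + b) - x * x) for x in train_x)
# ...but ask the "expert" about x = 0 and it is off by over a hundred.
out_of_domain_err = abs((a * 0 + b) - 0 * 0)
```

The chess-to-politics jump is the same failure mode at a grander scale: the learned solution encodes regularities of the training domain, not the underlying function.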

Journeyman Overlording the Underworld from On a throne in a vault overlooking the Wasteland Since: Nov, 2010
#246: Apr 3rd 2017 at 1:30:52 PM

" It would take extraordinary circumstances to prevent us from accomplishing the mission at this point. One such extraordinary circumstance would be proof that human-level consciousness requires influence from supernatural forces."

That was my entire argument . . . There's increasing evidence that this world doesn't actually exist at all. It's a simulation. So depending on how that simulation is run, our consciousness might not actually exist within this world at all. It's most likely a self-contained sim where we're part of the fake, nonliving world around us, in which case, yeah, we probably can simulate our own minds. But if we're not? If our conscious decision making is actively driven by players "off-screen" who tell us what to think and do, then we could very well get everything within our world right and get a total failure to actually simulate our behavior.

I'm not saying it's certain to be one way or another. Hell, it's entirely possible to cross those lines. Could be that we're not entirely of this world, but that there's enough decision making being done within our minds to get a convincing sim that works exactly like us anyway. We just won't 100% know any of that until we go for it and get it right, and even that won't solve the questions of "are we real?" and "is there something outside the Universe involved with us?"

Robrecht Your friendly neighbourhood Regent from The Netherlands Since: Jan, 2001 Relationship Status: They can't hide forever. We've got satellites.
#247: Apr 3rd 2017 at 5:22:53 PM

"There's increasing evidence that this world doesn't actually exist at all. It's a simulation."

I now simultaneously understand why there is no 'eye roll' smiley and lament its absence.

edited 3rd Apr '17 5:23:17 PM by Robrecht

Aszur A nice butterfly from Pagliacci's Since: Apr, 2014 Relationship Status: Don't hug me; I'm scared
#248: Apr 3rd 2017 at 11:06:43 PM

"Simulation is possible, hence there's a higher chance of us being a simulation" is dumb.

There is not infinite space within virtual reality. Just because you can put a matryoshka doll inside a matryoshka doll does not mean that the inner doll will be exactly the same as the doll it came from. At some point, whatever you put inside a small matryoshka doll will be too small to even be recognizable as a matryoshka doll, and while you could technically keep putting stuff inside of that, it doesn't mean that when you get to the Planck-length-sized matryoshka doll you get to flout the laws of physics because "HEY I GOT TO PUT MANY DOLLS IN THE ONE BEFORE THIS ONE WHY STOP NOW".

You don't create more energy in a simulation IN REALITY. Just because you make a game or write a comic where a superhero creates enormous energy with a single breath doesn't mean you also created that much energy: the actual expenditure of energy was just you drawing or writing it, and its involved processes.

It has always been the prerogative of children and half-wits to point out that the emperor has no clothes
CenturyEye Tell Me, Have You Seen the Yellow Sign? from I don't know where the Yith sent me this time... Since: Jan, 2017 Relationship Status: Having tea with Cthulhu
#249: Apr 4th 2017 at 1:32:59 AM

I think TAP is referring to the Holographic Principle. (If this article or the other wiki cannot explain it, it is beyond me for now)

crazysamaritan NaNo 4328 / 50,000 from Lupin III Since: Apr, 2010
#250: Apr 4th 2017 at 8:02:31 AM

"It doesn't prove that, it provides a theoretical basis for it."
My apologies, I thought it was clear that we were discussing a conditional. Given: we have successfully simulated a complete human brain from inside a computer. Then: we have a human-level intellect within a computer.
"If our conscious decision making is actively driven by players 'off-screen' who tell us what to think and do, then we could very well get everything within our world right and get a total failure to actually simulate our behavior."
Not that I understand how the Holographic explanation of the universe differs from the Membrane explanation... Not that you've provided any evidence that the Hologram explanation requires these outside agents... But yes; if there's a supernatural component to consciousness, that could be an insurmountable problem to the idea of creating a new form of consciousness.


Total posts: 424