We gather together in troupes and juggle beach balls on our noses for the rich. The best acts get the food. Or, if you're in Germany or Canada, you're probably living in a proto-Culture.
Serious Mode: The Atlantic tries to explore a world without work
—with one outcome resembling the household economies common during the transition from the agrarian to the industrial era. But what happens with automation is more an issue of politics than tech.
edited 1st Apr '17 1:20:19 PM by CenturyEye
Look with century eyes... With our backs to the arch And the wreck of our kind We will stare straight ahead For the rest of our lives

You can't fully automate most thinking jobs yet. You can offload the hard calculation parts to computers, but you still need humans to tell them where to go. The big thing with automation is the manual-labor type of jobs. It really is politics. IF we get a government that focuses on retraining people, and on getting our road, electrical, and internet systems up to snuff to serve EVERYBODY, we can get the majority of millennials ready for that kind of economy. Hell, a lot of Baby Boomers could be retrained for it too; there's just not as much call for it, since they're phasing out and going into retirement within the next ten to fifteen years.
At that point we go for Automation-enabled Socialism where the survival stuff is provided for us by machines so we can focus on the higher level stuff like design and future-resistance engineering. Space travel, that kind of thing. Money will still be a thing in some form, but it'll mostly be for the excess non-survival stuff. Better tasting food for those who care. Hobby stuff for those who want to engage in it. The only stuff provided outright should be enough food, shelter, and medicine to keep you alive, and job training/computer access to find work. Maybe some low level entertainment stuff like karaoke bars for spending time with friends. Otherwise, you work for the rest.
It won't be possible for a long time, not until we cut out the excess pollution and let nature find her new equilibrium. At that point, we need to find some nonviolent way to pull the reins out of corporate hands and hold onto them ourselves until the corporate overlords stop charging us for survival stuff. As the Declaration of Independence says, we have the UNALIENABLE RIGHT to Life, Liberty, and the PURSUIT of Happiness. If you need to pay for what keeps you alive, that's a direct violation of your RIGHT to life. As for happiness... if you can be happy with just enough to survive and no frills to life, go for it. Otherwise, go get a bloody computer job and earn it.
And I'm saying that because the fundamentals are so dissimilar, getting a computer to model a whole brain accurately is impossible. It's a lovely plot device in sci-fi, but that's all it is.
However, AI getting true intelligence? Feasible. If we don't keep hammering computers to copy our processing.
Think of them as another order of animal (or well beyond: homebrew aliens). Like, say, squid. We don't ask squid to use their frontal lobes like we do, do we?
edited 2nd Apr '17 2:06:21 AM by Euodiachloris
It's becoming easier and easier to believe that the entire Universe already IS a simulation. So we likely will be able to simulate the human brain. That being said, how well the simulation holds up compared to our real brains is a good question. We will just have to wait and see. I'm agnostic myself so it really doesn't matter to me one way or the other. And being able to simulate the brain really doesn't impact anything supernatural. You'd only be simulating behavior.
edited 2nd Apr '17 12:27:46 PM by crazysamaritan
Link to TRS threads in project mode here.

Thoughts on the brain in a line: Here is this mass of jelly that you can hold in the palm of your hand, and it can contemplate the vastness of interstellar space.
There's that, plus the author of A History of Knowledge decided that we can detect an AI (I assume he meant strong AI) when the thing tells a joke and asks if it is funny. (This was in the nineties.)
Nowadays, I suppose we'll find an AI by "sapience," but we may end up moving the goalposts so often that we don't notice until we trip over one, because "sapience" has a definition about as useful as saying that space is big.
I wouldn't expect an AI to be anything like us. Unless we go into matrioshka-brain territory, I'd expect them to act more like trained hamsters that happen to build things/drive cars/etc. There's really no reason to build a human-like AI, when there's a) the old-fashioned way to get more people, and b) for what purpose? Once said mad scientist has her ego satisfied, the ethics board, the media, and all her neighbors come to her door asking unpleasant questions, like: how is she going to ensure the quality of life of a sapient AI, and would having it follow her orders violate anti-slavery laws, etc.
edited 2nd Apr '17 12:49:57 PM by CenturyEye
The original brain will be fine, but the copy might not necessarily enjoy being an AI, even if they remember volunteering. (Assuming that you copy the full contents, including memories etc. Even a "blank slate" would bring up some ethical concerns for many people, though.)
Yes yes, this is all pretty farfetched and so on, but it's interesting anyway, imo.
Still a great "screw depression" song even after seven years.
I'm saying ultimately we won't know whether we can do it or not until we actually do it. It's entirely possible we'll map everything accurately, flip the switch, and get either a vegetable or something that acts inhuman. It's also possible we'll do it right and get what a perfectly normal human being would be if they were jammed into a computer. And even that doesn't preclude the possibility of God, an Afterlife, or any of the rest. It just proves we can simulate a human being. Nothing more or less.
I always love it (read: get annoyed) when people think that mapping the neural pathways in an animal brain (be it human or some other animal) is all that useful for building a functional AI (of any level of sapience or cognition). 'Cause, you know, it's not. At all. Sure, a brain is a network of nodes along which electrical impulses travel, and that's superficially similar to a computer chip. And an MRI scan can give a 100% accurate picture of the physical structure of the neurons along which electrical impulses travel.
But the chemistry of a biological brain is just as important in routing and modulating those impulses as the way those pathways are connected. And the truth is that we barely know anything about the full range of effects that the many chemicals (not just neurotransmitters) in a biological brain have on the cognitive process.
Angry gets shit done.

It doesn't prove that; it provides a theoretical basis for it. But like Euo said, such a device would be orders of magnitude less efficient than the brain it was simulating. There are also significant bandwidth problems. So: theoretically possible, but impractical. Just aiming for a new, unique sapient AI is still on the table, however.
I'm done trying to sound smart. "Clear" is the new smart.

While this is in the realm of fiction, I find the way AI is done with the character of Bladewolf in Metal Gear Rising: Revengeance interesting. His brain is modeled after a human brain so well that it actually faces the same limitations an actual person has. He has to learn like a human being and lacks perfect memory. He also has even worse facial recognition than most people.
edited 2nd Apr '17 9:17:41 PM by M84
Disgusted, but not surprised

Seems plausible to me. Every theoretical framework for advanced AI that I have seen presumes that it is built on some form of machine learning. The main obstacle right now isn't the learning function itself, but the application of a learned solution outside of a narrowly restricted problem domain. This is the same problem we have with human children, where it's called "generalization": in other words, can little Jane take the skills she learned in math class and apply them to shopping? Can our expert chess-playing algorithm take those skills and apply them to politics? No, and it's a much trickier problem than anyone suspected.
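The "narrow problem domain" point can be sketched with a toy example (hypothetical, plain Python, not from any actual AI system): a model fit on one range of data looks fine in-domain, then falls apart the moment you step outside it.

```python
# Toy illustration of narrow-domain learning: an ordinary least-squares
# linear fit of a nonlinear target (y = x^2) looks fine on the training
# range but extrapolates badly outside it.

def fit_linear(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# "Train" only on the narrow domain x in [0, 1].
xs = [i / 100 for i in range(101)]
ys = [x * x for x in xs]
a, b = fit_linear(xs, ys)

# Inside the training range the fit looks acceptable...
in_domain_err = abs((a * 0.5 + b) - 0.5 ** 2)
# ...but far outside it the "learned solution" is wildly wrong.
out_domain_err = abs((a * 10 + b) - 10 ** 2)
```

The model hasn't learned "squaring"; it has learned a line that happens to track squaring on [0, 1], which is roughly the chess-to-politics problem in miniature.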
"It would take extraordinary circumstances to prevent us from accomplishing the mission at this point. One such extraordinary circumstance would be proof that human-level consciousness requires influence from supernatural forces."
That was my entire argument . . . There's increasing evidence that this world doesn't actually exist at all. It's a simulation. So depending on how that simulation is run, our consciousness might not actually exist within this world at all. It's most likely a self contained sim where we're part of the fake, nonliving world around us, in which case, yeah, we probably can simulate our own minds. But if we're not? If our conscious decision making is actively driven by players "off-screen" who tell us what to think and do, then we could very well get everything within our world right and get a total failure to actually simulate our behavior.
I'm not saying it's certain to be one way or another. Hell, it's entirely possible to cross those lines. Could be that we're not entirely of this world, but that there's enough decision making being done within our minds to get a convincing sim that works exactly like us anyway. We just won't 100% know any of that until we go for it and get it right, and even that won't solve the questions of "are we real?" and "is there something outside the Universe involved with us?"
"Simulation is possible, hence there's a higher chance of us being a simulation" is dumb.
There is not infinite space within virtual reality. Just because you can put a matryoshka doll inside a matryoshka doll does not mean the doll inside will be exactly the same as the doll it came from. At some point, what you put inside a small matryoshka doll will be too small to even be recognizable as a matryoshka doll, and while you could technically put stuff inside of that, it doesn't mean that when you get to the Planck-length-sized matryoshka doll you get to break the laws of physics because "HEY, I GOT TO PUT MANY DOLLS IN THE ONE BEFORE THIS ONE, WHY STOP NOW."
You don't create more energy IN REALITY by simulating it. Just because you make a game or write a comic where the superhero creates enormous energy with a breath doesn't mean you actually created that much energy: the real energy expenditure was you drawing or writing it, and the processes involved.
It has always been the prerogative of children and half-wits to point out that the emperor has no clothes

I think TAP is referring to the Holographic Principle. (If this article or the other wiki cannot explain it, it is beyond me for now.)

Modelling the behaviour of a cell in a program comes nowhere near replicating it in hardware. Which is what you'd need to do to get anywhere near as efficient as a cell when it comes to processing.
Chips work differently; let them get more efficient using the rules that apply to them before you ask them to store an effing model of a brain. Because how we process info and how they do varies markedly on the microscopic physical and chemical levels.
Processing models: beware taking examples from machines to describe how we think, and vice versa (the aforementioned canals and telegraph systems were, at one point, used as early cognitive models to describe thought: much error ensued).
edited 1st Apr '17 9:44:15 AM by Euodiachloris