Headscratchers: The Singularity

Note to posters: The Singularity Institute has a nice FAQ about the singularity, with several citations and links to scholarly articles. Please give it a look before asking a question.

There are a few unsubstantiated assumptions behind the standard "AI will usher in the Technological Singularity" theory.

  • 1: Artificial Intelligence is possible.
    • We don't know that it is.
  • 2: AI can upgrade itself.
    • For all we know, any modifications will require a reboot to take effect, and that destroys the machine.
  • 3: The machine wants to improve itself.
    • As far as we know, reproduction and improvement are strictly meatbag biological imperatives.
  • 4: The machines will be smarter than us.
    • Maybe our brains are as smart as you can get for 3 lbs. Or maybe coherent consciousness has an upper limit on intelligence. Or maybe biological systems are intrinsically faster than silicon.

  • 1. I don't know why you would think that it isn't; intelligence can clearly be implemented in organic matter. Carbon and oxygen and hydrogen aren't magic, and machine intelligence continues to increase little by little. It seems clear to me that general intelligence will be implemented on silicon at some point, barring some kind of horrible catastrophe that wipes us all out.
  • 2. Wait, what?
    • I believe he's referring to the teleporter dilemma as applied to computers: something arrives at the goal, but whether it's still the same entity is more a question of philosophy than anything else, and not many living beings would willingly take that gamble. Of course, this still assumes that in some hypothetical scenario, changes to the operating system would require a reboot in the first place.
  • 3. Improvement, like survival, is a nearly universal value in that it enables one to better pursue one's goals; it would only be discarded when in direct conflict with those goals. It's hard to imagine that, whatever such a machine wants (i.e., is programmed) to do, it won't see improving itself as a logical step toward getting it done faster, better, and with less risk of failure. See Basic AI Drives.
  • 4. If there is an upper limit on intelligence, I would be very surprised if it just happened to be at the human level, or so slightly above it that there was no room for significant improvements. You are gonna need a lot of evidence before I even begin considering that one. Also, who says that an artificial intelligence needs to be run on 3 lbs of hardware?

  • 1. We are all artificially intelligent
  • 2. We are all capable of upgrading ourselves (in many countries, it is illegal NOT to spend over a decade doing nothing but this)
  • 3. Most of us want to improve ourselves
  • 4. Some of us are smarter than the rest of us

  • Even if intelligence is impossible without a shell made of carbon, hydrogen, and oxygen, such a shell can be artificially made (there are something like seven billion living examples to support this statement), and could probably be created without a traditional body to control (why not?). Such a shell could be much larger than the human brain, and presumably more powerful (perhaps the human brain in its current configuration and size is the most intelligent thing possible, but this is doubtful). At least one such artificial construct would want to improve itself or another intelligence (more such intelligences could be created until one was like this). It would be capable of improving itself or another like it, and so on until at least one is more intelligent than humans. The singularity is inevitable. Only xenocide can stop it.

  • Specifically in answer to no. 2: take a look at Tool Command Language or PicoLisp, two programming languages that actually let you completely rewrite the application code while it's running (technically possible with any language, but these two really showcase it by being almost totally homoiconic; see the sketch after this list). Having to reboot for changes to take effect is a bizarre design decision popularised by certain current-gen operating systems, not universal either to all systems or all programs.
    • As an addendum: take a look at the Crawler-Tractor, implemented in newLISP, for a piece of code that doesn't just modify another part of the running program: the same function actually mutates itself, creating an endless cycle while using neither recursion nor a loop. This sort of thing is why Lisp dialects got a reputation for being good for AI work from the 60s through the 80s.
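  For the general flavour of the trick, here is a minimal sketch in Python (an invented toy, not the Crawler-Tractor itself, whose loop-free cycle leans on Lisp homoiconicity that Python lacks): the running program replaces one of its own functions with an improved version and carries on, with no reboot anywhere.

      # Toy self-modification sketch; all names here are invented for
      # illustration. The process rebinds one of its own functions at
      # run time and keeps going, without restarting.

      def think(x):
          return x + 1          # version 1: the "naive" behaviour

      def self_improve():
          def smarter_think(x):
              return x * 2      # version 2, built while running
          # Rewrite part of the running program by rebinding the name:
          globals()["think"] = smarter_think

      print(think(10))   # 11  (old code)
      self_improve()     # no reboot, same process
      print(think(10))   # 20  (new code)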

  • Heck, even if the AI did have to reboot to improve itself, it could just save its present mind state, reboot, and go on doing whatever it was doing, except better now. Machines would have much more reliable non-volatile memory than biological brains do.
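  A hedged sketch of that save/reboot/resume cycle, assuming the "mind state" is just a serialisable object (the file name and structure are invented for illustration):

      # Checkpointing sketch: persist state, let the process die, and
      # resume on the next boot exactly where the last run left off.
      import json, os

      STATE_FILE = "mind_state.json"   # hypothetical checkpoint file

      def load_state():
          if os.path.exists(STATE_FILE):
              with open(STATE_FILE) as f:
                  return json.load(f)       # resume after a "reboot"
          return {"thoughts_so_far": 0}     # first boot

      def save_state(state):
          with open(STATE_FILE, "w") as f:
              json.dump(state, f)           # checkpoint before shutdown

      state = load_state()
      state["thoughts_so_far"] += 1
      print("thoughts so far:", state["thoughts_so_far"])
      save_state(state)
      # Kill and restart the process: the count keeps climbing, because
      # the state survives the reboot in non-volatile storage.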

  • Since technology must be based on itself or what came before, it can't increase any faster than exponentially. So why is it named after a mathematical phenomenon that doesn't occur on an exponential curve!? (Oh, and with respect to the four questions: it's one of the basic tenets of science that unless there is a reason for things to be different, they're probably the same. (1) Since humans are organic computers and are intelligent, silicon-based intelligence is not different. (2) See the production line: robots building more advanced computers and robots. (3) There are already path-perfecting programs in existence; these actively try to find the best solution given certain output parameters, and if those parameters involve intelligence, then the program will develop intelligence (if given the right starting tools). (4) There is no reason why there would be a limit.)
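  For what it's worth, the usual answer to the naming question: if each improvement feeds back into the rate of improvement (roughly dx/dt = x^2 rather than dx/dt = x), the solution is x(t) = 1/(T - t), which blows up at a finite time T, a genuine mathematical singularity, whereas plain exponential growth never does. A back-of-the-envelope sketch, illustration only:

      # Euler integration of the two growth models. dx/dt = x stays
      # finite for all time; dx/dt = x**2 from x(0) = 1 solves to
      # x(t) = 1/(1 - t) and explodes right around t = 1.
      def euler(rate, x0=1.0, dt=1e-4, t_end=2.0):
          t, x = 0.0, x0
          while t < t_end and x < 1e12:
              x += rate(x) * dt
              t += dt
          return round(t, 3), x

      print(euler(lambda x: x))      # reaches t_end = 2, x about e**2
      print(euler(lambda x: x * x))  # hits the cap just after t = 1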

  • Who the Hell actually wants this sort of thing to happen? The death of baseline humanity isn't a remotely pleasant thought yet scientists talk about it with a disturbing glee.
    • Neckbeards who have given up all hope that they can better themselves and instead fantasize about a future in which they are gods.
    • It depends on your definition of identity. In terms of Mind Uploading, is an exact copy of something essentially the original? Many scientists seem to think so, though I'd have to say it's not the way I see it.
    • Who says the death of anyone is a requisite for this?
    • Who says that we won't be human anymore? There is nothing to suggest that altering the materials of our bodies or location of our consciousnesses will somehow dehumanise us. And what's so special about a damp mass of organic matter that gets sick?
      • 'What's so special about a damp mass of organic matter that gets sick?' Yeah, that doesn't scare me away at all. What's so special about a rusted out tin can?
      • 'What's so special about a rusted out tin can?' Well, it's smarter, faster, more durable, backup-able, modular, possesses practically unlimited memory capacity and is immortal.
    • Those in favor should keep in mind that Brain Uploading isn't like moving your mind from one shell to another. It's creating a digital copy of the original. From your perspective, you're gonna stay in your meat body (unless you get destroyed in the process, which isn't any better).
      • Brain Uploading isn't the only potential option, though.
      • Correct. Personally, I would only Brain Upload if I were about to die and no other technology could preserve my life. I would prefer a slow transition to full artificial intelligence: some type of specialized Nanomachines that behave like human brain cells and neurons, attach themselves to my existing ones, and vastly increase their capabilities. Then, as my organic ones die one by one, the specialized nanomachines would already be doing the majority of the work of my by then centuries-old organic ones. It would be hard to argue that I'm no longer me; after all, which brain cell was me when it died?
      • What's the difference between moving from one shell to another, and destroying the original while creating an identical copy? (This is a rhetorical question: there is no difference. Physics already describes the universe as working exactly like this.) If, as you say, preserving the original is even better ("X isn't any better than Y" amounts to "Y is at least as good as X"), then there really isn't any bad consequence to your observation.
      • Because in the latter case, you die. From an outside perspective, you'll still be around, but from your perspective your existence ends.

  • Should we ever achieve the Singularity, would it be moral, or even laudable, for us who become part of it to make becoming part of it obligatory? If so, is it a morally good thing to coerce people, by force or otherwise, to become part of said Singularity?
    • See also Assimilation Plot for how various writers deal with such.
    • No, forcing somebody to participate in the singularity would not be moral. Natural selection would take its course: eventually, everyone who did not upgrade would die out (or simply become irrelevant), simply due to not being competitive in a post-singularity society.

  • If the Singularity is supposed to be incomprehensible, how could anyone predict it? It's going to come out of left field by definition, right?
    • No, it's what comes after the singularity that is incomprehensible; the singularity itself, or at least what leads up to it, we can comprehend. And besides, just because I can't see a rock fall doesn't mean I can't hear the thud.
    • Also note that there are several different definitions of the singularity, only one of which states that what happens after the singularity is incomprehensible.