Archived Discussion Main / LogicBomb

This is discussion archived from a time before the current discussion method was installed.


Kimiko Muffin: I clarified the bit about the "Three Laws of Robotics", since it's only the First Law that caused robots to shut down. (They could break Laws 2 and 3 just fine, as long as they were obeying one of the lower-numbered laws, as demonstrated in "Runaround" by an old robot which, in spite of orders, rushes in to save a human with no subsequent ill effects.)

Scrounge: It's self-explanatory that anything piloted is immune by default, but should there be a note here about it only applying to a logic-type AI, as opposed to robots with emotions who are capable of behaving illogically? This sort of thing won't do you any good against, say, any number of Transformers. MUCH LATER: Added.

Pro-Mole: I wonder... computers today are actually prone to being locked into situations like that. According to a professor of mine, the easiest way to do this is simply to write a program that creates one million integers and then starts an infinite loop accessing those integers in a random order, which would cause massive thrashing. In my own experience, only 100 or so iterations render the machine useless for some time, so even trying to shut the program down is useless...

So, imagine you get permission to run a program, or a simple script, on a server... Does this count as a Truth in Television of sorts?
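For concreteness, the recipe described above might look something like the sketch below (Python chosen purely for illustration; the size is the post's "one million integers", and on modern hardware genuine thrashing would further require the data to outgrow physical RAM):

    import random

    # A toy version of the professor's recipe: allocate a pool of
    # integers, then loop forever reading them in random order.
    N = 1_000_000          # "one million integers", per the post above
    data = list(range(N))

    while True:            # the infinite loop of random accesses
        _ = data[random.randrange(N)]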

Haesslich: Another anime/manga example, from Keroro Gunsou: Giroro uses this to blow up a 'weapons-prohibiting' robot by pointing out that it is also a weapon, as it attacked Giroro earlier, and thus fell under its own ban. It appears in a (slightly) modified form in episode 123 of the series, and earlier on in the manga.

Caphi: So I reread "Satisfaction Guaranteed" by Asimov, and it turns out I remembered it wrong. Anyone know what story made the claim that a robot would be destroyed irreparably way before it could possibly break the First Law? Might have been a Baley...

The Jerf: Pro-Mole: It is trivially easy to execute a denial-of-service on almost any machine you can run programs on. (There are some exceptions; Linux has ways of setting resource limits, for instance, if you poke it right.) In Trope terms, though, this is really about A.I.s only. The odds of a real AI suffering from this are basically zero, for three reasons. First, it is unlikely to use an internal representation that is prone to this problem. Second, even if it did, it would still need to be built to deal with this problem, because in real life, contradictory input would be encountered all the time. (In a real logic system, it takes very, very little to encounter contradictions; that is actually one of the weaknesses of such systems for real AI, and it is why point 1 is true: it is unlikely that these will ever be useful for building AI.) Consequently, walking up to an AI and reciting the Cretan paradox isn't going to shut it down; it is probably resolving thousands of similar conflicts per second anyhow. Third... AI researchers are pretty Genre Savvy; they simply aren't going to be dumb enough to let this pass. Good odds that feeding an AI a Cretan Paradox to "see what happens" will be part of the early design phase of the AI, just for fun, precisely because of this trope.
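As a footnote to the above: even the most naive way of handing a program the Cretan Paradox bears this out. A direct encoding of "this statement is false" doesn't melt the machine; the runtime's own safety net trips first. A toy sketch (Python again, with the function name made up for illustration):

    # "This statement is false": evaluating the sentence's truth
    # value requires evaluating the sentence's truth value, forever.
    def liar():
        return not liar()

    try:
        liar()
    except RecursionError:
        # No smoke, no explosion: the interpreter just refuses to
        # recurse without end and reports it.
        print("Paradox handled the boring way: recursion limit hit.")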

  • The line below this is false.
    The line above this is true.

  • I am a pathological liar.

  • OK, so one of the examples is someone saying "Did you know that everything I say is a lie?" Now, I know this is an old, old question, so I must be overlooking something, but that doesn't seem like a paradox to me, because the truth is that some things I say are a lie, including that statement. Right? - Balso Snell

    The statement contradicts itself, and that is enough to define it as a paradox. The fact that a logical scenario can be applied is irrelevant to that particular definition - though it does mean the statement isn't a true "logic bomb".
