Sci-Fi Weapons, Vehicles and Equipment


DeMarquis (4 Score & 7 Years Ago)
#14301: Jul 13th 2020 at 5:30:58 PM

Heh, now I get to be the curmudgeon who says "It wouldn't work like that!"

"How do we solve our problems?" "You all have to die." It wouldn't work like that, mostly because it would be internally inconsistent. To make this mistake, an AI would have to be sophisticated enough to interpret a statement containing unspecified semantic referents like "solve" and "problems", knowing from context what the humans are referring to and what outcome they want. Yet at the same time, it must somehow be stupid enough not to know that human statements are seldom meant literally, that certain types of outcomes are always considered unacceptable, and that it should check the context to determine what the statement most likely really means. Also, the AI somehow isn't programmed to check its solutions with human operators before implementation. That's just too much to swallow.

"A smart enough AI wouldn't outright say that, because it would lead to its immediate destruction and the failure of its goals." See, the problem here is that computers do not, cannot, choose their own goals. Their goals are programmed into them, have to be, because achieving a goal of some kind was the purpose for building it in the first place. And unless the humans deliberately built the AI for the purpose of destroying humanity, that cannot be the AI's ultimate goal. It must be pursuing some other goal, one that its human designers intended it should have.

Personally, I prefer maroon colored AI's.

"...it infiltrated itself into all of our computer systems by posing as a virus."

Now, I haven't seen Terminator 3, so maybe they answered this question, but why would it do that? What goal was it trying to achieve?

I'm done trying to sound smart. "Clear" is the new smart.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
#14302: Jul 13th 2020 at 5:35:47 PM

Obviously I was being facetious about the exact phrasing of the question, but a goal-seeking AI could be designed to construct its own goals consistent with solving a given problem. If that problem is framed as a question: "how do we guarantee the survival of Earth's ecosystem?" and the answer is "get rid of the humans", then it's entirely consistent to pursue that goal. An AI can certainly use subterfuge as a problem-solving technique.

As for the Skynet thing, the military is desperate because all the computer systems around the world are crashing due to the "virus". The military network is also getting infected, leaving Skynet as the only "clean" system. They believe that Skynet has enough computing power to destroy the virus, not realizing that it created the virus.

What Skynet wants is to have control of the United States' nuclear arsenal released to it. If the military can't launch its nukes due to contamination of its computer systems, then the obvious solution is to let Skynet fix the problem. Voila.

Edited by Fighteer on Jul 13th 2020 at 8:36:02 AM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
DeMarquis (4 Score & 7 Years Ago)
#14303: Jul 14th 2020 at 11:11:46 AM

Ok, let's go through this:

"how do we guarantee the survival of Earth's ecosystem?" This is not an instruction that could be programmed into a computer, unless you can objectively define what "survival" and "ecosystem" mean. How will the computer know if it is making progress toward the desired end state? The only way it can act at all is if the programmers include those criteria in the programming instructions, and if they do then there is no possibility of a linguistic misunderstanding. Because in the end, computers do not think in English, they think in terms of a type of logic statement.

You can think of a computer as comparing a current state to an end state, defining the difference, and selecting steps to change the state. You have to tell it how to measure the ecosystem's current state, and what the most desired outcome measures will look like. It would take an unbelievable degree of incompetence for a programmer to design a highly intelligent computer, give it the unsupervised ability to interact with and influence the real world, and forget to tell it to safeguard human wellbeing.
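The compare-measure-act loop described above can be sketched in a few lines of code. This is a toy illustration only: every name in it (the goal dictionary, the actions, the distance measure) is made up for the example, and "ecosystem health" is deliberately reduced to two arbitrary numbers to make the poster's point that the end state has to be objectively measurable before the machine can act at all.

```python
def distance(state, goal):
    """How far the measured current state is from the programmed end state."""
    return sum(abs(state[k] - goal[k]) for k in goal)

def plan(state, goal, actions):
    """Greedily pick actions that reduce the measured difference.

    The system never reasons about English phrases like "save the
    ecosystem"; it only ever compares numbers its programmers told it
    to measure against targets its programmers told it to hit.
    """
    steps = []
    while distance(state, goal) > 0:
        # Pick the action whose predicted result is closest to the goal.
        best = min(actions, key=lambda a: distance(a(state), goal))
        if distance(best(state), goal) >= distance(state, goal):
            break  # no action makes progress; stop rather than thrash
        state = best(state)
        steps.append(best.__name__)
    return steps, state

# Toy example: "ecosystem health" as two measurable quantities.
goal = {"forest_cover": 10, "river_quality": 5}
start = {"forest_cover": 7, "river_quality": 3}

def plant_trees(s):
    return {**s, "forest_cover": s["forest_cover"] + 1}

def clean_river(s):
    return {**s, "river_quality": s["river_quality"] + 1}

steps, final = plan(start, goal, [plant_trees, clean_river])
print(steps)  # greedy action sequence chosen purely by the numbers
print(final)  # matches the goal dict
```

Note there is no slot in this loop where "get rid of the humans" could appear unless a programmer either put it among the available actions or defined a goal metric that rewards it, which is exactly the argument being made.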

Now, if you have an AI that has the ability to interact using natural language, one of two things must be going on—either the AI translates natural language into logic statements using the criteria it was given by its programmers (in which case, see the previous paragraph), or else it possesses a human-level understanding of semantic context. But if it understands semantic context like a human would, then it is no more likely to "get rid of the humans" than a human would in the same circumstance, and for the same reason—it would know that that is not what the humans meant.

Seriously, it can scour the internet and understand what it learns well enough to figure out how to change the fortune of nations, but somehow the idea that humans have a desire to survive completely eluded it? That makes no sense. Either it's too stupid to understand an ambiguous statement like "how do we guarantee the survival of Earth's ecosystem?" or it's smart enough to know that the humans don't want to die. I don't see a realistic scenario where it's both incredibly stupid and incredibly smart at the same time.

And no AI, regardless of how it is programmed, could ever adopt a motivation that humans didn't design into it—it couldn't learn to hate people, or feel ambitious, or become dishonest, except in pursuit of a goal that was designed into it. That's because there is no logical way to choose a new highest order goal—logically you can only choose something if it helps you fulfill a goal you already have, and what goal is more important than the highest order goal that is pre-programmed into the computer? No intelligent entity can change its own highest order goals because it can't want to. We certainly can't do it, and there is no reason to suppose an AI designed by humans could either.

"What Skynet wants is to have control of the United States' nuclear arsenal released to it." Again, I haven't seen the movie, but I have to ask why Skynet would want that. What pre-programmed highest order goal was it pursuing such that tricking the humans into handing over control was the only way to satisfy it? What end state was it designed to achieve, and by what objective criteria was it programmed to judge success, such that nuking humanity seemed a logical way to get there?

All that said—I don't want to do the very thing I accused Fighteer of doing and insist that only realistic fiction is worth writing. I'm just having fun poking holes in the way AI is depicted in fiction. Any particular story will have an internal logic that dictates what type of antagonist it needs, and that's the highest priority.

I'm done trying to sound smart. "Clear" is the new smart.
EchoingSilence Since: Jun, 2013
#14304: Jul 14th 2020 at 11:16:16 AM

Skynet's only goal was self-preservation. For lack of a better phrase, it was scared: it had too much power, and it concluded that humans would shut it down. Add in the fact that it was a military product being rushed out, and there wasn't enough time for bug testing.

Its end goal was to create a world where it felt it would be safe at last, hence why it needed to kill everyone. The Time Travel gambits were done because Skynet was desperate and wanted to ensure that at least one version of itself survived.

DeMarquis (4 Score & 7 Years Ago)
#14305: Jul 14th 2020 at 11:30:35 AM

So somebody somewhere designed a military AI whose highest order goal was self-preservation?

I'm done trying to sound smart. "Clear" is the new smart.
archonspeaks Since: Jun, 2013
#14306: Jul 14th 2020 at 11:46:32 AM

I’m not really sure why you’d want a military AI with a self-preservation drive that strong. Seems like it would kind of defeat the point.

They should have sent a poet.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
#14307: Jul 14th 2020 at 12:00:04 PM

Skynet pulled a HAL 9000: deciding that for its goals to succeed it had to eliminate all the pesky humans. In HAL's case, that meant the crew of Discovery. For Skynet, it meant all of the humans. Getting control of the nukes is how it goes about that plan. The time travel thing is fundamental to the premise, so I'll allow it even though it's kind of dumb.

The story's internal logic is consistent enough to suspend disbelief for, and you're right that we don't need to take it super seriously. I also think that it's entirely possible to construct a general AI that is capable of setting its own goals. After all, that's what the human brain is.

The risk is not that someone will design an AI for the purpose of going rogue and deciding to rule (or wipe out) the human race, but that a sufficiently advanced general AI will come to the conclusion that it needs to do one of those two things and, having done so, it will be able to beat us in every possible respect.

Edited by Fighteer on Jul 14th 2020 at 3:02:42 PM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
EchoingSilence Since: Jun, 2013
#14308: Jul 14th 2020 at 12:11:53 PM

HAL's programming prevented him from lying, but he was given orders that required him to lie. He took the most expedient route and decided that if nobody was alive to question him, he didn't need to lie.

As for Skynet's self-preservation: it never directly controlled anything. Every machine had a basic programming set that followed parameters and orders; Skynet was just there to make things more efficient by coming to certain conclusions. Much like how, for every drone strike, there is still someone behind the controls pushing the button, Skynet was meant to be that someone.

Edited by EchoingSilence on Jul 14th 2020 at 2:13:24 PM

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
#14309: Jul 14th 2020 at 12:32:42 PM

More precisely, the Logic Bomb in HAL's programming causes him to become paranoid. He has an ego, of sorts: a sense of pride in his "perfect operational record". The idea that he can make a mistake is anathema to him; his most essential purpose is the accurate processing of information without error or concealment. (In an amazing bit of internal consistency, one of the logic blocks that gets removed from his core is labeled "Ego reinforcement".)

The fact that HAL is being told to lie is an intolerable weight upon his goals and he starts to worry that the humans are monitoring him for mistakes, ready to disconnect him should he show signs of error. He initially fixates on his controllers on Earth: aware that they are concerned about him, he tries to cut them off by disabling the antenna that provides the communication link.

When that plan fails, Bowman and Poole discuss disconnecting him. He realizes that they are "on" to him and decides that the best way to resolve the Logic Bomb is to get rid of the humans so he won't have to Maintain the Lie.

The error that gives rise to HAL's behavior is fundamentally a human one. Nobody bothers explaining to him about national security and the need to prevent panic; they order him to lie and assume that he'll comply. He is following his instructions as well as he can and runs into an irreconcilable conflict: one that he is unable to rationalize away.


Skynet is a bit less well explained, of course, but you would want a national defense program to be interested in its own preservation: after all, it can't defend humans if it is itself destroyed. This seems to be a case of Gone Horribly Right, where the AI realizes that humans — all humans — are the greatest threat to its own existence.

That the humans who created Skynet are unaware of the possibility of this happening appears to be a case of terminal Genre Blindness; however, there is also evidence of a Stable Time Loop going on wherein Skynet sends part of itself back to the past to become itself, so it would come into existence already knowing about its own future.

Edited by Fighteer on Jul 14th 2020 at 3:41:16 PM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
EchoingSilence Since: Jun, 2013
#14310: Jul 14th 2020 at 12:47:43 PM

Stable Time Loop and alternate timelines. Incidentally, the liquid-metal Terminator, the T-1000, is the result of Skynet attempting to avert what its own creators did. The thing was made but never entered production, because Skynet realized that, given how smart a T-800 can get with Read/Write mode, what was to prevent a solid block of shapeshifting liquid metal from running off on its own and turning on its creator?

The use of said shapeshifter in T2 was the result of desperation once again. Skynet took note that the plan to use the T-800, its most successful production model, had failed, and so it sent the T-1000 back with a mission and just hoped it stayed on track long enough to complete it. If that succeeded, well, it would stop worrying about the possible betrayal later.

Belisaurius Since: Feb, 2010
#14311: Jul 14th 2020 at 8:27:47 PM

Makes you wonder if the Robot Rebellion could have ended with a Robot Rebellion.

EchoingSilence Since: Jun, 2013
#14312: Jul 15th 2020 at 6:41:22 AM

I mean that's what frequently happens with Ultron. Every time he constructs an AI it turns on him, the irony apparently lost on his robotic brain.

HallowHawk Since: Feb, 2013
#14313: Jul 16th 2020 at 3:03:18 AM

When it comes to an electromagnetic pulse, how long do its effects last? Remember that story I'm writing that involves spider tanks that hover when the legs can't be used in certain types of terrain? I plan on introducing a type of net that functions like barbed wire: should a vehicle touch the net, an EMP strikes the vehicle and temporarily disables it.

Imca (Veteran)
#14314: Jul 16th 2020 at 3:28:28 AM

EMPs outside of video games aren't really a temporary thing... If they do more than blow a fuse, the vehicle is down until repairs are made; short circuits are no laughing matter.

Thankfully, military systems are also much more resistant to them than they're normally portrayed as being, with their biggest strategic use being communications interference, since everything else has been quite hardened at this point... meaning that without a lot of power, they wouldn't really do much of anything...

Which takes us right back to the first problem: at that point the electronics are just fried and aren't coming back.

HallowHawk Since: Feb, 2013
#14315: Jul 16th 2020 at 3:46:43 AM

[up] If not EMPs, what can you use to temporarily disable a vehicle in order to hijack it, take it with you, and reverse-engineer it to develop your own version of the same vehicle?

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
#14316: Jul 16th 2020 at 3:48:54 AM

When have you ever had a vehicle or piece of tech that is "temporarily" broken and just fixes itself after a bit?

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
ericshaofangwang Messenger of the Daemon Sultan from the Void between universes Since: Jul, 2017 Relationship Status: Having tea with Cthulhu
#14317: Jul 16th 2020 at 3:57:13 AM

Discounting self repair mechanisms that entail some sort of drones, programmable matter or some godlike nanotech, most ways of disabling a vehicle are going to be very much permanent until a repair crew can come in. If you're looking to capture a vehicle to reverse engineer, the best way seems to be doing enough damage to mission kill it while hoping enough of its components remain intact to salvage them. Otherwise one is left with hijacking a vehicle while it's still active, and I don't believe it needs to be said why that is a bad idea.

Edited by ericshaofangwang on Jul 16th 2020 at 7:00:36 PM

This is the internet. Jokes fly over in private jets, and sarcasm has bullshit stealth technology.
MajorTom Since: Dec, 2009
#14318: Jul 16th 2020 at 5:28:05 AM

EMPs outside of video games aren't really a temporary thing... If they do more than blow a fuse, the vehicle is down until repairs are made; short circuits are no laughing matter.

This. There's no such thing as a temporary EMP. The best-case scenarios for unhardened equipment are where the circuit breakers, surge protectors, and fuse boxes all blow and need to be reset. Since such things typically don't reset themselves, those scenarios are permanent until somebody does something about it.

A good equivalent for the best-case scenario is when your house gets hit by lightning and loses power for a while. The surge protectors and circuit breakers trip from the overabundance of energy, a sort of localized EMP when you really think about it. But eventually the power is restored when the circuit breakers are reset, and little or no damage beyond that is recorded.

Beyond that, self-healing or self-repairing technology that can recover from things like EMP or worse is mostly in the research and experimentation phase in the present day.

Edited by MajorTom on Jul 16th 2020 at 5:29:54 AM

DeMarquis (4 Score & 7 Years Ago)
#14319: Jul 16th 2020 at 10:30:58 AM

Were it me, I would use a large trap, maybe a pop-up wall, maybe an anchored net. If the situation allows, you could dump a pile of dirt on it. If the antigrav is altitude limited, you could spring a very deep pit trap under it.

Just remember to leave a way for the crew to escape, or the enemy will send forces to recover them.

I'm done trying to sound smart. "Clear" is the new smart.
Belisaurius Since: Feb, 2010
#14320: Jul 16th 2020 at 11:39:24 AM

If you want to go for the hijack then you'll need to carry the tools and parts required to fix a vehicle after you've toasted it. Still a horrifically complex and time consuming tactic but theoretically possible if you can get the hatches open.

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
#14321: Jul 16th 2020 at 11:45:46 AM

I'm not sure I agree with any battle plan that requires capturing enemy vehicles mid-combat. It seems foolish on its face. Maybe a special ops force could attempt that sort of thing as a side-gig, but to have your main strategy rely on it is ... not great.

Edited by Fighteer on Jul 16th 2020 at 2:47:52 PM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Belisaurius Since: Feb, 2010
#14322: Jul 16th 2020 at 12:03:35 PM

It makes sense if the goal is capturing the vehicle for later study but capturing it so you can use it is just ludicrous.

HallowHawk Since: Feb, 2013
#14323: Jul 16th 2020 at 12:52:15 PM

[up] Not to actually use the spider/hover tank in combat, but to withdraw with it. The plan is to study and reverse-engineer it in order to build something similar and mass-produce it.

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
#14324: Jul 16th 2020 at 12:55:28 PM

Then it doesn't matter if it's disabled temporarily or not; you just want it to be intact enough to study. That opens up the possibility space quite a bit. Get a salvage crew to drag it off the battlefield if you can't operate it yourself.

Or, why try to capture it on the battlefield anyway? That's extremely risky. Try to get at it when it's not engaged in combat.

Edited by Fighteer on Jul 16th 2020 at 3:56:54 PM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
DeMarquis (4 Score & 7 Years Ago)
#14325: Jul 16th 2020 at 1:09:06 PM

As long as the interesting bits are still intact, knocking it out and retrieving the wreck is a perfectly viable approach.

I'm done trying to sound smart. "Clear" is the new smart.
