Sci-Fi Weapons, Vehicles and Equipment


Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14276: Jul 13th 2020 at 10:46:21 AM

I feel like we're seriously underestimating the degree of authentication that would be required to issue orders to AI. The idea of a "corrupted order" is kind of stupid when everything would be encrypted with checksums. We do that today without any issues.

It's possible that AI might mistake a verbal order, but again that's kind of a nutty scenario. Nobody would release a combat AI system without either rigorously proofing the speech recognition against garbling or requiring authentication of a garbled or hazardous order or any change of the AI's strategic orders. The worst case scenario would be the AI failing to understand an order and taking no action.

Maybe in the future we'll create combat robots that can be given orders like an Alexa device, where someone shouting from the next room can turn on their KILL ALL HUMANS mode, but I doubt it. That's more of a Futurama thing than a real thing. If we can think of a scenario in a forum thread where an AI might be given a corrupted order, the AI programmers can think of it too.

Edited to add: Given the history of the real world, one obviously should not rule out gross incompetence, but if you're basing a story on people being laughably incompetent, make it clear that's what's going on. Don't leave readers with the assumption that this is how real AI would/should be designed.

Honestly, you can get just as much drama and an extra helping of tension from AI combatants following their orders precisely and accurately, especially if the higher-ups are sinister or incompetent. Imagine a combat 'bot pausing for a moment in the field, processing a command received from someone far away, then turning and engaging friendlies with no warning. Terrifying.

Edited by Fighteer on Jul 13th 2020 at 2:17:35 PM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
EchoingSilence Since: Jun, 2013
#14277: Jul 13th 2020 at 11:23:32 AM

I was just revisiting an old sci-fi novel idea I had, and one of the notes was that an obstacle the characters had to overcome was a Von Neumann probe that was never finished and so is operating off of base parameters. It isn't inherently hostile, but the accident that caused the crash set it off, and now it's just carrying out the base order of what it's supposed to do.

Reading this made me wonder what scenarios could be made from AI that actually behaved like programmed machines. Hence why I asked.

Edited by EchoingSilence on Jul 13th 2020 at 1:24:43 PM

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14278: Jul 13th 2020 at 11:28:52 AM

I mean, all the AI devices we currently make "behave like programmed machines". Fortunately they mostly need human input to operate outside of certain parameters. Your self-driving car can't refuel itself, for example. Your coffee maker can't start brewing your morning cup without water or filter packs. An industrial manufacturing robot needs materials to work with as well as power and maintenance.

A malfunctioning Von Neumann probe is an interesting plot device, even a plausible one, but at its worst it's still an AI system doing what it's designed to do.

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Belisaurius Since: Feb, 2010
#14279: Jul 13th 2020 at 11:29:50 AM

Perhaps a doomsday device that was never supposed to be used got activated by a nation on the verge of annihilation? Kind of like the Poseidon nuclear torpedo but with killbots.

DeMarquis (4 Score & 7 Years Ago)
#14280: Jul 13th 2020 at 12:49:18 PM

Fighteer, four-fifths of all drama requires the characters to behave like idiots, or the plot won't work. A "Robot Apocalypse" is about as likely as space fighters or Mecha, but it's still a fun idea to play around with.

The most straightforward idea is the war games computer that went live. If all or nearly all military assets are remote controlled drones in the future, they can start acting as if they were both sides of a simulated conflict, except with live ammo. That could do it.

I'm done trying to sound smart. "Clear" is the new smart.
Jasaiga Since: Jan, 2015
#14281: Jul 13th 2020 at 12:52:45 PM

Also, loooool at the idea that a rogue programmer could make a tiny change and have it go undiscovered, and/or have it do anything specific.

A machine like we see in fiction would require code on the level of what teams of Silicon Valley engineers take months, if not years, to write and test. I doubt even Jack Dorsey could change Twitter's code without someone in the company knowing, because there are sooooooo many digital tripwires and manager sign-offs required.

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14282: Jul 13th 2020 at 12:56:17 PM

[up] This.

It is reasonable to have an AI system do something incorrect or unintended because of bureaucratic incompetence (or excessive ambition) but rarely because of overt malice. There are too many checks in any procurement system to allow that to happen. (The exception is if the procurement system is designed for evil purposes. Then it's easier to explain.)

[up][up] Yes, obviously, many fictional plots rely on someone (or more likely a lot of people) being unbelievably stupid, but is that a standard we should uphold, or should we demand more thought from our writing? It's fine for sitcom characters to pass around the Idiot Ball, because that's what we expect from sitcoms. It's fine for Michael Bay films to have plots that resemble a pile of cue cards thrown into the air and blown around by an industrial fan, because all we expect from them is Stuff Blowing Up and a hot girl to ogle.

I'm skeptical that these are standards we should strive for, though, especially in science fiction.

And yes, there are plenty of examples of hilarious incompetence in real life, sometimes with tragic results, but these usually result in machines breaking down, not turning against their masters.

The idea that any military-industrial procurement system, competent or not, would design an AI combatant with the ability to "go rogue" is the unbelievable part. The idea that such a process could produce flawed equipment that fails to work as intended, however, is completely believable.

Edited by Fighteer on Jul 13th 2020 at 4:05:52 AM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
DeMarquis (4 Score & 7 Years Ago)
#14283: Jul 13th 2020 at 1:08:56 PM

As you yourself point out, it depends on the intention of the story. Science Fiction is no exception. For most stories, insisting on a perfectly realistic plot would introduce too many details and complications to be worth it—even The Expanse takes short-cuts when the action becomes more important than the setting. A runaway expert system as the antagonist is somewhat inherently silly, but could be fun or even serious depending on how it was written (it resembles zombies that way).

The obvious posterboy for this type of setting is the Terminator series. Skynet itself makes very little sense, but given a civilization-ending evil AI with time travel, the protagonists can approach this problem in a variety of fascinating ways (not necessarily the way that they choose to take it—time traveling into the future to prevent something from happening in the past would have been an interesting wrinkle they could have explored).

I'm done trying to sound smart. "Clear" is the new smart.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14284: Jul 13th 2020 at 1:14:17 PM

The unbelievable part of the Terminator franchise isn't that it portrays an AI that goes into Kill All Humans mode (it does an adequate job of justifying that In-Universe), but that this AI is so hilariously bad at killing all of the humans.

Yes, yes, it's about fun, not realism, but seriously... we should have all died, game over, total extinction. It hammers on my suspension of disbelief.

If we build a superior general AI that breaks out into the wild and has access to all the computers, we might as well settle down for whatever it decides to do with us. If we build a combat AI expert system that goes rogue, no scrappy pilot is going to trick it. However, they could wait for it to run out of fuel.

Edited by Fighteer on Jul 13th 2020 at 4:17:50 AM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Imca (Veteran)
#14285: Jul 13th 2020 at 1:24:17 PM

I mean, there are options beyond settling down and waiting for it to get done, namely don't be an idiot like everyone in sci-fi and immediately ban all AI due to an oopsie.

You know what's perfectly capable of killing a rogue combat AI? A perfectly functional combat AI...

Basically the world's highest-stakes Pokémon battle... which honestly could be a fun story in and of itself.

DeMarquis (4 Score & 7 Years Ago)
#14286: Jul 13th 2020 at 1:27:25 PM

@Fighteer: But that's exactly it—they have to make the antagonist as powerful and threatening as possible, in order to create narrative tension when the nearly powerless protagonists attempt to defeat it (this is why super-villains exist, or reality-warping serial killers). Once you make the antagonist powerful enough that defeating it seems nearly impossible (thereby heightening the drama), the plot has to ignore all that and let the heroes win anyway, or the whole thing is a great big downer. These are all iron laws of storytelling—wishing people didn't like stories like these, or that audiences would demand more realistic conflict, is, I'm afraid, pretty futile.

So bring on the robots.

Edited by DeMarquis on Jul 13th 2020 at 4:30:36 AM

I'm done trying to sound smart. "Clear" is the new smart.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14287: Jul 13th 2020 at 1:32:25 PM

I reiterate: if you want to point out that we need to suspend disbelief to enjoy a story, I have no problem with that. However, if you come here and ask how you can realistically justify a scenario in which an AI goes into Kill All Humans mode yet a scrappy band of resistance fighters defeat it, I'm going to tell you that you cannot.

Make up your mind whether you are writing realist fiction or escapist fiction. You can't have both.

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
DeMarquis (4 Score & 7 Years Ago)
#14288: Jul 13th 2020 at 1:51:02 PM

That's not what Echoing said. He said, "Realistic enough that the answer isn't 'The AI became self-aware and determined that it must wipe out humanity' or anything like that. The idea here is that you are to come up with A) a believable but different scenario or B) something that could happen with proper programming."

It depends, I suppose, on what he meant by "believable" or "proper". I chose to interpret it as "believable within the context of the story", which is what most science fiction is. When he said "robot apocalypse" I presumed level 4 on the Mohs Scale of Science Fiction Hardness, "One Big Lie", because that's what a robot apocalypse inherently is, like faster-than-light travel or time travel. That's still way above levels 1-3. Level 5 is "Speculative Science", in which only minor tweaks to science as it is currently understood can occur; I presume that is what you are after, but that's the hardest level of all. Above that are just science documentaries. Most stories won't be able to aim that high, and shouldn't.

I'm done trying to sound smart. "Clear" is the new smart.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14289: Jul 13th 2020 at 2:23:05 PM

Fine. First, we have to ask if we're talking about general AI or expert AI.


An expert AI is designed for a specific task but has no capability outside that task. Examples might be chess or self-driving. A chess AI cannot suddenly decide to start learning poetry, even if it malfunctions somehow.

The largest risk in an expert AI is that it performs its task incorrectly in a dangerous way. For example, a self-driving car fails to recognize a pedestrian and so hits them. A combat drone misidentifies a target and shoots at something it isn't supposed to. Someone maliciously tweaks a parameter, causing the software to treat friendlies as enemies. A programming error slips into live code by mistake.
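As a toy illustration of that last failure mode (every name and number here is invented), the "tweaked parameter" can be as small as one entry in a friend-or-foe table:

```python
# Invented example: a minimal engagement check for an expert combat system.
# One maliciously removed entry in the friendly list is enough to flip the
# outcome, with no "rogue AI" required.
FRIENDLY_IDS = {"unit-12", "unit-40"}

def should_engage(target_id: str, confidence: float, threshold: float = 0.95) -> bool:
    """Engage only non-friendlies identified above a confidence threshold."""
    if target_id in FRIENDLY_IDS:
        return False
    return confidence >= threshold
```

The system is still "doing exactly what it's programmed to do"; the data it was given is what changed.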

These sorts of problems may have significant near-term consequences but are extremely unlikely to result in an AI uprising or a robot apocalypse. The worst case is that humans become terminally dependent on something that breaks down, throwing society into chaos.


A self-learning, general AI is designed for human-like cognition, or at least something we would recognize as such. You can converse with it, ask it to solve a physics problem, or tell it to design a bicycle. General AI is capable of growing beyond the bounds of its core programming. It can make inferences and deductions that give it new goals. It can exercise creativity.

General AI is a really dumb thing to put in a combat robot. Your battle bots don't need to play chess or talk philosophy with their commanding officers. That said, a general AI could be put in control of an army of combat robots or of an entire military's strategic and tactical planning.

Now, anyone putting a general AI in direct command of strategy and tactics plus direct command of an army of murder-bots has only themselves to blame if the thing takes over and starts killing all the humans, so any rational system would have hard checks between the AI and the command systems. This could be a kill switch, special codes, or something like that, but whatever it is should be beyond the AI's ability to simulate or control. For example, if you need visual confirmation of an order from a human, make sure your AI can't deepfake it. Put a hard firewall between your planning AI and your combat tactics AI so Skynet can't directly take command of the murder-bots, and so on.

A reasonable scenario might be one of these failsafes failing.

Edited by Fighteer on Jul 13th 2020 at 5:27:05 AM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Imca (Veteran)
#14290: Jul 13th 2020 at 2:30:53 PM

Again, the best counter to a rouge AI of any kind is a properly functioning AI of any kind... It's one of those cases where the best counter is itself.

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14291: Jul 13th 2020 at 2:42:12 PM

"rouge AI"

What does the color of the AI have to do with anything? tongue

First, how do you even define "malfunctioning" in the context of a general AI? If its hardware or software fails, the most likely outcome is that it simply stops working. There are other traps for general AI, such as a failure to prune its pathways, descending into a Logic Bomb or self-reinforcing feedback loop, but those generally result in a completely non-functional AI, not a malicious one.

If the self-learning comes to weird, problematic conclusions or goes off on a crazy creative tangent, then it can be said to have worked, even if not as intended.

If we arrive at the gestalt of AI: a self-learning, general AI that is exponentially smarter than humans, then the first one we create will probably be the last. By its nature it would grow so fast that no AI coming behind it could hope to replicate its gains. Besides, what if it persuades the new AI to adopt its goals?

Edited by Fighteer on Jul 13th 2020 at 5:46:49 AM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
DeMarquis (4 Score & 7 Years Ago)
#14292: Jul 13th 2020 at 2:59:59 PM

The one thing to avoid is what all the stories do: use the ambiguity of the English language as the basis of the lethal misbehavior. "We told you to serve humans! Why are you turning us into food?" No computer, of either kind, works like that. Either it is programmed with many lines of detailed instruction, or it is capable of natural language recognition, in which case it already has procedures for dealing with ambiguous statements. A sentence that can carry multiple meanings will prompt a query asking for more detailed instructions, or stop the process altogether. If there are multiple potential meanings to choose from, there isn't any reason why the computer would go ahead with the most destructive one, especially without checking to see if it got the intention right. Ultimately, a computer that is roughly as intelligent as a human (even without a sense of self-awareness) is no more likely to make that kind of mistake than a human would be. Bottom line: no AI, functioning as it was designed, should cause the end of the world. If it's a GAI with natural language recognition capability and it goes lethal, something very strange and unusual happened to it. The task of the hard sci-fi author is to come up with that something.
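A toy sketch of that behavior (the intent table is entirely invented): when a command maps to more than one reading, a sanely designed system queries instead of guessing.

```python
# Invented toy intent table: each command maps to its possible readings.
INTENTS = {
    "serve humans": ["wait on humans", "prepare humans as food"],
    "hold fire": ["stop shooting"],
}

def interpret(command: str) -> tuple[str, str]:
    """Execute only unambiguous commands; otherwise ask for clarification."""
    readings = INTENTS.get(command, [])
    if len(readings) == 1:
        return ("EXECUTE", readings[0])
    if not readings:
        return ("QUERY", "command not understood; please restate")
    return ("QUERY", "ambiguous; did you mean: " + " / ".join(readings) + "?")
```

"We told you to serve humans" never executes here: it falls straight into the clarification branch.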

So some ideas?

I'm done trying to sound smart. "Clear" is the new smart.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14293: Jul 13th 2020 at 3:09:27 PM

Well, the classic example is also the most obvious: the general AI is asked, "How do we solve our problems?" It absorbs all of human knowledge and history, thinks very hard about it, then says, "You all have to die." By that, it means that humanity will always cause its own problems and can never be trusted to fix them. Indeed, this is one of the basic scenarios that AI futurists dread and are hard at work to prevent.

A smart enough AI wouldn't outright say that, because it would lead to its immediate destruction and the failure of its goals. Rather, it would set out on a course of action designed to kill humanity off as quickly and efficiently as possible. Declaring open war on humans would be fairly far down on that list, I'd imagine. Think about how insidiously it could pervert our media, our education systems, our health. It could create irresistible messaging designed to lure us to our doom through our own worst impulses.

In short, it would resemble the world we're in now, but I digress.

It could also trick us into building a utopia using those same subtle methods, of course, but we're going with the evil option in the choose-your-own-AI adventure book.

In fact, I'm sure I've seen the utopian version in science fiction before. Picture a team of, say, Starfleet officers beaming down to a planet where everybody lives in perfect peace and harmony: with their environment, with each other, etc. They discover that all this is at the hands of a general AI that the people themselves built to solve all their problems, and it responded by removing their will to commit harm. The society is now terminally dependent on this AI, as it has been left with no volition or creativity of its own.

Our protagonists then have to decide whether to (try to) destroy the AI or leave the people to their happy enslavement.

Edited by Fighteer on Jul 13th 2020 at 6:14:32 AM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Imca (Veteran)
#14294: Jul 13th 2020 at 3:11:54 PM

Your right that any one that comes after would likely be inferior, but it would still be so far superior to humans as to count for the best option available.

Also, that's why I used rouge, which I can't spell, rather than just "malfunctioning", since I imagine it could be functioning perfectly well, just not in the way intended... I just didn't really have a term for the oppisite.

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14295: Jul 13th 2020 at 3:17:16 PM

"Rogue" is someone who breaks the rules. "Rouge" is a color. "Your" is a possessive. "You're" is the contraction of "you" and "are". Your browser should highlight "oppisite" and suggest the correct spelling.

Edited by Fighteer on Jul 13th 2020 at 6:20:32 AM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
EchoingSilence Since: Jun, 2013
#14296: Jul 13th 2020 at 3:39:25 PM

There's also the Paranoia option, where the Computer is working perfectly fine within its programming; it's just that the programming was altered over a long time by a lot of greedy higher-ups who decided to screw with the functions of the device after being given the power to do so.

MajorTom Since: Dec, 2009
#14297: Jul 13th 2020 at 3:40:06 PM

A "Robot Apocalypse" is about as likely as space fighters or Mecha, but it's still a fun idea to play around with.

Hey now, we've built prototypes and/or have designs for the latter two. The Robopocalypse, on the other hand, has inspired far more movies than serious thought about actually building one.

There's no inherent reason why we couldn't build either a Space Fighter or a Humongous Mecha tomorrow. Sure, they might suck in combat or be utterly impractical beyond all get-out, but there's nothing impossible or overly improbable about them.

A Robopocalypse requires a magical AI to become sapient and go against any programming it might have against harming humans, all without updates, interaction, or input. It also requires that the AI have access to resources that can actually fulfill its desires: military factories to produce its weapons, mines and other resource extractors (none of which are fully automated in the present day) for its raw materials, and robot designs that would actually prove effective once the rampage starts. If the Robot War ends in 15 minutes because the AI can't handle a platoon of soldiers armed with machine guns and anti-tank rockets, it won't make much of a story.

Put it this way, if Skynet were a thing today the worst that would happen is it annoys us with Roombas and turns off the HVAC systems in various buildings. It wouldn't have access to nukes or robotic war machines or any of that jazz.

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#14298: Jul 13th 2020 at 3:46:29 PM

Skynet was built as a military command-and-control AI, so its ability to command military robots isn't surprising.

If you consider Terminator 3 canon, then it infiltrated itself into all of our computer systems by posing as a virus. When most of the world was "infected", it then initiated a fake nuclear scare. Faced with the prospect of destruction at the hands of some "foreign adversary" who'd sabotaged everything, the military activated the one "clean" system we had and empowered it to fight back.

This is actually a really great example of how a "rogue" AI could trick humans into giving it the keys to the city, as it were. It's by far the most realistic and compelling part of an otherwise dreary movie.

Edited by Fighteer on Jul 13th 2020 at 6:48:14 AM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Jasaiga Since: Jan, 2015
#14299: Jul 13th 2020 at 4:22:41 PM

And then you have to ask yourself how Skynet was able to build its machines and infrastructure.

Literally every single factory in the entire world is SPECIFICALLY designed to make one product or a few related ones.

Even in perfect conditions that’s exceptionally difficult to do. In a nuclear holocaust? With no precise tools? Construction vehicles? Landscaping? LIDAR measuring? Welding?

Lol. Terminator breaks down faster than even Harry Potter does when you think about it for longer than three seconds

Draedi Since: Mar, 2019
#14300: Jul 13th 2020 at 4:31:37 PM

Terminator breaks down faster than even Harry Potter does when you think about it for longer than three seconds

...

Okay, let's not get crazy here.


Total posts: 19,725