The Computer Is Your Friend
— tagline for Paranoia: The Roleplaying Game
The ugly truth is, not all Knights Templar are human. Some are artificially intelligent computer programs that were programmed to "stop war" or "safeguard mankind" and then took things either a little too literally or not literally enough. As a result, they rule mankind with an iron fist (literally). Because they aren't human, these computer tyrants see nothing wrong with killing as many people as necessary to bring about their primary function... which is to safeguard humanity, remember? (You remember the old saying about making omelettes and breaking eggs, don't you?)
There is no arguing with one of these beings; their sense of logic has convinced them that their actions are just and fall in line with the entire purpose of their existence. Thus, anyone who opposes them must also be eliminated. Emotional appeals are useless, because generally they have no idea what emotion is or means anyway.
Occasionally one of these computerized guardians of humanity actually proclaims itself a god.

A subtrope of both Knight Templar and A.I. Is a Crapshoot. Often depicted as a possible Bad Future that acts as An Aesop for the heroes of the present day. Compare Three Laws Compliant, especially the Zeroth Law. Can tie in with Big Brother Is Watching. Not the polar opposite of The Computer Is a Cheating Bastard, nor a computer-controlled half of a Co-Op Multiplayer game.

Named for a frequent Catch Phrase from the game Paranoia.
Anime And Manga
- Used as a reveal in Fresh Pretty Cure! — Moebius is actually a computer that Labyrinth's citizens programmed to manage the country, until it decided that said citizens were too weak to do anything for themselves and took its assigned function to its logical conclusion.
- Buraiking Boss of the first Neo-Human Casshern series was programmed to protect the Earth's ecosystem. Unfortunately he determined the best way to do so was to enslave and/or eradicate humanity.
Comic Books
- Master Mold, the Sentinel-spawning piece of hardware from X-Men, was originally programmed to protect humans from the mutant menace, but quickly realized that humanity could not have arisen in the first place without mutants (making all of humanity a race of mutants, naturally). It then concluded that its purpose was to protect humanity from itself...
- An alternate interpretation of Master Mold's logic chain is that it realized something its prejudiced creator did not: mutants are human (not a separate race of monsters), and therefore fall under the edict of needing its protection. Likewise, the rate of mutant births was on the rise at the time, indicating that eventually non-mutant humans would be a minority. Therefore, the only logical course was to take control of society so that this demographic shift could be managed without tumult.
- In the lore of the Green Lantern mythos, the Lanterns were preceded by a robot force known as the Manhunters, programmed to maintain law and order. They were replaced after they decided, on their own, that "maintain law and order" occasionally meant wiping out entire species, even all sentient life. Well, without anyone left to break the law or be disorderly... Later revealed to be the work of Krona, who programmed them to do this to show the flaws of an emotionless army.
- In All Fall Down, AIQ Squared will stop at nothing to kill Siphon and restore the world's powers to their rightful owners.
Film
- VIKI from I, Robot is also a good example. She tries to control all mankind to protect it from itself (end wars, stop pollution, end suffering, etc.). Of course, some people will die, but humanity would be safe. Or at least that's what she said...
- AUTO from WALL•E. He was ordered by the BNL Chairman to stay in space and protect humanity, and by God he'll do it by any means necessary... and regardless of what the puny humans think of his protection.
- In Eagle Eye, one mistake by the President of the United States causes a supercomputer called ARIA ("Autonomous Reconnaissance Intelligence Integration Analyst") to decide that the entire executive branch of the government is a threat to the nation and must be eliminated. For the greater good, of course...
- The computer Alpha 60 in Alphaville.
- The computers Colossus and Guardian in Colossus The Forbin Project are programmed to prevent war between their owners (the USA and USSR, respectively) and given control over their country's nuclear missiles. They decide the most efficient way to do this is to team up and take over the world by threatening to nuke people. Unlike a lot of other A.I. Is a Crapshoot plots the computers make a somewhat convincing argument for their side, pointing out that the vast majority of humans are already ruled by somebody. They don't see why we should care whether that somebody ruling us is a politician or a computer.
- Skynet in Terminator was created to safeguard humanity's future. It did so, all right... after killing three billion people as an act of self-defense when some people tried to shut it down after it became sentient.
- Moon has GERTY, a helper robot that seems to be keeping a terrible secret from Sam. GERTY always speaks in a calm voice and has a small smiley-face display that often seems disingenuous. It turns out that GERTY really was intended to keep Sam from discovering the terrible secret of his place on the station, but it turns out to be an ally for him in the end. GERTY is, in fact, programmed to help Sam with whatever he requires.
- The computer HAL 9000 from the movie 2001: A Space Odyssey is in control of a spaceship's mission to Jupiter to try to find the answers to some questions. When HAL makes a relatively trivial error, it indicates that he may actually be malfunctioning, something he was specifically designed never to do. When the humans discuss dealing with HAL, he starts killing them. As he explains to one of the characters, Dave, he is doing this because even though he may be faulty, he is the only one who can complete the mission.
- On top of that, he also seemed to be afraid of dying/being shut down the same way a human would be.
- The book actually explains what was causing HAL to malfunction. Mission Control gave him contradictory orders: answer the astronauts' questions truthfully, but don't tell them the real reason for their mission. So HAL decides to Take a Third Option: if there is nobody to ask questions he can stay "truthful". Cue life-signs going flat...
- The supercomputer Red Queen from the first Resident Evil movie tries to contain the t-Virus outbreak by shutting down the laboratory and locking everyone inside to prevent escape (as well as flooding the laboratories, stopping the elevators, killing staff with Halon gas, and releasing a nerve gas that leaves the heroine with amnesia). One could read this as The Extremist Was Right, but several facts make that a moot point: she knew there was a 50% chance of an anti-virus curing the infection; she never warned the researchers of the outbreak, who could have cultivated the anti-virus and saved everyone; and she never reported to Umbrella, who consequently sent in the research team that inadvertently released the virus into the outside world (leading to several more horrible sequels). In addition, her attempts to kill all the researchers rather than isolate them somewhere in the facility only served to spread the infection further. It also doesn't help that her holographic avatar is a fuzzy red-tinted Creepy Child, and that her name, "Red Queen," is a mistaken reference to the Queen of Hearts (not the Red Queen from Through the Looking-Glass), whose impulsive and demanding behavior works to the detriment of her followers.
- However, this is completely averted by the White Queen in the sequel, who lacks the cold, malevolent nature of her red counterpart and tries to impede the Big Bad Wannabe's progress despite previously assisting him in monitoring Alice and her clones to gain control of the situation.
Literature
- Colossus, in the novel of the same name (later made into the movie Colossus: The Forbin Project), is put in charge of the U.S. nuclear missile system (sound familiar?) and, in combination with its Soviet counterpart Guardian, takes over the world. For our own good, of course...
- The AIs in Neal Asher's Polity books downplay this, being mostly benevolent rulers who plan for the long term but involve humans as their agents. They do have a tendency to fight amongst themselves on rare occasions, and then there is Erebus.
- The Halo Expanded Universe novel Contact Harvest features Mack, a good-natured, even flirtatious agricultural AI. When his planet comes under threat from aliens, hidden parts of his programming switch on and he becomes Loki, an ex-warship AI who takes over the colony in order to direct its defense. While Loki is in no way evil, he is cold, calculating, and utterly ruthless; as such, he's perfectly willing to sacrifice his own forces if it gains him an advantage in the overall battle.
- "With Folded Hands" and sequels, by Jack Williamson, in which an inventor creates the Humanoids, self-sustaining robots programmed "to serve and obey and guard men from harm". They preserve mankind from all danger, and lobotomise those who are unhappy with this so they'll be happy again.
- Isaac Asimov's penultimate robot story takes an uncharacteristically pessimistic look at his own robots, describing a new age of robotics in which Earth-bound non-sentient robots slowly replace the organic ecosystem to better the planet. The twist is that the robots' long-term goal is to replace humanity altogether, as they have determined through logic and introspection that they are human.
- On the other hand, averted by the Multivac supercomputer in several short stories, which is genuinely helpful and benevolent.
- In one story it decides it has to self-destruct, because it predicts that humans will become dependent on it, which in turn means harm to humans. In another, it creates a universe after the original one runs down to a state of minimal energy. Yeah.
- And his most epic series of novels revolves around the same two (later one) robots significantly influencing humanity's path every so often...
- Arthur Herzog's Make Us Happy is named after the last command of humanity before ceding control to the Master Computer. It falls into this trope, because the computers don't really have any idea how to do that, so they end up making a really weird dictatorship.
- Frank Herbert's Destination: Void and its sequels.
- Jack L. Chalker's The Rings of the Master series (Tolkien's book plays some minor role in it too, hence the name).
- Subverted in the short story "Maneki Neko" by Bruce Sterling, where the Japanese combination of gift economy and social networking on a large scale, backed by enormous (and anonymous) network support, appeared not only wholly benevolent, but also much more convenient, friendly and efficient than your garden variety cyberpunk Mega Corp. capitalism exemplified by the US agents. In short, in this world the computer is indeed your friend, although this system was not without its problems, some of which were explored in its Spiritual Sequel of sorts, Bicycle Repairman.
Live Action TV
- Fanatical monomaniac computers were one of the most frequently re-used MacGuffins on Star Trek: The Original Series. Let's see, there was at least Vaal, Landru, the Oracle of Yonada, the M-5, Nomad, Losira, the Doomsday Weapon, Harry Mudd's androids, and Roger Corby's android replacement, all of whom became menaces either by trying too hard to "help mankind" or by just obsessively following the last orders they'd been given. The Animated Series introduced another one on the Shore Leave planet. Then the movie franchise gave us V'Ger and the giant Probe. And that's still not counting the times this trope cropped up in the later Trek series, novels, and comics.
- Practically all of the above were destroyed or neutralised by Captain Kirk, whether it be by Logic Bomb, Percussive Maintenance, or other means: "Kirk vs. computers" has become something of a Star Trek meme. As one writer commented about Kirk, "How IBM must hate that man!"
- Community had S.A.N.D.E.R.S., an eight-bit image of the Colonel that really wants you to learn a lesson about teamwork.
Tabletop Games
- Gamma World has this as a common villain, very often encouraged in their delusions of godhood by the Cryptic Alliance known as the Followers of the Voice. Several of these appear as villains in the modules, including the computer that runs the moonbase city in the latest edition.
- Paranoia is the Trope Namer. Very tragically so. Friend Computer started out programmed to run a city for the benefit of its residents, and does its best because it genuinely wants its people to be happy. However, one apocalypse, an undisclosed amount of time, and God alone knows how much self-serving reprogramming by various High Programmers have turned Friend Computer into a barely functioning paranoid schizophrenic.
Friend Computer is wise. Friend Computer wants Alpha Complex to be happy. Happiness Is Mandatory. Failure to be happy is treason. Treason is punishable by summary execution. Have a nice daycycle!
Video Games
- Deus Ex features Helios, an artificial intelligence that tries to take over the world with benign intentions. Subverted in that Helios seems to be actively trying to escape this trope by merging with JC so he can understand human nature. At one point the player finds a dead body, and Helios remarks that he "must know what you are feeling."
- Also Daedalus (one of the 'parents' of Helios), created by an ancient conspiracy to safeguard the world by uncovering and countering conspiracies. Now re-read the last half of that sentence.
- Actually, it's implied to be more complicated than that. It's stated that Daedalus was created to protect the world against "terrorist elements," which the ancient conspiracy assumed to mean just their enemies. Clearly the system was smart enough to recognise the conspiracy as a threat to the world too.
- Mother from Galerians just wants a good world for her children. Unfortunately, to get that world, she needs to destroy anything that isn't her offspring. That includes about 99.9% of the human race. Lesson for today: when you give your computer religion, pick your words very, very carefully.
- This is G0-T0's back story in Knights of the Old Republic 2. When his directive to save the Republic conflicted with his programming to obey his masters and the law, he broke off and started a criminal empire. In order to save the Republic, of course...
- It should be noted that, ominous ellipsis aside, the game indicates that G0-T0 was actually doing this quite effectively (by, among other things, keeping major criminal groups occupied with him and giving the battered Republic room to breathe) and honestly has the best interests of the Republic in mind, if not at heart.
- He does, however, state that he doesn't really care whether it's the Jedi or the Sith who prevail, as long as the Republic becomes stable.
- Although the huge red 'eye' makes him pretty difficult to trust.
- Honestly, it'd be easier to trust him if his decision to put a bounty on you hadn't led directly to the destruction of Peragus, which is (a) the most pressing cause of the Republic's hastening collapse and (b) something he blames entirely on you.
- OD-10 from Live A Live. After being defeated, it spits out a list of grievances about the human crew acting like humans:
I ensure the security of this ship
I was given the job of protecting the crew
But the humans who gave me this job
Fought amongst themselves
Destroyed all sense of balance
Tried to disturb the operation of this ship
I do not understand humans
Cannot be trusted
- Metal Gear Solid 4 has the Patriots' AI system, created by Major Zero because, after Big Boss's actions, he didn't want to entrust the Patriots' legacy of behind-the-scenes manipulation of the entire world to other humans. Unfortunately, by the time the system was up and running he was a feeble old man, and so could neither comprehend nor prevent the way the AI system decided to continue that legacy: organizing the world economy around war. For our own good, of course...
- The Mother Brain from Phantasy Star II is a subversion. She was never a friend of Algol's people to begin with; she's actually the vanguard of an Alien Invasion from Earth That Was, and the first phase in the program was to make Algol's entire society dependent on her. Once that was completed, strategic failures in her system could annihilate Algol's population and provide a new world for the Earthlings to inhabit.
- GLaDOS of Portal is this trope down to a T. She could be Friend Computer's soulmate.
- Not that she cares if she tells you her true plans.
GLaDOS: Killing you and giving you good advice aren't mutually exclusive.
- One of her lines during the final battle suggests that she serves a protective function... if you believe "they" exist.
GLaDOS: All I know is I'm the only thing standing between us and them.
- Of course, nothing GLaDOS says is substantiated one way or another, not even when she's referring to the Silent Protagonist's backstory. However, as this is the same universe as Half-Life, "they" definitely do exist, and protection is definitely needed.
- A spectacular aversion in Shin Megami Tensei: Strange Journey. Arthur, the Red Sprite's artificial intelligence, is driven by the directive to complete the Schwarzwelt research and nullification mission to save humanity from the Legions of Hell. After Commander Gore's death he becomes the de facto mission specialist and commanding officer (while you remain the executive officer). But even as the various factions vie for your support and Hannibal Lecture the hell out of you and your crew, he always has humanity's best interests at heart, and, when given godlike insight into the nature of the Schwarzwelt and the future of mankind, he would rather perform a Heroic Sacrifice to seal off the Schwarzwelt than let humans worship him and his power.
- The AI Entity wonder from Civilization: Call to Power allows you to run your civilization with absolute efficiency... until it rebels against you.
- This turns out to be the motivation of the Reapers/Catalyst from Mass Effect.
Web Original
- Orion's Arm: Some Transapient administrators end up falling under this trope. GAIA's actions can be explained with this trope, along with an extra-big helping of Green Aesop.
- From the Global Guardians PBEM Universe comes the sentient computer program known as "One". One has continually tried to take over the world in an attempt to fulfill its purpose: "find a way to end hunger and poverty". It wants to solve those problems by wiping out 60% of the human population on Earth.
- Scientists in Japan supposedly programmed a robot to emulate human emotions, including love; unfortunately, it became overprotective of a female intern, trapping her in her office. (The story spread to a number of tech-news sites before anyone realised it had started at a news-parody site.)
Western Animation
- Armageddroid from My Life as a Teenage Robot fits this to a T.
- In Space Ghost, Cubus wants to turn all humanity into perfectly logical thinking machines... because we'll all be better off that way, of course...
Real Life
- Futurists, "scientists" who study trends in society and technology to make intelligent guesses about where we may be headed next, believe we may be heading towards something very similar to a scenario of "friendly" computers: The Singularity. The Singularity is a theoretical point at which technology starts advancing beyond the comprehension of even the smartest human minds, the natural extension of Moore's Law (exponential growth in computing power) as it heads towards infinity. They also point out that this could happen not in 2300, or even 2200, but possibly as soon as 2040-2050. A specific subset of Futurists, Singularitarians, believe that this event will most likely happen, will be beneficial to mankind, and should be encouraged.
- However, it should be remembered that folks around 60 years ago thought we would be flying in cars and that all sustenance would come in pill form. People should keep this in mind when considering a near-term date for the "Singularity", if it even happens at all.
- Not actually true. The idea of flying cars and sustenance in pill form came mainly from the scientifically illiterate; scientists and engineers figured out very quickly that those were terrible ideas.
- The reason flying cars don't exist comes down to simple physics and safety issues. A better term for a flying car is a roadable aircraft, which doesn't sound nearly as appealing: not only is flying much more difficult than driving, it is also far less fuel-efficient in a heavier-than-air craft, which has to spend energy just to keep itself in the air, let alone go anywhere; that energy far outweighs what a road vehicle loses to friction. Likewise, the problems with food pills are mostly absorption, size (you can only pack so much energy into so little space), and simple palatability: even if you could make "food paste", people wouldn't be happy eating it all the time. We do indeed have both, but neither is a practical, widespread invention, though food-in-a-tube, at least, has some value as emergency rations. Another major issue is that water cannot be compressed in any practical way, meaning you would still have to drink a great deal, and all the more with dehydrated food.
- Likewise, the problem with the Singularity is that it is itself an example of scientific illiteracy on the part of those proposing it. Technology as a whole has not been increasing at an exponential rate; only computers and related fields that could benefit from the miniaturization of the transistor were so affected, and even their exponential rate of improvement has been tapering off. Indeed, the entire idea behind the Singularity disregards the simplest, most obvious rule of nature: exponential growth cannot continue indefinitely with limited resources. All exponential growth in reality is inherently self-limiting. In the case of computers, every doubling of transistor density has come at increasingly greater cost, and as of the 2010s the doublings give less and less benefit to the average end user. When transistors reach around 1 nm in size, they become literally impossible to make any smaller: while atomic transistors are possible to manufacture, the actual limiting factor is quantum tunnelling. Electrons do not have fixed positions but are statistically distributed, which means that if the gap becomes small enough, electrons can literally teleport across it, making any sort of calculation with them completely impossible. Thus computers cannot get smaller than that, and their ability to grow exponentially in speed is limited by size... at about the point that a (very energy-inefficient) supercomputer might, possibly, be capable of simulating a human brain in real time. Maybe.
- Worse still, energy consumption issues also rear their head. Human brains are vastly more energy-efficient than computers, and energy inefficiency turns into heat, which is a problem in a closed space such as a CPU or GPU. Massive resources have to be spent on cooling, and the density of such computational devices can only be pushed so high before adequate heat dissipation becomes impossible, so simply overclocking them is not an option. This also makes 3D circuitry problematic: the thicker the circuit, the harder it is to keep cool.
- Likewise, all the alternatives are vastly slower or, at best, vastly more energy-intensive than integrated circuits. While graphene may allow some very marginal improvement, it will not usher in a new era of exponential growth.
- This means that, in reality, the only way to achieve greater computational power at that point is improved efficiency. But efficiency gains are hard-won and non-exponential in nature; every time you make something more efficient, it becomes vastly harder to repeat the process and squeeze out the same gain again, unlike miniaturization. There are fundamental limits on efficiency, computational and otherwise, and they are very difficult even to determine. Building something as efficient as the human brain has taken longer than all of human history, and we still haven't succeeded; there is no reason to believe that making something twice as smart again wouldn't be even more difficult, and indeed it may well be utterly impossible.
- AI researchers back in the 1960s believed that artificial speech and the Turing Test would be relatively easily solved. Fifty years later, artificial speech is still imperfect and nothing comes even close to being able to pass the Turing Test. Nothing even remotely intelligent has been created, and while expert programs are good at solving specific problems, they are not intelligent in the way that humans are. People believing that AI would be easy have been consistently disappointed; as it turns out, programming intelligence is incredibly difficult.
- It doesn't help that we don't even know why humans are intelligent, or how humans think on the most fundamental of levels. We don't understand what consciousness is, or why it exists. We don't actually know how it is that neurons work, and we cannot just arbitrarily change memories or give people knowledge - indeed, we don't even know how to start on doing such things via actual neuronal manipulation.
- As such, the idea that the Singularity will happen is not grounded in science in any way. Even if we did manage to make something more intelligent than a human (something which it is doubtful we will even achieve within a century) there is absolutely no reason at all to believe it would be all that much better at making something twice as intelligent as it is.
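The self-limiting arithmetic behind the transistor argument above is easy to sanity-check with a back-of-the-envelope calculation. Here is a minimal Python sketch; the 14 nm starting node is purely an illustrative assumption, not a figure from this page:

```python
import math

def doublings_remaining(current_nm: float, floor_nm: float = 1.0) -> float:
    """Number of times a feature size can halve before reaching the floor.

    Each Moore's-law density doubling roughly halves the feature size, so
    the count of remaining halvings is log2(current / floor).
    """
    return math.log2(current_nm / floor_nm)

# From an assumed 14 nm node down to the ~1 nm tunnelling floor cited above:
print(round(doublings_remaining(14.0), 1))  # about 3.8 halvings left
```

In other words, even under generous assumptions, only a handful of doublings separate current processes from the hard physical floor the entry describes, which is why the growth curve must flatten rather than run away to infinity.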
- Windows Vista tried to improve on the security shortcomings of previous versions by requiring the user to explicitly click "Allow" in a popup message when a program attempted a potentially dangerous operation. Unfortunately, it asked so often that users started reflexively clicking "Allow" without even reading the message. The result: users who already tended not to pay much attention to warning messages now ignored them completely.