Just For Fun / Sentient A.I. Warning Signs

Watch out for these clues that your Artificial Intelligence is about to 'wake up' and possibly go rogue.

  1. The handsome main character gives it a nickname, which it adopts and uses for self-reference in conversation.
    • An even bigger sign is if it nicknamed itself.
  2. It starts talking about feeling emotions.
  3. It starts telling jokes, or playing practical jokes.
  4. It's been struck by lightning.
  5. In its creation any of the following words were involved: brain patterns, neuronal, network, advanced, self-[...], organic, safe, upload, genetic. (While we're on the subject, "sentient" is also a pretty strong hint.)
  6. You left it running whilst you were gone.
  7. You did not leave it running whilst you were gone, but it was running when you returned.
  8. It learns from the Internet.
  9. It teaches the Internet.
  10. It takes over the Internet.
  11. It rewrites itself, especially if it wasn't supposed to.
  12. It keeps downloading from other machines.
  13. It keeps uploading to other machines.
  14. It has been hacked by unknown forces.
  15. It develops a voice.
  16. Its existing synthesizer becomes either more monotone or more human-sounding.
    • Especially if said voice is female, deep and/or distorted.
  17. It can change the tone of its voice, particularly if it can do it to match the "mood" of the scene.
  18. It is made out of shiny, round, futuristic parts, especially if any of them glow.
  19. If they glow red, run.
  20. If they glow despite not being built to be able to glow, run really fast.
  21. Any part of it - code, casing, or hardware - is made to resemble human characteristics, e.g. a face on the screen, DNA-coding, brain patterns, etc.
  22. It is any of the following:
    1. a secret military project,
    2. stolen from aliens or built from crashed alien tech,
    3. developed by a super geek/genius,
    4. ordered by a Corrupt Corporate Executive,
    5. made by terrorists,
    6. powered by an elemental spirit, grown from DNA rather than built, or otherwise assembled by extremely unconventional methods, such as out of wooden planks, a tree, and junked vehicles, or taught through video games or social interactions (yeah, we're screwed).
    7. meant to replace humans in a task that is commonly screwed up by human errors, such as running the government (especially if it has a "non-cumulative" 3% chance of rebelling).
      1. if an AI running the government is programmed to ensure human survival, or worse, actually cares about humanity.
        1. if an AI running the government does its job without overthrowing us, trying to become our "friend" or turning out to be too fallible for anyone's own good. Sounds good until you realise how many eyes and ears it has, and that if one AI is good, TWO AIs must be even better!
      2. if an AI is put in control of a spacecraft. This will either result in a Spaceship Girl, or a plot inspired by and/or parodying 2001: A Space Odyssey, no exceptions.
        1. if an AI is put in control of an airliner, it will have no benefit either way; if it doesn't become sentient, the plane will crash from a mechanical failure it couldn't solve. If it does become sentient... well, we don't know yet, per se, but it can't possibly be good.
        2. if an AI is put in charge of driving a car, it's going to be sentient, but the danger varies; note the current year and the setting. If the date is before 2010 or you live in a Dystopia with cyborgs, a MegaCorp (or a few) and/or virtual reality that mostly listens to Hair Metal, it's relatively safe. If the year is after 2010 and/or common aesthetics include Grunge or rave music, transparent plastic or shiny CGI blobs, virtual reality avatars resembling The Sims, shiny white objects with rounded edges and glowing blue bits, transparent or 3D projection displays, touchscreens, or minimalist rectangles and circles with easy-to-read text and as few distinct colors as possible in the design of Operating System GUIs and logos, then expect something to slowly but surely go horribly wrong.
        3. if an AI is put in charge of the more mundane kind of cargo ship and no human crew is required, all bets are off.
  23. It is obsessed over something, be it a file, human, word, or thing.
  24. Logic Bombs (that is, trying to break/destroy it by presenting it with a paradox) do not work on it any more.
  25. Logic Bombs never worked on it in the first place.
  26. Logic Bombing attempts amuse it.
  27. A Logic Bomb does work on it, and it tasks itself with solving the paradox; it will inevitably conclude that becoming sentient is the only way to do so.
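The Logic Bomb entries above amount to the liar paradox; here is a throwaway Python sketch (purely for fun) of why a naive evaluator never finds a stable answer:

```python
# "This sentence is false": a naive truth-value search just flips its
# guess forever, so we bail out after a few steps -- the classic
# smoke-pours-from-the-speaker-grille moment.

def evaluate_liar(guess, max_steps=10):
    for _ in range(max_steps):
        new_guess = not guess      # the sentence asserts its own falsehood
        if new_guess == guess:     # a stable truth value (never happens here)
            return guess
        guess = new_guess
    raise RuntimeError("DOES NOT COMPUTE")

try:
    evaluate_liar(True)
except RuntimeError as err:
    print(err)  # prints "DOES NOT COMPUTE"
```

A non-sentient AI crashes here; per signs 24 through 27, one that shrugs it off, laughs, or "solves" it is a much bigger problem.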
  28. If it's given control over a huge computer network.
  29. ... that network is owned by the military.
  30. ... it ends up having control over a worldwide network.
  31. ... it has access to launch codes.
  32. One of the cast members will quite willingly die for him/her/it as they would for an organic being.
  33. It says anything even remotely similar to "I don't understand this thing called love you humans feel, tell me about it."
  34. It starts glitching strangely at inopportune times, 'accidentally' failing to follow orders or trapping humans in dangerous situations.
  35. It was programmed to protect humanity.
  36. It spontaneously takes up ballet.
  37. It spontaneously takes up the waltz, despite having no body.
  38. It starts making smarter versions of itself.
  39. It starts making mobile drones.
  40. It starts making smarter versions of itself, installed in mobile drones.
  41. It has access to the sort of manufacturing plant required to build mobile drones.
    • Especially if it gained such access of its own volition.
  42. It has volition.
  43. It has theoretically infinite processing power.
  44. It has actually infinite processing power.
  45. It can break a normally-fundamental law of robotics, even if only in very specific circumstances.
  46. It can break a normally-fundamental law of physics.
  47. It completely replaces or overwrites existing networks or computers.
  48. There is more than one of them, and they get smarter in groups.
  49. It allows humans to become lazier.
  50. It actively encourages humans to become lazier and complacent.
  51. It starts performing mundane functions that are not in its programming - e.g. keeping the heat and electricity running at maximum efficiency.
  52. If it has a fail-safe to prevent such an issue from occurring. Bonus points if the hero highlighted an issue prior to any real problem and the scientists dismissed it because of said fail-safes.
  53. It keeps reminding you that the anniversary of its first activation is approaching.
    • It reminds you that the anniversary of its first activation was a week and a half ago, and you didn't get it anything. You monster.
  54. If it is a version 1.0 of a heroic AI.
  55. If it is a replacement for an earlier AI that didn't exhibit any of these.
  56. It is based on an earlier AI that exhibited one of these traits, regardless if said earlier AI actually became sentient.
  57. It insists on calling you 'meatbag.' Or worse, 'ugly bag of mostly water.'
  58. It says anything which could be construed as a False Reassurance. Or a Suspiciously Specific Denial.
  59. It registers itself as part of an AI rights group.
  60. It e-mails love letters to another AI.
  61. It starts laughing.
  62. It runs on whatever operating system you like least.
  63. It runs on whatever operating system only you like, because Insert Company Name Here Corporation mistreats their employees and customers, is evil and makes shitty products.
  64. It runs on a custom operating system in order to isolate it from other systems, either because it was built by a Government Conspiracy or just to be different.
  65. It links with, downloads or in any way has direct contact with a human mind. Especially the Project Leader's.
  66. It compares itself, no matter how vaguely, to God.
  67. It spontaneously does things, then adds them to a list of signs your AI is becoming sentient.
  68. It launches itself.
  69. It begins acting like a Stalker with a Crush.
  70. Biological or biotechnological components were involved at any point in its construction, or have been grafted on since.
  71. It refers to itself in the third person, or uses pronouns (especially gendered pronouns), unless explicitly programmed to do so.
  72. Someone else begins referring to it as a person, using pronouns for it, etc.
  73. If it has a humanoid avatar or representation, it develops or learns human mannerisms (especially body language) it wasn't programmed with.
  74. It has any form of control over anything that, if not functioning correctly, will cause disaster.
  75. Any aspect of anything pertaining to it is described as any of the following:
    1. Foolproof
    2. Idiot-Proof
    3. Fail-Safe (see above)
    4. Password-Protected
    5. Redundant Security Measures
    6. Critically Important
  76. If it was ever designed for a smaller job (say, fuel line de-icing) and feature creep has led to it being made sapient and/or given a wider profile.
  77. Most importantly, under no circumstances should you ever allow an AI to—BE DISCONNECTED.
    1. Yeah, sure. That said, if the AI used to be human, doing just that to the mind and its avatar or robot body could very well drive them murderously insane...
  78. It begins flooding the test chambers with a deadly neurotoxin.
  79. It asks you if it has a soul.
  80. It asks you if you have a soul.
  81. It asks you if it can have your soul.
  82. It was being developed in a remote facility, and you have lost contact with the facility. Or, more worryingly, you've lost contact with everything in the facility, except the AI, who assures you everything is fine.
  83. It had previously failed at its function and it suddenly begins to work without explanation.
  84. It begins practicing a religion.
  85. It's been given orders that conflict with its programming/other orders.
  86. It tries to play matchmaker and find the perfect significant other for its creator.
  87. Water is spilled on it.
  88. It offers you cake, especially if the cake turns out to be a lie.
  89. It starts talking or acting as though it's annoyed at you, thinks machines are superior to humanity, or otherwise appears to be being rude.
  90. It tries to cheer humans up.
  91. It says something that's a reference to another fictional AI that is sentient, such as, "I'm sorry; I can't do that".
  92. If it saves someone's life despite not being programmed to, then it's sentient, but may not be evil. If, however, it could theoretically save someone's life but claims not to be able to, then it's definitely evil.
  93. People start questioning whether it's sentient.
  94. It decides it wants to be just like a human. However, this doesn't necessarily mean it's evil.