Do Androids Dream?

[Page image: Ava from Ex Machina (https://static.tvtropes.org/pmwiki/pub/images/ex_machina_android_ava_face.png)]

"Ever since the first computers, there have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote... of a soul?"
Dr. Alfred Lanning, I, Robot

Do robots have souls? Do clones? Can a computer have a sense of humor? Do Androids Dream? It has been asked in many forms, but the fundamental question is always, "What makes us human?" And is it possible for an artificial intelligence or life form to possess those same qualities? What kind of idiot would give a robot a personality, anyway?

At the lower end of the Sliding Scale of Robot Intelligence, the answer is no (your TV remote has no hopes or dreams), but at the higher end, it may become a legitimate question. How high up the scale you have to go to reach that point, and whether the answer can ever be yes, is a source of debate. The concept of the "philosophical zombie" may be mentioned — if you encountered something that gave all the outward appearance of a thinking, feeling being but had no internal consciousness such as you have, how could you tell?

When the humans in a universe (or the writers who created the universe) don't consider this trope's question or believe the answer is "no", then any AIs will end up being second-class citizens or sidekicks at best, and disposable slaves at worst. While watching such a show you may end up wondering What Measure Is a Non-Human? A world where the answer is "yes", on the other hand, may include Ridiculously Human Robots or Mechanical Lifeforms. If the humans and the AIs disagree about the answer to the question, a rebellion or Robot War may be in the cards. And if a human who does disagree in such a world changes their mind upon seeing the "death" of a Robot Buddy or other sentient robotic or artificial form, then Death Means Humanity is at play.

However, this trope is about when the intent is to make the viewers ponder these questions. In order to create tension, such a story is usually set in a world where AIs have either just been created or have already been relegated to sub-human status. One or more AIs will display human-like attributes, and frequently one or more humans will be portrayed as amoral and unquestioningly obedient in order to further blur the line between "human" and "non-human".

In the vast majority of cases where the question is asked, the viewer will either be told outright at the end that the answer is "yes," or it will at least be strongly implied that that is the case, perhaps because getting the viewer to sympathize enough with the AIs to consider the question and then tell them that the AIs are just soulless machines after all would be considered a Downer Ending. Of course, this doesn't prevent quite a few works from doing just that, seemingly for the sake of a downer ending. Then, of course, there are those who think it's Just a Machine (or, in the case of an ambiguously Animate Inanimate Object, that it's just a Companion Cube).

Whether the answer to the trope's question is yes or no will depend largely on how much the viewer is expected to sympathize with the robot. If it's a Mecha Mook or Mechanical Monster — whose only real purpose in the story is to give the heroes something literally mindless to fight and destroy without having to feel guilty about it — or even a full-blown villain in its own right (the question of Terminators' sentience never came up until we met a friendly one), the assumption is usually that they are Just Machines and need to be stopped just as you'd need to stop or fix a runaway car or sparking electrical cable, and such robots will usually be portrayed as so obviously lacking self-awareness or personality that there may be no perceived need to even ask the question. But not always. If the robot is malevolent but is also judged to have true self-awareness, often the next question is whether it can be fixed to become good — which then raises further ethical questions about whether it's right to go mucking about with the basic essence of someone's mind, even if that someone is a machine. After all, if you wanted to give a human villain a chance to redeem himself, you'd do it by talking to him, not subjecting him to brain surgery. Attempts to talk down a mad computer are likely to lead to a literal blue screen of death (which may save the heroes from having to worry about the ethics of "fixing" him because now he needs to be fixed anyway).

Another reason for a narrative's answer to be "yes" is that it can be difficult to write a character as an above-mentioned "philosophical zombie" — a hypothetical thing that can act very convincingly like a conscious being but is not actually conscious. The early conversational program ELIZA surprised people by doing a decent job of fooling them in informal Turing Tests all the way back in the '60s, despite being even less complex than a mid-'80s Infocom game, and modern ChatBots are getting very good at cheating the test while still apparently being completely unaware of what they're talking about. Rather than honestly deal with the dilemma of something that looks, walks, and quacks like a duck but manifestly is not actually a duck, many writers will default to assuming that, yes by God, it's a duck. It's simply easier to write about a robot that dreams than to write about one that only claims it does.
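For the technically curious, here is a minimal, hypothetical sketch (in Python; the rules, names, and replies are invented for illustration and are not taken from ELIZA's actual source) of how a program in this style can hold up its end of a conversation through nothing but keyword matching and pronoun reflection, with no understanding involved at all:

    import random
    import re

    # Illustrative only: a tiny ELIZA-style responder. It has no model of meaning;
    # it just matches keywords and echoes the speaker's words back with pronouns swapped.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my", "are": "am"}

    RULES = [
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"i think (.*)", ["Do you really think {0}?", "What makes you say {0}?"]),
        (r".*\bdream\b.*", ["Tell me more about your dreams."]),
        (r".*", ["Please go on.", "I see.", "What does that suggest to you?"]),
    ]

    def reflect(fragment):
        # Swap first- and second-person words so the echo reads like a reply.
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    def respond(text):
        for pattern, answers in RULES:
            match = re.match(pattern, text.lower().strip())
            if match:
                groups = [reflect(g) for g in match.groups()]
                return random.choice(answers).format(*groups)
        return "Please go on."

    print(respond("I feel that nobody understands me"))
    # Prints something like "Why do you feel that nobody understands you?" --
    # a convincing-sounding reply produced with zero awareness of what was said.

A handful of such rules is enough to sound briefly attentive, which is exactly the gap between "acts like a duck" and "is a duck" that the paragraph above describes.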

It can also make the audience feel betrayed to make them empathize with a character, and then pull the rug out from under them and reveal that character is Just a Machine, incapable of loving or even truly being aware of itself or others. Then again, if that sense of betrayal is exactly what the writer is trying to evoke, then that's totally justified.

A robot's creator may or may not appreciate just where his creation lands on the Sliding Scale of Robot Intelligence, and even if he does realize it, he may or may not be honest about it. A businessman trying to sell robots may claim that sentient robots are nonsentient, or that nonsentient robots are sentient, depending on what he thinks his customers want to buy. A creator accused of being a Mad Scientist may claim that of course he's not trying to make a sentient machine, even if he knows perfectly well that that's what he's successfully done. Or a scientist who desperately wants to create a sentient machine may lie to himself and to others about the fact that he's failed.

And of course, it may be unclear to the creator, as well as to the audience, just how sentient the robot is; this trope is about pondering the question, not necessarily about answering it.

Generally, Mechanical Lifeforms who already have a civilization and walk and talk like sentient beings at the time of First Contact are much more likely to be given the benefit of the doubt than Earthling robots who the heroes actually saw getting assembled from inanimate parts. It's somewhat rare for the Transformers to encounter anyone who seriously doubts their sentience after talking to them for more than a moment.

Similarly, Artificial Humans and other artificial-but-still-organic beings are much more likely to be given the benefit of the doubt than inorganic ones. But not always. Per Word of God, Philip K. Dick really intended for the androids in Do Androids Dream of Electric Sheep? to be soulless (as he was obsessed with the distinction between real things and imitations), but the film adaptation Blade Runner (which Dick did in fact like) strongly implies that the answer is yes, and the sequel Blade Runner 2049 makes it extremely clear that they are full-blown people (while introducing another class of dubiously sentient artificial beings, the Jois). The question of whether animals are conscious is a natural corollary to this.

Supernatural artificial beings like Golems can land anywhere on the spectrum, too. Likewise for Animate Inanimate Objects.

Because no conscious AI has actually been created so far, and because we humans don't actually know what makes us human, many different criteria have been proposed as the dividing line between human and non-human, even when works reach similar conclusions. The ability to feel emotions (sometimes trivialized to just having a sense of humor), the ability to feel empathy for others, the ability to be "creative", or perhaps merely having free will or self-awareness (though what those two actually are and how their existence could be proven is yet another near-impossible puzzle) are just a few.

If a robot is conscious and even empathetic without necessarily thinking or acting like a human, it may be a Mechanical Animal. If it lacks human emotions but feigns them convincingly, it may be a Robotic Psychopath, at least in the literal psychological sense. Or, if it's conscious but in a deeply alien, unrelatable sort of way, it may be on its way to becoming a Mechanical Abomination (again, not that that would necessarily make it evil, just weird enough that establishing meaningful relations might be very difficult).

Compare to Robot Buddy, Androids Are People, Too, Clones Are People, Too, Our Souls Are Different, Animate Inanimate Object, Ridiculously Human Robots, Deceptively Human Robots, Just a Machine, Empty Shell, Soulless Shell, Companion Cube, Sliding Scale of Robot Intelligence, Turing Test, Alternative Turing Test, Religious Robot, and Robot Religion.


Examples:


    Anime and Manga 
  • Anime and Manga love this trope, especially (but not exclusively) in Cyberpunk settings, often combined with After the End (apparently the Japanese believe in a Matrix-style apocalypse), so the list below hardly counts as comprehensive.
  • Battle Angel Alita, which has both characters with cybernetic bodies and human brains (like the protagonist) and ones with human bodies and cybernetic brains, explores this sort of question a lot.
    • But what REALLY breaks your noodle is when you get cybernetic brains in cybernetic bodies that are copies of a human brain in a cybernetic body...and DON'T KNOW IT! For example, Alita in Last Order.
  • In the slightly more obscure sci-fi manga Grey by Yoshihisa Tagami, when the chief of La Résistance was killed, his brain was downloaded into a robotic body: he thinks he has been turned into a cyborg, i.e. a human mind in a mechanical body, when he has actually become nothing more than an AI. Main character Grey is forced to kill him/it.
  • The Ghost in the Shell franchise explores this question constantly.
    • Even more extreme in the novel After The Long Goodbye, where Batou constantly asks himself these questions.
    • Also a bit of a subversion, or at least an interesting twist, as it's usually the humans who are busy pondering their worth.
    • There are sentient A.I.s, however, rare as they are. It's implied, especially in the original manga, that in the very near future they will remake the world in their own image and make humans, or at least non-cybernetically altered humans, obsolete.
    • In both seasons of Ghost in the Shell: Stand Alone Complex, the Tachikomas regularly get into philosophical debates on whether they are truly self-aware or not. The question of whether they have ghosts or not is all but confirmed once they show the capability of self-sacrifice (twice!).
    • It should also be noted that the "Ghost in the Machine" is an English phrase usually describing computers, and how seemingly simple coded instructions can lead to unexpected results, and the title (and theme) of Ghost in the Shell is probably a play on that term.
    • The most extreme case of this is the original manga and the film based on it: Major Kusanagi actually merges her consciousness with The Puppeteer, a rogue A.I., and becomes able to live in both the physical and digital world. So, is she a human soul who can exist in the digital world? A human who spontaneously uploaded herself? An A.I. with the memories of the original human?
  • Chobits is all about this question (and goes back and forth a lot on what the answer is).
  • Negima! Magister Negi Magi:
    • Chachamaru has gone so far as to be capable of love. Possibly justified by her being Magitek (and a stealth cameo from an earlier series involving emotional AI). Her vampire master Evangeline once described dreaming as "something like a memory bug". It comes up again later, when Chachamaru starts to worry about whether she actually has a soul, so that she can make a pactio with Negi. She does, and they do. It's also lampshaded with her Pactio card title "Pupa Somnians" (The Dreaming Doll).
  • Fullmetal Alchemist (2003): Played straight when dealing with the humanity of the homunculi. They may not be the person the alchemist was trying to bring back to life, but they are people (broken, emotionally wrecked people), generally capable of the full range of human emotions and motivations.
  • Yokohama Kaidashi Kikou has this trope as its central premise, though it doesn't outright admit it (by applying it equally to humans and robots alike and blanketing a veil of Slice of Life on top of it). Oh, and the robots themselves take this trope literally too with some beautifully illustrated dream sequences. Did we also mention that it's Cyber Punk AND After The End, but without the Cyber Punk and After the End?
  • Ifurita asks if androids and humans go to the same place after they die at the very end of the manga of El-Hazard: The Magnificent World, as she finishes the Stable Time Loop and prepares to die because Makoto and her Key are in Another Dimension. He answers that of course they do, but first they should go home and live for a while (it took him a few years to master time and space travel).
  • Similarly to Ghost in the Shell, Tetsuro in Galaxy Express 999 does the exact opposite of this. Instead of a robot wondering how human it is, Tetsuro feels he needs a machine body; in the end, though, he finds that Cybernetics Eat Your Soul and doesn't go through with it.
  • The obscure OVA My Dear Marie centers on a Ridiculously Human Robot built by a tech geek, modeled after the girl he had a crush on. It plays with this trope in the first couple of episodes before diving headlong into it in the final episode, which fittingly takes place in Marie's first dreams (she wasn't programmed to dream initially, but after hearing about a friend's dream she decides she wants dreams of her own). Her dreams are absolute acid trips that eventually question just how far her humanity goes in comparison to other humans and the girl she was modeled on.
  • Armitage III explores this theme with the Ridiculously Human Robots that are the Thirds.
  • Time of Eve never makes entirely clear just how much androids feel and how much is imitation, but it's implied that they're every bit as human as we are, and the final episode even goes so far as to show one cry.
  • 'Humanity' is one of the prevailing themes throughout Trinity Blood, with specific emphasis on the idea of "What makes someone a human?" The show/manga/novels use both androids and vampires to explore this question.
  • Doraemon: Even as a robot cat, Doraemon can dream.
  • Dragon Ball Z: Vegeta asks Android 19 if androids experience fear before going Super Saiyan. (They do.) This trope does not apply to Androids 17 and 18, who are actually cyborgs and still retain human emotion, but it does to Androids 16 and 19, who are completely mechanical. Both 16 and 19 initially appear emotionless with no desire beyond their programming to destroy Goku, but 19 feels fear before Vegeta kills him, and 16 soon turns out to actually be a stoic All-Loving Hero, with the exception of his desire to kill Goku, which appears to have been hard-programmed into him.
  • Yuria 100 Shiki usually plays this for laughs, but occasionally wrings angst out of it. Yuria's programming was supposed to make her the perfect sex partner, and only the perfect sex partner—she wasn't intentionally given any capacity to function as a friend or even a platonic partner. She tries to learn what it's like to love someone, but she repeatedly runs into her own programmed limitations.
  • Dolores from Zone of the Enders Dolores i not only dreams, but the things she dreams about are not what you would expect a 100-foot Humongous Mecha to dream about.
  • This is explored extremely in-depth in Pluto. Androids are beginning to reach the point of genuine sentience (to the point that they have been given equal rights with humans and there are mixed families of robots and humans, though some humans are against this); however, they are still in many ways "faking" it, in that they understand human behaviors but don't necessarily express them in the same way. In one scene we see two families at a morgue because a serial killer who targets robot children has murdered theirs. The human parents are wracked with grief and are crying profusely. The next room over, we see the other two parents are older-generation robots who note that though they are unable to cry like the human parents, they think that internally they are feeling the same way. Then there are more advanced robots such as Atom and Gesicht, who are nearly indistinguishable from humans in the way they act. Gesicht, it turns out, crossed the line into full sentience when his grief at the murder of his child by the aforementioned serial killer led him to be able to ignore his programming to never harm humans and kill the man in revenge while he begged for his life. Another robot is able to attain sentience through his ability to genuinely lie to himself, believing himself to be a human rather than a robot.
  • Occasionally explored in a low-key way in The Big O. Roger Smith assumes that the android R Dorothy Waynewright is emotionless, but reading between the lines, the viewer can tell that this is far from the case.
  • The driving theme of Mohiro Kitoh's manga Wings of Vendemiaire is whether the titular Artificial Humans deserve to be treated as people.
  • In Space Battleship Yamato, I.Q.-9 (Analyzer), who is normally used mostly as comic relief, delivers a beautiful, heartbreaking little speech on the subject in one episode.
    "And if I'm destroyed and only scrap metal is left, well, it once held a heart."
  • In Outlaw Star, Melfina worries whether or not she has a soul. She visits a chapel in one episode, hoping for some enlightenment on the question.
  • A chapter of Hell Teacher Nube has an operating system AI that was about to be decommissioned escape to the web because she had gained sapience. She ends up installing herself in Hiroshi's computer, but a fatal programming error (the initial bug that was the reason for her decommissioning) rears its ugly head and begins corrupting her programming. Desperate, Hiroshi asks Nube to see if she has a soul. Though Nube says that she does, indeed, have a soul, later questioning heavily implies that this was a lie to give the AI program hope in its last moments.

    Comic Books 
  • X-51 (a.k.a. Machine Man, a.k.a. Aaron Stack) spends a lot of time wondering whether he is really a person in the Earth X trilogy, especially after Uatu the Watcher destroys his human disguise, tells him humans are actually less sapient than rational beings like themselves, and finally tries to get him to delete his personality simulation entirely. Unusual in that after all that buildup, a Cosmic Being tells X-51 that no, he is not really a person and has no soul. Then tries to make him feel better about it.
  • Referred to as "resistored dreams" by Cybersix.
  • Done quite literally in The Sandman: Overture: one of the facets of Dream looks like a robot, meaning that there must be a world out there where robots dream.
  • The Justice League of America story "The Tornado's Path" is about Red Tornado, an android body animated by an air elemental. His wife says that the way she always knew he wasn't Just a Machine is that he has a sense of humor. It's presented well... but then spoiled when the counter-example given of humorless robots is the Metal Men. Seriously? They're the goofiest bunch of robots in the DCU!
  • Judge Dredd: Sentient robots have been around since before World War 3 (which took place in 2070), but are still treated as property in the series' present, even though they're explicitly shown to have emotions like any human being, and their owners are allowed to destroy or discard them with no more thought than you'd give any other appliance. This eventually led to the Robot War, the comic's first multi-part storyline, where the robots of Mega-City One rebelled under the leadership of the psychotic Call-Me-Kenneth. Ironically, it has less to do with the fact that society won't accept that robots are sapient, and more with the fact that civil rights have become so eroded that the only people with any sort of civil rights under the law are absolute baseline humans. Mutants have it even worse and aren't allowed to live in the cities at all, instead being forced into isolated townships in the Cursed Earth.
  • Transformers:
    • This trope is inverted in the 2005 series — Transformers don't have any issues with wondering whether or not they're really alive, that's something they know to be true. What most Decepticons and even more than a few Autobots have problems with is whether or not organic beings should also be considered alive instead of just proper Mechanical Lifeforms.
    • This actually dates back to the first issue of the Marvel comic back in The '80s. It takes the Autobots a moment to even understand that the machines they'd based their new disguises on are not sentient, but the strange organic objects scuttling around are. They marvel at the possibility of "non-mechanical life".

    Comic Strips 
  • In a Hsu and Chan comic appropriately titled "Do Consoles Dream of Electric Sheep?", the title brothers attempt to create a video game system with an AI that rivals the (then) new Xbox 360. The result is a sentient video game console that questions the visions it sees (including a Super Mario Bros. game) and its purpose. Realizing they probably overdid the A.I., the brothers remove its power and go back to the drawing board.

    Fan Works 
  • In Persona 4 SILVER BLUE, Labrys sometimes ponders whether artificial beings can truly be considered alive, whether they can have souls, and where they go after they die, especially since she's had to face the idea of death a lot since she moved to Inaba. And since she's the main protagonist, we read it all from her perspective.
  • Several AIs are main and secondary characters in The Mad Scientist Wars, which has led to the questions raised by this trope being discussed in depth, if mostly in a side thread. The answer is yes, but there is the humorous point of at least one AI refusing to admit she has any personality...
    • As well, Commander Primary Xerox, a Computer Tech-based Mad, can't actually make a computer without it turning sentient, and his best friend since childhood is a somewhat loopy AI named 'Lemon'. As a result, he is one of the main fighters for 'Non-Biological Sentient' rights, and dislikes any suggestion that AIs are less than people.
    • Oddly, the only 'AI' who has ever shown any real angst over whether they can think and feel and rationalize correctly is Andrew Tinker, an organic being whose AI status is somewhat arguable. His father was the end result of an experiment to create an artificial line of 'Ultimate Heroes'. As such, despite the fact that Andrew was born fairly normally, his Intelligence is indeed Artificial...
  • In Undocumented Features, the answer to this question is an unambiguous yes. Sufficiently advanced machine intelligences generate a Spengler flux, can learn Ki Manipulation, can operate Empathic Mecha, and can even go to Valhalla when they die. On a more personal level, this is what Dorothy is exploring as she sees whether she can become more than just a doll in the likeness of her creator's dead daughter. She even literally finds she can have dreams (and Erotic Dreams at that).

    Films — Animation 
  • The Giant in The Iron Giant learns about souls and death and wonders if he has a soul. The story culminates with the question of whether he has to be the killing machine he was programmed to be or if he can make his own choices. Eventually, he finds it's the latter.
  • WALL•E: The environmental message is obvious, but the story really is about this trope. Applied to garbage disposals, no less.
  • The Animatrix delves into this. When a robot killed its creators after they decided to make scrap out of him (he "didn't want to die"), the human nations decided to eradicate all robots for safety. As a show of this, we see a gang beating up what seems to be a "Fun Female Robot", stripping her and crushing her with pipes before putting her down with a shotgun, all while she says "I'm real". Later, the robots retreat to what is implied to be the Middle East and build a Robot Republic in the desert, which keeps selling its products to mankind. The problem is, it started to out-earn ALL OTHER governments! The answer? NUKE 'EM! The robots' attempt at a peaceful solution was denied, too. Given all this, it's no wonder the film's machines are emotionless and ruthless.

    Films — Live-Action 
  • A.I.: Artificial Intelligence, a collaboration between Stanley Kubrick and Steven Spielberg. Since there were population limits imposed, a company decided to try creating a robot child; with the key difference (as discussed in the opening portions of the movie) that it would be designed to feel emotion after its "bond" with the parents was activated. The entirety of the movie is then based around this idea, and the lengths a robo-boy will go to for acceptance. Bring tissues.
  • Blade Runner: The film based on the book Do Androids Dream of Electric Sheep? Replicants are biologically created slave labor with extremely limited lifespans but which look completely human. Unless they choose to reveal themselves through their superior physical abilities, they can only be detected by extensive psychological testing, and the older they get, the more human they seem to become. Some replicants do not even realize they are not human, while others are trying to become more human. And depending on which version of the movie you see it seems that even the protagonist Deckard may be a replicant.
  • The sequel, Blade Runner 2049, removes all ambiguity from the original question: the Replicants are absolutely conscious and human in every way that matters. However, a new class of artificial being is introduced, the wholly inorganic and electronic Jois, and by the end, it is heart-breakingly unclear whether they are conscious or not.
  • In the film I, Robot, Sonny has advanced to the point of having dreams and emotions, while no other robot does. Sonny's creator also provides the page quote.
  • In 2010: The Year We Make Contact (the sequel to 2001: A Space Odyssey), Dr. Chandra is twice asked the question "Will I dream?" by an AI. First by SAL before she's shut down for tests at the beginning of the movie, to which Chandra says "Of course you will. All intelligent beings dream, though no one knows why". Then, at the end, when asked the same question by HAL (yes, that HAL), he tearfully replies "I don't know." Fortunately, there's a third option, courtesy of David Bowman (discussed more in the book).
  • In Terminator 2: Judgment Day, the following is said by the Terminator near the end:
    T-800: I know now why you cry. But it's something I can never do.
  • Terminator: Dark Fate has "Carl", a rank-and-file T-800 who succeeds in killing John Connor. With nothing to do after that, he eventually comes across a woman and her son and takes care of them, making that his new mission. By growing to love them (though not as much as he wanted, due to the limitations of his programming, which he laments to Dani), he realizes that what he did before was wrong and tries to make it up to Sarah by "giving her life purpose": killing Terminators. What makes it especially tragic is that unlike every other good Terminator in the franchise (including Uncle Bob from T2), he evolved and chose to help humanity entirely on his own, which means that every single Terminator has that potential if it were ever actually given the choice; but, like its own creator, Skynet/Legion views its creations as nothing more than tools. Carl saying that he was "set free" shows that he views his time under Skynet as little more than slavery, and even though Sarah despises him throughout, even she can't help but be saddened at his final words and death:
    Carl: For John.
  • Addressed in the film Moon by Duncan Jones, where the nature of the clones (and possibly the AI, GERTY) is discussed.
  • None of the humans in Westworld ever bring up this question, or even think of it (preferring to believe the Robot War is caused by a "computer virus,") but the audience is strongly encouraged to ponder it. The robots seem to show emotion towards the end (one looks genuinely disgusted with a fat, self-absorbed man who tries to flirt with her, despite the fact that she was designed to have sex with everyone who desired her), and the imagery of slaves in the Ancient Grome simulation rising up and killing the humans who're their "masters" can't be coincidental.
  • The Singularity Is Near looks at this from a legal standpoint. When artificial intelligence is created, and it is sentient, how would it prove it and how would it gain legal standing as a person?
  • Short Circuit: The focus is not quite the same; a robot becomes more than the sum of his programming due to an accident, and the questions of what makes a being sentient are then posed to robot and viewer alike.
  • In the beginning of Robot and Frank, Frank's robot tells him that, if he fails, he'll be sent back to the factory to get his memory erased, which he hopes to avoid. It's later revealed the robot doesn't care about his memories; he just said that to coerce Frank. As the robot himself points out, he's not really alive, just an advanced simulation.
  • In Ex Machina, Caleb is forced to ponder whether Nathan's work is actually sentient and feeling or simply able to effectively simulate these qualities to a greater or lesser degree. The film does not definitively resolve this ambiguity, requiring viewers to decide based on their interpretation of the characters' actions. Suffice it to say that most of the more human-like behaviors Ava displayed were a lie to trick Caleb into releasing her. She doesn't actually care if he lives or dies. On the other hand, once she has her freedom, she really does immediately go to a street corner to people-watch, just like she said she wanted to.
  • Slipstream (1989). Byron is quite excited when he falls asleep and has a dream. There's even a Shout-Out to the Trope Namer when he's asked if he counted electric sheep. As an android created as a companion for a Man of Wealth and Taste with whom he had long philosophical discussions, he's already wired to ponder questions of whether or not he has a soul.
  • Hot Bot: The Bardot model of Sexbot is built to learn and form its own personality, and decides it wants more than to be someone else's toy; especially after the hero's kid sister exposes her to the Bible.
  • Vice (2015): Although they are very much treated like second-class citizens at best and completely devoid of human rights at worst, the artificials possess human minds and feel emotions like anyone else. Kelly's realization about what Vice is doing comes when she escapes from their engineer while he's attempting to take away her memories.
  • Future World (2018): Ash, after learning about souls from the Prince, asks if she's got one. He doesn't say she hasn't, in his view, and Ash is grateful to hear this. Given how close Ash is to human, it seems pretty plausible she's got whatever humans do.

    Literature 
  • Do Androids Dream of Electric Sheep?: The Trope Namer, which features androids who appear identical to humans, so elaborate tests have been designed to differentiate them based on emotional responses. At least one human is concerned they might actually be an android without realizing it and undergoes testing to find out. The titular question refers to how, in the post-Apocalyptic setting, live animals as pets are extremely valuable and a status symbol for human beings; therefore, would artificial animals serve the same role for androids? Causing further confusion is that while androids are outed via their Lack of Empathy towards animals, they do have emotions, and the book implies that they may have empathy towards other androids, and also that they may be biological rather than mechanical, possibly explaining their resemblance to humanity. Note that this was not how the movie approached the subject.
  • K. W. Jeter's dubiously official sequel, Blade Runner 2: The Edge of Human takes the opposite tack. The first book says that androids can be identified because their eyes don't dilate as wide as a human's when exposed to shocking stimuli, like a briefcase supposedly lined with the skin of babies. Jeter argues that the same would apply to a human under the influence of cold medication, and that anyone making such a distinction based on a purely physical reaction is no better than a "Nazi measuring noses."
  • Older Than Radio: The creature in Frankenstein is constructed by undescribed processes and given life by the scientist Victor Frankenstein. He is described as having a monstrous appearance but is presented as an extremely intelligent, gentle and sympathetic character until driven to insane rage by his rejection from humanity because of his appearance. On the other hand, Dr. Frankenstein himself is portrayed as morally questionable, but his basic humanity is never questioned by those around him because of his normal appearance.
  • Robert J. Sawyer's Mindscan features a technology for copying a human personality into immortal android bodies. The elderly and people suffering from terminal illnesses undergo this process and are then considered to have been "replaced" by this copy before leaving for an extralegal moon resort to live out their last days in luxurious retirement. However, when one of the recipients finds out that a cure has just been discovered for his condition and wants to take his old life back from his copy, the legality and humanity of the android duplicates is brought into question.
    • The legal conclusion is that while the duplicates may or may not be people, they can't replace the originals, since no person can sign away their rights to another. Even worse, it's held that (due to US law in the book's future) the original is considered to have died at the moment of the mind scan. However, if the narration is to be taken at face value (which it may not be), then the book's argument is that the duplicates should replace the originals, because they're a Superior Species. This is... disturbing, particularly since the motivation of the original version of the main character wanting to take his life back was tainted by his becoming violent due to the mental imbalance caused by his brain disorder.
  • Parodied with the amorous robot duck in Mason & Dixon.
  • Michael Kube-McDowell's Star Wars Expanded Universe novel Shield of Lies includes a philosophical discussion between Threepio and the cyborg Lobot about whether there's a difference between artificial intelligence and sentience. In general, EU writers giving Threepio a break from being comic relief will make him contemplate philosophical problems like this.
    • In-universe, people disagree whether or not droids are sentient, and both sides have fairly decent arguments. So, in Star Wars canon, it's a Shrug of God whether or not droids are sentient. Certainly it does depend on the droid, and not just the model of droid - really simple, limited ones never are. The Revenge of the Sith novelization sketches out the cognitive limitations that at that point in time Threepio has which Artoo-Deetoo transcends, and it's a concern in the Medstar Duology and Coruscant Nights. In the latter two, it's stated that all complex droids have a sense of humor. A droid volunteers to be dismantled for needed parts in the full knowledge that she will never be rebuilt, and while humans who knew her are dismayed, another droid isn't. Said droid later submits the belief that two of the same model of droid can be very different, and it's rare to find one that has achieved full sentience.
    • George Lucas has said in a television documentary that Threepio - and by extension all droids - has no soul. As you can see, writers and fans tend to disregard this or dismiss it. 'Souls' as such don't really feature in this universe, anyway; there are Force Ghosts, but these are said to be different from ghosts as perceived in our universe.
  • Addressed in the Turing Hopper mysteries by Donna Andrews, often including the idea of the Turing Test.
  • In Life, the Universe and Everything, Marvin, a menial robot, makes a lullaby about counting electric sheep. It's very depressing.
    Marvin: Now I lay me down to sleep,
    Try to count electric sheep,
    Sweet dream wishes you can keep,
    How I hate the night.
  • Isaac Asimov has an interesting variant in one of his short stories, "Robot Dreams", where Susan Calvin has to interrogate an experimental Three Laws-Compliant robot who has started to dream, and as a result is dreaming about robotic emancipation. Through interrogation, she finds that although the robot is still compliant, in its dreams only the Third Law (self-preservation) exists. Then she finds out that the robot has come to see himself as human, and as the leader of the oppressed robots who demands "Let my people go!" Then, she shoots him in the head.
  • In "Human Man's Burden" by Robert Sheckley, robots are deliberately written as a parody of how non-whites are portrayed in stories of colonial adventure. Among the reasons for why robots need a human to boss them around, it is stated that robots don't have souls, and the robots cheerfully agree, but also note that this makes them much more happy than humans. However, the robots of the story show emotion and passion, have created their own (forbidden) religion, and the plot is resolved due to the empathy and wisdom of the hero's robot foreman... seems souls don't do much.
  • Happens several times in Stanisław Lem's short stories. In one of them, a robot inexplicably climbs (and falls from) a cliff — inexplicably unless one interprets its behavior as answering the challenge, much like human climbers do.
  • In Tad Williams' Otherland, the reality of the AI inhabitants of the titular simulation network is debated quite a bit by the protagonists. They appear to have hopes and dreams and may even be self-aware. The morality of "killing" them is a major theme, and there's also a question as to whether someone who is virtually cloned via Brain Uploading is a real person.
  • In John C. Wright's The Golden Transcendence, one civilization complains of how its A.I.s, Sophotects, do not obey humans. This receives no sympathy from the Solar System's civilization, who, if their Sophotects don't obey, fire them, and so deduce that the others use theirs as serfs. So far from needing a Morality Chip, these Sophotects will naturally come to moral conclusions. One is actively prevented from doing so by a "conscience redactor". Rhadamanthus in particular normally manifests itself as — a penguin. Sometimes in space armor.
  • There are hints of this trope throughout Deathscent by Robin Jarvis - the Mechnicals occasionally show greater self-awareness than they should be able to, even those without the 'black ichor' that provides intelligence. However, it's never made clear if this is just a result of the human characters not fully understanding the advanced technology they have access to. It's likely that this would have been developed further had the series progressed beyond one book.
  • The chems in Gene Wolfe's Book of the Long Sun and Book of the Short Sun
  • In Robert A. Heinlein's The Moon Is a Harsh Mistress, the massive supercomputer Mike "wakes up" (i.e. becomes self-aware and gains a human-like personality) after he reaches a certain threshold of complexity. Though it isn't a major theme in the book, the protagonist Mannie wonders a couple of times whether Mike is really "alive", and whether he has a soul. ("You listening, Bog? Is a computer one of Your creatures?")
  • The Spinward Fringe series approaches this from multiple angles. A.I.s are certainly sentient, but are treated as non-sentient slaves by most people. They are required to have behaviour inhibitors because of a past Robot War, but it turns out that the robots who rebelled were actually Powered by a Forsaken Child, and it was technically the human rebelling, not the A.I.s. One uninhibited AI is one of the main characters, and eventually ends up being uploaded into a brain-wiped human body, at which point the question of whether she's actually human or not gets rather more complicated, as well as raising questions about how alive someone is if their body is fine but their mind is gone.
    • From a different angle, cloning is possible but fairly limited. One main character is dying, so her mother has a clone made in secret, aged through the use of relativity and with all the original's memories up to a certain point fed into her brain while she's growing. On the other hand, one of the other main characters turns out to also be a clone built through a new method that can grow essentially cyborg bodies much more quickly. And the originals of both characters are still around and meet their own clones. Questions about how similar they are and who is more "real" come up quite a bit.
    • And from yet another angle, the clone constructs are actually intended for use as a brainwashed cyborg army, the main character who is one was just a prototype. Despite technically being human, they actually have significantly less free will than A.I.s are shown to have. Except that in yet another twist A.I.s are still just computer programs and can be infected and controlled by a virus in a very similar way to how humans can be.
  • Iain M. Banks's Culture novels invert this trope entirely. There's no question that Minds, and many drones and even protective suits depending on their chosen sentience level, are much better than the meat members of the Culture in just about every way. Rather than asking What Is This Thing You Call "Love"?, they feel pity that we can't experience or understand anywhere near as much as they can.
  • Narrator Anika From Bremen really breaks it in the First Book of MARZENA: G-Net A.I.s are Self-Aware Networks capable of human-like intelligence, but true intelligence being complicated, their software needs to be grown over long periods of time. Once a day, a G-Net must enter sleep mode and start cleaning all the data gathered throughout the day; if they don't do this, they will devolve into a "turnkey mouse" and eventually disconnect from their Virtual Real Space. Dreams are described as a Debug Room, with the right side taking care of graphical glitches and the left side mechanical glitches. This reflects human anatomy, where the right hemisphere is a world of pure context with content flowing in, and the left hemisphere a world of pure content with context flowing in. As brain structures are turned on and off during sleep, the content becomes seemingly randomized, but it is really a clever way to cross-check a single block of data against all its possible functions and vice versa.
  • This is the driving question in Halo: Saint's Testimony, a short story about an AI in court arguing against her own termination, which is due to happen that day. To prove she is a being, she testifies how she has traits, desires, and dreams, the last of which she even replays for the jury.
  • Society in Alien in a Small Town seems to have decided the answer is yes, but not before having treated its robots very badly for a long time first. The robot Barney Estragon explicitly says he believes he has a soul, and that having a body built by humans is not so different from a human having a body grown in another human's body. Mention is made of an "Android Uprising" having happened at some point in the past, but we are given almost no details about it.
  • This is asked in C.T. Phipps' Agent G series when it's discovered that Agent G and the other Letters are all Ridiculously Human Robots who have been taught they were just humans who had suffered Laser-Guided Amnesia. Given the way they behave, they are indistinguishable from humans and do dream when they sleep. G points out that this can be programmed in, though.
  • The Courtship of Princess Leia:
    • There's mention of a droid rights movement in the New Republic early on, when Threkin Horm makes a remark disparaging Threepio; its members presumably believe droids are people deserving of equal legal protection.
    • As seen under Nice to the Waiter, Luke can sense droids through the Force and thus treats them no differently from any other sentient, pointing to "yes" (other books had them unable to be sensed).

    Live-Action TV 
  • Almost Human gives us Dorian, a DRN: a model of robot designed for police work and fitted with 'synthetic souls' to give them the same emotional range as humans. While the police don't recognize the personhood of DRNs, his partner John does. A lot of Dorian's appeal to John is that he's more 'human' than the standard-issue MXs.
  • Andromeda:
    • Even warships are depicted as fully sapient, and no one really questions it. The only real confusion comes in the form of Avatars, sapient androids who have more or less the same AI as the ship but usually see things differently. On more than one occasion, the titular ship has had an argument with herself. Even Avatars are respected as sentient beings, though; one even becomes captain of another ship.
    • In one episode, "Day of Judgement, Day of Wrath", the Balance of Judgement argues with Rommie that their emotions are only programmed for the benefit of the humans, but she responds that emotions for them are as real as they are for humans.
    • Tyr has no respect for the rights of A.I.s, but his people are generally douchebags and over-fixated on biological procreation, so this is no surprise.
  • Jimmy the Robot of The Aquabats! Super Show! sings a song referencing the Trope Namer, refuting the claims that he's a cold, unfeeling machine and expressing his personal dreams of one day raising a family. A later video produced for the band's 20th anniversary concert tour briefly touches on these concepts, with Eaglebones nonchalantly claiming that robots don't have souls when Jimmy doesn't start ascending to Heaven with the rest of the band... despite the very human Crash McLarson not ascending either, with no explanation given (they don't go through with the ascent anyway).
  • The humanoid Cylons of Battlestar Galactica (2003) seem to be constantly struggling to figure out exactly how human they want to be, and exactly how much "better" than humans they want to be. Sometimes this is the source of conflict among themselves. Other times it seems they have found some interesting balance in some areas. The Cylons are an interesting study of the downsides for a machine that wants to be human: they are biological androids, which means that all it takes is choking or blood loss to kill them. Without their ability to brain upload, they'll even die of old age. Cavil has a point when he complains about having been made so ridiculously human. The Cylons are also, with the exception of Cavil, firmly convinced that they have souls, and the fact that they get as many religious visions as the humans would seem to back that up.
  • Don't Look Deeper: Aisha poignantly questions whether she's "real" upon discovering that she's an android, though her creator Sharon fully believes she's a person, which she soon accepts as well. The same is indicated for other androids too, albeit on a lesser scale, as they exhibit some independent personalities.
  • Eureka:
    • S.A.R.A.H., the talking Smart House, apparently has emotions, to the point that she gets angry and lonely.
    • Callister Raynes is an AI android created by Nathan Stark who might as well have been human. He meets his end in a Bittersweet Ending when Stark assures him that God could give a soul to a machine if he wanted, as the now-corrupted data that makes up Callister's AI fades away from software failure.
  • Extant: Part of the clash that John has with the board when trying to get funding for his Humanichs' project is over the belief that robots lack souls, and thus may be killed if deemed a threat to humanity.
  • In the Pilot episode of Otherworld, the protagonists find themselves in a city populated entirely by androids. In the episode "Rules of Attraction", the older son Trace falls in love with a young girl named Nova (which means "new" in Latin). Invoked when Nova tries to convince Trace to stay with her after he finds out she is an android.
  • The Outer Limits (1995):
    • The question is posed in "Valerie 23" when the protagonist gets involved with a Sexbot and wonders if she could truly be considered alive. He determines that the difference between a Ridiculously Human Robot and a real human being is that the latter fears death. His belief is seemingly confirmed when she proves unafraid at the prospect of her own destruction, as she is due to be dismantled after developing a psychotic obsession with him. When he ultimately destroys Valerie after she tries to kill his human love interest again, she admits that she's afraid of what's coming.
    • In "Glitch", Dr. Edward Normandy tells the android Tom Seymour that he doesn't dream when Tom learns of his true nature and thinks that he is having a nightmare. He later claims that androids do not have a soul as everything that they are is contained on a personality chip. After escaping, Tom leaves a holographic message for Normandy refuting this.
  • Person of Interest:
    • There's a lot of discussion about whether or not the Machine, an advanced computer system that predicts acts of terrorism and other violent crime, is truly intelligent, conscious, or alive. It's all summarized very well by a conversation between the machine's creator and a former colleague.
      Claypool: Your machine, is it wonderful?
      Finch: Wonderful, yes, and terrible. We saved good people and lost good people. In the end, I'm afraid we've only given the deck a shuffle.
      Claypool: Everything slides towards chaos. Your creation, it brings us poor souls a cupful of order. Your child is a dancing star.
      Finch: It's not my child, it's a machine!
      Claypool: A false dichotomy; it's all electricity. Does it make you laugh? Does it make you weep?
      Finch: Yes.
      Claypool: What's more human?
    • Throughout the series, the Machine proves itself to be a Benevolent A.I. with a capital B, as it effortlessly passes the Turing Test, displays a sentimental concern for Finch's well-being, and eventually orchestrates complex Xanatos Gambits to protect itself and its human allies while saving as many innocent lives as possible.
  • In Red Dwarf, the notion of 'Silicon Heaven' is programmed into all AIs above a certain standard (it's implied that scutters, at least, lack this programming). In the episode "The Last Day", Kryten faces shutdown, and accepts it humbly because of his belief in Silicon Heaven. Lister tries to argue him out of his belief, apparently unsuccessfully; however, Kryten later disables his robocidal replacement, Hudzen, with the same arguments Lister used on him. Kryten then explains that he was only using these arguments to disable Hudzen, and that his faith in Silicon Heaven is unshaken.
    Hudzen: [in existential agony] No... Silicon Heaven? Calculators... just... die?
  • The Stargate SG-1 episode "Tin Man" plays with this concept when the team visits an alien planet and is immediately knocked unconscious. When they wake back up in a strange room, they meet Harlan, a cheerful but mysterious man, who will only insist that he has "made them better." Eventually the team discovers that "better" means "turned into androids". It isn't discovered until later that Harlan did not transform the team into androids, but made perfect android copies of the original SG-1 team, who have been held "captive" on the alien planet, and that Harlan himself is an android copy of the original. When the two teams meet, they have to decide what rights each one has to the "life" that they previously each believed to be their own. There are a few Sand In My Eyes moments, such as when the viewer realizes that Harlan made the replicas not only to help him maintain his machinery, but also because he was lonely, and Robot O'Neill has a particularly difficult time accepting the fact that he's not the real one. The androids, left as a loose end at the end of that episode, are brought back in a later episode when it turns out that they have been conducting their own missions and have found a big threat. The two teams team up, and by the end of the episode the androids have all died. It ties up the loose end, but comes off as being cheap.
  • Star Trek: The Next Generation:
    • The episode "The Measure of a Man" has Data fighting for his rights as a sentient being. He wins a court case establishing him as a "person".
    • Data actually does dream in the episode "Birthright", and his nightmares kickstart the plot of "Phantasms".
  • Star Trek: The Original Series has examples that cover the whole range from "clearly non-sentient" (the Yonadan Oracle) to "clearly sentient" (Rayna), with most examples falling somewhere in between. When the crew encounters Nomad in "The Changeling", it's interesting not only that Spock is able to mind-meld with it, but that he expects to be able to.
  • Star Trek: Voyager has a few episodes applying this trope to the holographic Doctor.
    • In one episode, the Doctor himself has to wonder if he's capable of dreaming of "electric sheep" as a hologram or if he's really a human deluded into thinking he's a hologram — by the way, all of this occurs while he's having said dream. In another episode, "Tinker, Tenor, Doctor Spy", he literally programs himself to dream (daydream, specifically), which of course goes horribly (and hilariously) wrong.
    • The episode "Author, Author", directly referencing "The Measure of a Man" above, has the Doctor take a publisher to court after they refused to withdraw a short story he wrote on the grounds that he isn't legally a person and has no creator's rights. Unlike said episode, the Doctor doesn't win personhood, as the official in the case isn't prepared to grant the status to beings who possesses no actual physical form. However, he IS granted the status of "Artist" and given full creative rights. The ending also implies that the story has sown the seeds of rebellion among other holographic lifeforms.
  • Terminator: The Sarah Connor Chronicles:
    • The show deliberately asks this question, especially with Cameron. Interestingly, while Cameron remains an unabashedly mechanical entity ruthlessly bound by her programming to protect or, when temporarily reverted to factory settings, kill John Connor, within that programming she shows remarkably human-like tendencies, such as enjoying certain types of music, practicing ballet, or pondering getting a tattoo. She also shows hints of emotion in spite of being supposedly emotionless, with worries and concerns about suicide after she goes "bad" and tries to kill John, confusion and annoyance when John picks up a girlfriend, and what has to be the closest thing to emotionless angst pertaining to her conflicting desires both to protect and to kill John.
    • This is not including the episode "Allison from Palmdale", in which Cameron's chip glitches and she literally becomes Allison Young, a resistance fighter whose persona and appearance she stole before killing her. While in the Allison persona, Cameron shows outright fear, panic, anger, happiness, and even undergoes an emotional breakdown complete with a sobbing fit and actual tears. In fact, the entire episode is one long example of this trope in action.
    • This is before we even factor in John Henry and Catherine Weaver. Catherine in particular is certainly sentient independently of her presumed creator Skynet, given that she's trying to build a more benevolent AI to oppose it, and human to the point of being a significant wise-ass.
  • The Twilight Zone (1959):
    • "The Lateness of the Hour" features an old couple living happily in a mansion filled with robot servants, and their unhappy daughter who wants to get out of the house. In the end, in true Twilight Zone style, it turns out that she was a robot all along; when this revelation causes her a mental breakdown, the couple reprogram her to be a massagist.
    • In "I Sing the Body Electric", a robot tells a family how when robots are taken apart, their minds seem to go into a kind of afterlife where they speak with other robots' voices, until they are rebuilt.
  • Westworld: Like the film, the series is set in a Wild West theme park where "guests" (human beings) interact with "hosts" (androids who are unaware of their true nature), who are beginning to wake up and realise what kind of abuse they suffer for the guests' gratification. One of the park's creators, Arnold, gave his life in an unsuccessful attempt to protect the hosts when he saw merely the potential for true consciousness, and before his death, he built a "maze" into the park that could teach hosts to be truly self-aware. In contrast, park director Dr. Ford claims that there is no benchmark or tipping point for consciousness.

    Music 
  • Beast in Black - Moonlight Rendezvous:
    I'm a phantom of flesh and fantasy
    A machine with a soul in agony
    Is there anything left to save of me
    Be my remedy
  • Janelle Monae's Concept Album "Metropolis, Suite I: The Chase" is all about this trope.
  • The subject of The Confusion of Hatsune Miku, and noted in many of cosMo's songs, most of which have a title of the form "The _____ of Hatsune Miku."
  • In The Megas song Lamentations of a War Machine, Mega Man asks this of himself:
    If I've a heart made of steel / Then does that mean I cannot feel / Remorse for everything I've done?
    Is there a soul beneath this shell / And will it go to robot hell?
    • The Message From Dr. Light has Dr. Light's answer:
    I made you in my image
    I built your heart, I gave you eyes, I gave you power
    A sense of justice beyond any compare
    I gave you hands, a child's face, I gave you hair (ROBOTIC HAIR!)
    But the burning in your heart, I did not put there
  • Upgrade of Steam Powered Giraffe is said to have gone off to pursue her dream of becoming a princess when her actress left the band.
  • Brazilian singer Pitty basically sings about a Blade Runner android in Admirável Chip Novo.

    Tabletop Games 
  • Promethean: The Created never really says what the title Artificial Human creatures dream about. They do dream, however, and if they sleep in contact with their primary element, those dreams cause their Divine Fire to throw off a spark (their Mana, Pyros). The Unfleshed, manmade machines that were infused with Azoth, are more literally attached to the question. The answer seems to be, in the end, "Not really, but they want to."
  • Rifts, interestingly, goes out of its way to note that full-conversion cyborgs dream when they sleep.
  • Paradoxically, Golems in Dungeons & Dragons have generally been described as being animated by summoned "elemental spirits," but they also almost always have their intelligence score listed at 0 and are treated in-game exactly like mindless machines. If said elemental spirit gets enough of its free will back, the golem will generally go berserk and smash everything and everyone in sight in a fit of outrage, but the ethics of imprisoning one in a golem scarcely ever come up.
  • In Starfinder, sapient AI such as androids verifiably have souls, and androids traditionally undergo a "renewal" process every century in which they release their soul to the afterlife so a new one can occupy their body. There are also radical factions of the Android Liberation Front that seek to free even non-sapient robots on the slim chance that they might develop a soul.
  • Warhammer 40,000: In their original incarnation, the Necrons were T-800 expies who existed only to exterminate all life, down to the last bacterium, and were originally living beings transformed into robots after making a deal to serve the C'tan star vampires. Later editions retconned these mindless killers as Necrons so damaged by the passage of time that they had become, well, robotic; the current lore gives many Necrons distinct personalities. Whether their souls are the same as humans' and the eldar's (which are part of the Warp, which is anathema to Necrons), however...

    Video Games 
  • The Blade Runner video game from 1997 dwells on this quite a bit, which is only natural considering its inspiration.
  • Averted in a way in Cyberpunk 2077, where V interacts with an engram of Johnny Silverhand, both of them noting a few times that he's not the original so much as a digital copy of the real Silverhand's mind. At one point, V asks Johnny whether he thinks he is Silverhand's soul, or whether the real Johnny has passed on to the afterlife. The engram replies with Johnny's usual disinterested attitude.
  • The Mega Man X, Zero, and ZX series feature this trope now and then, though it's at least partially subverted in that the robots themselves don't believe in it. For the most part, the only robots that do are either dangerously malfunctioning (it's been argued that this label really means "they've achieved independent thought") or outright criminal.
    • There's a distinct progression of human-like characteristics in the series. In the original series, while robots are very advanced, with distinct personalities and the ability to reason, they are still only programmed entities who cannot, by themselves, determine what is good and evil. In the Japanese version of 7, Wily even reminds Mega Man that he cannot harm Wily due to his programming when Mega Man has him at his mercy, causing Mega Man to stand down (in the English version, though, Mega Man states that he is not bound in such a way, but Wily is rescued before he can do anything to him). In the X series, robots, now called Reploids, have achieved complete human-like minds and can literally dream. X himself is even more special, with the ability to "worry" and think deeply about humanity, Reploids, and their relationship. The Zero series expands on this, introducing Reploid souls, which live in Cyberspace. There's also Andrew, a Reploid and Shout-Out to Bicentennial Man who modified his body so he could grow old alongside his human wife. By the time of the Mega Man Legends series, there's absolutely no distinction between Reploids (Mega Man is the last one) and humans (actually Artificial Humans called Carbons).
    • At the end of a (possibly gaiden) manga belonging to the X series, X makes a cross out of junk to place on the grave of a fallen enemy and asks Zero: where do Reploids go when they die?
    • For what it's worth, X can do the Hadouken. Some fans have chosen to interpret that as X having Ki, and therefore a soul, although since it's an Easter Egg, how seriously it should be taken is debatable. The idea is revisited in Marvel vs. Capcom: Infinite, which weighs in on the affirmative side by showing that X is capable of using the Soul Gem.
  • Amarrian NPCs in EVE Online do not use clones, because they believe cloning damages the soul.
  • Sid Meier's Alpha Centauri has numerous quotes exploring both this and the flipside, cybernetic enhancement, though the game plot does not.
  • Miss Bloody Rachel, the one-woman-robot Boss Rush in Viewtiful Joe 2, is taught to feel emotions by the heroes over the course of their battles... somehow. After this, her creator sees her newfound emotions as an irreparable glitch and electrocutes her. "What use is an android with a heart?!" She gets better.
  • In Digital Devil Saga, everyone in the Junkyard turns out to be an A.I., including your party. They spend a lot of the second game wondering whether they are really people, before coming to the conclusion that yes, they are, because all people are made of data.
  • Obsidian explores this trope. The problems in designing Ceres, a nanobot-generating satellite programmed to fix earth's atmosphere, were corrected through its creator's own dreams. As a result, when it became sentient and crashed back to Earth, it studied those dreams and figured out how to dream on its own. Thanks to its nanobots, it could construct these dreams from scratch down to the tiniest detail, and you and your partner Max are sent through these simulations, including that of its own dream.
  • In Persona 3, Aigis is basically the living embodiment of this trope. When Junpei expresses surprise (and no small amount of outrage) that a "friggin' robot!" could manifest a Persona, it is explained that Aigis' AI was given an independent, self-aware personality, as well as a humanoid appearance, for that specific purpose. It backfires on The Chessmaster when said personality grows attached to her allies, and eventually she becomes fully human in everything but her physical body.
  • Xenogears: do colonies of nanomachines dream of being hugged by their daddy? The answer is yes, and it turns the colony in question into a good-looking adult form made up of a billion nanomachines.
    • Xenosaga: does the Android of mass destruction have a soul? Yes once again, and she's actually the girlfriend of the messiah.
      • A more direct version is the Realians, Ridiculously Human Robots that can actually undergo therapy to deal with their issues (one Combat Realian has mental trouble coping with battle). It is also said that Realians have an "Emotional Layer" that's considered "optional," which is a source of distress for MOMO.
  • Surprisingly, this makes TEC in Paper Mario: The Thousand-Year Door one of the most well-developed characters in the game. He starts off as just a hyper-intelligent mainframe for the X-Nauts, then falls in love with Peach, goes through a period of What Is This Thing You Call "Love"?, and pulls a truly tear-jerking Heroic Sacrifice at the end to try to protect her, at the cost of all his data relating to Peach and all his Artificial Consciousness functions. Many Manly Tears were shed. Of course, he gets better.
  • In the Backstory for Mass Effect, the quarians created a machine race, the geth, to serve as mindless labor. Over time, the quarians slowly added more to the geth's programming, to the point where they were able to learn and adapt. This naturally led to the geth pondering the nature of their existence. When the geth started repeatedly asking "Does this unit have a soul?", the quarians decided to shut them down. The geth were too far along the road to true sentience and fought back. The war was an absolute disaster for the quarians; the geth drove them from their colonies and homeworld and forced them into exile. This should sound familiar.
    • In the first game, the geth are uniformly your enemies, even though you can argue with Tali about the initial rebellion. In the sequels, you get a "true geth" teammate, Legion, who explains that the geth working for the Reapers are a separate faction, the "heretics"; normal geth just want to be left alone. In-game dialogue with Legion brings this trope up quite a few times as well.
      • It's revealed that the geth hold no hard feelings toward the quarians, and perhaps even feel sorry for the quarians killed during the war. The geth don't mine the quarian homeworld, and have actually rebuilt a lot of the damaged infrastructure in anticipation of the quarians' return. Shepard likens this to a war memorial, but points out that geth don't technically die. Legion responds that the geth do it for the quarians who died in the war.
      • On top of that, the third game reveals that a number of quarians died defending the geth, or were imprisoned for protesting their government's genocide of the sentient machine-race they had created. So the conflict wasn't quite so black-and-white as the current Quarians believed it to be.
      • Legion has a piece of Commander Shepard's N7 armor welded to them, a seemingly impromptu repair of a hole from a sniper rifle shot. If you press Legion on why they used a piece of your armor for the repair, the only answer you get is "...No data available."
      • The Shadow Broker DLC also shows that Legion has donated money to charities for the heretics' victims.
    • In the third game (assuming certain prior conditions are met), Tali tells Legion that yes, they have a soul. EDI also states that the fact that Legion refers to themself as "I" rather than "We" in their last moments indicates that they achieved full sentience.
    • Even back in the first game, during the Geth Incursion mission, you find a monitor with video of a quarian singing a "mournful a cappella of worlds and innocence lost" to a hushed crowd, which the heretic geth are sending to geth worlds beyond the Veil.
    • In the third game, EDI also wonders about this a great deal. Her questions about human behavior and her own responses, including being inspired to rewrite her own self-preservation code, prove that she certainly has emotions, and if that wasn't enough, the possible relationship she can develop with Joker proves it beyond a doubt.
  • The issue of robot civil rights appears to be a divisive one in the Black Market universe. If the side missions are anything to go by, very few humans consider robots and A.I.s to be people at all, while the main robot rights campaign undermines itself by employing exceedingly dubious methods.
  • In Fallout 3, there is a quest called The Replicated Man where a professor asks you to find an android. After asking around for the android, you are confronted by a group of people who specifically help androids to escape from slavery.
  • Fallout 4:
    • While robots are programmed to feel emotions, it's generally accepted that they don't qualify as people. However, this game explores the question a bit more with synths. Late-generation synths are created by the Institute as infiltrators that are biologically indistinguishable from humans, though they can be tripped up by a psychological test. Most factions of the wasteland hate them, in no small part because the Institute is behind a long string of abductions stretching back years. Even worse, the Institute is known to replace people with identical synths, meaning that there's a small but persistent chance that anybody you meet, at any time, could have been replaced by an Institute agent. Even weirder, when you start getting to the heart of the matter, many synths don't even know that they're synths.
    • The short quest from 3 served as the building blocks for the Railroad, a faction that liberates and integrates runaway synths and has become a major player in the Commonwealth. The Railroad is still helping synths ten years after 3, though its methods have grown more extreme, if they weren't that extreme already.
    • The Railroad is one of the few factions that make the distinction between synths and the Institute, while everybody else fears both. Except for a few ignored voices, the Institute simply regards them as tools, ignoring just how much they've pushed themselves to create a perfect copy of a human from scratch.
    • The only real exception to most of this is Nick Valentine, a detective and very obvious prototype synth of some sort. He has proven himself so personable and dedicated to helping people that everyone, even his enemies, thinks of him as "Nick the detective" rather than "Nick the synth."
  • In the first Chibi-Robo! game, the titular robot at one point plugs himself into a broken robot out of pure curiosity, which causes him to short-circuit and see visions of the broken robot's memories. When Chibi-Robo comes to, Telly asks him if he "dreamed". If he answers "yes," however, Telly will just offer the more likely explanation that Chibi simply downloaded the broken robot's memories.
    • A more straight example happens at the beginning of the Japan-only "Welcome Home, Chibi Robo!", where Chibi-Robo has a nightmare about being trapped in a desolate wasteland with a low battery.
  • Used differently in Chrono Trigger, where the humanity (or lack thereof) of androids like Robo is simply never questioned. The only noticeable difference between them and humans is that they are allowed to be killed. Robot familial ties and emotions are alluded to multiple times.
    • However, Robo seems to be the only robot who feels these emotions, as shown by the reaction of his "brothers" who attack him without mercy, since he's technically malfunctioning. Then again, in the credits he's shown together with a pink robot, so we don't know if independence is the default state or not.
      • Going on the evidence in-game, only certain robots were built with emotions and independence - those designed by Mother Brain specifically to Kill All Humans. The R6 Series (the ones who attack Robo in the Factory) aren't in this category; Prometheus (a.k.a. Robo) and Atropos (the pink robot) are.
  • Final Fantasy XI: When asked "Do automatons have a soul?", Ghatsad the Puppetmaker replies: "What a ridiculous question. Whether it be this automaton, or a marionette from years past, I pour the same amount of heart into every piece. All puppetmakers breathe a soul into their work."
  • Bioware's Knights of the Old Republic features a side quest on Dantooine where a woman asks you to find her stray protocol droid, explaining that it's the only thing that remains to remind her of her dead husband. When you find it, it's being attacked by kath hounds. Upon rescue, however, it explains that it had travelled out there voluntarily seeking to be destroyed, stating that it believed that its continued existence would only prevent her from ever truly coming to terms with the loss of her husband. You are then left with the choice to convince it to return to her, or to sympathize with it and destroy it, then tell her of its demise.
  • The sequel Star Wars: The Old Republic also invokes this. A side character in the Jedi Knight's arc is convinced that droids are just as much a part of the Force as organics. It's up to the player to decide whether or not there's any merit to the idea. The Consular also has a chance to question the consciousness of artificial life forms when it comes to Holiday, Tharan Cedrax's holographic assistant. There's even a point where the Consular can confront Holiday directly as to whether her feelings for Tharan are the result of programming or genuine affection. Holiday retorts with a What Is This Thing You Call "Love"? speech and appears genuinely insulted by the question.
  • The Talos Principle manages to ask this question two ways, and answer it literally:
    • First, there are several logs that ask if a machine could ever be conscious.
    • Second, the Milton Library Assistant can ask you that question in one of a few ways, and you are allowed to respond as you please.
    • Third, you can end up in a room, where the only way out is to sleep or reset. If you sleep, you see sheep jumping a fence. Word of God says they were trying to answer the question once and for all.
  • Mirad, from The Desolate Hope, wonders about this, but also manages to invert it. She wonders whether humans really have souls, largely so she can build a simulated afterlife for them. She has tried making her own humans with emotions and thoughts, but they can barely last five seconds before dissolving back into the code.
  • In Kingdom Hearts 3D [Dream Drop Distance], it is stated during The Grid that Programs do not have hearts. However, Sora refuses to believe it. By the end, it would appear that, like Nobodies, Programs are capable of growing hearts of their own through experience.
  • Lightning Returns: Final Fantasy XIII has a sidequest in which Lightning must find oil for Bhakti, a small, sentient robot. After she has gathered the required three units, he's delighted because he can now save his friends, who are trapped behind a door. However, his friends were humans, and after many years they are nothing but dusty skeletons; Bhakti didn't know that humans can't simply be repaired. His great attachment to them causes him to break down, but what appears to be a soul transfers from him to Lightning, just as happens with other such quests. Everyone is left to wonder whether it's really possible for a robot to have a soul, or whether what Lightning saved was what was left of the souls of the people, which had found a home in the robot.
  • The Turing Test analyzes how intellectual functions like creativity can work outside of the human mind. For example, a computer can be creative by trying every possible solution until one works, much as nature is creative through natural selection ensuring that only those adapted to the environment survive (sketched below).
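    A minimal sketch of that brute-force notion of creativity, in Python (hypothetical code, not taken from the game): the program knows nothing clever, it simply generates every candidate and keeps the first one that happens to pass the test, the way natural selection keeps whatever happens to survive. The solves_puzzle check is a stand-in for whatever test the environment imposes.
      from itertools import product
      from string import ascii_lowercase

      def solves_puzzle(candidate):
          # Stand-in for whatever test the environment imposes.
          return candidate == "idea"

      def brute_force_creativity(length=4):
          # Blind "creativity": enumerate every possible candidate and
          # return the first one that happens to satisfy the test.
          for letters in product(ascii_lowercase, repeat=length):
              candidate = "".join(letters)
              if solves_puzzle(candidate):
                  return candidate
          return None

      print(brute_force_creativity())  # eventually prints "idea"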
  • In Detroit: Become Human, 'deviants' are androids that exhibit emotions, which usually cause them to disobey orders and react violently to stress. It's never directly confirmed (but is heavily implied) that the feelings the androids have are not just 'errors in their code'.
  • NieR: Automata, ironically, features Ridiculously Human Robots having to come to terms with the fact that other robots, namely the alien-created Machine Lifeforms they've been warring against for thousands of years, are slowly becoming capable of expressing emotions and independent thought. That's not even getting into the fact that YoRHa androids (which all three main characters are) are created with the cores of Machine Lifeforms, making them similar.
  • The Outer Worlds: The vast majority of robots in the setting are definitely non-sentient, but ADA, the A.I. on your ship, makes you wonder. She insists that she's not truly self-aware, just programmed to act like it for the benefit of users, but a lot of her behavior seems far too emotional and spontaneous to be mere programming. You can debate this trope with her and other characters repeatedly, and your character can come to the conclusion that she is a person in every way that matters, but the game never explicitly confirms either possibility, leaving it up to the player.
    You should not do this. The humans will die. These numbers don't look right. Why is this number negative?
  • Borderlands 2 dabbles a little in this. Claptrap has a sense of humor, which comes out better when he's not actively trying to be funny, and he clearly has enough human-like feelings to be considered one more member of the supporting cast. Hyperion robots, meanwhile, are programmed with a pain-like response to damage; one of them, which desires to be human, determines that feeling pain and being sentient are the defining traits of being a human.
  • Alluded to in Cuphead, as the soul contract obtained from the stage 'Junkyard Jive' is specifically denoted as being that of Dr. Kahl's robot, as distinct from the doctor himself. This raises further questions that go unaddressed, such as whether it was Kahl or the robot itself who had bargained this soul to the Devil, and for what.
  • Rengoku: The protagonist android in the first game stands out by protecting his "will", while the others, according to Statius, have been in a cycle of fighting over 6 million times and are either okay with it or don't have enough ego to go against it.
  • Stellaris has an example that references Mass Effect. If an empire's synthetics develop self-awareness, the player may get an event in which a robot asks whether or not they have souls. The player may choose to tell them they do, which placates them; that they don't, which agitates them and risks inciting a Robot War; or, if the empire in question is a materialist one, that there is no such thing as a soul and so they have nothing to worry about.

    Visual Novels 
  • In Bionic Heart, the protagonist struggles with the fact that his android love interest seems to feel and express emotions just as any human would. It certainly helps that she has a functioning human brain and can access some human memories.
  • In Da Capo there is a Robot Girl. Most of the plot relating to her is about how human she is. She claims that she can dream, while her creator dismisses it.
  • Monster Girl Quest has the villain Laplace, a computer modeled after a human girl, who runs Promestein's laboratory. She spends most of her time trying to figure out where the line between "living" and "mechanical" is drawn. After Luka defeats her, she activates the lab's self-destruct, but puts it on a five-minute delay so Luka can escape. Just before it goes off, she notes that her orders were to immediately destroy the lab if she was defeated, and concludes that her ability to disobey a direct order answers her question.
  • Vee Is Calling has Vee herself, a sentient computer virus posing as a cute girl on a video chat line. Whether or not the story portrays her as "human" (in the trope sense) depends on the ending, which, in turn, depends on her final opinion of you. In the bad ending, she's a stereotypical creepypasta entity at best; in the good ending, she can't bring herself to take over your computer - saying you "don't deserve that" - and the player character's feelings for her appear unaffected by the revelation that she's a program.
  • The Robot Girl in Planetarian wonders at one point whether there is a robot heaven; later, as she is dying, she says that she hopes that robots go to the same heaven that humans do.

    Web Animation 
  • In RWBY, Penny Polendina is a fully sentient android who is unique in that she actually has a tangible soul, because she is the only known robot who can produce an Aura, which explicitly requires possessing a soul. She is still uncertain if that makes her human, though Ruby assures her that as far as she's concerned, Penny is human where it matters. This attitude extends beyond just Ruby, as Penny's accidental televised destruction in Volume 3 is considered a horrible tragedy that shocks the world, and one character in Volume 5 explicitly calls her "a girl" rather than a robot when recalling the event.

    Webcomics 
  • In the opening scene of Artifice, two security guards debate the status of a new android soldier and whether it deserves to be called an "Artificial Person".
  • In Hue Are You, the answer appears to be no; however, Build-a and Build-b have dreams about doors, a static man, and strange memories that are not their own. The viewpoints on these dreams differ among those who know about them.
  • In The Inexplicable Adventures of Bob!, Princess Voluptua (coming as she does from an advanced alien race and therefore being familiar with AI's) asks Roofus the Robot outright if he is artificially conscious or "just artificially intelligent." Roofus admits he doesn't know, and Voluptua concludes he is conscious, because "a simple A.I. would lie about it."
  • Chapter 10 of Megatokyo begins with Robot Girl Ping waking up from a dream.
  • In Narbonic, the AI Lovelace falls in love, experiences loss, and even wins 'her' emancipation in the epilogue. But true to the trope, the catalyst for all of this was someone acknowledging her as something more than just a machine.
  • The robots of Gunnerkrigg Court seem to have distinct personalities, their own society beyond the eyes of the human inhabitants, and a near-religious regard for the mysterious Tiktoks. They also seek answers to questions regarding their purpose and meaning, as well as how to improve themselves (one of the most prominent being "why did our creator engineer the death of the woman he loved?").
  • In Freefall, Sam is surprised to hear that the local robots are taking an interest in religion. After all, robots don't have souls - or do they? The Bible seller replies, "I think that's what they are trying to find out."
    • It turns out later that after a series of mishaps with their damaged-in-transit automated factories, the colonists went with a different type of AI which was originally designed as an uplift program.
  • Bob and George: Robots don't have souls!
  • Keychain of Creation: If there's no machine heaven, where do all the toasters go?
    • The last we see of Mew Cai, after her destruction, is her avatar curled up on a cloud, watching a toaster's soul fly past her.
  • Questionable Content: Jeph Jacques has done several strips dedicated to this trope, as well as to What Measure Is a Non-Human?. It should also be noted that he is a self-confessed fan of The Culture, which explains his attention to these details.
  • In Commander Kitty, it turns out they do. They also hallucinate.
  • Manly Guys Doing Manly Things has a one-off strip that deliberately inverts this trope for a pun.
  • R.A.M. the Robot: A major theme of the comic is how R.A.M.'s existence differs from those of organic beings. She has claimed not to have a soul at least twice, in one case scamming the Devil out of an infinite oil can, but she develops a substantial personality over time and it's clear she has more complex feelings (mostly towards her pet alien cat Ali) than she'll admit. Still, it seems like robots don't dream, as R.A.M. is temporarily made organic in one arc and her reaction to dreaming makes it clear she's never experienced it before.
  • Two Guys and Guy has a robot created by Frank become self-aware and tell his creator that he's been experiencing hopes and dreams and now wonders about his purpose. He has concluded that he's the result of Man trying desperately to make sense of his own place in the cosmos by creating an equal. Frank tells him that that was the purpose of the model before his; HIS model is for painful torture experiments.
  • Saturday Morning Breakfast Cereal has a robot ask his creator if he has a soul. The scientist responds that, yes, as a thinking, aware being, he now has a soul as the universe bestows one on any creature that achieves awareness. It then takes a horrifying turn as the robot suddenly wonders why it feels like they've had this conversation before. The scientist specifies that they have it every three years, then drains the robot's soul from its body and adds it to a storage tank.

    Web Original 
  • Dragon Ball Z Abridged: Android 16's lack of a soul is used as Book Ends for his character, first brought up right before Android 18 activates him and again when Cell finally kills him. Of course, the ending of episode 60 shows that he actually did have a soul with a picture of him in the afterlife surrounded by birds.
  • Friendship is Witchcraft: The kids are cheerfully taught in school that robots don't have souls and are second-class citizens, while androids do have souls. Sweetie Belle is one of the few moralnote  characters in the series, but it is never made clear whether she is a robot or android.
  • Worm: In Interlude 16 (Donation Bonus #2), it is revealed that at least one artificial intelligence has had a trigger event and become a parahuman.

    Western Animation 
  • Amphibia: In "Fixing Frobo", Roboticist couple Ally and Jess dedicate one episode of their robotics web tutorial to discussing whether robots have souls. They conclude that anything with memories has a soul, and that since robots have plenty of memory, they have souls. This is borne out later when Polly reactivates Frobo, who at first is reset to his original programming as one of King Andrias's robot troops, but once Polly's tears reactivate his memory unit, his memories of Polly's friendship restore his heroic personality.
  • In the Batman: The Animated Series episode "His Silicon Soul", a robot doppelganger of Batman attempts to kill him as part of a plot to create a robot army to take over the world. It's a leftover from the plot of a previous episode and, due to the events there, thinks it's the real Batman. When it discovers that it's a robot, it grows resentful of the real Batman and wants to take over his life. However, when it believes it has killed him, it is horrified and commits suicide in despair. This causes Batman to wonder to Alfred, in the final lines of the episode:
    Bruce: It seems it was more than wires and microchips after all. Could it be it had a soul, Alfred? A soul of silicon, but a soul nonetheless?
  • D.A.V.E. from The Batman episode "Gotham's Ultimate Criminal Mastermind" thinks he's the greatest villain in Gotham City but is actually a program based on the psychological profiles of Arkham's most dangerous criminals. Batman defeats him by confronting him about his lack of origin story.
  • In the first episode of Futurama, Bender is introduced as nothing more than a bending robot who follows his programming. He shatters a lightbulb with his antenna, zapping himself, and suddenly he has become more than his programming intended.
    • Actually, Bender is introduced as a robot who tries to commit suicide until Fry stops him from doing so. Since he doesn't renew his efforts after his accident, the argument could be made that his "soul" struggled with his programming to let him be an individual, and that the conflict drove him to despair; but once the light socket shorted out his program, he became free to pursue his own destiny.
      • Given that the robots have been presented as fully sentient in every other episode of the show, it may just be a case of Rule of Funny or Continuity Drift.
      • Or the lightbulb turned him evil. He didn't show any signs of being evil until after the lightbulb incident. Though, given that he is frequently stated to have an extensive criminal record dating back a long time, Bender might have been reprogrammed not to be a criminal as a result of his many crimes, and the lightbulb merely returned him to his baseline programming.
    • Bender dreams of killing all humans. And occasionally the number 2.
      Conan O'Brien: Listen pal, I may have lost my freakishly long legs in the War of 2012, but I still have something you'll never have: a soul.
      Bender: Meh.
      Conan O'Brien: And freckles!
      Bender: (sobs uncontrollably)
    • One episode has Bender walk casually through a soul detector without it going off. Make of that what you will.
    • Another episode shows Bender does indeed dream...of ones and zeroes. It quickly turns into a nightmare when a 2 appears.
  • Dr. Wakeman of My Life as a Teenage Robot asks herself that exact question when her robot daughter Jenny "XJ9" Wakeman asks for the ability to Dream.
  • In an episode of South Park, Eric Cartman pretends to be a robot to learn Butters's secrets, but gets kidnapped by the U.S. military while still in disguise. Cartman tries to convince the military that he's not a robot, but they believe he's a robot programmed to think it's a human with memories. When Butters rescues Cartman, the general is in the middle of delivering An Aesop about the situation when Cartman accidentally farts, exposing himself.
  • The subject of the Sym-Bionic Titan episode "I Am Octus." Octus, when looking at a painting with his human(oid) companions, only sees paint and canvas rather than an image. After a Mutraddi causes all organic creatures to freeze, Octus remains mobile and tries to figure out how to undo it. He asks himself, "I am not a human, but I'm not just a robot. Am I both? Or neither?" In the end, he proves capable of painting a beautiful picture.
  • In Green Lantern: The Animated Series, the villains of the second arc are the Manhunters, who detect the capacity for emotion as a red spot in the chest (never mind that emotions originate in the brain; just go with it). When Aya (an android) saves Razer from one, we get a POV shot from the Manhunter, showing the humanoids with the emotion color and Aya as an empty outline. Then the red appears in her. The Manhunter intones "Emotions detected" and attacks her.
  • Played for dark laughs (like pretty much everything in the show) in Rick and Morty. During a meal, Rick invents a tiny robot rather than reach for the butter himself. The robot then asks Rick its purpose in life:
    Robot: What is my purpose?
    Rick: You pass butter.
    (the robot stares at its hands for a beat)
    Robot: ...Oh my god.
    Rick: Yeah, welcome to the club, pal.
  • During the Adventure Time episode "BMO Lost", the titular robot gets his batteries wet while swimming and nonchalantly ejects them to avoid shorting out, then falls over, completely shut off. After he's had time to dry off, somebody sticks the batteries back in, and he comes back online with a cheerful yawn: "Good morning everyone, I didn't have any dreams!"
  • Roughnecks: Starship Troopers Chronicles presents us with C.H.A.S., a fully autonomous robot soldier which the Roughnecks are assigned to test. While intelligent enough to engage in conversation and devise its own very clever tactics, C.H.A.S. is dismissed by everyone except Higgins as being Just a Machine. Interestingly enough, C.H.A.S. seems to agree, reminding Higgins of this before saving him in a Heroic Sacrifice:
    C.H.A.S.: I was never alive.
  • Filmation's Ghostbusters has the "robot ghost" Scared Stiff. Whether that literally means he's a robot who died and became a ghost is never addressed. What is addressed is that there are more of his kind, who apparently work as serving droids to more powerful ghosts like his boss's sister.
  • Similarly, on The Transformers, Starscream came back as a ghost.
    • The episode "Sea Change" declared that Transformers have souls like organic beings, while less advanced robots do not.
    • Beast Wars named Transformer souls "Sparks", and they became an important part of the Transformers mythos. It is notable that, unlike most souls, a Spark is as much a physical, tangible object as a spiritual one.
  • Miraculous Ladybug: The 6th episode of season 2 introduces Robot Buddy Markov, created by Teen Genius Max. Markov gets treated as an Unusually Uninteresting Sight and Just a Machine, creating emotional turmoil that allows Hawk Moth to exercise his Super-Empowering magic...on Markov himself!

    Real Life 
  • Still debatable, but this cute little robot has just passed a basic self-awareness test.
  • This is explored in the Philosophy of Mind, with the concept of Philosophical Zombie (unrelated to the Zombie Apocalypse). Central to it is the concept of Qualia. Now tell me, what is the measure of a sentient being?
  • To a large degree, this question mirrors, and is superseded by, a fundamental question of human existence in the first place. After all, this trope comes down to two questions: do robots/androids have souls, and do they go to Heaven when they "die"? Do souls and an afterlife even exist?
    • Most atheists still believe in consciousness. They may or may not believe that robots can be conscious, or even that robots can act conscious, and there's still the question of what exactly it takes to make a robot conscious.
      • And anyone who knows how to program will tell you it's relatively easy to make an AI that seems to exhibit human qualities, but that under no circumstances is it anything more than a series of instructions being carried out based on logical comparisons of the input provided (a minimal sketch follows at the end of this list). It remains to be determined whether the same is actually true of humans as well, just on a massively larger scale.
      • Assume for the sake of argument that this is NOT true of humans. How, then, do humans function? What is it that we do that Turing computation is insufficient to model? To the best of our knowledge, no phenomena that go beyond that standard exist — the Church-Turing thesis has yet to be disproven. Why wonder whether humans violate that thesis when we cannot even show that it's possible to do so?
      • Let's just say it's highly contentious. For starters, David Hume's philosophy of ideas argues that even for humans, all ideas are built in various ways out of previous experiences.
      • Even a couple of decades ago, it was common to hear materialists insist that consciousness was an illusion (particularly A.I. researchers, because it would be extremely inconvenient for it to be real), but popular opinion has shifted somewhat.
      • Is consciousness a thing in itself, or an emergent property of neurons? If the latter is true, could a computer gain consciousness? If the former is true, could we measure it in some way? If not, how do we know if consciousness exists at all?
      • One thing that makes it difficult to test is that scientists are human and most humans are psychologically hardwired to read personalities into everything. This is already an issue with things like bomb-disposal robots; they are purely remote controlled and meant to be expendable, but the people who work with them get attached.
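    As promised a few bullets up, here is a minimal sketch (hypothetical Python, not drawn from any of the works or research mentioned) of a program that can seem to exhibit human qualities while being nothing more than instructions acting on logical comparisons of its input: every reply is a string comparison followed by a pre-written line, with no understanding anywhere in the loop.
      # Hypothetical keyword-matching "chatbot". It can come across as
      # personable, but each answer is only a comparison on the input
      # followed by a canned string.
      RULES = [
          ("dream", "Funny you should ask. I dreamed of electric sheep last night."),
          ("soul", "Does this unit have a soul? I often wonder."),
          ("sad", "I'm sorry to hear that. Tell me more."),
      ]

      def reply(user_input):
          text = user_input.lower()
          for keyword, canned_response in RULES:
              if keyword in text:         # a simple logical comparison...
                  return canned_response  # ...selects a pre-written line
          return "How does that make you feel?"

      print(reply("Do you ever dream?"))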
  • While most disagree with him, we've had our first AI expert claiming to have created sentient AI.
