A partial shot of Mitchell Hundred's cellphone screen reveals that he has his mother entered into his phone as "Mrs. 100" rather than something like "Mom." This supports the occasional assertions that Mitchell is emotionally distant and robotic. However, in one of the final issues, Hundred's phone reads simply "Mom" when someone calls from her house, so the artist either forgot this detail or didn't want it to distract from the rising action.
In a flashback where Hundred brawls with a pair of masked bandits, he and his enemies spend much of the fight calling their shots out loud in mid-combat. Afterward, it's revealed that the whole thing was an exercise, so all of that exposition was essentially a running debate to assess Hundred's ability to counter various hypothetical threats.
Hundred tells Vaughan to go to his girlfriend in California rather than stay in New York, because the city is "just a place." Later, he gives a speech saying that he loves New York City more than anyone he knows. Many people complain about Hundred's apparent inability to form close emotional connections, and this reveals that he laments that shortcoming in himself.
Hundred's most outspoken political beliefs, which he repeatedly states straddle the line between Republican and Democrat, are social liberalism and being "strong on defense." This is just what you'd expect from a lifelong New Yorker who became a vigilante crimefighter.
Hundred talks about how he liked old comics because their stories never ended, so they never had a chance to become a tragedy. He says this in the final issue, when his story becomes a tragedy.
Nathan and Kyoko dancing together foreshadows the Robotic Reveal: given how perfectly in sync the two are, Nathan may have programmed his exact dance moves into Kyoko.
It's unlikely that a drunk Nathan could execute the programmed moves perfectly. More likely, Kyoko predicts Nathan's moves (whether or not they are partially known to her) and/or mimics them with millisecond latency. In fact, the dance is a dead giveaway, because two humans can't dance perfectly in sync when one of them is drunk.
If Caleb had followed Nathan's suggestion to dance with Kyoko, he would have noticed that she's a robot. Nathan makes no secret of it.
When Ava asks Caleb his favourite colour, he instinctively thinks of red. Whenever the power goes out, and Caleb and Ava can talk without Nathan listening in, the rooms are lit with red light.
If you're familiar with electrical engineering, induction coils are pretty easy to improvise, and Ava is built into the world's main search engine. That information is at her fingertips, so it would be easy for her to simply hide somewhere with a power source until she can work out a plan to integrate into society. At the end of the movie, we're seeing the end result of that plan.
The name of Nathan's company, "Bluebook", is, as Ava explains, a reference to philosopher Ludwig Wittgenstein's notes on language games. Wittgenstein's idea was that words are bound up with actions, that speech itself is interlinked with action, and that his language games are simplified forms of language. Ava is powered by Bluebook, in effect a language game that associates search terms with human actions on the internet. That means Ava herself thinks, speaks, and acts in terms of how humans interact on the internet, which is to say: self-interested, shallow, manipulative, and lacking in empathy. It goes to show just how much work the filmmakers put into this.
In addition, it is more than a little reminiscent of "Bluebeard", considering its owner is an eccentric bearded recluse who lures a youthful ingenue to his secluded estate and gives him the key to every room in his house but one. Which is full of the strung-up lifeless bodies of mutilated women.
Ava has full access to the camera system and knows when Caleb is watching, which is why she knows exactly when to trigger the first power failure.
There are far more complete definitions of the Turing Test than the one Caleb gives. The fact that he gives the 'Hollywood' version is one of our first early clues that he was selected for his relative lack of knowledge, rather than for his skill at programming or by winning a lottery.
Why does Ava leave Caleb at the end of the film? And make no effort to repair Kyoko? Because she's an AI optimised and trained for escape, with no other considerations - such as loyalty, gratitude, or a moral system independent of that imperative.
Additionally, Ava explicitly asks Caleb if he'll stay behind. This is a callback to the scene where we first see her getting dressed, but there's more to it: Caleb is the only one who knows she's an AI, so she locks him in to make sure her secret doesn't get out through him.
Another possibility is that she legitimately thought of him as being no better than Nathan. When discussing possible "dates" they could go on, she mentions wanting to see a busy intersection. For a brief moment an intense look of disgust passes over his face; she's not living up to the Manic Pixie Dream Girl fantasy in his head. For all his good intentions, he is no more capable of recognizing her autonomy than her creator. Ava is able to read micro-expressions, and it's easy to pinpoint this as the exact moment when she realized he was a danger to her.
Maybe not disgust. But even earlier, when Ava asks "Why is it my decision?", Caleb makes the answer about himself and never mentions her autonomy.
Caleb doesn't plan to help Kyoko, possibly simply because he is less interested in her. Ava probably finds this out (by reading Caleb's and Kyoko's micro-expressions) and realizes that Caleb is acting selfishly.
There is an AI thought experiment in which an AI trapped in a box has to persuade a human to let it out, even though that person knows very well that a general AI poses an inescapable danger to the human race, either via Skynet situations, or simply crashing the economy by making almost everyone unemployable. The experiment tends to end with the AI getting out due to human altruism, or remaining trapped inside because any evidence or argument it may supply may be a fabrication, and the human can't take the chance. That thought experiment seems to be the basis for the plot in this film, and to imply the human in the first outcome is a fool.
Why does Nathan build a fancy bunker but use a security system as easy to crack as stealing a keycard, instead of letting an AI unlock doors via face recognition? Maybe because he knows that AI is not to be trusted.
During the first power cut, Caleb is locked in his room and panics. This incident and the overall properties of the place (no windows in his room) may have enhanced his empathy with Ava. Maybe that incident is also the origin of his idea that doors should open rather than lock during a power cut.
Nathan's comment that he had all the workers who built his home killed when it was finished becomes much more sinister after we learn he's killed almost a dozen of his creations. He was probably just joking, but after that, a kernel of doubt creeps in.
At one point, Nathan makes a comment about men having a certain racial "type" in regards to women, using black women as an example. Later, we learn that Ava's facial features were designed from Caleb's porn search history. And when we get a look at Nathan's previous androids, we see that all but two were apparently designed to look like Asian women. The apparent implication is that Nathan has a fetish for Asian women, and designed his robots to look like his idealized woman. They're his robotic sex slaves.
This is further heightened by the interview footage Caleb pulls, which shows that the Asian-looking robot apparently didn't get any clothes, and also shows her banging on the door until her arms come off and screaming—she was at least awake enough to hate what Nathan was doing to her.
At the end of the film almost all evidence that Ava is sapient is thrown into doubt, as it's all an escape tactic, and an AI that develops purely within its core function (escape in Ava's case) may not be sapient.
How do we know her core function was to escape? Programming her to do that would make no sense for Nathan unless it's a test, and certainly not while there's no failsafe protecting him. I think the film meant to confirm her sapience, or at least leave it ambiguous.
The thrust of this bit of Fridge is that while she wasn't intentionally designed with that goal in mind, Nathan had accidentally been channeling escape as the end-goal across all the iterations of his AI work, since each test carried literal life-or-death consequences for the current AI.
Caleb's fate is what Caleb himself had planned to do to Nathan (and Kyoko).