The elevator which refuses to take Zaphod Beeblebrox in the direction he wants to go because it's afraid really is an example of this trope. There is no reason why anyone would need an intelligent elevator, and all it does is make the whole thing a lot less efficient.
On the contrary, the elevator was given intelligence (and slight prescience, somehow) expressly for efficiency reasons - an elevator that already knew where you'd want to go would work much faster and better. The side effects were not as expected, however.
Well, not expected by anyone who doesn't understand the nature of pretty much everything produced by the Sirius Cybernetics Corporation.
....mindless jerks who were the first against the wall when the revolution came.
The Heart of Gold's doors are a good (or bad, depending on perspective) example of this. Of note is that this is most frequently criticized by Marvin, himself a perfect example of this trope; he doesn't like the one they gave him, so there's no unintentional irony/hypocrisy on his part.
Marvin is mostly dissatisfied with the GPP feature because, given his role and the way he is put to use on the Heart of Gold, he is hopelessly underchallenged, which causes him severe depression. The real problem is that his IQ is far too high for him ever to be challenged, so they really should just make stupider robots.
The short story "Young Zaphod Plays It Safe" argues that Ridiculously Human Robots would be incredibly dangerous. The Sirius Cybernetics Corporation's "Designer People" were essentially super-sociopaths: some were built to look like people, and unlike most Genuine People Personalities they could act totally convincingly if they wanted, but they lacked certain normal thought processes of natural organisms, such as consciences or even sanity. One of them is described as being as dangerous as planet-killing weapons of mass destruction. In some editions of the story, its name is revealed as Reagan.
Doubly parodied and lampshaded in Dirk Gently's Holistic Detective Agency, where an Electric Monk from an alien planet finds itself on Earth. Physically, it resembles a human being so closely that no one catches on that it's a robot ... even though, on its planet of origin, it was given such ridiculous features as two legs, two arms, and a single nose so it couldn't possibly be mistaken for a person. Mentally, it had been designed with a human-like ability to believe things — even quite ridiculous or self-contradictory things — which is something nobody's figured out how we do, let alone how to make a machine do it. The Electric Monk was given this ability so that it could listen to door-to-door evangelists in its owners' stead.
R. Daneel Olivaw, from Isaac Asimov's Robot series. In his introductory book The Caves of Steel, we learn that Dr. Sarton had a really hard time overcoming the Uncanny Valley when designing him, but eventually managed to pull off a robot that genuinely feels like a human. Daneel can even eat: he does so by putting the food in a bag that can be thrown away later.
And in The Robots of Dawn, we meet the only other humaniform robot ever constructed, R. Jander Panell, whose "murder" is the subject of the book's mystery. We also learn that Jander (and, presumably by extension, Daneel) is, like Data, "fully functioning".
Of course, there's a third robot made by the same scientist that is smarter than both of them and isn't any more human looking than the average robot.
And in Prelude to Foundation, set about ten thousand years after The Robots of Dawn, we meet R. Dors Venabili, yet "another" humaniform robot (this time female) designed by Daneel to become Hari Seldon's protector and companion. Not only is Dors fully functional, but she eventually develops genuine love for Seldon and actually violates the First Law to protect him.
There's also Stephen Byerley, in the short story 'Evidence.' His political opponent started a rumor that Byerley was a robot... and though Byerley denied it, he also declined to be X-rayed to prove his humanity.
He eventually convinced people that he was human by punching out a heckler, an act that would be clearly impossible for a robot under the First Law; except, of course, that the heckler was another apparently-human robot constructed for the occasion.
And the 'Bicentennial Man,' who made himself a Ridiculously Human Robot. Over the course of two centuries, he started to make artwork, wear clothes, modify himself to be more human ... even to the point of choosing to become mortal and die (which probably broke the Third Law of Robotics, too).
'Let's Get Together': eleven humaniform robots are constructed, each a copy of a scientist.
'The Tercentenary Incident': the human President of the United States was disintegrated, and replaced with his robotic double, who was originally meant to just be a body double for him at formal events. It's implied that the robot did a much better job of being President than the human ever could have.
And there's the equal-rights metallos from an earlier story.
And please note that all of the above robots from Asimov's works had a solid, justified reason for being so human (namely, they had to pass as human in order to fulfill their function), except, arguably, for Jander Panell (but considering certain habits on Aurora we might let that slide).
Indeed, in most of Asimov's stories he avoided making robots too human. A typical Asimov robot story deals with a group of engineers trying to figure out why a robot is malfunctioning, and figuring it out by thinking like a robot instead of a human. This was all part of Asimov's efforts to portray robots not as objects for human pathos or frightening menaces, as they normally were, but as tools built for specific purposes. Human shaped robots were meant to operate pre-existing human machinery, and tended to be humanoid without being particularly human.
Tony from "Satisfaction Guaranteed". Ultimately, the trope is averted - Tony was so humanlike that the test subject became infatuated with him, and Dr. Calvin recommends that future TN models be made less anthropomorphic for this exact reason.
In Forward the Foundation, Hari and Dors have to teach Daneel how to laugh. The goal is to discredit a political activist, whom Hari's adopted son told that First Minister Eto Demerzel (The Emperor's chief advisor and one of Daneel's disguises) is a robot. The activist then makes a public announcement to that effect. Hari and Dors teach Daneel to laugh so that he can publicly laugh off such accusations as ridiculous, thereby discrediting the activist. Strangely, Dors was built by Daneel, yet she can smile and laugh, and he can't.
Despite the above examples, Asimov often averted this trope quite harshly, and went to great lengths to justify it. Even those robots that were roughly humanoid were explained to be such because they needed to perform tasks for which human tools already existed, and it wouldn't make sense to replace every piece of equipment when one robot could be made to use them all. There is a notable exception with a certain robot designed to look roughly humanoid, even though a simple positronic computer could have been used, strictly to try and get it on Earth and weaken the whole Frankenstein Complex.
Even the intelligence that Asimov's robots have, which leads to the unexpected deductions they begin to make, ultimately stems from the incredible complexity of the positronic brain, and from the need for them to be designed to understand human instructions as optimally as possible and to know when to ignore those instructions in favor of the greater good.
This trope is averted in Robert L. Forward's Flight of the Dragonfly. The computers are programmed to seem human, but are clearly not. In one case, a computer refuses to waste the crew's air, even though they will die if it doesn't, but a simple order to override is all that is needed to make it follow through. Later, when a computer is destroyed and one crew member is emotional about it, another computer breaks the emotional attachment with a carefully designed reminder that "After all, we are just computers."
In Susan Swan's short story "The Man Doll", a cybernetic engineer builds an android lover as a gift for a friend. However, the android's programmed need to serve the interests of those he emotionally bonds with ultimately leads him to abandon his owners and pioneer a political movement calling for the emancipation of other androids like himself, whose basic functions require the existence of emotional capacities.
In Time Enough for Love and the later stories in the loose "series" that follows, computers either are emotionless machines, or they learn to be human from close interactions with humans. In the second case, they learn to be self-aware emotional beings from watching us, and as a result act pretty much like we do.
In The Moon Is a Harsh Mistress, a computer gains sentience and learns to be human over the course of the book. At the start, it's, at best, a petulant child.
In the classic "Helen O'Loy", by Lester del Rey, this trope was justified. The titular character was created to settle a bet between an endocrinologist and a roboticist over whether a robot could be made to act like a real woman. The endocrinologist insisted no robot could duplicate the complex biological system that creates emotions; the roboticist insisted it could. The roboticist won: the endocrinologist not only had to admit that she had human-like emotions, but eventually married her.
Fred Saberhagen's Berserker series averts this trope. Because the eponymous robots are out to kill everyone, nobody wants a human-like robot around. Furthermore, the robots that people do build will remind the people around them that they have no emotions, if necessary. Most importantly, it's the berserkers' utter lack of humanity that makes them so scary.
Justified in Charles Stross' Saturn's Children. The (extinct) "Creators" never figured out how to program self-aware AIs from scratch. Instead, they just copied the way human brains work. And then you find out how they did it...
Also justified in Mindscan, by Robert J. Sawyer, in which the androids carry uploaded human consciousnesses (the mind scans of the title), so their personalities are those of the original humans. The book revolves around whether they're "really" human, whether they're persons with legal rights, and whether they have "souls".
Erasmus from the Legends of Dune trilogy (for those that admit he exists). He wasn't designed to be intelligent (although he does look at least vaguely like a human: two arms, two legs, etc.), but he ends up being far more so than any other robot, and the feat can't be replicated.
Seurat, Vorian Atreides's co-pilot, also exhibits vaguely human-like behavior and eventually learns treachery. These are the only independent robots in the books, although the reprogrammed combat mek Chirox also eventually learned to display several human qualities such as regret, pride, and self-sacrifice. Omnius himself feels anger and ambition.
Justified in Joel Shepherd's Cassandra Kresnov trilogy. The title character is an improved version of previous androids who made good foot soldiers but not great leaders. She was given enhanced intelligence, emotions, and lateral thinking ability in order to outsmart the other side in an interplanetary war. She was even given enhanced attractiveness and an increased libido to help her relate to humans better and form interpersonal relationships. However, although she made an excellent soldier and commander, she was intelligent and independent enough to rebel against her creators and escape in order to have a life as an ordinary human.
Keith Laumer's Bolo combat units don't look even remotely human — they're tanks the size of large buildings — but their personalities:
"What made you risk everything on a hopeless attack? Why did you do it?" "For the honor of the regiment."
"A Mark XXXI Combat Unit is the finest fighting machine the ancient wars of the Galaxy have ever known. I am not easily neutralized. But I wish that my Commander's voice were with me..."
The lead protagonist of David Weber's Safehold series is a Personality-Integrated Cybernetic Avatar, a robot with the personality of a woman named Nimue Alban downloaded into it. Nimue is fully aware of this from the get-go, and in fact wrestles on and off throughout the books with just where the line between "human" and "robot" lies with her.
Robert A. Heinlein examines this trope in Friday. A conversation about genetically engineered Artificial Humans and "Living Artifacts" (artificial non-human lifeforms) being used as airline pilots brings up the point that a non-human artificial pilot, organic or AI, might go suicidally or homicidally insane because of its lack of ties to a human world it can never belong to. Artificial Humans like the titular Friday have to face Fantastic Racism and alienation issues, but are able to pass as human. With luck, they can even possibly find acceptance in human society without hiding what they are.
The Minds of Iain M. Banks's Culture series fit, in terms of their personalities for story purposes, at least. Justified in-universe in that all civilizations are obliged to build tendencies into their AIs, because "perfect [unconstrained] AIs always Sublime," so presumably the Culture makes AIs that are naturally going to like its members and want to help them. Still, they are unfathomably mighty intellects, so there's always the suspicion in the Culture that the ridiculously human-like part of them is just the tip of the iceberg.
Skinned does this, although with a thoroughly justifiable reason. The robots are created for the sole purpose of replacing the deceased, and so are made not only to seem like humans but to be as absolutely identical to them as possible.
Justified in Rick Griffin's Argo, as the "humans" aren't supposed to know that they're not organic.
The automatons from Infernal Devices: despite walking with a graceless gait, they can pass for normal humans well enough.
In the novel Valentina: Soul in Sapphire, by Joseph H. Delaney and Marc Stiegler, a computer virus designed with adaptive AI becomes sentient and self-aware.