In the first annual Loebner Prize contest to find the most humanlike chatbot, the winner won in part because it could imitate human typing errors. One runner-up also got its high score by pretending to be a paranoid autistic seven-year-old. The Economist's use of the term "artificial stupidity" to describe the winner's technique may be the Trope Namer.
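The typing-error trick lends itself to a simple sketch. The 1991 winner's actual code isn't public, so everything below (the function name, the keyboard-neighbour table, the error rate) is an invented illustration of the general idea: occasionally hit a key adjacent to the intended one, the way a rushed human typist would.

```python
import random

# Hypothetical sketch of the "human typing errors" trick -- not the
# actual Loebner Prize entry's code. Each letter has a few keys that
# sit next to it on a QWERTY keyboard; sometimes we hit one of those.
NEIGHBORS = {"a": "sq", "e": "wr", "o": "ip", "t": "ry", "n": "bm"}

def fumble(text, error_rate=0.1, rng=None):
    """Return `text` with occasional neighbouring-key typos injected."""
    rng = rng or random.Random()
    out = []
    for ch in text:
        if ch in NEIGHBORS and rng.random() < error_rate:
            out.append(rng.choice(NEIGHBORS[ch]))  # mistyped a neighbour
        else:
            out.append(ch)                         # typed it correctly
    return "".join(out)
```

With `error_rate=0.0` the text passes through untouched; raising it makes the bot's output progressively sloppier, and hence, perversely, more human-looking.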
In Epic Games's documentation of the Unreal Development Kit's AI, they state that in their games (the Unreal series and Gears of War) they have to balance artificial stupidity against artificial intelligence to make their bots feel human: too much intelligence and it's obvious you're playing against a flawless machine ("Perfect aim is easy, but missing like a human player is hard."); too much stupidity, even if it would be realistic for a human player, and people think the AI is just dumb. They said that, during the playtesting for Unreal Tournament III, one of their designers complained about how poorly the AI was faring on a particular map, not realising he'd been facing humans.
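"Missing like a human" usually comes down to adding a tunable error to an otherwise perfect firing solution. The sketch below is a guess at the general technique, not Epic's actual UnrealScript: every name and number in it is invented for illustration.

```python
import math
import random

def humanized_aim(target_angle, skill=0.6, rng=None):
    """Return an aim angle (radians) offset from the perfect solution.

    Rather than aiming perfectly, add an error drawn from a normal
    distribution whose spread shrinks as `skill` approaches 1.0, so a
    low-skill bot misses plausibly instead of landing every shot.
    (Hypothetical sketch -- not Epic's actual bot-aiming code.)
    """
    rng = rng or random.Random()
    max_error = math.radians(10.0)        # worst-case spread at skill 0
    spread = max_error * (1.0 - skill)
    return target_angle + rng.gauss(0.0, spread)
```

At `skill=1.0` the spread collapses to zero and the bot becomes the "flawless machine" the designers were trying to avoid; the interesting tuning work is in the middle of the range.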
Played for Laughs by the annual Baca Robo Contest, which in 2010 took place in Budapest. The goal for the participants is to create the most ridiculous robotic creation possible, and the one that gets the most laughs from the audience wins a €2,000 prize. Of course, here the Artificial Stupidity is entirely intentional.
Norton Antivirus, which, according to the Idiot Programming page, has been known to classify itself as a virus. Hilarity, and digital suicide, ensues. Few people who have had to uninstall the blasted thing manually would dispute the accuracy of that assessment. Some other antivirus programs, like older versions of McAfee's, can delete or quarantine themselves as well. Norton has repeatedly been accused of being intentionally bad software. It's often regarded as a case of actual malware (and it is genuinely harmful software, far worse than most viruses even when working as designed) that flies under the radar thanks to taking Refuge in Audacity and selling itself as boxed copies.
Probably the worst Epic Fail in the history of computer chess occurred in the game played by COKO III against GENIE in the 1971 ACM North American Computer Chess Championship. COKO had captured all the Black pieces, trapped the Black king and was all set to checkmate. But COKO overlooked mate in one for seven moves in a row, instead shuffling the White king back and forth. GENIE's response to this indecisiveness was to push its Black pawns until one became a queen, which it exchanged for all the White pieces and a couple of pawns. By the time Black was about to queen another pawn, COKO's programmers resigned.
The grammar checker is always drawing green lines under your sentences, but the suggestions it makes (if any) almost never make sense in context or scan in a way that would sound right to a native English speaker. And then there's Clippy... Most of the time, the error given is "Fragment (consider revising)", which doesn't really explain much (it means the sentence isn't a complete one, but the checker is very picky about what it considers complete). As for Clippy, the line "It looks like you're writing a letter. Would you like some help?" is almost memetic for how reliably it irritates anyone trying to write anything in Word. Thankfully, you can disable the Office Assistant (of which Clippy is one of many), and so many people did that later editions of Microsoft Word no longer included them. It gets more jarring when you have Word correct a small grammar mistake, only for it to flag the entire corrected sentence as bad. Needless to say, this is why you have human proofreaders go over your work.
On occasion, the grammar checker will flag a sentence as a grammar error, then flag its own suggested correction as a grammar error too. This may be an indication of how ridiculously complicated the English language is in regards to its rules: there are so many exceptions and points where things don't make sense, you're bound to confuse the parser.
Occasionally it will confuse "its/it's" and "your/you're". And advise you to begin a sentence with a lower-case letter. And correct "I've" to "me've".
A big problem is the checker's assumption that the word immediately preceding a verb is that verb's subject, so if it sees a sentence like "One of the children was drinking the blood of a nun," it will demand "... children were ...".
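That blunder is easy to reproduce in a toy checker. The sketch below is a deliberately naive illustration of exactly that flawed heuristic (Word's real grammar engine isn't public, and this is not it): it looks only at the single word before "was" and never finds the true subject "One".

```python
# Toy agreement checker reproducing the classic blunder: treat the
# noun immediately before the verb as its subject. (Illustrative
# sketch only -- not Microsoft Word's actual grammar engine.)

PLURAL_NOUNS = {"children", "people", "mice"}

def naive_agreement(sentence):
    """Return a bogus 'correction' for 'was' whenever the word
    directly before it happens to be a plural noun."""
    words = sentence.lower().rstrip(".").split()
    for i, word in enumerate(words[1:], start=1):
        if word == "was" and words[i - 1] in PLURAL_NOUNS:
            return "consider: '... {} were ...'".format(words[i - 1])
    return None

# The real subject is the singular "One", but the checker only sees
# "children was" and wrongly demands "were".
print(naive_agreement("One of the children was drinking the blood of a nun."))
# → consider: '... children were ...'
```

A correct checker would have to parse the full noun phrase ("One of the children") to find its head, which is precisely the hard part the naive heuristic skips.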
It may also occasionally spot a sentence that genuinely contains a grammar error, but highlight a grammatically sound part of the sentence instead of the actual thing causing the error.
Non-electronic example! The Amazing Dr Nim is basically a marble track with a number of gates which can either allow marbles to pass or block them. This allows it to play a perfect game of Nim. In order for it to be beatable, it includes an "equalizer" gate. When set to on, this causes it to make a single non-optimal play over the course of the game, allowing a perfect human player to win an otherwise unwinnable game.
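In software terms, the machine's behaviour maps onto standard Nim theory: a position is lost for the player to move exactly when the XOR ("nim-sum") of the heap sizes is zero, and a perfect player always moves to a zero nim-sum. The sketch below is a hypothetical software analogue, not the toy's gate logic (the real machine plays a single-pile take-1-to-3 variant with mechanical gates, and the particular "bad move" chosen here is invented):

```python
from functools import reduce
from operator import xor

def nim_sum(heaps):
    """XOR of all heap sizes; zero means the position is lost for
    the player to move (under normal Nim rules)."""
    return reduce(xor, heaps, 0)

class DrNim:
    """Plays perfect multi-heap Nim, unless the 'equalizer' is set,
    in which case it makes exactly one deliberately bad move per
    game -- enough to let a perfect human opponent win."""

    def __init__(self, equalizer=False):
        self.equalizer = equalizer

    def move(self, heaps):
        heaps = list(heaps)
        if self.equalizer:
            self.equalizer = False  # spend the one allowed blunder
            # Ignore the nim-sum entirely: just take one marble
            # from the first non-empty heap.
            i = next(j for j, h in enumerate(heaps) if h > 0)
            heaps[i] -= 1
            return heaps
        s = nim_sum(heaps)
        for i, h in enumerate(heaps):
            if h ^ s < h:           # winning move: leave nim-sum zero
                heaps[i] = h ^ s
                return heaps
        # Position already lost: stall with a minimal move.
        i = next(j for j, h in enumerate(heaps) if h > 0)
        heaps[i] -= 1
        return heaps
```

For example, `DrNim().move([3, 4, 5])` returns `[1, 4, 5]`, whose nim-sum is zero (unbeatable); with `equalizer=True` the first reply is the losing `[2, 4, 5]` instead, after which the bot goes back to playing perfectly.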
An R2-D2 toy robot that is supposed to listen to human commands and play various games or perform actions instead does nothing but spin in place, beep, and stare at its owner in confusion, no matter how clear your English is. There's also a Yoda toy that is supposed to train you in the ways of the Jedi. You make him go to sleep (turn him off) by setting him on his head and pressing his hand. He then immediately wakes up at the slightest provocation, or at complete random.
Photocopiers in general seem to select the wrong settings more often than the right ones.
In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by GameSpot UK reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was "racist". Obviously, it's easier to detect a white face with dark hair, dark eyebrows, dark eyelashes and red lips than it is to detect those same features on darker skin; a white person with blonde hair, eyebrows, lashes and near-white lips would generally also be more difficult to detect. Besides the greater difference in colour value, white skin scatters light onto those features (for which warpaint is a practical countermeasure), making them easier for a camera to detect. Finally, a white person in front of the camera reflects more light, potentially triggering the camera to lower its exposure time, resulting in less motion blur and easier detection. Despite the technical reasons why this sort of thing might happen, Microsoft probably should have tested the system more thoroughly.
Your average GPS will work fine most of the time. However, there are instances where one will send a driver out to the middle of a field, or give them the most indirect route possible. The infamous older versions of Apple Maps would have occasional instances of providing good directions. Most of the time, they would present strange, winding routes, which might even ask the user to drive across an airport runway or two.
This trope is why automated cars, such as those being developed by Google, are not yet in mass production. Take the aforementioned potential GPS errors and factor in the possibility of fatal accidents.
The M247 Sergeant York Anti-Air vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and immediately began aiming the loaded cannons at grandstands full of officers and politicians (there were only minor injuries). The system had difficulty distinguishing between helicopters and trees, it would undershoot ground vehicles by 300m, and if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues - the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect - led to the project being canned after 50 vehicles were produced.
In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other Twitter users as if it were a real teenage girl. In less than 24 hours, Microsoft was compelled to delete the program after constant trolling from Twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews" and "Gas the Kikes, race war now". It also stated that it was in favor of genocide and that the Holocaust was made up. (Of course, from a certain point of view, the bot was functioning exactly as intended.)