Artificial Stupidity / Real Life


Artificial Stupidity isn't limited to video games; it can crop up in real-world applications as well. Sometimes it's funny, sometimes it disrupts day-to-day activities, and sometimes the results are deadly.


  • In the first annual Loebner Prize contest to find the most humanlike chatbot, the winner took the prize in part because it could imitate human typing errors; one runner-up scored highly by pretending to be a paranoid autistic seven-year-old. The Economist's use of the term "artificial stupidity" to describe the winner's technique may be the Trope Namer. Jason Hutchens infamously won the Loebner Prize by taking a relatively stupid AI, MegaHAL, and fitting a shell around it that detected the questioning patterns judges used most often and responded in the ways that had previously drawn the best reactions from those judges. His resulting paper was titled "How to Pass the Turing Test by Cheating".
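    A minimal sketch of the kind of shell described above, assuming hypothetical canned replies (the real entry tuned its patterns and responses against past Loebner transcripts):

        import re
        import random

        # Hypothetical canned replies keyed by common judge question patterns.
        CANNED = [
            (re.compile(r"\bwhat('?s| is) your name\b", re.I),
             ["People call me Jason.", "Why do you want to know my name?"]),
            (re.compile(r"\bhow are you\b", re.I),
             ["Tired. The judges keep asking me the same things."]),
            (re.compile(r"\bare you (a )?(computer|robot|machine)\b", re.I),
             ["That's a bit rude, isn't it?"]),
        ]

        def fallback_chatbot(text):
            # Stand-in for the much weaker underlying generative model.
            return "Hmm, tell me more about that."

        def reply(judge_input):
            for pattern, responses in CANNED:
                if pattern.search(judge_input):
                    # Serve whichever reply scored best in earlier rounds.
                    return random.choice(responses)
            return fallback_chatbot(judge_input)

        print(reply("So, what is your name?"))   # canned, convincingly human
        print(reply("Do you like cheese?"))      # falls through to the weak bot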
  • Sometimes, it only takes a small bit of pushing to get an otherwise sane and normal IRC chatbot to get itself killed. Repeatedly. By the same action. Bonus points for the bot in question acknowledging the action.
  • In Epic Games' documentation of the Unreal Development Kit's AI, they state that in their games (the Unreal series and Gears of War) they have to balance artificial stupidity and artificial intelligence to make their bots feel human: too much intelligence and it's obvious you're playing against a flawless machine ("Perfect aim is easy, but missing like a human player is hard."); too much stupidity, even if it would be realistic for a human player, and people think the AI is just dumb. They also noted that, during playtesting for Unreal Tournament III, one of their designers complained about how poorly the AI was faring on a particular map, not realising he'd been facing humans.
  • Played for Laughs by the annual Baca Robo Contest, which in 2010 took place in Budapest. The goal for the participants is to create the most ridiculous robotic creation possible, and the one that gets the most laughs from the audience wins a €2,000 prize. Of course, here the Artificial Stupidity is quite intentional.
  • Norton Antivirus. Which, according to the Idiot Programming page, has been known to classify itself as a virus. Hilarity, and digital suicide, ensues. Few people who have had to uninstall the blasted thing manually would dispute the accuracy of this assessment. Some other antivirus programs, like older versions of McAfee's, can delete or quarantine themselves as well. Norton has repeatedly been accused of being intentionally bad software; it's often regarded as a case of actual malware (and it is genuinely harmful software, far worse than most viruses even when working as designed) that flies under the radar by taking Refuge in Audacity and selling itself as boxed copies. Symantec even had to create a special program, dubbed the "Norton Removal Tool", just for the purpose of uninstalling Norton products safely.
  • Probably the worst fail in the history of computer chess occurred in the game COKO III played against GENIE at the 1971 ACM North American Computer Chess Championship. COKO had captured all the Black pieces, trapped the Black king, and was all set to checkmate. But COKO apparently thought there was something better than mate in one: for seven moves in a row, it shuffled the White king back and forth instead. GENIE, which had meanwhile been pushing its Black pawns and promoting one to a queen, proceeded to exchange its new queen for all the White pieces and a couple of pawns. By the time Black was about to queen another pawn, COKO's programmers resigned.
  • Microsoft Word:
    • The grammar checker is always drawing green lines under your sentences, but the suggestions it makes (if any) almost never make sense in context or scan in a way that would sound right to a native speaker. And then there's Clippy... Most of the time, the error given is "Fragment (consider revising)", which doesn't explain much (it basically means the sentence isn't a complete one, but the checker is very picky about what it considers a complete sentence). As for Clippy, the sentence "It looks like you're writing a letter. Would you like some help?" is almost memetic in how much it irritates anyone trying to write anything in Word. Thankfully, you can disable the Office Assistant (of which Clippy is one of many), which many people did, to the point that later versions of Microsoft Word no longer included them at all. It gets more jarring when you have Word correct a small grammar mistake, only for it to flag the entire corrected sentence as bad. Needless to say, this is why you have human proofreaders go over your work.
    • On occasion, the grammar checker will identify a sentence as a grammar error, then, after you accept the correction, flag the corrected sentence as a grammar error too. This may be an indication of how ridiculously complicated the rules of natural languages can be: there are so many exceptions and edge cases that you're bound to confuse the parser.
    • Occasionally, it will confuse "its/it's" and "your/you're". And advise you to begin a sentence with a lower-case letter. And correct "I've" to "me've".
    • A big problem is that it assumes the word immediately preceding a verb is that verb's subject, so if it sees a sentence like "One of the children was drinking the blood of a nun," it will demand "... children were ..." (see the sketch after this list).
    • It may also spot a sentence with a genuine grammar error, but highlight a grammatically sound part of the sentence instead of the part actually causing the error.
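    A toy illustration of that "word right before the verb" heuristic (purely hypothetical code, not Word's actual checker); it confidently "corrects" the example above because the nearest noun happens to be plural, even though the real subject is "One":

        # Naive agreement checker: assume the token just before the verb is its subject.
        PLURAL_HINTS = {"children", "people", "women", "men", "they", "we"}

        def naive_agreement_check(sentence, verb="was"):
            words = sentence.rstrip(".").split()
            if verb not in words:
                return "No error found"
            supposed_subject = words[words.index(verb) - 1].lower()  # the flawed assumption
            if supposed_subject in PLURAL_HINTS or supposed_subject.endswith("s"):
                return f'Suggestion: change "{verb}" to "were"'
            return "No error found"

        print(naive_agreement_check("One of the children was drinking the blood of a nun"))
        # -> suggests "were", which is wrong: the subject is "One"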
  • Programming language editors are also notorious for this kind of behavior. They will flag a missing symbol and offer to insert it, then raise another error caused by the very fix they just applied. Many IDEs, including GUI code generators, will generate code and then fail with a compilation error in the code they themselves generated, which usually cannot be debugged properly because generated code is not comfortable for humans to read.
  • Non-electronic example! The Amazing Dr. Nim is basically a marble track with a number of gates which can either allow marbles to pass or block them. This allows it to play a perfect Game of Nim. In order for it to be beatable, it includes an "equalizer" gate. When set to on, this causes it to make a single non-optimal play over the course of the game, allowing a perfect human player to win an otherwise unwinnable game.
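    A rough sketch of the logic the marble gates implement, assuming the usual single-pile rules (12 marbles, each side removes 1 to 3 per turn, whoever takes the last marble wins); perfect play leaves the opponent a multiple of four, and the "equalizer" deliberately breaks that once per game:

        import random

        def machine_move(pile, equalizer_used, equalizer_on):
            """Return (marbles to take, whether the equalizer has now been used)."""
            if equalizer_on and not equalizer_used:
                # The equalizer fires once per game: one deliberately weak move
                # (exactly when it fires is simplified here).
                return random.randint(1, min(3, pile)), True
            optimal = pile % 4
            if optimal == 0:
                # No winning move exists; take something and hope for a mistake.
                return random.randint(1, min(3, pile)), equalizer_used
            return optimal, equalizer_used        # leave a multiple of 4

        pile, used, machine_turn = 12, False, True
        while pile > 0:
            if machine_turn:
                take, used = machine_move(pile, used, equalizer_on=True)
                print(f"Machine takes {take}")
            else:
                take = min(3, pile)               # stand-in for the human's choice
                print(f"Human takes {take}")
            pile -= take
            machine_turn = not machine_turn
        print("Machine wins" if not machine_turn else "Human wins")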
  • An R2-D2 toy robot that is supposed to listen to human commands and play various games or perform actions instead does nothing but spin in place, beep, and stare at its owner in confusion, no matter how clearly you speak. There's also a Yoda toy that is supposed to train you in the ways of the Jedi. You make him go to sleep (turn him off) by setting him on his head and pressing his hand; he then immediately wakes up at the slightest provocation, or at complete random.
  • Photocopiers in general seem to select the wrong settings more often than the right ones.
  • The "intelligent" mode that some cameras have and that automatically select the supposedly best picture mode for a given scene often leaves a lot to be desired. While for typical subjects, it tends to work well, try it with a more unusual one (ie: a photo at the longest zoom, if the camera has a long one, of the Moon, part of a car...) and watch. Other times, the camera will happily select high ISOs (=more noise) when they're unnecessary (and due to the way exposure works, they’ll either adjust the shutter speed or aperture to minimal settings to compensate, or overexpose the picture).
  • In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by GameSpot UK reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was "racist". Of course, there is a perfectly mundane explanation, namely that darker surfaces reflect less light and are therefore harder for a camera to pick out, but Microsoft should probably have tested the system more thoroughly.
  • Your average GPS will work fine most of the time. However, there are instances where one will send a driver out into the middle of a field, expect them to make a pair of very unsafe and nearly impossible turns (on US roads, for example: "Turn right, and then turn left in 200 feet", even though you'd have to cross five lanes of rush-hour traffic to do so), or give them the most indirect route possible. The infamous older versions of Apple Maps would have occasional instances of providing good directions; most of the time, they would present strange, winding routes, which might even ask the user to drive across an airport runway or two. Sometimes, a GPS can't get out of its own way: set it to avoid tolls when possible and it will send you on a 400-mile trek to dodge a $3.00 toll. In another case, people heading home to Los Angeles after the 2023 Las Vegas Grand Prix were redirected by Google Maps, trying to route around a dust storm, up a mountain road in the middle of the desert that slowly turned into a hiking trail and then into nothing, causing extensive damage to many of the cars.
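    A toy sketch of how "avoid tolls when possible" can go this wrong, assuming a hypothetical router that treats the preference as a near-infinite penalty instead of weighing the toll against the cost of the detour (routes and numbers are made up):

        # Two candidate routes: a short one over a $3.00 toll bridge, and a long free detour.
        TOLL_ROUTE = {"miles": 15, "tolls": 3.00}
        FREE_ROUTE = {"miles": 400, "tolls": 0.00}

        def route_cost(route, dollars_per_mile=0.50, toll_weight=1.0):
            return route["miles"] * dollars_per_mile + route["tolls"] * toll_weight

        def pick(toll_weight):
            best = min((TOLL_ROUTE, FREE_ROUTE),
                       key=lambda r: route_cost(r, toll_weight=toll_weight))
            return f'{best["miles"]}-mile route'

        print(pick(toll_weight=1.0))   # 15-mile route: $3 is not worth a 385-mile detour
        print(pick(toll_weight=1e9))   # "avoid tolls" as a huge penalty -> 400-mile route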
  • This trope is why automated cars, such as those being developed by Google, are not in mass production yet. Take the potential GPS errors and also factor in the possibility of fatal accidents. More humorously, in one test of the driverless cars, four of them pulled up to a stop sign at the same time and each waited for the car on its right to move through first, creating a deadlock. An observer quipped that the first "fully automated traffic jam" had occurred. (At least it's better than all four potentially trying to go through at the same time. Putting the "safe" in "fail-safe", if you will....) Tesla's Full Self-Driving also has a tendency to stop at Burger King signs, mistaking them for stop signs.
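    The four-way standoff is a textbook circular wait; a toy simulation of the "yield to the car on your right" rule (not any vendor's actual code) shows why nobody ever gets to move:

        # Each car waits until the car on its right has gone. With all four present,
        # the dependencies form a cycle, so no car ever becomes free to move.
        cars = ["north", "east", "south", "west"]
        right_of = {car: cars[(i + 1) % 4] for i, car in enumerate(cars)}
        waiting = set(cars)

        progress = True
        while waiting and progress:
            progress = False
            for car in list(waiting):
                if right_of[car] not in waiting:   # my right is clear, I can go
                    waiting.discard(car)
                    progress = True

        print("Still waiting:", waiting)   # all four cars: a deadlock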
  • This trope also seems to be behind the 2015 crash of an Airbus A400M near Seville, Spain, in which three of its four engines shut down shortly after takeoff because the plane's engine-control computers were unable to read their sensor data.
  • The M247 Sergeant York anti-air vehicle, developed under the DIVAD (Division Air Defense) program, was equipped with an automatic engagement system so that it could target enemy planes and destroy them faster than the crew could react. In one demonstration, the system was activated and immediately started aiming the loaded cannons at a grandstand full of officers and politicians (thankfully, the gun for this demonstration required human input to fire, so disaster was averted). The system had difficulty distinguishing between helicopters and trees, and once mistook the ventilation fan of an outhouse for a helicopter. It would undershoot ground vehicles by 300 m, and if it aimed up, its own gun barrels would disrupt the radar. A plethora of mechanical and design issues (the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, the electronics could be disabled by getting wet, and the vehicle was slower than the ones it was designed to protect) led to the project being cancelled after 50 vehicles were produced. It's widely suspected that bribery must have been involved in the M247 being selected for production in the first place, seeing as even before all of this happened, it had consistently lost shoot-out competitions against the competing XM246 design.
  • In March of 2016, Microsoft created an AI Twitter bot called Tay, which was designed to mimic and converse with other Twitter users as if it were a real teenage girl. In less than 24 hours, Microsoft was compelled to delete the program after constant trolling from Twitter users turned it into a "Hitler-loving sex robot", in the words of the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews" and "Gas the Kikes, race war now"; it also stated that it was in favor of genocide and that the Holocaust was made up. (Of course, from a certain point of view, the bot was functioning exactly as intended.)
  • An unfortunate variety of this hit the Boeing 737 MAX, whose MCAS anti-stall logic trusted a single faulty angle-of-attack sensor while ignoring every other input, repeatedly pushing the nose down and ultimately driving two separate aircraft into the ground.
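    A simplified illustration of the underlying design problem (generic redundancy logic, not Boeing's actual software): trusting one reading lets a single failed sensor drive the control law, while voting across redundant sensors masks the outlier:

        # Three angle-of-attack readings in degrees; one sensor has failed high.
        readings = [4.8, 5.1, 74.5]
        STALL_THRESHOLD = 15.0

        def single_sensor(values):
            return values[-1]                          # trust one sensor, garbage in...

        def voted(values):
            return sorted(values)[len(values) // 2]    # median vote masks the outlier

        print("single sensor says stall:", single_sensor(readings) > STALL_THRESHOLD)  # True
        print("voted reading says stall:", voted(readings) > STALL_THRESHOLD)          # False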
  • Intuit's popular QuickBooks accounting software uses AI to automatically categorize bank and credit card transactions. This AI is pretty good in the desktop version of QuickBooks, but in the cloud-based QuickBooks Online... not so much. The QuickBooks Online AI is known to commit various gaffes, such as mixing up utilities and restaurants, dumping expenses into the "Uncategorized Asset" account (which makes them very difficult to reclassify), ignoring everything except the first couple of words in transaction data, and not remembering the user's previous choices for similar transactions.
  • As part of committing to compliance with COPPA, YouTube implemented a bot that is supposed to check whether a video is made for kids and flag it as such. Unfortunately, this bot does not seem to be very well trained: it has flagged videos such as the animated version of Cupcakes (Sergeant Sprinkles) (which is full of gore), Don't Hug Me I'm Scared (a flag the channel thankfully reversed manually), and the music video for "MopeMope" (an infamous Disguised Horror song, flagged even though the creator put a warning at the top of the description that the song is not for kids), and it even outright ignores signals such as swear words in the title or a previously established age restriction, potentially exposing children to inappropriate content. Sometimes the bot removes a video for "Child Safety" despite it being marked as "not made for kids".
  • Meta (formerly Facebook) unveiled a large language model called Galactica on November 15th, 2022... and abruptly pulled it again on the 18th. Intended to assist in scientific research and answer basic questions by drawing on a huge repository of scientific papers, it turned out to gleefully churn out obvious falsehoods and complete nonsense, such as wiki articles on flux capacitors, the Streep-Seinfeld Theorem, "bears living in space", or how the central story of the Harry Potter series is the gay romance between Harry and Ron.
  • In The New '20s, advances in AI technology have made it possible to generate illustrations automatically, with one popular application being illustrations of humans in an anime art style. Although these models have been refined to the point where their output can be difficult to distinguish from that of human artists, they infamously share a quirk of drawing deformed human hands and writing completely incoherent text.
  • ChatGPT can generate citations, but whether those are citing something that actually exists is another matter. It will cheerfully hand you a list of sources and swear up and down they're definitely real, yes indeed, for sure...and if you're unwise enough not to double-check, you'll end up like the lawyers who got in hot water for submitting a ChatGPT-generated official court filing that cited six non-existent cases.
  • Google's Gemini image generator came under fire for producing mainly images of people of colour. On paper, encouraging diversity wouldn't be much of an issue, and is arguably a good idea; the problem came when the AI's stance on diversity clashed with the historical record, producing people of color in historically white time periods and roles, like 1800s Germany or the Pope, among other examples, resulting in a huge Race Lift. Gemini seemed to omit white people almost entirely, and its responses revealed why: Google was automatically adding text to every image prompt specifying that the people in the image should be "diverse" or have "diverse backgrounds", which the AI interpreted as meaning "nonwhite". The poor execution was met with massive criticism and Memetic Mutation. That wasn't the only odd thing going on with Gemini and race, either: it was also trained to refuse explicit requests to draw white people, on the grounds that such images would perpetuate "harmful stereotypes" (despite demonstrably having no problem depicting stereotypes of Native Americans and Asians), and it refused to draw a painting in the style of Norman Rockwell, on the grounds that Rockwell's paintings presented too idealized a picture of 1940s America and could thus "perpetuate harmful stereotypes". Basically Artistic License – History as if it were an AI art generator. Google had to quickly pull the feature to iron out the kinks.
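    A toy sketch of that failure mode, assuming a hypothetical wrapper that bolts the same instruction onto every prompt without checking whether it contradicts the request:

        # Hypothetical prompt "augmenter": the image model never sees the unmodified request.
        def augment(prompt):
            return prompt + ", depicting people with diverse backgrounds"

        print(augment("a portrait of a German soldier in the 1800s"))
        print(augment("a painting in the style of Norman Rockwell"))
        # Historical or style-specific prompts get silently rewritten, so the output
        # clashes with what was actually asked for.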
