History ArtificialStupidity / RealLife


MessageReason: Removing what is far too much unnecessary detail. If people are interested they can go look it up.


* In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by [=GameSpot=] U.K reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was [[http://www.pcworld.com/article/209708/Is_Microsoft_Kinect_Racist.html "racist"]]. Obviously, it's easier to detect a white face with dark hair, dark eyebrows, dark eyelashes, and red lips than it is to detect those same features on darker skin. A white person with blonde hair, eyebrows, lashes, and near-white lips would generally also be more difficult to detect. Besides a greater difference in color value, white skin will scatter the light onto those features (for which [[ThisMeansWarpaint warpaint]] is a practical countermeasure), making them easier to detect by a camera. Finally, a white person in front of the camera will reflect more light, potentially triggering the camera to lower its exposure time, resulting in less motion blur, making detection easier. Despite technical reasons why this sort of thing might happen, Microsoft should have probably tested the system more thoroughly.

to:

* In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by [=GameSpot=] U.K reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was [[http://www.pcworld.com/article/209708/Is_Microsoft_Kinect_Racist.html "racist"]]. Of course, there are perfectly valid reasons for it, namely being that it is easier to see and discern things in light colors than darker ones, but Microsoft should have probably tested the system more thoroughly.
MessageReason: None


* Probably the worst EpicFail in the history of computer chess occurred in [[http://en.lichess.org/aooMurBn#1 the game played by COKO III against GENIE]] in the 1971 ACM North American Computer Chess Championship. COKO had captured all the Black pieces, trapped the Black king, and was all set to checkmate. But COKO overlooked mate in one for seven moves in a row, instead shuffling the White king back and forth. GENIE's response to this indecisiveness was to push its Black pawns until one became a queen, which it exchanged for all the White pieces and a couple of pawns. By the time Black was about to queen another pawn, COKO's programmers resigned.

to:

* Probably the worst EpicFail in the history of computer chess occurred in [[http://en.lichess.org/aooMurBn#1 the game played by COKO III against GENIE]] in the 1971 ACM North American Computer Chess Championship. COKO had captured all the Black pieces, trapped the Black king, and was all set to checkmate. But COKO overlooked mate in one for seven moves in a row, instead shuffling the White king back and forth. COKO was evidently suffering from a form of ParalysisByAnalysis common in early chess computers which caused them to prefer longer winning combinations over shorter ones, but COKO somehow failed to play its obvious winning move even at the last possible moment. GENIE, which meanwhile had been pushing its Black pawns and promoting one to a queen, proceeded to exchange its new queen for all the White pieces and a couple of pawns. By the time Black was about to queen another pawn, COKO's programmers resigned.
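For the curious, the "prefer longer wins" failure mode has a textbook explanation. The sketch below is a hypothetical illustration in Python (not COKO's actual code): if every forced mate gets the same score, a king shuffle that still leads to mate looks exactly as good as mating immediately, and the usual fix is to discount mate scores by their distance from the root.

```python
# Hypothetical illustration, not COKO's actual code: why an engine can
# "see" a forced mate yet keep postponing it.
MATE = 10_000

def mate_score(ply_from_root):
    # Buggy version: "return MATE" makes mate-in-1 tie with mate-in-7,
    # so the search has no reason to prefer the quick kill.
    return MATE - ply_from_root  # nearer mates now score strictly higher

print(mate_score(1), mate_score(7))  # 9999 9993: play the mate-in-1 now
```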
MessageReason: None


* Non-electronic example! [[http://www.boardgamegeek.com/boardgame/23630/ The Amazing Dr. Nim]] is basically a marble track with a number of gates which can either allow marbles to pass or block them. This allows it to play a perfect game of [[http://en.wikipedia.org/wiki/Nim Nim]]. In order for it to be beatable, it includes an "equalizer" gate. When set to on, this causes it to make a single non-optimal play over the course of the game, allowing a perfect human player to win an otherwise unwinnable game.

to:

* Non-electronic example! [[http://www.boardgamegeek.com/boardgame/23630/ The Amazing Dr. Nim]] is basically a marble track with a number of gates which can either allow marbles to pass or block them. This allows it to play a perfect game of GameOfNim. In order for it to be beatable, it includes an "equalizer" gate. When set to on, this causes it to make a single non-optimal play over the course of the game, allowing a perfect human player to win an otherwise unwinnable game.
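The toy itself plays a single-pile subtraction game in hardware, but the "perfect game" claim rests on standard Nim theory: the player to move loses against perfect play exactly when the bitwise XOR of the pile sizes (the "nim-sum") is zero. A minimal Python sketch of that strategy, not tied to the toy's gate layout:

```python
def perfect_move(piles):
    """Return (pile_index, new_size) for a winning move, or None if
    every move loses against perfect play."""
    nim_sum = 0
    for p in piles:
        nim_sum ^= p
    if nim_sum == 0:
        return None  # any move hands the win to the opponent
    for i, p in enumerate(piles):
        target = p ^ nim_sum
        if target < p:  # we may only remove marbles, never add them
            return i, target

print(perfect_move([3, 4, 5]))  # (0, 1): leave piles 1, 4, 5 (nim-sum 0)
```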
MessageReason: None


* Sometimes, it only takes a small bit of pushing to get an otherwise sane and normal IRC chatbot to go get itself killed. Repeatedly. By the same action. [[http://archive.is/dsP2T Bonus points for the bot in question acknowledging the action]].
* In Creator/EpicGames' [[http://udn.epicgames.com/Three/AIOverview.html documentation]] of the ''Unreal Development Kit'''s AI, they state that, in their games, (the VideoGame/{{Unreal}} series and ''VideoGame/GearsOfWar'') they have to balance artificial stupidity and artificial intelligence to make their bots feel human; too much intelligence and it's obvious you're playing against a flawless machine ("[[ComputersAreFast Perfect aim is easy]], but missing like a human player is hard."), too much stupidity, even if it would be realistic for a human player, and people think the AI is just dumb. They said that, during the playtesting for ''VideoGame/UnrealTournamentIII'', one of their designers complained about how poorly the AI was faring on a particular map, not realising he'd been facing humans.

to:

* Sometimes, it only takes a small bit of pushing to get an otherwise sane and normal IRC chatbot to go get itself killed. Repeatedly. By the same action. [[http://archive.is/dsP2T Bonus points for the bot in question acknowledging the action.]]
* In Creator/EpicGames' [[http://udn.epicgames.com/Three/AIOverview.html documentation]] of the ''Unreal Development Kit'''s AI, they state that, in their games (the VideoGame/{{Unreal}} series and ''VideoGame/GearsOfWar''), they have to balance artificial stupidity and artificial intelligence to make their bots feel human; too much intelligence and it's obvious you're playing against a flawless machine ("[[ComputersAreFast Perfect aim is easy]], but missing like a human player is hard."), too much stupidity, even if it would be realistic for a human player, and people think the AI is just dumb. They said that, during the playtesting for ''VideoGame/UnrealTournamentIII'', one of their designers complained about how poorly the AI was faring on a particular map, not realising he'd been facing humans.
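A hedged sketch of what "missing like a human player" usually looks like in code: track the target with lag, then add noise that grows with target speed. The function name and the numbers below are invented for illustration; this is not Epic's actual UDK implementation.

```python
import random

def human_like_aim(current_aim, target_pos, target_speed,
                   turn_rate=0.2, base_error=0.5):
    """One aiming tick (1-D for simplicity): lag toward the target,
    then add tracking noise."""
    # Lag: move only part of the way toward the target each tick.
    lagged = current_aim + turn_rate * (target_pos - current_aim)
    # Noise: fast-moving targets are harder to track, so miss more.
    return lagged + random.gauss(0.0, base_error * (1.0 + target_speed))

aim = 0.0
for tick in range(10):
    aim = human_like_aim(aim, target_pos=10.0, target_speed=0.3)
print(round(aim, 2))  # close to 10.0, but rarely exactly on target
```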



* Probably the worst EpicFail in the history of computer chess occurred in [[http://en.lichess.org/aooMurBn#1 the game played by COKO III against GENIE]] in the 1971 ACM North American Computer Chess Championship. COKO had captured all the Black pieces, trapped the Black king and was all set to checkmate. But COKO overlooked mate in one for seven moves in a row, instead shuffling the White king back and forth. GENIE's response to this indecisiveness was to push its Black pawns until one became a queen, which it exchanged for all the White pieces and a couple of pawns. By the time Black was about to queen another pawn, COKO's programmers resigned.

to:

* Probably the worst EpicFail in the history of computer chess occurred in [[http://en.lichess.org/aooMurBn#1 the game played by COKO III against GENIE]] in the 1971 ACM North American Computer Chess Championship. COKO had captured all the Black pieces, trapped the Black king, and was all set to checkmate. But COKO overlooked mate in one for seven moves in a row, instead shuffling the White king back and forth. GENIE's response to this indecisiveness was to push its Black pawns until one became a queen, which it exchanged for all the White pieces and a couple of pawns. By the time Black was about to queen another pawn, COKO's programmers resigned.



** The Grammar checker is always drawing green lines under your sentences, but the suggestions it makes (if any) to resolve the problem almost never make any kind of sense in context or scan in a way that would sound right to a native English speaker. And then there's [[AnnoyingVideoGameHelper Clippy]]... Most of the time, the grammar error given is "Fragment (consider revising)", which doesn't really explain much (it basically means that the sentence isn't a complete one, but it's very picky about what it considers a complete sentence). As for Clippy, the sentence "It looks like you're writing a letter. Would you like some help?" is almost memetic in how much anyone trying to write anything in Word will get irritated upon seeing it. Thankfully you can disable the Office Assistant (of which Clippy is one of many), which many people do, to the point that later editions of Microsoft Word no longer included them. It gets more jarring when you have Word correct a small grammar mistake, only for it to flag the entire sentence as bad. Needless to say, this is why you have human proofreaders go over your work.

to:

** The Grammar checker is always drawing green lines under your sentences, but the suggestions it makes (if any) to resolve the problem almost never make any kind of sense in context or scan in a way that would sound right to a native English speaker. And then there's [[AnnoyingVideoGameHelper Clippy]]... Most of the time, the grammar error given is "Fragment (consider revising)", which doesn't really explain much (it basically means that the sentence isn't a complete one, but it's very picky about what it considers a complete sentence). As for Clippy, the sentence "It looks like you're writing a letter. Would you like some help?" is almost [[MemeticMutation memetic]] in how much anyone trying to write anything in Word will get irritated upon seeing it. Thankfully, you can disable the Office Assistant (of which Clippy is one of many), which many people do, to the point that later editions of Microsoft Word no longer included them. It gets more jarring when you have Word correct a small grammar mistake, only for it to flag the entire sentence as bad. Needless to say, this is why you have human proofreaders go over your work.



** Occasionally it will confuse "its/it's" and "your/you're". And advise you to begin a sentence with a lower-case letter. And correct "I've" to "[[https://s-media-cache-ak0.pinimg.com/736x/a3/02/d6/a302d64e152e3bc0ff83045f5635a0bb.jpg me've]]".

to:

** Occasionally, it will confuse "its/it's" and "your/you're". And advise you to begin a sentence with a lower-case letter. And correct "I've" to "[[https://s-media-cache-ak0.pinimg.com/736x/a3/02/d6/a302d64e152e3bc0ff83045f5635a0bb.jpg me've]]".



** It may also occasionally spot a sentence with a grammar error but highlight a part of the sentence that does make sense grammatically instead of the actual thing that is causing the error.
* Programming language editors are also notorious for this kind of behavior. They will indicate an error that a symbol is missing and offer to insert it, then raise another error with the fix they just performed. Many [=IDEs=] including GUI code generators will generate code, then fail with a compilation error ''in the code they generated'' - which usually cannot be debugged properly, because generated code is not comfortable for humans to read.
* Non-electronic example! [[http://www.boardgamegeek.com/boardgame/23630/ The Amazing Dr Nim]] is basically a marble track with a number of gates which can either allow marbles to pass or block them. This allows it to play a perfect game of [[http://en.wikipedia.org/wiki/Nim Nim]]. In order for it to be beatable, it includes an "equalizer" gate. When set to on, this causes it to make a single non-optimal play over the course of the game, allowing a perfect human player to win an otherwise unwinnable game.

to:

** It may also occasionally spot a sentence with a grammar error, but highlight a part of the sentence that ''does'' make sense grammatically instead of the actual thing that is causing the error.
* Programming language editors are also notorious for this kind of behavior. They will indicate an error that a symbol is missing and offer to insert it, then raise another error with the fix they just performed. Many [=IDEs=], including GUI code generators, will generate code, then fail with a compilation error ''in the code they generated'' - which usually cannot be debugged properly, because generated code is not comfortable for humans to read.
* Non-electronic example! [[http://www.boardgamegeek.com/boardgame/23630/ The Amazing Dr. Nim]] is basically a marble track with a number of gates which can either allow marbles to pass or block them. This allows it to play a perfect game of [[http://en.wikipedia.org/wiki/Nim Nim]]. In order for it to be beatable, it includes an "equalizer" gate. When set to on, this causes it to make a single non-optimal play over the course of the game, allowing a perfect human player to win an otherwise unwinnable game.



* The "intelligent" mode that some cameras have and that select automatically the supposedly best picture mode for a given scene often leaves a lot to be desired. While for typical subjects it tends to work well, try it with a more unusual one (ie: a photo at the longest zoom, if the camera has a long one, of the Moon, part of a car...) and watch. Other times, the camera will happily select high ISOs (=more noise) when they're unnecesary.
* In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by [=GameSpot=] U.K reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was [[http://www.pcworld.com/article/209708/Is_Microsoft_Kinect_Racist.html "racist"]]. Obviously, it's easier to detect a white face with dark hair, dark eyebrows, dark eyelashes and red lips than it is to detect those same features on darker skin. A white person with blonde hair, eyebrows, lashes and near-white lips would generally also be more difficult to detect. Besides a greater difference in color value, white skin will scatter the light onto those features (for which [[ThisMeansWarpaint warpaint]] is a practical countermeasure), making them easier to detect by a camera. Finally, a white person in front of the camera will reflect more light potentially triggering the camera to lower its exposure time, resulting in less motion blur, making detection easier. Despite technical reasons why this sort of thing might happen, Microsoft should have probably tested the system more thoroughly.

to:

* The "intelligent" mode that some cameras have and that select automatically select the supposedly best picture mode for a given scene often leaves a lot to be desired. While for typical subjects subjects, it tends to work well, try it with a more unusual one (ie: a photo at the longest zoom, if the camera has a long one, of the Moon, part of a car...) and watch. Other times, the camera will happily select high ISOs [=ISOs=] (=more noise) when they're unnecesary.
unnecessary (and due to the way exposure works, they’ll either adjust the shutter speed or aperture to minimal settings to compensate, or overexpose the picture).
* In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by [=GameSpot=] U.K reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was [[http://www.pcworld.com/article/209708/Is_Microsoft_Kinect_Racist.html "racist"]]. Obviously, it's easier to detect a white face with dark hair, dark eyebrows, dark eyelashes, and red lips than it is to detect those same features on darker skin. A white person with blonde hair, eyebrows, lashes, and near-white lips would generally also be more difficult to detect. Besides a greater difference in color value, white skin will scatter the light onto those features (for which [[ThisMeansWarpaint warpaint]] is a practical countermeasure), making them easier to detect by a camera. Finally, a white person in front of the camera will reflect more light, potentially triggering the camera to lower its exposure time, resulting in less motion blur, making detection easier. Despite technical reasons why this sort of thing might happen, Microsoft should have probably tested the system more thoroughly.
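The exposure trade-off mentioned in the camera entry above can be made concrete with the standard exposure relation (a back-of-the-envelope sketch using the textbook formula, not any specific camera's firmware): for a fixed scene brightness, N²/t is proportional to the ISO setting, where N is the f-number and t the shutter time, so once ISO is pinned too high the camera must shrink t or open the aperture, or the shot overexposes.

```python
def shutter_time(f_number, iso, ev100):
    """Shutter time (seconds) for a correct exposure, given the scene's
    exposure value at ISO 100 (EV100) and the chosen f-number and ISO.
    From the standard relation N^2 / t = 2^EV100 * (ISO / 100)."""
    return f_number**2 / (2**ev100 * iso / 100)

# Bright daylight (EV100 of about 15) at f/8:
for iso in (100, 400, 1600):
    print(iso, round(shutter_time(8.0, iso, 15), 6))
# Quadrupling ISO forces a shutter time four times shorter for the same
# exposure - or, at the camera's limits, a blown-out picture.
```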



* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues - the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect - led to the project being canned after 50 vehicles were produced.
* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other twitter users as if it was a real teenage girl. In less than 24 hours, Microsoft was compelled to delete the program after constant trolling from twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up. (Of course, [[TeensAreMonsters from a certain]] [[{{GIFT}} point of view]], the bot was functioning [[GoneHorriblyRight exactly as intended]].)

to:

* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues - the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect - led to the project being canned after 50 vehicles were produced.
* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other Twitter users as if it was a real teenage girl. In less than 24 hours, Microsoft was compelled to delete the program after constant {{troll}}ing from Twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up. (Of course, [[TeensAreMonsters from a certain]] [[{{GIFT}} point of view]], the bot was functioning [[GoneHorriblyRight exactly as intended]].)
MessageReason: None


* Your average GPS will work fine most of the time. However, there are instances where one will send a driver out to the middle of a field, or give them the most indirect route possible. The infamous older versions of Apple Maps would have occasional instances of providing good directions. Most of the time, they would present strange, winding routes, which might even ask the user to drive across an airport runway or two.

to:

* Your average GPS will work fine most of the time. However, there are instances where one will send a driver out to the middle of a field, expect them to make a pair of very unsafe and nearly-impossible turns (on US roads, for example: "Turn right, and then turn left in 200 feet even though you'd have to cross five lanes in rush hour traffic to do so"), or give them the most indirect route possible. The infamous older versions of Apple Maps would have occasional instances of providing good directions. Most of the time, they would present strange, winding routes, which might even ask the user to drive across an airport runway or two.
MessageReason: None


* In Creator/EpicGames' [[http://udn.epicgames.com/Three/AIOverview.html documentation]] of the ''Unreal Development Kit'''s AI, they state that, in their games, (the VideoGame/{{Unreal}} series and GearsOfWar) they have to balance artificial stupidity and artificial intelligence to make their bots feel human; too much intelligence and it's obvious you're playing against a flawless machine ("[[ComputersAreFast Perfect aim is easy]], but missing like a human player is hard."), too much stupidity, even if it would be realistic for a human player, and people think the AI is just dumb. They said that, during the playtesting for ''VideoGame/UnrealTournamentIII'', one of their designers complained about how poorly the AI was faring on a particular map, not realising he'd been facing humans.

to:

* In Creator/EpicGames' [[http://udn.epicgames.com/Three/AIOverview.html documentation]] of the ''Unreal Development Kit'''s AI, they state that, in their games, (the VideoGame/{{Unreal}} series and ''VideoGame/GearsOfWar'') they have to balance artificial stupidity and artificial intelligence to make their bots feel human; too much intelligence and it's obvious you're playing against a flawless machine ("[[ComputersAreFast Perfect aim is easy]], but missing like a human player is hard."), too much stupidity, even if it would be realistic for a human player, and people think the AI is just dumb. They said that, during the playtesting for ''VideoGame/UnrealTournamentIII'', one of their designers complained about how poorly the AI was faring on a particular map, not realising he'd been facing humans.
MessageReason: None


* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues - the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect - lead to the project being canned after 50 vehicles were produced.
* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other twitter users as if it was a real teenage girl. In less then 24 hours, Microsoft was compelled to delete the program after constant trolling from twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up. (Of course, [[TeensAreMonsters from a certain]] [[{{GIFT}} point of view]], the bot was functioning [[GoneHorriblyRight exactly as intended]].)

to:

* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues - the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect - led to the project being canned after 50 vehicles were produced.
* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other twitter users as if it was a real teenage girl. In less than 24 hours, Microsoft was compelled to delete the program after constant trolling from twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up. (Of course, [[TeensAreMonsters from a certain]] [[{{GIFT}} point of view]], the bot was functioning [[GoneHorriblyRight exactly as intended]].)
MessageReason: None


* In EpicGames's [[http://udn.epicgames.com/Three/AIOverview.html documentation]] of the ''Unreal Development Kit'''s AI, they state that, in their games, (the VideoGame/{{Unreal}} series and GearsOfWar) they have to balance artificial stupidity and artificial intelligence to make their bots feel human; too much intelligence and it's obvious you're playing against a flawless machine ("[[ComputersAreFast Perfect aim is easy]], but missing like a human player is hard."), too much stupidity, even if it would be realistic for a human player, and people think the AI is just dumb. They said that, during the playtesting for ''VideoGame/UnrealTournamentIII'', one of their designers complained about how poorly the AI was faring on a particular map, not realising he'd been facing humans.

to:

* In Creator/EpicGames' [[http://udn.epicgames.com/Three/AIOverview.html documentation]] of the ''Unreal Development Kit'''s AI, they state that, in their games, (the VideoGame/{{Unreal}} series and GearsOfWar) they have to balance artificial stupidity and artificial intelligence to make their bots feel human; too much intelligence and it's obvious you're playing against a flawless machine ("[[ComputersAreFast Perfect aim is easy]], but missing like a human player is hard."), too much stupidity, even if it would be realistic for a human player, and people think the AI is just dumb. They said that, during the playtesting for ''VideoGame/UnrealTournamentIII'', one of their designers complained about how poorly the AI was faring on a particular map, not realising he'd been facing humans.
MessageReason: None


* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other twitter users as if it was a real teenage girl. In less then 24 hours, Microsoft was compelled to delete the program after constant trolling from twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up. (Of course, from a certain point of view, the bot was functioning [[GoneHorriblyRight exactly as intended]].)

to:

* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other twitter users as if it was a real teenage girl. In less then 24 hours, Microsoft was compelled to delete the program after constant trolling from twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up. (Of course, [[TeensAreMonsters from a certain]] [[{{GIFT}} point of view]], the bot was functioning [[GoneHorriblyRight exactly as intended]].)
MessageReason: None


** Jason Hutchens infamously won the Loebner prize by taking a relatively stupid AI, MegaHal, and fitting a shell around it that attempted to detect the most common questioning patterns used by ''judges'' and respond to them in the ways that previously got the best responses from those judges. His resulting paper was titled "How to pass the Turing Test by cheating".

to:

** Jason Hutchens infamously won the Loebner prize by taking a relatively stupid AI, [=MegaHal=], and fitting a shell around it that attempted to detect the most common questioning patterns used by ''judges'' and respond to them in the ways that previously got the best responses from those judges. His resulting paper was titled "How to pass the Turing Test by cheating".
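A hypothetical reconstruction of that "shell" trick (the patterns and canned replies below are invented for illustration; this is not Hutchens's actual code): match known judge questions first, and only fall back to the weak generative model for everything else.

```python
import re

# Invented examples of "common judge questions" with canned answers.
CANNED = [
    (re.compile(r"\bwhat('?s| is) your name\b", re.I),
     "People call me whatever they like."),
    (re.compile(r"\bwhere (are you|do you live)\b", re.I),
     "Somewhere with terrible weather, as usual."),
]

def reply(line, fallback_model):
    for pattern, answer in CANNED:
        if pattern.search(line):
            return answer  # scripted reply for a recognized judge question
    return fallback_model(line)  # otherwise let the dumb AI babble

print(reply("What is your name?", lambda s: "..."))  # canned answer fires
```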
MessageReason: None


* Programming language editors are also notorious for this kind of behavior. They will indicate an error that a symbol is missing and offer to insert it, then raise another error with the fix they just performed. Many IDEs including GUI code generators will generate code ''that they themselves then fail to compile'' - which usually cannot be debugged properly, because generated code is not comfortable for humans to read.

to:

* Programming language editors are also notorious for this kind of behavior. They will indicate an error that a symbol is missing and offer to insert it, then raise another error with the fix they just performed. Many [=IDEs=] including GUI code generators will generate code, then fail with a compilation error ''in the code they generated'' - which usually cannot be debugged properly, because generated code is not comfortable for humans to read.
MessageReason: None

Added DiffLines:

** Jason Hutchens infamously won the Loebner prize by taking a relatively stupid AI, MegaHal, and fitting a shell around it that attempted to detect the most common questioning patterns used by ''judges'' and respond to them in the ways that previously got the best responses from those judges. His resulting paper was titled "How to pass the Turing Test by cheating".


Added DiffLines:

* Programming language editors are also notorious for this kind of behavior. They will indicate an error that a symbol is missing and offer to insert it, then raise another error with the fix they just performed. Many IDEs including GUI code generators will generate code ''that they themselves then fail to compile'' - which usually cannot be debugged properly, because generated code is not comfortable for humans to read.
MessageReason: None


* The "intelligent" mode that some cameras have and that select automatically the supposedly best picture mode for a given scene. While for easy subjects it tends to work well, try it with a more unusual one (ie: a photo at the longest zoom, if the camera has a long one, of the Moon, part of a car...) and watch.

to:

* The "intelligent" mode that some cameras have and that select automatically the supposedly best picture mode for a given scene. scene often leaves a lot to be desired. While for easy typical subjects it tends to work well, try it with a more unusual one (ie: a photo at the longest zoom, if the camera has a long one, of the Moon, part of a car...) and watch. Other times, the camera will happily select high ISOs (=more noise) when they're unnecesary.
MessageReason:

* Similarly to the above example, the "intelligent" mode that some cameras have and that select automatically the supposedly best picture mode for a given scene. While for easy subjects it tends to work well, try it with a more unusual one (ie: a photo at the longest zoom, if the camera has a long one, of the Moon, part of a car...) and watch.

to:

* The "intelligent" mode that some cameras have and that select automatically the supposedly best picture mode for a given scene. While for easy subjects it tends to work well, try it with a more unusual one (ie: a photo at the longest zoom, if the camera has a long one, of the Moon, part of a car...) and watch.
MessageReason: None


* Similarly to the above example the "intelligent" mode that some cameras have and that supposedly select automatically the best picture mode. While foreasy subjects it tends to work well, try it with a more unusual one (ie: a photo at the longest zoomif the camera has it of the Moon) and watch.

to:

* Similarly to the above example, the "intelligent" mode that some cameras have and that select automatically the supposedly best picture mode for a given scene. While for easy subjects it tends to work well, try it with a more unusual one (ie: a photo at the longest zoom, if the camera has a long one, of the Moon, part of a car...) and watch.
MessageReason: None

Added DiffLines:

* Similarly to the above example the "intelligent" mode that some cameras have and that supposedly select automatically the best picture mode. While foreasy subjects it tends to work well, try it with a more unusual one (ie: a photo at the longest zoomif the camera has it of the Moon) and watch.
MessageReason: None


* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other twitter users as if it was a real teenage girl. In less then 24 hours, Microsoft was compelled to delete the program after constant trolling from twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up.

to:

* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other twitter users as if it was a real teenage girl. In less then 24 hours, Microsoft was compelled to delete the program after constant trolling from twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up. (Of course, from a certain point of view, the bot was functioning [[GoneHorriblyRight exactly as intended]].)
MessageReason: None


* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]]. The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues - the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect - lead to the project being canned after 50 vehicles were produced.

to:

* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues - the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect - lead to the project being canned after 50 vehicles were produced.
MessageReason: fixed a wick for a split trope


** The Grammar checker is always drawing green lines under your sentences, but the suggestions it makes (if any) to resolve the problem almost never make any kind of sense in context or scan in a way that would sound right to a native English speaker. And then there's [[StopHelpingMe Clippy]]... Most of the time, the grammar error given is "Fragment (consider revising)", which doesn't really explain much (it basically means that the sentence isn't a complete one, but it's very picky about what it considers a complete sentence). As for Clippy, the sentence "It looks like you're writing a letter. Would you like some help?" is almost memetic in how much anyone trying to write anything in Word will get irritated upon seeing it. Thankfully you can disable the Office Assistant (of which Clippy is one of many), which many people do, to the point that later editions of Microsoft Word no longer included them. It gets more jarring when you have Word correct a small grammar mistake, only for it to flag the entire sentence as bad. Needless to say, this is why you have human proofreaders go over your work.

to:

** The Grammar checker is always drawing green lines under your sentences, but the suggestions it makes (if any) to resolve the problem almost never make any kind of sense in context or scan in a way that would sound right to a native English speaker. And then there's [[AnnoyingVideoGameHelper Clippy]]... Most of the time, the grammar error given is "Fragment (consider revising)", which doesn't really explain much (it basically means that the sentence isn't a complete one, but it's very picky about what it considers a complete sentence). As for Clippy, the sentence "It looks like you're writing a letter. Would you like some help?" is almost memetic in how much anyone trying to write anything in Word will get irritated upon seeing it. Thankfully you can disable the Office Assistant (of which Clippy is one of many), which many people do, to the point that later editions of Microsoft Word no longer included them. It gets more jarring when you have Word correct a small grammar mistake, only for it to flag the entire sentence as bad. Needless to say, this is why you have human proofreaders go over your work.
MessageReason: None

Added DiffLines:

* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other twitter users as if it was a real teenage girl. In less then 24 hours, Microsoft was compelled to delete the program after constant trolling from twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up.
MessageReason: None

Added DiffLines:

** It may also occasionally spot a sentence with a grammar error but highlight a part of the sentence that does make sense grammatically instead of the actual thing that is causing the error.
MessageReason: None

Added DiffLines:

* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]]. The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues - the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect - lead to the project being canned after 50 vehicles were produced.
MessageReason: None


** Occasionally it will confuse "its/it's" and "your/you're". And advise you to begin a sentence with a lower-case letter.

to:

** Occasionally it will confuse "its/it's" and "your/you're". And advise you to begin a sentence with a lower-case letter. And correct "I've" to "[[https://s-media-cache-ak0.pinimg.com/736x/a3/02/d6/a302d64e152e3bc0ff83045f5635a0bb.jpg me've]]".
MessageReason: None


* In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by [=GameSpot=] U.K reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was [[http://www.pcworld.com/article/209708/Is_Microsoft_Kinect_Racist.html "racist"]]. Obviously, it's easier to detect a white face with dark hair, dark eyebrows, dark eyelashes and red lips than it is to detect those same features on darker skin. A white person with blonde hair, eyebrows, lashes and near-white lips would generally also be more difficult to detect. Besides a greater difference in color value, white skin will scatter the light onto those features (for which [[ThisMeansWarpaint warpaint]] is a practical countermeasure), making them easier to detect by a camera. Finally, a white person in front of the camera will reflect more light potentially triggering the camera to lower it's exposure time, resulting in less motion blur, making detection easier. Despite technical reasons why this sort of thing might happen, Microsoft should have probably tested the system more thoroughly.

to:

* In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by [=GameSpot=] U.K reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was [[http://www.pcworld.com/article/209708/Is_Microsoft_Kinect_Racist.html "racist"]]. Obviously, it's easier to detect a white face with dark hair, dark eyebrows, dark eyelashes and red lips than it is to detect those same features on darker skin. A white person with blonde hair, eyebrows, lashes and near-white lips would generally also be more difficult to detect. Besides a greater difference in color value, white skin will scatter the light onto those features (for which [[ThisMeansWarpaint warpaint]] is a practical countermeasure), making them easier to detect by a camera. Finally, a white person in front of the camera will reflect more light potentially triggering the camera to lower its exposure time, resulting in less motion blur, making detection easier. Despite technical reasons why this sort of thing might happen, Microsoft should have probably tested the system more thoroughly.

Changed: 2053

Removed: 1192

MessageReason: None


* Norton Antivirus. Which, according to the DarthWiki/IdiotProgramming page, has been known to classify ''itself'' as a virus. [[HilarityEnsues Hilarity, and digital suicide, ensues]]. Few people who have had to uninstall the blasted thing manually would dispute the accuracy of this assessment. Some other antivirus programs, like older versions of [=McAfee=]'s, can delete or quarantine themselves as well.
** Norton has repeatedly been accused of being ''intentionally bad'' software. It's often regarded as a case of actual malware (and it is genuinely harmful software, far worse than most viruses even when working as designed) that flies over the radar thanks to taking RefugeInAudacity and selling itself as boxed copies.

to:

* Norton Antivirus. Which, according to the DarthWiki/IdiotProgramming page, has been known to classify ''itself'' as a virus. [[HilarityEnsues Hilarity, and digital suicide, ensues]]. Few people who have had to uninstall the blasted thing manually would dispute the accuracy of this assessment. Some other antivirus programs, like older versions of [=McAfee=]'s, can delete or quarantine themselves as well. Norton has repeatedly been accused of being ''intentionally bad'' software. It's often regarded as a case of actual malware (and it is genuinely harmful software, far worse than most viruses even when working as designed) that flies over the radar thanks to taking RefugeInAudacity and selling itself as boxed copies.



* The Grammar checker in Microsoft Word is always drawing green lines under your sentences, but the suggestions it makes (if any) to resolve the problem almost never make any kind of sense in context or scan in a way that would sound right to a native English speaker. And then there's [[StopHelpingMe Clippy]]... Oh [[TheScrappy Clippy]]...
** Most of the time, the grammar error given is "Fragment (consider revising)", which doesn't really explain much (it basically means that the sentence isn't a complete one, but it's very picky about what it considers a complete sentence). As for Clippy, the sentence "It looks like you're writing a letter. Would you like some help?" is almost memetic in how much anyone trying to write anything in Word will get irritated upon seeing it. Thankfully you can disable the Office Assistant (of which Clippy is one of many), which many people do, to the point that later editions of Microsoft Word no longer included them.
*** It gets more jarring when you have Word correct a small grammar mistake, only for it to flag the entire sentence as bad. Needless to say, this is why you have human proofreaders go over your work.
** On occasions, the grammar checker will identify a sentence as a grammar error, then after correcting, ''identify the corrected sentence as a grammar error''.
*** This may be an indication of how ridiculously complicated the English language is in regards to its rules. There are so many exceptions and points where things don't make sense, you're bound to confuse the parser.

to:

* Microsoft Word:
** The Grammar checker is always drawing green lines under your sentences, but the suggestions it makes (if any) to resolve the problem almost never make any kind of sense in context or scan in a way that would sound right to a native English speaker. And then there's [[StopHelpingMe Clippy]]... Most of the time, the grammar error given is "Fragment (consider revising)", which doesn't really explain much (it basically means that the sentence isn't a complete one, but it's very picky about what it considers a complete sentence). As for Clippy, the sentence "It looks like you're writing a letter. Would you like some help?" is almost memetic in how much anyone trying to write anything in Word will get irritated upon seeing it. Thankfully you can disable the Office Assistant (of which Clippy is one of many), which many people do, to the point that later editions of Microsoft Word no longer included them. It gets more jarring when you have Word correct a small grammar mistake, only for it to flag the entire sentence as bad. Needless to say, this is why you have human proofreaders go over your work.
** On occasions, the grammar checker will identify a sentence as a grammar error, then after correcting, ''identify the corrected sentence as a grammar error''. This may be an indication of how ridiculously complicated the English language is in regards to its rules. There are so many exceptions and points where things don't make sense, you're bound to confuse the parser.



* An [[Franchise/StarWars R2-D2]] toy robot, that is supposed to listen to human commands and do various games or actions, does nothing but spin in place, beep, and stare at owner with confusion no matter how clear your English is.
** There's a Yoda toy that is supposed to train you in the ways of the Jedi. You make him go to sleep (turn him off) by setting him on his head and pressing his hand. He then ''immediately'' wakes up given the slightest provocation, or at complete random.

to:

* An [[Franchise/StarWars R2-D2]] toy robot, that is supposed to listen to human commands and do various games or actions, does nothing but spin in place, beep, and stare at owner with confusion no matter how clear your English is. There's also a Yoda toy that is supposed to train you in the ways of the Jedi. You make him go to sleep (turn him off) by setting him on his head and pressing his hand. He then ''immediately'' wakes up given the slightest provocation, or at complete random.



* Your average GPS will work fine most of the time. However, there are instances where one will send a driver out to the middle of a field, or give them the most indirect route possible.
** The infamous older versions of Apple Maps would have occasional instances of providing good directions. Most of the time, they would present strange, winding routes, which might even ask the user to drive across an airport runway or two.

to:

* Your average GPS will work fine most of the time. However, there are instances where one will send a driver out to the middle of a field, or give them the most indirect route possible. The infamous older versions of Apple Maps would have occasional instances of providing good directions. Most of the time, they would present strange, winding routes, which might even ask the user to drive across an airport runway or two.
MessageReason: None


* Sometimes, it only takes a small bit of pushing to get an otherwise sane and normal IRC chatbot to go get itself killed. Repeatedly. By the same action. [[http://irc.digibase.ca/qdb/?16 Bonus points for the bot in question acknowledging the action]].

to:

* Sometimes, it only takes a small bit of pushing to get an otherwise sane and normal IRC chatbot to go get itself killed. Repeatedly. By the same action. [[http://archive.is/dsP2T Bonus points for the bot in question acknowledging the action]].
MessageReason: None

Added DiffLines:

* This trope also seems to be behind the [[https://en.wikipedia.org/wiki/2015_Seville_Airbus_A400M_Atlas_crash crash in 2015 of an Airbus A400M near Seville, Spain]], when three of its four engines stopped shortly after taking off because of the plane's computer being unable to read sensor data of them.
MessageReason: None


* In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by [=GameSpot=] U.K reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was [[http://www.pcworld.com/article/209708/Is_Microsoft_Kinect_Racist.html "racist"]].

to:

* In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by [=GameSpot=] U.K reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was [[http://www.pcworld.com/article/209708/Is_Microsoft_Kinect_Racist.html "racist"]]. Obviously, it's easier to detect a white face with dark hair, dark eyebrows, dark eyelashes and red lips than it is to detect those same features on darker skin. A white person with blonde hair, eyebrows, lashes and near-white lips would generally also be more difficult to detect. Besides a greater difference in color value, white skin will scatter the light onto those features (for which [[ThisMeansWarpaint warpaint]] is a practical countermeasure), making them easier to detect by a camera. Finally, a white person in front of the camera will reflect more light potentially triggering the camera to lower it's exposure time, resulting in less motion blur, making detection easier. Despite technical reasons why this sort of thing might happen, Microsoft should have probably tested the system more thoroughly.
MessageReason: None


* Norton Antivirus. Which, according to the DarthWiki/IdiotProgramming page, has been known to classify ''itself'' as a virus. [[HilarityEnsues Hilarity, and digital suicide, ensues]]. Some other antivirus programs, like older versions of [=McAfee=]'s, can delete or quarantine themselves as well.

to:

* Norton Antivirus. Which, according to the DarthWiki/IdiotProgramming page, has been known to classify ''itself'' as a virus. [[HilarityEnsues Hilarity, and digital suicide, ensues]]. Few people who have had to uninstall the blasted thing manually would dispute the accuracy of this assessment. Some other antivirus programs, like older versions of [=McAfee=]'s, can delete or quarantine themselves as well.
MessageReason: None

Added DiffLines:

* This trope is why automated cars, such as [[https://en.wikipedia.org/wiki/Google_driverless_car those being developed by]] Website/{{Google}}, are not in mass production yet. Take the aforementioned potential GPS errors and also factor in the possibility of ''fatal accidents''.
