History ArtificialStupidity / RealLife


Is there an issue? Send a MessageReason:
None


* In TheNew20s, advances in AI technology have allowed it to create illustrations, with one such application being to create illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distinguish from those of human artists, they infamously have a shared quirk of drawing deformed human hands and writing completely incoherent text.

to:

* In TheNew20s, advances in AI technology have allowed it to create illustrations, with one such application being to create illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distinguish from those of human artists, they infamously have a shared quirk of drawing deformed human hands and [[TheUnintelligible writing completely incoherent text]].
Is there an issue? Send a MessageReason:
None


* In TheNew20s, advances in AI technology have allowed it to create illustrations, with one such application being to create illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distinguish from those of human artists, they infamously have a shared quirk of drawing deformed human hands.

to:

* In TheNew20s, advances in AI technology have allowed it to create illustrations, with one such application being to create illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distinguish from those of human artists, they infamously have a shared quirk of drawing deformed human hands and writing completely incoherent text.

Added: 533

Changed: 1

Is there an issue? Send a MessageReason:
None


* In TheNew20s, advances in AI technology have allowed it to create illustrations, with one such application being to create illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distiguish from those of human artists, they infamously have a shared quirk of drawing deformed human hands.

to:

* In TheNew20s, advances in AI technology have allowed it to create illustrations, with one such application being to create illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distinguish from those of human artists, they infamously have a shared quirk of drawing deformed human hands.
* [=ChatGPT=] can generate citations, but whether those are citing something that actually exists is another matter. It will cheerfully hand you a list of sources and swear up and down they're definitely real, yes indeed, for sure...and if you're unwise enough not to double-check, you'll end up like the lawyer who's now in trouble [[https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/ for submitting a ChatGPT-generated official court filing with six fake cases]].
Is there an issue? Send a MessageReason:
None


* In TheNew20s, advances in AI technology have allowed it to create illustrations, with one such application being to create illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distiguish from those of human artists, they most infamously have a shared quirk of drawing deformed human hands.

to:

* In TheNew20s, advances in AI technology have allowed it to create illustrations, with one such application being to create illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distiguish from those of human artists, they infamously have a shared quirk of drawing deformed human hands.
Is there an issue? Send a MessageReason:
None


* In TheNew20s, advances in AI technology have allowed it to create illustrations, most notably of characters in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distiguish from those of human artists, they most infamously suffer from the common quirk of deformed human hands.

to:

* In TheNew20s, advances in AI technology have allowed it to create illustrations, with one such application being to create illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distiguish from those of human artists, they most infamously have a shared quirk of drawing deformed human hands.
Is there an issue? Send a MessageReason:
None

Added DiffLines:

* In TheNew20s, advances in AI technology have allowed it to create illustrations, most notably of characters in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distiguish from those of human artists, they most infamously suffer from the common quirk of deformed human hands.
Is there an issue? Send a MessageReason:
None


Artificial intelligence isn't limited to video games, and can happen in real-world applications as well. Sometimes it's funny, sometimes with deadly results.

to:

Artificial intelligence isn't limited to video games, and can happen in real-world applications as well. Sometimes it's funny, sometimes it causes problems with day-to-day activities, sometimes it's met with deadly results.
Is there an issue? Send a MessageReason:
None

Added DiffLines:

Artificial intelligence isn't limited to video games, and can happen in real-world applications as well. Sometimes it's funny, sometimes with deadly results.
----
Is there an issue? Send a MessageReason:
None


* Meta (formerly Facebook) unveiled a large language model AI called Galactica on November 15th, 2022...and abruptly shut it away again on the 18th. It was intended to assist in scientific research and answer basic questions, drawing from a huge repository of scientific papers, but users soon found that it would gleefully churn out obvious falsehoods and complete nonsense, such as wiki articles on [[Film/BackToTheFuture flux capacitors]], the [[Creator/MerylStreep Streep-]][[Creator/JerrySeinfeld Seinfeld]] Theorem, or [[https://twitter.com/Meaningness/status/1592634519269822464/photo/2 "bears living in space"]].

to:

* Meta (formerly Facebook) unveiled a large language model AI called Galactica on November 15th, 2022...and abruptly shut it away again on the 18th. It was intended to assist in scientific research and answer basic questions, drawing from a huge repository of scientific papers, but users soon found that it would gleefully churn out obvious falsehoods and complete nonsense, such as wiki articles on [[Film/BackToTheFuture flux capacitors]], the [[Creator/MerylStreep Streep-]][[Creator/JerrySeinfeld Seinfeld]] Theorem, or [[https://twitter.com/Meaningness/status/1592634519269822464/photo/2 "bears living in space,"]] or how the central story of the ''Harry Potter'' series is the [[HoYay gay romance]] between Harry and Ron.
Is there an issue? Send a MessageReason:
trope disambig


* As part of committing to compliance with COPPA, Website/YouTube implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately, however, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/CupcakesSergeantSprinkles'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), the music video for "[=MopeMope=]" (an infamous SurpriseCreepy song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids), and even straight up ignores elements such as swear words in the title and a previously established age restriction, potentially exposing children to inappropriate content.

to:

* As part of committing to compliance with COPPA, Website/YouTube implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately, however, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/CupcakesSergeantSprinkles'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), the music video for "[=MopeMope=]" (an infamous {{Disguised Horror|Story}} song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids), and even straight up ignores elements such as swear words in the title and a previously established age restriction, potentially exposing children to inappropriate content.
Is there an issue? Send a MessageReason:
None


* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues — the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect — led to the project being canned after 50 vehicles were produced. It's widely suspected that bribery must have been involved in the M247 being selected for production in the first place, seeing as even before all of this happened, it had consistently ''lost'' the shoot-out competitions with the competing [=XM246=] design.
* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other Twitter users as if it was a real teenage girl. In less than 24 hours, Microsoft was compelled to delete the program after constant {{troll}}ing from Twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up. (Of course, [[TeensAreMonsters from a certain]] [[InternetJerk point of view]], the bot was functioning [[GoneHorriblyRight exactly as intended]].)

to:

* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It once mistook the ventilation fan of an outhouse for a helicopter. It would undershoot at ground vehicles by 300m. And if it aimed up, the gun barrels would disrupt the radar system. A plethora of mechanical and design issues — the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, the electronics could be disabled by getting wet, and it was slower than the vehicles it was designed to protect — led to the project being canned after 50 vehicles were produced. It's widely suspected that bribery must have been involved in the M247 being selected for production in the first place, seeing as even before all of this happened, it had consistently ''lost'' the shoot-out competitions with the competing [=XM246=] design.
* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other Twitter users as if it was a real teenage girl. In less than 24 hours, Microsoft was compelled to delete the program after constant {{troll}}ing from Twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the Holocaust was made up. (Of course, [[TeensAreMonsters from a certain]] [[InternetJerk point of view]], the bot was functioning [[GoneHorriblyRight exactly as intended]].)
Is there an issue? Send a MessageReason:
None

Added DiffLines:

* Meta (formerly Facebook) unveiled a large language model AI called Galactica on November 15th, 2022...and abruptly shut it away again on the 18th. It was intended to assist in scientific research and answer basic questions, drawing from a huge repository of scientific papers, but users soon found that it would gleefully churn out obvious falsehoods and complete nonsense, such as wiki articles on [[Film/BackToTheFuture flux capacitors]], the [[Creator/MerylStreep Streep-]][[Creator/JerrySeinfeld Seinfeld]] Theorem, or [[https://twitter.com/Meaningness/status/1592634519269822464/photo/2 "bears living in space"]].
Is there an issue? Send a MessageReason:
None


* As part of committing to compliance with COPPA, Website/YouTube implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately, however, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/{{Cupcakes}}'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), the music video for "[=MopeMope=]" (an infamous SurpriseCreepy song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids), and even straight up ignores elements such as swear words in the title and a previously established age restriction, potentially exposing children to inappropriate content.

to:

* As part of committing to compliance with COPPA, Website/YouTube implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately, however, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/CupcakesSergeantSprinkles'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), the music video for "[=MopeMope=]" (an infamous SurpriseCreepy song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids), and even straight up ignores elements such as swear words in the title and a previously established age restriction, potentially exposing children to inappropriate content.

Changed: 12

Is there an issue? Send a MessageReason:
None


* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues — the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect — led to the project being canned after 50 vehicles were produced. It's widely suspected that bribery must have been involved in the M247 being selected for production in the first place, seeing as even before all of this happened, it had consistently ''lost'' the shoot-out competitions with the competing XM246 design.

to:

* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues — the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect — led to the project being canned after 50 vehicles were produced. It's widely suspected that bribery must have been involved in the M247 being selected for production in the first place, seeing as even before all of this happened, it had consistently ''lost'' the shoot-out competitions with the competing [=XM246=] design.



* As part of committing to compliance with COPPA, Website/YouTube implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately however, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/{{Cupcakes}}'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), the music video for "[=MopeMope=]" (an infamous SurpriseCreepy song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids), and even [[UpToEleven straight up ignores elements such as swear words in the title and a previously established age restriction]], potentially exposing children to inappropriate content.

to:

* As part of committing to compliance with COPPA, Website/YouTube implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately, however, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/{{Cupcakes}}'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), the music video for "[=MopeMope=]" (an infamous SurpriseCreepy song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids), and even straight up ignores elements such as swear words in the title and a previously established age restriction, potentially exposing children to inappropriate content.
Is there an issue? Send a MessageReason:


* [[https://www.youtube.com/channel/UC8huou5jTYkbv8m0WFgeXag Google]] [[https://www.youtube.com/user/malineka146 Translate.]]
Is there an issue? Send a MessageReason:
Disambiguating; deleting and renaming wicks as appropriate


* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other Twitter users as if it was a real teenage girl. In less than 24 hours, Microsoft was compelled to delete the program after constant {{troll}}ing from Twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up. (Of course, [[TeensAreMonsters from a certain]] [[{{GIFT}} point of view]], the bot was functioning [[GoneHorriblyRight exactly as intended]].)

to:

* In March of 2016, Microsoft created an AI Twitter bot called "Tay Tweets", which was designed to mimic and converse with other Twitter users as if it was a real teenage girl. In less than 24 hours, Microsoft was compelled to delete the program after constant {{troll}}ing from Twitter users turned it into a "Hitler-loving sex robot", according to the British newspaper The Telegraph. Some of the tweets it generated included "Hitler was right, I hate the jews", and "Gas the Kikes, race war now". It also stated it was in favor of genocide and the holocaust was made up. (Of course, [[TeensAreMonsters from a certain]] [[InternetJerk point of view]], the bot was functioning [[GoneHorriblyRight exactly as intended]].)
Is there an issue? Send a MessageReason:
None


* As part of committing to compliance with COPPA, ''Website/YouTube'' implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately however, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/{{Cupcakes}}'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), the music video for "[=MopeMope=]" (an infamous SurpriseCreepy song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids), and even [[UpToEleven straight up ignores elements such as swear words in the title and a previously established age restriction]], potentially exposing children to inappropriate content.

to:

* As part of committing to compliance with COPPA, Website/YouTube implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately however, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/{{Cupcakes}}'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), the music video for "[=MopeMope=]" (an infamous SurpriseCreepy song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids), and even [[UpToEleven straight up ignores elements such as swear words in the title and a previously established age restriction]], potentially exposing children to inappropriate content.
Is there an issue? Send a MessageReason:
None of the features listed are English exclusive


** The Grammar checker is always drawing green lines under your sentences, but the suggestions it makes (if any) to resolve the problem almost never make any kind of sense in context or scan in a way that would sound right to a native English speaker. And then there's [[AnnoyingVideoGameHelper Clippy]]... Most of the time, the grammar error given is "Fragment (consider revising)", which doesn't really explain much (it basically means that the sentence isn't a complete one, but it's very picky about what it considers a complete sentence). As for Clippy, the sentence "It looks like you're writing a letter. Would you like some help?" is almost [[MemeticMutation memetic]] in how much anyone trying to write anything in Word will get irritated upon seeing it. Thankfully, you can disable the Office Assistant (of which Clippy is one of many), which many people do, to the point that later editions of Microsoft Word no longer included them. It gets more jarring when you have Word correct a small grammar mistake, only for it to flag the entire sentence as bad. Needless to say, this is why you have human proofreaders go over your work.
** On occasion, the grammar checker will identify a sentence as a grammar error, then after correcting, ''identify the corrected sentence as a grammar error''. This may be an indication of how ridiculously complicated the English language is in regards to its rules. There are so many exceptions and points where things don't make sense, you're bound to confuse the parser.

to:

** The Grammar checker is always drawing green lines under your sentences, but the suggestions it makes (if any) to resolve the problem almost never make any kind of sense in context or scan in a way that would sound right to a native English speaker. And then there's [[AnnoyingVideoGameHelper Clippy]]... Most of the time, the grammar error given is "Fragment (consider revising)", which doesn't really explain much (it basically means that the sentence isn't a complete one, but it's very picky about what it considers a complete sentence). As for Clippy, the sentence "It looks like you're writing a letter. Would you like some help?" is almost [[MemeticMutation memetic]] in how much anyone trying to write anything in Word will get irritated upon seeing it. Thankfully, you can disable the Office Assistant (of which Clippy is one of many), which many people do, to the point that later editions of Microsoft Word no longer included them. It gets more jarring when you have Word correct a small grammar mistake, only for it to flag the entire sentence as bad. Needless to say, this is why you have human proofreaders go over your work.
** On occasion, the grammar checker will identify a sentence as a grammar error, then after correcting, ''identify the corrected sentence as a grammar error''. This may be an indication of how ridiculously complicated languages can be in regards to their rules. There are so many exceptions and points where things don't make sense, you're bound to confuse the parser.
Is there an issue? Send a MessageReason:
None


* In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by [=GameSpot=] U.K reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was [[http://www.pcworld.com/article/209708/Is_Microsoft_Kinect_Racist.html "racist"]]. Of course, there are perfectly valid reasons for it, namely being that it is easier to see and discern things in light colors than white ones, but Microsoft should have probably tested the system more thoroughly.

to:

* In one somewhat infamous example, the Xbox Kinect's initial release caused quite a stir when an early review by [=GameSpot=] U.K reported that the Kinect could not read the motions of two dark-skinned employees, while the white employees were registered just fine. Cue several websites and gaming magazines half-jokingly claiming that the Kinect was [[http://www.pcworld.com/article/209708/Is_Microsoft_Kinect_Racist.html "racist"]]. Of course, there are perfectly valid reasons for it, namely being that it is easier to see and discern things in light colors than dark ones, but Microsoft should have probably tested the system more thoroughly.
Is there an issue? Send a MessageReason:
None


* This trope is why automated cars, such as [[https://en.wikipedia.org/wiki/Google_driverless_car those being developed by]] Website/{{Google}}, are not in mass production yet. Take the potential GPS errors and also factor in the possibility of ''fatal accidents''. More humorously, in one test of the driverless cars, four of them pulled up to a stop sign at the same time and each waited for the car on the right to move through first, creating a deadlock. An observer quipped that the first "fully automated traffic jam" had occurred.

to:

* This trope is why automated cars, such as [[https://en.wikipedia.org/wiki/Google_driverless_car those being developed by]] Website/{{Google}}, are not in mass production yet. Take the potential GPS errors and also factor in the possibility of ''fatal accidents''. More humorously, in one test of the driverless cars, four of them pulled up to a stop sign at the same time and each waited for the car on the right to move through first, creating a deadlock. An observer quipped that the first "fully automated traffic jam" had occurred. (At least it's better than potentially having all four trying to go through ''at the same time''. Putting the "safe" in "fail safe", if you will....)
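The four-way standoff in this entry is a textbook circular wait: every car's condition for moving depends on a neighbor that is itself still waiting. A minimal sketch (with hypothetical yield logic, not Google's actual code) shows why no car ever moves, and why breaking the symmetry fixes it:

```python
def run_intersection(rules):
    """rules maps each car to the car on its right, or None if that slot is empty.
    A car may proceed only once the car on its right has already gone through."""
    gone = set()
    moved = True
    while moved:
        moved = False
        for car, right in rules.items():
            if car not in gone and (right is None or right in gone):
                gone.add(car)   # its right-hand neighbor has cleared, so it may go
                moved = True
    return gone  # the cars that eventually got through

# Four cars, each yielding to the car on its right -> circular wait, total deadlock.
four_way = {"N": "E", "E": "S", "S": "W", "W": "N"}
print(sorted(run_intersection(four_way)))   # -> []

# Leave one slot empty and the chain unwinds: everyone gets through.
three_way = {"N": None, "E": "N", "S": "E"}
print(sorted(run_intersection(three_way)))  # -> ['E', 'N', 'S']
```

Real systems break such ties by introducing an asymmetry, such as a randomized timeout, so that at least one waiter's condition eventually becomes true.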
Is there an issue? Send a MessageReason:
None


* As part of committing to compliance with COPPA, ''Website/YouTube'' implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately however, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/{{Cupcakes}}'' (which is ''[[LudicrousGibs full of gore]]'') and ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), and even [[UpToEleven straight up ignores elements such as swear words in the title and a previously established age restriction]], potentially exposing children to inappropriate content.

to:

* As part of committing to compliance with COPPA, ''Website/YouTube'' implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately however, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/{{Cupcakes}}'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), the music video for "[=MopeMope=]" (an infamous SurpriseCreepy song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids), and even [[UpToEleven straight up ignores elements such as swear words in the title and a previously established age restriction]], potentially exposing children to inappropriate content.
Is there an issue? Send a MessageReason:
None

Added DiffLines:

* [[https://www.youtube.com/channel/UC8huou5jTYkbv8m0WFgeXag Google]] [[https://www.youtube.com/user/malineka146 Translate.]]

Changed: 1088

Removed: 924

Is there an issue? Send a MessageReason:
None


* In the first annual Loebner Prize contest to find the most humanlike chatbot, the winner won in part because it could imitate human typing errors. One runner-up also got its high score by pretending to be a paranoid autistic seven-year-old. ''The Economist'''s use of the term "artificial stupidity" to describe the winner's technique may be the TropeNamer.
** Jason Hutchens infamously won the Loebner prize by taking a relatively stupid AI, [=MegaHal=], and fitting a shell around it that attempted to detect the most common questioning patterns used by ''judges'' and respond to them in the ways that previously got the best responses from those judges. His resulting paper was titled "How to pass the Turing Test by cheating".

to:

* In the first annual Loebner Prize contest to find the most humanlike chatbot, the winner won in part because it could imitate human typing errors. One runner-up also got its high score by pretending to be a paranoid autistic seven-year-old. ''The Economist'''s use of the term "artificial stupidity" to describe the winner's technique may be the TropeNamer. Jason Hutchens infamously won the Loebner prize by taking a relatively stupid AI, [=MegaHal=], and fitting a shell around it that attempted to detect the most common questioning patterns used by ''judges'' and respond to them in the ways that previously got the best responses from those judges. His resulting paper was titled "How to pass the Turing Test by cheating".



* Your average GPS will work fine most of the time. However, there are instances where one will send a driver out to the middle of a field, expect them to make a pair of very unsafe and nearly-impossible turns (on US roads, for example: "Turn right, and then turn left in 200 feet even though you'd have to cross five lanes in rush hour traffic to do so"), or give them the most indirect route possible. The infamous older versions of Apple Maps would have occasional instances of providing good directions. Most of the time, they would present strange, winding routes, which might even ask the user to drive across an airport runway or two.
** Sometimes, it can't get out of its own way. Set it to avoid tolls when possible and it will send you on a 400 mile trek to avoid a $3.00 toll.
* This trope is why automated cars, such as [[https://en.wikipedia.org/wiki/Google_driverless_car those being developed by]] Website/{{Google}}, are not in mass production yet. Take the aforementioned potential GPS errors and also factor in the possibility of ''fatal accidents''. More humorously, in one test of the driverless cars, four of them pulled up to a stop sign at the same time and each waited for the car on the right to move through first, creating a deadlock. An observer quipped that the first "fully automated traffic jam" had occurred.

to:

* Your average GPS will work fine most of the time. However, there are instances where one will send a driver out to the middle of a field, expect them to make a pair of very unsafe and nearly-impossible turns (on US roads, for example: "Turn right, and then turn left in 200 feet even though you'd have to cross five lanes in rush hour traffic to do so"), or give them the most indirect route possible. The infamous older versions of Apple Maps would have occasional instances of providing good directions. Most of the time, they would present strange, winding routes, which might even ask the user to drive across an airport runway or two. Sometimes, it can't get out of its own way. Set it to avoid tolls when possible and it will send you on a 400 mile trek to avoid a $3.00 toll.
* This trope is why automated cars, such as [[https://en.wikipedia.org/wiki/Google_driverless_car those being developed by]] Website/{{Google}}, are not in mass production yet. Take the aforementioned potential GPS errors and also factor in the possibility of ''fatal accidents''. More humorously, in one test of the driverless cars, four of them pulled up to a stop sign at the same time and each waited for the car on the right to move through first, creating a deadlock. An observer quipped that the first "fully automated traffic jam" had occurred.


* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues — the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect — led to the project being canned after 50 vehicles were produced.

to:

* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It would undershoot at ground vehicles by 300m. And if it aimed up, the guns would disrupt the radar system. A plethora of mechanical and design issues — the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, water could foul the system, and it was slower than the vehicles it was designed to protect — led to the project being canned after 50 vehicles were produced. It's widely suspected that bribery was involved in the M247's selection for production in the first place, since even before all of this happened, it had consistently ''lost'' the shoot-out competitions with the competing XM246 design.


* As part of committing to compliance with COPPA, ''Website/YouTube'' implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately however, this bot does not seem to be very well trained as it has flagged videos such as the animated version of ''Fanfic/{{Cupcakes}}'' (which is ''[[LudicrousGibs full of gore]]'') and ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), potentially exposing children to inappropriate content.

to:

* As part of committing to compliance with COPPA, ''Website/YouTube'' implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately however, this bot does not seem to be very well trained as it has flagged videos such as the animated version of ''Fanfic/{{Cupcakes}}'' (which is ''[[LudicrousGibs full of gore]]'') and ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel) and even [[UpToEleven straight up ignores elements such as swear words in the title and a previously established age restriction]], potentially exposing children to inappropriate content.

Added DiffLines:

* As part of committing to compliance with COPPA, ''Website/YouTube'' implemented a bot that is supposed to check whether a video in question is made for kids or not and flag it as such. Unfortunately however, this bot does not seem to be very well trained as it has flagged videos such as the animated version of ''Fanfic/{{Cupcakes}}'' (which is ''[[LudicrousGibs full of gore]]'') and ''WebVideo/DontHugMeImScared'' (which was thankfully reversed manually by the channel), potentially exposing children to inappropriate content.


* Norton Antivirus. Which, according to the DarthWiki/IdiotProgramming page, has been known to classify ''itself'' as a virus. [[HilarityEnsues Hilarity, and digital suicide, ensues]]. Few people who have had to uninstall the blasted thing manually would dispute the accuracy of this assessment. Some other antivirus programs, like older versions of [=McAfee=]'s, can delete or quarantine themselves as well. Norton has repeatedly been accused of being ''intentionally bad'' software. It's often regarded as a case of actual malware (and it is genuinely harmful software, far worse than most viruses even when working as designed) that flies under the radar thanks to taking RefugeInAudacity and selling itself as boxed copies.

to:

* Norton Antivirus. Which, according to the DarthWiki/IdiotProgramming page, has been known to classify ''itself'' as a virus. [[HilarityEnsues Hilarity, and digital suicide, ensues]]. Few people who have had to uninstall the blasted thing manually would dispute the accuracy of this assessment. Some other antivirus programs, like older versions of [=McAfee=]'s, can delete or quarantine themselves as well. Norton has repeatedly been accused of being ''intentionally bad'' software. It's often regarded as a case of actual malware (and it is genuinely harmful software, far worse than most viruses even when working as designed) that flies under the radar thanks to taking RefugeInAudacity and selling itself as boxed copies. Additionally, Symantec had to create a special program just for the purpose of uninstalling Norton products safely, dubbed the "Norton Removal Tool".

Added DiffLines:

* An unfortunate variety of this hit the Boeing 737 MAX, whose anti-stall logic trusted a single faulty angle-of-attack sensor while ignoring every other sensor, repeatedly pitching the nose down until it flew two separate aircraft into the ground.

Added DiffLines:

** Sometimes, it can't get out of its own way. Set it to avoid tolls when possible and it will send you on a 400 mile trek to avoid a $3.00 toll.


* This trope is why automated cars, such as [[https://en.wikipedia.org/wiki/Google_driverless_car those being developed by]] Website/{{Google}}, are not in mass production yet. Take the aforementioned potential GPS errors and also factor in the possibility of ''fatal accidents''.

to:

* This trope is why automated cars, such as [[https://en.wikipedia.org/wiki/Google_driverless_car those being developed by]] Website/{{Google}}, are not in mass production yet. Take the aforementioned potential GPS errors and also factor in the possibility of ''fatal accidents''. More humorously, in one test of the driverless cars, four of them pulled up to a stop sign at the same time and each waited for the car on the right to move through first, creating a deadlock. An observer quipped that the first "fully automated traffic jam" had occurred.
