History ArtificialStupidity / RealLife


MessageReason: None


* Google Gemini's image generation came under fire for producing mainly images of people of colour. On paper, this wouldn't be much of an issue, and arguably even a good idea. [[https://knowyourmeme.com/memes/google-gemini-diverse-prompt-injection However, problems arise when the AI's stance on diversity clashes with the historical record]], with people of color depicted in historically white time periods and societal roles, like 1800s Germany or the Pope, among other examples, resulting in a huge RaceLift. Gemini seemed to omit white people entirely, and its introductory sentence gave the game away by repeatedly using the word "diverse" and the phrase "people with diverse backgrounds". The clumsy execution of placing non-Caucasian people in historically inaccurate roles was met with massive criticism and MemeticMutation. Google was simply appending text to every image prompt, specifying that the people in the image should be "diverse" -- which the AI interpreted as meaning "nonwhite". But that wasn't the only weird thing going on with Gemini where race was concerned. It was also trained to refuse explicit requests to draw White people, on the grounds that such images would perpetuate "harmful stereotypes" (despite demonstrably having no problem depicting stereotypes of Native Americans and Asians). And it refused to draw a painting in the style of Norman Rockwell, on the grounds that Rockwell's paintings presented too idealized a picture of 1940s America and could thus "perpetuate harmful stereotypes". Basically ArtisticLicenseHistory, as committed by an AI art generator. Google had to quickly pull it out to iron it out.

to:

* Google Gemini's image generation came under fire for producing mainly images of people of colour. On paper, this wouldn't be much of an issue, and arguably even a good idea. [[https://knowyourmeme.com/memes/google-gemini-diverse-prompt-injection However, problems arise when the AI's stance on diversity clashes with the historical record]], with people of color depicted in historically white time periods and societal roles, like 1800s Germany or the Pope, among other examples, resulting in a huge RaceLift. Gemini seemed to omit white people entirely, and its introductory sentence gave the game away by repeatedly using the word "diverse" and the phrase "people with diverse backgrounds". The clumsy execution of placing non-Caucasian people in historically inaccurate roles was met with massive criticism and MemeticMutation. Google was simply appending text to every image prompt, specifying that the people in the image should be "diverse" -- which the AI interpreted as meaning "nonwhite". But that wasn't the only weird thing going on with Gemini where race was concerned. It was also trained to refuse explicit requests to draw White people, on the grounds that such images would perpetuate "harmful stereotypes" (despite demonstrably having no problem depicting stereotypes of Native Americans and Asians). And it refused to draw a painting in the style of Norman Rockwell, on the grounds that Rockwell's paintings presented too idealized a picture of 1940s America and could thus "perpetuate harmful stereotypes". Basically ArtisticLicenseHistory, as committed by an AI art generator. Google had to quickly pull it out to iron the kinks out.
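For illustration, the mechanism described above (blanket prompt injection) is easy to reproduce in miniature. The sketch below is a hypothetical reconstruction, not Google's actual pipeline; the function names, suffix wording, and marker list are invented for the example:

```python
# Hypothetical sketch of blanket prompt injection (not Google's actual code).
# A fixed instruction is appended to every image prompt, even when it
# contradicts the historical or person-specific context of the request.

INJECTED_SUFFIX = ", depicting people with diverse backgrounds"

def build_image_prompt(user_prompt: str) -> str:
    """Naive injection: the suffix is added unconditionally."""
    return user_prompt + INJECTED_SUFFIX

def build_image_prompt_safer(user_prompt: str) -> str:
    """A minimal guard: skip the injection when the prompt is anchored to a
    specific historical period or named role, where demographics are not a
    free variable. Keyword matching is a crude stand-in for real intent
    classification."""
    historical_markers = ("1800s", "medieval", "pope", "founding fathers",
                          "viking", "samurai")
    if any(marker in user_prompt.lower() for marker in historical_markers):
        return user_prompt
    return user_prompt + INJECTED_SUFFIX

if __name__ == "__main__":
    prompt = "a German couple in the 1800s"
    print(build_image_prompt(prompt))        # suffix contradicts the setting
    print(build_image_prompt_safer(prompt))  # prompt left alone
```

The point of the sketch is that the failure lives in the wrapper, not the image model: an unconditional suffix overrides whatever historical context the user supplied.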
MessageReason: None


* Google Gemini's image generation came under fire for producing mainly images of people of colour. On paper, this wouldn't be much of an issue, and arguably even a good idea. [[https://knowyourmeme.com/memes/google-gemini-diverse-prompt-injection However, problems arise when the AI's stance on diversity clashes with the historical record]], with people of color depicted in historically white time periods and societal roles, like 1800s Germany or the Pope, among other examples, resulting in a huge RaceLift. Gemini seemed to omit white people entirely, and its introductory sentence gave the game away by repeatedly using the word "diverse" and the phrase "people with diverse backgrounds". The clumsy execution of placing non-Caucasian people in historically inaccurate roles was met with massive criticism and MemeticMutation. Basically ArtisticLicenseHistory, as committed by an AI art generator. Google had to quickly pull it out to iron it out.

to:

* Google Gemini's image generation came under fire for producing mainly images of people of colour. On paper, this wouldn't be much of an issue, and arguably even a good idea. [[https://knowyourmeme.com/memes/google-gemini-diverse-prompt-injection However, problems arise when the AI's stance on diversity clashes with the historical record]], with people of color depicted in historically white time periods and societal roles, like 1800s Germany or the Pope, among other examples, resulting in a huge RaceLift. Gemini seemed to omit white people entirely, and its introductory sentence gave the game away by repeatedly using the word "diverse" and the phrase "people with diverse backgrounds". The clumsy execution of placing non-Caucasian people in historically inaccurate roles was met with massive criticism and MemeticMutation. Google was simply appending text to every image prompt, specifying that the people in the image should be "diverse" -- which the AI interpreted as meaning "nonwhite". But that wasn't the only weird thing going on with Gemini where race was concerned. It was also trained to refuse explicit requests to draw White people, on the grounds that such images would perpetuate "harmful stereotypes" (despite demonstrably having no problem depicting stereotypes of Native Americans and Asians). And it refused to draw a painting in the style of Norman Rockwell, on the grounds that Rockwell's paintings presented too idealized a picture of 1940s America and could thus "perpetuate harmful stereotypes". Basically ArtisticLicenseHistory, as committed by an AI art generator. Google had to quickly pull it out to iron it out.
MessageReason: None


* Google Gemini's image generation came under fire for producing mainly images of people of colour. On paper, this wouldn't be much of an issue, and arguably even a good idea. [[https://knowyourmeme.com/memes/google-gemini-diverse-prompt-injection However, problems arise when the AI's stance on diversity clashes with the historical record]], with people of color depicted in historically white time periods and societal roles, like 1800s Germany or the Pope, among other examples. Gemini seemed to omit white people entirely, and its introductory sentence gave the game away by repeatedly using the word "diverse" and the phrase "people with diverse backgrounds". The clumsy execution of placing non-Caucasian people in historically inaccurate roles was met with massive criticism and MemeticMutation. Basically ArtisticLicenseHistory, as committed by an AI art generator. Google had to quickly pull it out to iron it out.

to:

* Google Gemini's image generation came under fire for producing mainly images of people of colour. On paper, this wouldn't be much of an issue, and arguably even a good idea. [[https://knowyourmeme.com/memes/google-gemini-diverse-prompt-injection However, problems arise when the AI's stance on diversity clashes with the historical record]], with people of color depicted in historically white time periods and societal roles, like 1800s Germany or the Pope, among other examples, resulting in a huge RaceLift. Gemini seemed to omit white people entirely, and its introductory sentence gave the game away by repeatedly using the word "diverse" and the phrase "people with diverse backgrounds". The clumsy execution of placing non-Caucasian people in historically inaccurate roles was met with massive criticism and MemeticMutation. Basically ArtisticLicenseHistory, as committed by an AI art generator. Google had to quickly pull it out to iron it out.
MessageReason: None


* As part of its commitment to COPPA compliance, Website/YouTube implemented a bot that is supposed to check whether a video is made for kids and flag it as such. Unfortunately, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/CupcakesSergeantSprinkles'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (a flag that was thankfully reversed manually by the channel), and the music video for "[=MopeMope=]" (an infamous {{Disguised Horror|Story}} song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids). It even straight-up ignores elements such as swear words in the title and a previously established age restriction, potentially exposing children to inappropriate content.

to:

* As part of its commitment to COPPA compliance, Website/YouTube implemented a bot that is supposed to check whether a video is made for kids and flag it as such. Unfortunately, this bot does not seem to be very well trained, as it has flagged videos such as the animated version of ''Fanfic/CupcakesSergeantSprinkles'' (which is ''[[LudicrousGibs full of gore]]''), ''WebVideo/DontHugMeImScared'' (a flag that was thankfully reversed manually by the channel), and the music video for "[=MopeMope=]" (an infamous {{Disguised Horror|Story}} song, and this happened ''despite'' the creator putting a warning at the top of the description that the song is not for kids). It even straight-up ignores elements such as swear words in the title and a previously established age restriction, potentially exposing children to inappropriate content. Sometimes the bot would even remove a video for "Child Safety" despite it being marked as "not made for kids".
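As an aside on why classifiers like this misfire: a model leaning on surface metadata (titles, tags, thumbnails) cannot see the content itself. The toy classifier below is purely illustrative; the signal lists and scoring are invented, and YouTube's real system is proprietary and far more complex:

```python
# Toy "made for kids" classifier illustrating surface-feature failure.
# NOT YouTube's model; it shows how child-friendly metadata can outvote
# the actual content of the video.

KID_SIGNALS = {"pony", "cupcakes", "cartoon", "animated", "colorful", "song"}
ADULT_SIGNALS = {"gore", "horror", "explicit", "swearing"}

def made_for_kids(title: str, tags: set[str]) -> bool:
    words = set(title.lower().split()) | tags
    kid_score = len(words & KID_SIGNALS)
    adult_score = len(words & ADULT_SIGNALS)
    # The flaw: the gore in the video itself never appears in the metadata,
    # so the kid-friendly signals win by default.
    return kid_score > adult_score

# An animated gorefest whose metadata looks child-friendly:
print(made_for_kids("Cupcakes - animated pony short", {"cartoon"}))  # True
```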
MessageReason: None

Added DiffLines:

* Google Gemini's image generation came under fire for producing mainly images of people of colour. On paper, this wouldn't be much of an issue, and arguably even a good idea. [[https://knowyourmeme.com/memes/google-gemini-diverse-prompt-injection However, problems arise when the AI's stance on diversity clashes with the historical record]], with people of color depicted in historically white time periods and societal roles, like 1800s Germany or the Pope, among other examples. Gemini seemed to omit white people entirely, and its introductory sentence gave the game away by repeatedly using the word "diverse" and the phrase "people with diverse backgrounds". The clumsy execution of placing non-Caucasian people in historically inaccurate roles was met with massive criticism and MemeticMutation. Basically ArtisticLicenseHistory, as committed by an AI art generator. Google had to quickly pull it out to iron it out.
MessageReason: None


** [=ChatGPT=] troubles extend far beyond the world of law. Trained language model programs like [=ChatGPT=] don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while [=ChatGPT=] can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example, [[https://www.youtube.com/watch?v=FojyYKU58cw/ here is a video of it trying and failing to play chess against Google's equivalent bot]].

to:

** [=ChatGPT=] troubles extend far beyond the world of law. Trained language model programs like [=ChatGPT=] don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while [=ChatGPT=] can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example, [[https://www.youtube.com/watch?v=FojyYKU58cw/ here is a video of it trying and failing to play chess against Google's equivalent bot]], [[https://www.youtube.com/watch?v=GneReITaRvs and another with appropriate sound effects]].
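The chess failure is easy to demonstrate, because chess has a ground truth the text predictor ignores: the legal-move list. A minimal harness using the python-chess library is sketched below; fake_llm_move is an invented stand-in for querying a real language model:

```python
# Why LLM "chess" breaks: each suggested move is checked against the real
# board state, which a next-token predictor does not track.
import chess

def fake_llm_move(board: chess.Board) -> str:
    # Stand-in for an LLM: emits plausible-looking notation (a common
    # opening move) regardless of the actual position.
    return "Nf3"

board = chess.Board()
for _ in range(6):
    san = fake_llm_move(board)
    try:
        move = board.parse_san(san)  # raises ValueError if illegal here
    except ValueError:
        print(f"Illegal move suggested: {san} in {board.fen()}")
        break
    board.push(move)
```

Run as-is, the second suggestion is already illegal for Black, which mirrors how the bots in the linked videos drift into impossible positions.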
MessageReason: Added example(s)


* Your average GPS will work fine most of the time. However, there are instances where one will send a driver out into the middle of a field, expect them to make a pair of very unsafe and nearly impossible turns (on US roads, for example: "Turn right, and then turn left in 200 feet, even though you'd have to cross five lanes of rush-hour traffic to do so"), or give them the most indirect route possible. The infamous older versions of Apple Maps would have occasional instances of providing good directions; most of the time, they would present strange, winding routes, which might even ask the user to drive across an airport runway or two. Sometimes a GPS can't get out of its own way: set it to avoid tolls when possible and it will send you on a 400-mile trek to avoid a $3.00 toll.
* This trope is why automated cars, such as [[https://en.wikipedia.org/wiki/Google_driverless_car those being developed by]] Website/{{Google}}, are not in mass production yet. Take the potential GPS errors and also factor in the possibility of ''fatal accidents''. More humorously, in one test of the driverless cars, four of them pulled up to a stop sign at the same time, and each waited for the car on its right to move through first, creating a deadlock. An observer quipped that the first "fully automated traffic jam" had occurred. (At least it's better than potentially having all four trying to go through ''at the same time''. Putting the "safe" in "fail safe", if you will....)

to:

* Your average GPS will work fine most of the time. However, there are instances where one will send a driver out into the middle of a field, expect them to make a pair of very unsafe and nearly impossible turns (on US roads, for example: "Turn right, and then turn left in 200 feet, even though you'd have to cross five lanes of rush-hour traffic to do so"), or give them the most indirect route possible. The infamous older versions of Apple Maps would have occasional instances of providing good directions; most of the time, they would present strange, winding routes, which might even ask the user to drive across an airport runway or two. Sometimes a GPS can't get out of its own way: set it to avoid tolls when possible and it will send you on a 400-mile trek to avoid a $3.00 toll. Another case involved people heading home to Los Angeles after the 2023 Las Vegas Grand Prix: Google Maps, trying to route around a dust storm, redirected drivers onto a mountain road in the middle of the desert that slowly turned into a hiking trail and then into nothing, causing extensive damage to many of the cars.
* This trope is why automated cars, such as [[https://en.wikipedia.org/wiki/Google_driverless_car those being developed by]] Website/{{Google}}, are not in mass production yet. Take the potential GPS errors and also factor in the possibility of ''fatal accidents''. More humorously, in one test of the driverless cars, four of them pulled up to a stop sign at the same time, and each waited for the car on its right to move through first, creating a deadlock. An observer quipped that the first "fully automated traffic jam" had occurred. (At least it's better than potentially having all four trying to go through ''at the same time''. Putting the "safe" in "fail safe", if you will....) Tesla's Full Self-Driving also has a tendency to [[https://electrek.co/2020/06/25/tesla-autopilot-confuses-burger-king-stop-signs-ad-campaign/ stop at Burger King signs]], mistaking them for stop signs.
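The four-car standoff is a textbook circular wait: each car's "yield to the car on your right" rule forms a cycle, so no one moves. The sketch below illustrates that logic only; the car labels and the exact yield convention are assumptions, not the actual control code:

```python
# The four-way-stop standoff as a circular-wait deadlock. Each car yields
# to the car on its right; with all four present, the wait-for graph is a
# cycle and nobody ever gets priority.

waits_for = {"north": "west", "west": "south", "south": "east", "east": "north"}

def find_wait_cycle(start: str) -> list[str]:
    """Follow wait-for edges until a car repeats: that's the deadlock."""
    seen, current = [], start
    while current not in seen:
        seen.append(current)
        current = waits_for[current]
    return seen + [current]

print(find_wait_cycle("north"))
# ['north', 'west', 'south', 'east', 'north'] -- a cycle, hence deadlock.
```

Real systems break ties like this with an arbitrary priority ordering or a timeout, which is presumably the kind of fix the engineers added afterwards.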
MessageReason: No real life
MessageReason: No real life


* Probably the worst EpicFail in the history of computer chess occurred in [[http://en.lichess.org/aooMurBn#1 the game played by COKO III against GENIE]] in the 1971 ACM North American Computer Chess Championship. COKO had captured all the Black pieces, trapped the Black king, and was all set to checkmate. But COKO [[https://www.chessprogramming.org/Coko apparently]] thought there was something better than mate in one, for seven moves in a row, instead shuffling the White king back and forth. GENIE, which meanwhile had been pushing its Black pawns and promoting one to a queen, proceeded to exchange its new queen for all the White pieces and a couple of pawns. By the time Black was about to queen another pawn, COKO's programmers resigned.

to:

* Probably the worst fail in the history of computer chess occurred in [[http://en.lichess.org/aooMurBn#1 the game played by COKO III against GENIE]] in the 1971 ACM North American Computer Chess Championship. COKO had captured all the Black pieces, trapped the Black king, and was all set to checkmate. But COKO [[https://www.chessprogramming.org/Coko apparently]] thought there was something better than mate in one, for seven moves in a row, instead shuffling the White king back and forth. GENIE, which meanwhile had been pushing its Black pawns and promoting one to a queen, proceeded to exchange its new queen for all the White pieces and a couple of pawns. By the time Black was about to queen another pawn, COKO's programmers resigned.
MessageReason: None

Added DiffLines:

* Intuit's popular QuickBooks accounting software uses AI to automatically categorize bank and credit card transactions. This AI is pretty good in the desktop version of QuickBooks, but in the cloud-based QuickBooks Online... not so much. The QuickBooks Online AI is known to commit various gaffes, such as mixing up utilities and restaurants, putting various expenses in the "Uncategorized Asset" account (which makes them very difficult to reclassify), ignoring everything except the first couple of words in transaction data, and not remembering the user's previous choices for similar transactions.
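For a feel of the "first couple of words" gaffe, here is a toy categorizer; the truncation rule is a guess at the failure mode and the rule table is invented, since Intuit has not published how the real model works:

```python
# Toy transaction categorizer illustrating the "first couple of words" bug.
# Hypothetical logic -- not Intuit's actual model.

RULES = {"city": "Utilities", "amazon": "Office Supplies"}

def categorize(description: str) -> str:
    # The flaw: only the first two words are considered, so any
    # distinguishing detail later in the description is thrown away.
    head = " ".join(description.lower().split()[:2])
    for keyword, category in RULES.items():
        if keyword in head:
            return category
    return "Uncategorized Asset"  # the dumping ground users struggle to fix

print(categorize("CITY OF SPRINGFIELD WATER DEPT"))     # Utilities (correct)
print(categorize("CITY DINER DOWNTOWN"))                # Utilities (a restaurant!)
print(categorize("SPRINGFIELD POWER CO MONTHLY BILL"))  # Uncategorized Asset
```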
MessageReason: None


* In TheNew20s, advances in AI technology have allowed computers to create illustrations, with one popular application being illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distinguish from those of human artists, they infamously share a quirk of drawing deformed human hands and [[TheUnintelligible writing completely incoherent text]].
* [=ChatGPT=] can generate citations, but whether those citations refer to anything that actually exists is another matter. It will cheerfully hand you a list of sources and swear up and down that they're definitely real, yes indeed, for sure... and if you're unwise enough not to double-check, you'll end up like the lawyer who's now in trouble [[https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/ for submitting a ChatGPT-generated official court filing with six fake cases]].

to:

* In TheNew20s, advances in AI technology have allowed computers to create illustrations, with one popular application being illustrations of humans in an anime art style. Although [=AIs=] have been able to refine themselves to draw illustrations that are difficult to distinguish from those of human artists, they infamously share a quirk of drawing deformed human hands and [[TheUnintelligible writing completely incoherent text]].
* [=ChatGPT=] can generate citations, but whether those citations refer to anything that actually exists is another matter. It will cheerfully hand you a list of sources and swear up and down that they're definitely real, yes indeed, for sure... and if you're unwise enough not to double-check, you'll end up like the lawyers who got in hot water [[https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/ for submitting a ChatGPT-generated official court filing that cited six non-existent cases]].
MessageReason: None


* Meta (formerly Website/{{Facebook}}) unveiled a large language model AI called Galactica on November 15th, 2022... and abruptly shut it away again on the 18th. It was intended to assist in scientific research and answer basic questions by drawing from a huge repository of scientific papers, but users soon found that it would gleefully churn out obvious falsehoods and complete nonsense, such as wiki articles on [[Film/BackToTheFuture flux capacitors]], the [[Creator/MerylStreep Streep-]][[Creator/JerrySeinfeld Seinfeld]] Theorem, [[https://twitter.com/Meaningness/status/1592634519269822464/photo/2 "bears living in space,"]] or how the central story of the ''Harry Potter'' series is the [[HoYay gay romance]] between Harry and Ron.

to:

* Meta (formerly Website/{{Facebook}}) unveiled a large language model AI called Galactica on November 15th, 2022... and abruptly shut it away again on the 18th. It was intended to assist in scientific research and answer basic questions by drawing from a huge repository of scientific papers, but users soon found that it would gleefully churn out obvious falsehoods and complete nonsense, such as wiki articles on [[Franchise/BackToTheFuture flux capacitors]], the [[Creator/MerylStreep Streep-]][[Creator/JerrySeinfeld Seinfeld]] Theorem, [[https://twitter.com/Meaningness/status/1592634519269822464/photo/2 "bears living in space,"]] or how the central story of the ''Harry Potter'' series is the [[HoYay gay romance]] between Harry and Ron.
MessageReason: None


* Meta (formerly Facebook) unveiled a large language model AI called Galactica on November 15th, 2022... and abruptly shut it away again on the 18th. It was intended to assist in scientific research and answer basic questions by drawing from a huge repository of scientific papers, but users soon found that it would gleefully churn out obvious falsehoods and complete nonsense, such as wiki articles on [[Film/BackToTheFuture flux capacitors]], the [[Creator/MerylStreep Streep-]][[Creator/JerrySeinfeld Seinfeld]] Theorem, [[https://twitter.com/Meaningness/status/1592634519269822464/photo/2 "bears living in space,"]] or how the central story of the ''Harry Potter'' series is the [[HoYay gay romance]] between Harry and Ron.

to:

* Meta (formerly Website/{{Facebook}}) unveiled a large language model AI called Galactica on November 15th, 2022... and abruptly shut it away again on the 18th. It was intended to assist in scientific research and answer basic questions by drawing from a huge repository of scientific papers, but users soon found that it would gleefully churn out obvious falsehoods and complete nonsense, such as wiki articles on [[Film/BackToTheFuture flux capacitors]], the [[Creator/MerylStreep Streep-]][[Creator/JerrySeinfeld Seinfeld]] Theorem, [[https://twitter.com/Meaningness/status/1592634519269822464/photo/2 "bears living in space,"]] or how the central story of the ''Harry Potter'' series is the [[HoYay gay romance]] between Harry and Ron.
MessageReason: None


** [=ChatGPT=] troubles extend far beyond the world of law. Trained language model programs like [=ChatGPT=] don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while [=ChatGPT=] can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example,[[https://www.youtube.com/watch?v=FojyYKU58cw/ here is a video of it trying and failing to play chess against Google's equivalent bot]].

to:

** [=ChatGPT=] troubles extend far beyond the world of law. Trained language model programs like [=ChatGPT=] don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while [=ChatGPT=] can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example, [[https://www.youtube.com/watch?v=FojyYKU58cw/ here is a video of it trying and failing to play chess against Google's equivalent bot]].
MessageReason: None


** ChatGPT troubles extend far beyond the world of law. Trained language model programs like ChatGPT don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while ChatGPT can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example,[[https://www.youtube.com/watch?v=FojyYKU58cw/ here is a video of it trying and failing to play chess against Google's equivalent bot]].

to:

** [=ChatGPT=] troubles extend far beyond the world of law. Trained language model programs like [=ChatGPT=] don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while [=ChatGPT=] can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example,[[https://www.youtube.com/watch?v=FojyYKU58cw/ here is a video of it trying and failing to play chess against Google's equivalent bot]].
MessageReason: None


** ChatGPT troubles extend far beyond the world of law. Trained language model programs like ChatGPT don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while ChatGPT can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example, [[https://www.youtube.com/watch?v=FojyYKU58cw/a video of it trying and failing to play chess against Google's equivalent bot]].

to:

** ChatGPT troubles extend far beyond the world of law. Trained language model programs like ChatGPT don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while ChatGPT can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example,[[https://www.youtube.com/watch?v=FojyYKU58cw/ here is a video of it trying and failing to play chess against Google's equivalent bot]].
MessageReason: None


** ChatGPT troubles extend far beyond the world of law. Trained language model programs like ChatGPT don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while ChatGPT can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example, [[https://www.youtube.com/watch?v=FojyYKU58cw/ChatGPT trying and failing to play chess against Google's equivalent bot]].

to:

** ChatGPT troubles extend far beyond the world of law. Trained language model programs like ChatGPT don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while ChatGPT can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example, [[https://www.youtube.com/watch?v=FojyYKU58cw/a video of it trying and failing to play chess against Google's equivalent bot]].
MessageReason: None

Added DiffLines:

** ChatGPT troubles extend far beyond the world of law. Trained language model programs like ChatGPT don't really understand what they are doing; instead, they simply try to give a result that looks like what they think the person giving the prompt wants to hear. This means that while ChatGPT can do seemingly amazing things, it also makes some basic mistakes that not even a calculator would make. For example, [[https://www.youtube.com/watch?v=FojyYKU58cw/ChatGPT trying and failing to play chess against Google's equivalent bot]].
MessageReason: None


* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (there were only minor injuries). The system had difficulties distinguishing between helicopters and trees. It once mistook the ventilation fan of an outhouse for a helicopter. It would undershoot at ground vehicles by 300m. And if it aimed up, the gun barrels would disrupt the radar system. A plethora of mechanical and design issues — the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, the electronics could be disabled by getting wet, and it was slower than the vehicles it was designed to protect — led to the project being canned after 50 vehicles were produced. It's widely suspected that bribery must have been involved in the M247 being selected for production in the first place, seeing as even before all of this happened, it had consistently ''lost'' the shoot-out competitions with the competing [=XM246=] design.

to:

* The [[http://en.wikipedia.org/wiki/M247_Sergeant_York M247 Sergeant York]] AntiAir vehicle was equipped with an automatic engagement system (DIVAD) so that it could target enemy planes and destroy them faster than the crew could react. In a demonstration, the DIVAD was activated and [[DisastrousDemonstration immediately started to aim the loaded cannons at the grandstands full of officers and politicians]] (the gun for this demonstration thankfully required human input to fire, so disaster was averted). The system had difficulties distinguishing between helicopters and trees. It once mistook the ventilation fan of an outhouse for a helicopter. It would undershoot at ground vehicles by 300m. And if it aimed up, the gun barrels would disrupt the radar system. A plethora of mechanical and design issues — the pathetic radar couldn't detect a drone target until it had four radar reflectors on it, the electronics could be disabled by getting wet, and it was slower than the vehicles it was designed to protect — led to the project being canned after 50 vehicles were produced. It's widely suspected that bribery must have been involved in the M247 being selected for production in the first place, seeing as even before all of this happened, it had consistently ''lost'' the shoot-out competitions with the competing [=XM246=] design.
MessageReason: None


* Probably the worst EpicFail in the history of computer chess occurred in [[http://en.lichess.org/aooMurBn#1 the game played by COKO III against GENIE]] in the 1971 ACM North American Computer Chess Championship. COKO had captured all the Black pieces, trapped the Black king, and was all set to checkmate. But COKO overlooked mate in one for seven moves in a row, instead shuffling the White king back and forth. COKO was evidently suffering from a form of ParalysisByAnalysis common in early chess computers, which caused them to prefer longer winning combinations over shorter ones, but COKO somehow failed to play its obvious winning move even at the last possible moment. GENIE, which meanwhile had been pushing its Black pawns and promoting one to a queen, proceeded to exchange its new queen for all the White pieces and a couple of pawns. By the time Black was about to queen another pawn, COKO's programmers resigned.

to:

* Probably the worst EpicFail in the history of computer chess occurred in [[http://en.lichess.org/aooMurBn#1 the game played by COKO III against GENIE]] in the 1971 ACM North American Computer Chess Championship. COKO had captured all the Black pieces, trapped the Black king, and was all set to checkmate. But COKO [[https://www.chessprogramming.org/Coko apparently]] thought there was something better than mate in one, for seven moves in a row, instead shuffling the White king back and forth. GENIE, which meanwhile had been pushing its Black pawns and promoting one to a queen, proceeded to exchange its new queen for all the White pieces and a couple of pawns. By the time Black was about to queen another pawn, COKO's programmers resigned.
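The ParalysisByAnalysis bug described in the older revision above has a standard fix in chess programming: score a mate by its distance from the root of the search, so a mate in one strictly outranks a mate in seven. Below is a minimal sketch of the idea; it is illustrative only, not COKO's actual evaluation code:

```python
# If every forced mate gets the same score, the search has no reason to
# prefer the shortest one -- hence COKO's king-shuffling. Subtracting the
# ply distance makes nearer mates score strictly higher.

MATE_SCORE = 1_000_000

def mate_value(ply_from_root: int) -> int:
    """Score for delivering checkmate at the given ply."""
    return MATE_SCORE - ply_from_root

assert mate_value(1) > mate_value(7)   # mate in one now dominates
print(mate_value(1), mate_value(7))    # 999999 999993
```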
