History Main / ProsecutorsFallacy

10th May '17 9:20:29 PM Luigifan


* This was also a problem in [[https://en.wikipedia.org/wiki/People_v._Collins People v. Collins.]] A mixed-race couple (a black man with a mustache and beard and a white woman with blonde hair) were seen robbing an old woman and fleeing in a yellow car. The Collinses were a mixed race couple with the hair and car described. The prosecution famously claimed that the odds that such a couple existed in the area were 1 in 12 million, based on made-up statistics. [[note]]He asked his secretaries what they thought the odds were for a woman to be blonde, for a black man to have facial hair, and so on. He then gave these statistics to a mathematician, who treated each variable as if it was independent, even if it wasn't (men with beards are likely to have mustaches as well, but he treated them as independent variables.[[/note]] Even if the statistic was correct, and the likelihood of such a couple existing was 1 in 12 million, all it proves is that it is statistically unlikely for them to exist. Another couple, just as statistically unlikely, could have robbed the old woman. It doesn't mean that the chances that ''the Collinses weren't robbers'' was 1 in 12 million, though that was what the jury seemed to believe.

to:

* This was also a problem in [[https://en.wikipedia.org/wiki/People_v._Collins People v. Collins.]] A mixed-race couple (a black man with a mustache and beard and a white woman with blonde hair) were seen robbing an old woman and fleeing in a yellow car. The Collinses were a mixed race couple with the hair and car described. The prosecution famously claimed that the odds that such a couple existed in the area were 1 in 12 million, based on made-up statistics. [[note]]He asked his secretaries what they thought the odds were for a woman to be blonde, for a black man to have facial hair, and so on. He then gave these statistics to a mathematician, who treated each variable as if it was independent, even if it wasn't (men with beards are likely to have mustaches as well, but he treated them as independent variables).[[/note]] Even if the statistic was correct, and the likelihood of such a couple existing was 1 in 12 million, all it proves is that it is statistically unlikely for them to exist. Another couple, just as statistically unlikely, could have robbed the old woman. It doesn't mean that the chances that ''the Collinses weren't robbers'' was 1 in 12 million, though that was what the jury seemed to believe.
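The appellate court's rebuttal of the 1-in-12-million figure can be sketched numerically. A minimal Python sketch, taking the prosecution's frequency at face value and assuming, purely hypothetically, 12 million couples who could have committed the robbery:

```python
p = 1 / 12_000_000     # prosecution's claimed frequency of the profile
n = 12_000_000         # hypothetical number of couples in the region

# Binomial probabilities for how many couples match the profile.
p_none = (1 - p) ** n
p_exactly_one = n * p * (1 - p) ** (n - 1)

# We know at least one matching couple exists (one was seen). Given that,
# the chance that at least one *other* matching couple also exists:
p_second_match = (1 - p_none - p_exactly_one) / (1 - p_none)
print(round(p_second_match, 2))   # about 0.42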



* In ''The Poisoned Chocolates Case'' by Anthony Berkeley, Mr Bradley makes a list of twelve statements about the murderer, and declares that the odds against a random person meeting all the conditions are 4,790,000,516,458 to 1 against. But what he should be calculating is "What are the chances that, given that a particular person fulfills all the conditions, that person is the criminal?" -- which isn't the same thing at all. As Bradley goes on to points out that he himself meets all twelve conditions and is therefore, logically, the murderer, it's clear that he's only using the fallacy to {{troll}} his audience.
* In medicine, a test will have various numbers which indicate to the practitioner how much stock to put into the test's result. The four most commonly reported are: sensitivity (what percentage of people who have the tested condition test positive), specificity (what percentage without the condition test negatively), positive predictive value (what are the odds that a random positive answer means that someone is positive for the condition), and negative predictive value (odds a negative test means you don't have the condition. This fallacy is most similar to a situation where a test has a high sensitivity and high specificity, but a low positive predictive value.

to:

* In ''The Poisoned Chocolates Case'' by Anthony Berkeley, Mr Bradley makes a list of twelve statements about the murderer, and declares that the odds against a random person meeting all the conditions are 4,790,000,516,458 to 1 against. But what he should be calculating is "What are the chances that, given that a particular person fulfills all the conditions, that person is the criminal?" -- which isn't the same thing at all. As Bradley goes on to point out that he himself meets all twelve conditions and is therefore, logically, the murderer, it's clear that he's only using the fallacy to {{troll}} his audience.
* In medicine, a test will have various numbers which indicate to the practitioner how much stock to put into the test's result. The four most commonly reported are: sensitivity (what percentage of people who have the tested condition test positive), specificity (what percentage without the condition test negatively), positive predictive value (what are the odds that a random positive answer means that someone is positive for the condition), and negative predictive value (odds a negative test means you don't have the condition). This fallacy is most similar to a situation where a test has a high sensitivity and high specificity, but a low positive predictive value.
6th May '17 2:01:02 AM SMARTALIENQT

Added DiffLines:

* This was also a problem in [[https://en.wikipedia.org/wiki/People_v._Collins People v. Collins.]] A mixed-race couple (a black man with a mustache and beard and a white woman with blonde hair) were seen robbing an old woman and fleeing in a yellow car. The Collinses were a mixed race couple with the hair and car described. The prosecution famously claimed that the odds that such a couple existed in the area were 1 in 12 million, based on made-up statistics. [[note]]He asked his secretaries what they thought the odds were for a woman to be blonde, for a black man to have facial hair, and so on. He then gave these statistics to a mathematician, who treated each variable as if it was independent, even if it wasn't (men with beards are likely to have mustaches as well, but he treated them as independent variables.[[/note]] Even if the statistic was correct, and the likelihood of such a couple existing was 1 in 12 million, all it proves is that it is statistically unlikely for them to exist. Another couple, just as statistically unlikely, could have robbed the old woman. It doesn't mean that the chances that ''the Collinses weren't robbers'' was 1 in 12 million, though that was what the jury seemed to believe.
27th Nov '16 2:59:38 PM chc232323


* In ''The Poisoned Chocolates Case'' by Anthony Berkeley, Mr Bradley makes a list of twelve statements about the murderer, and declares that the odds against a random person meeting all the conditions are 4,790,000,516,458 to 1 against. But what he should be calculating is "What are the chances that, given that a particular person fulfills all the conditions, that person is the criminal?" -- which isn't the same thing at all. As Bradley goes on to points out that he himself meets all twelve conditions and is therefore, logically, the murderer, it's clear that he's only using the fallacy to {{troll}} his audience.

to:

* In ''The Poisoned Chocolates Case'' by Anthony Berkeley, Mr Bradley makes a list of twelve statements about the murderer, and declares that the odds against a random person meeting all the conditions are 4,790,000,516,458 to 1 against. But what he should be calculating is "What are the chances that, given that a particular person fulfills all the conditions, that person is the criminal?" -- which isn't the same thing at all. As Bradley goes on to points out that he himself meets all twelve conditions and is therefore, logically, the murderer, it's clear that he's only using the fallacy to {{troll}} his audience.
* In medicine, a test will have various numbers which indicate to the practitioner how much stock to put into the test's result. The four most commonly reported are: sensitivity (what percentage of people who have the tested condition test positive), specificity (what percentage without the condition test negatively), positive predictive value (what are the odds that a random positive answer means that someone is positive for the condition), and negative predictive value (odds a negative test means you don't have the condition. This fallacy is most similar to a situation where a test has a high sensitivity and high specificity, but a low positive predictive value.
** A concrete example: suppose an amazingly accurate test comes out that picks up on 99% of people with a disease and comes up negative in 99.9% of people without the disease. Now you test ten million people for a disease which occurs in 5 people per 100,000. There are 500 true cases of the disease. Your test identifies 495 of them. Your test also mislabels 1 in 1,000 as having the disease when they don't, which is approximately 10,000 people out of that 10 million! The positive predictive value is about 495/10,000, so a given positive test result only has an approximately 5% chance of actually identifying a person who has the condition.
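The arithmetic in the example above can be checked directly. A minimal Python sketch using only the figures given (99% sensitivity, 99.9% specificity, a prevalence of 5 per 100,000, and 10 million people tested):

```python
population = 10_000_000
prevalence = 5 / 100_000      # 5 cases per 100,000 people
sensitivity = 0.99            # P(test positive | has the disease)
specificity = 0.999           # P(test negative | healthy)

true_cases = population * prevalence                              # 500 people
true_positives = true_cases * sensitivity                         # 495 detected
false_positives = (population - true_cases) * (1 - specificity)   # ~10,000 healthy people flagged

# Positive predictive value: of all positive results, how many are real?
ppv = true_positives / (true_positives + false_positives)
print(ppv)   # about 0.047 -- a positive result is right only ~5% of the time
```

The test is "amazingly accurate" in both sensitivity and specificity, yet because healthy people vastly outnumber the sick, false positives swamp true positives.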
24th Sep '16 11:00:29 AM Josef5678


* This is also a favorite for conspiracy theorists when some (apparently) unlikely coincidence becomes part of the event in question. To use a WorldWar2 example, one radar site picked up the Japanese aircraft headed toward Pearl Harbor and reported the contact but were dismissed because entirely coincidentally a flight of aircraft from mainland was due to arrive at roughly the same time. This has been used by conspiracy freaks to argue the Japanese were allowed to attack because the odds of that sort of coincidence seem so remote.

to:

* This is also a favorite for conspiracy theorists when some (apparently) unlikely coincidence becomes part of the event in question. To use a UsefulNotes/WorldWarII example, one radar site picked up the Japanese aircraft headed toward Pearl Harbor and reported the contact but were dismissed because entirely coincidentally a flight of aircraft from mainland was due to arrive at roughly the same time. This has been used by conspiracy freaks to argue the Japanese were allowed to attack because the odds of that sort of coincidence seem so remote.
29th Jul '16 12:17:24 PM john_e


* The implicit assumption behind the ''Series/JudgeJudy''-ism "If it doesn't make sense, it isn't true."

to:

* The implicit assumption behind the ''Series/JudgeJudy''-ism "If it doesn't make sense, it isn't true."
* In ''The Poisoned Chocolates Case'' by Anthony Berkeley, Mr Bradley makes a list of twelve statements about the murderer, and declares that the odds against a random person meeting all the conditions are 4,790,000,516,458 to 1 against. But what he should be calculating is "What are the chances that, given that a particular person fulfills all the conditions, that person is the criminal?" -- which isn't the same thing at all. As Bradley goes on to points out that he himself meets all twelve conditions and is therefore, logically, the murderer, it's clear that he's only using the fallacy to {{troll}} his audience.
15th Aug '14 1:58:20 PM lucy24


* This is also a favorite for conspiracy theorists when some (apparently) unlikely coincidence becomes part of the event in question. To use a WorldWar2 example, one radar site picked up the Japanese aircraft headed toward Pearl Harbor and reported the contact but were dismissed because entirely coincidentally a flight of aircraft from mainland was due to arrive at roughly the same time. This has been used by conspiracy freaks to argue the Japanese were allowed to attack because the odds of that sort of coincidence seem so remote.

to:

* This is also a favorite for conspiracy theorists when some (apparently) unlikely coincidence becomes part of the event in question. To use a WorldWar2 example, one radar site picked up the Japanese aircraft headed toward Pearl Harbor and reported the contact but were dismissed because entirely coincidentally a flight of aircraft from mainland was due to arrive at roughly the same time. This has been used by conspiracy freaks to argue the Japanese were allowed to attack because the odds of that sort of coincidence seem so remote.
* The implicit assumption behind the ''Series/JudgeJudy''-ism "If it doesn't make sense, it isn't true."
13th Sep '13 12:02:23 PM BrendanDRizzo


** And this is when their statistics are even valid, instead of recognizing that the naturalistic explanation is ''not'' due to random chance. (For instance, creationists will claim that the odds of a peptide chain folding into precisely the dimensions of a functional protein is absurdly low, completely ignoring [[http://en.wikipedia.org/wiki/Anfinsen%27s_dogma that it has been demonstrated]] that the natural state of proteins is the one that is thermodynamically most stable, and so will always fold that way.

to:

** And this is when their statistics are even valid, instead of recognizing that the naturalistic explanation is ''not'' due to random chance. (For instance, creationists will claim that the odds of a peptide chain folding into precisely the dimensions of a functional protein is absurdly low, completely ignoring [[http://en.wikipedia.org/wiki/Anfinsen%27s_dogma that it has been demonstrated]] that the natural state of proteins is the one that is thermodynamically most stable, and so will always fold that way.)
13th Sep '13 12:02:02 PM BrendanDRizzo

Added DiffLines:

** And this is when their statistics are even valid, instead of recognizing that the naturalistic explanation is ''not'' due to random chance. (For instance, creationists will claim that the odds of a peptide chain folding into precisely the dimensions of a functional protein is absurdly low, completely ignoring [[http://en.wikipedia.org/wiki/Anfinsen%27s_dogma that it has been demonstrated]] that the natural state of proteins is the one that is thermodynamically most stable, and so will always fold that way.
1st Feb '13 3:27:25 PM infraredshirt


Rejecting an explanation for a particular event on the grounds that it requires a rare or unlikely event to have occurred, while ignoring that the favoured explanation might actually be even less likely. This fallacy ignores the fact that 'statistically improbable' doesn't mean 'impossible'.

As the name implies, this fallacy is a favorite of prosecutors in legal cases -- it can be quite convincing to argue, "How likely is it that this really happened the way the defendant said it did, if the odds of it happening that way are 1 in 10 million? Which is more believable -- that he's lying or that something that improbable really happened?" It also lends itself well to CassandraTruth plots.

to:

Rejecting an explanation for a particular event on the grounds that it requires a rare or unlikely event to have occurred, while ignoring that the favoured explanation might actually be even less likely. This fallacy ignores the fact that 'improbable' doesn't mean 'impossible'. Like the Gambler's Fallacy, this is also a statistical error.

As the name implies, this fallacy is a favorite of prosecutors in legal cases and sometimes in procedural shows like CSI -- it can be quite tempting to argue, "How likely is it that this really happened the way the defendant said it did, if the odds of it happening that way are 1 in 10 million? Which is more believable -- that he's lying or that something that improbable really happened?" It also lends itself well to CassandraTruth plots.



* Occurred twice in a [[RealLife real life]] case from the UK: Sally Clark was accused of murdering her two children (both of whom had actually died of Sudden Infant Death Syndrome). An expert witness asserted incorrectly that the probability of two cases of SIDS in one family was 1 in 73 million (this figure came from a separate error -- treating the two cot deaths as independent events when there was evidence to suggest that that wasn't the case). Sally was convicted, but eventually cleared, although it took two appeals -- the first, based on the 1 in 73 million figure, failed when judges argued that while the quoted figure was much worse than it should have been, it still illustrated that double cot death (SIDS) was very unlikely.

to:

* This was one of two errors in statistical reasoning that contributed to the result of the [[http://en.wikipedia.org/wiki/Sally_Clark Sally Clark]] trial in the UK. Sally Clark was arrested, charged, and wrongfully convicted of killing her two sons, who had actually died of sudden infant death syndrome, on the basis that two cot deaths in one family was extremely unlikely (an example of the prosecutor's fallacy -- double homicide isn't likely either). This error was compounded by an expert witness, who asserted incorrectly that the probability of a double cot death was 1 in 73 million (a figure which assumed, without evidence, that both deaths were independent events of each other -- ignoring possibilities such as a family with a genetic predisposition towards cot deaths).
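The independence error behind the 1-in-73-million figure can be made concrete. A minimal Python sketch, using the widely reported 1-in-8,543 single-cot-death estimate from the trial testimony; the 1-in-100 conditional probability for a second death is purely illustrative:

```python
p_single = 1 / 8543                # estimated P(one cot death) for such a family
p_squared = p_single ** 2          # squaring treats the two deaths as independent
print(round(1 / p_squared))        # about 73 million -- the figure quoted at trial

# If a shared genetic or environmental factor makes a second cot death far
# more likely after a first (say 1 in 100, an illustrative number), the
# joint event is orders of magnitude more probable:
p_dependent = p_single * (1 / 100)
print(round(1 / p_dependent))      # about 854,300 -- roughly 85 times more likely
```

And even a correct rarity figure would only describe how unusual a double cot death is, not how likely it is that the accused is guilty -- the competing explanation, double infanticide, is also extremely rare.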
1st Feb '13 2:10:46 PM infraredshirt


While there is some value to this (OccamsRazor exists for a reason), it often ignores that unusual cases are, well, [[ShapedLikeItself unusual.]] We tend to notice unusual events more than common events, and by the very fact that the issue is being argued over guarantees that it is likely an unusual event. For instance, if a practiced hunter accidently shoots his friend, one could argue that the odds of him making such a serious error is very small. But then, the alternative explanation is that the hunter ''purposefully'' shot his friend, which is also somewhat unlikely. In the end, the event itself can ''only'' be explained by one of several improbable explanations, and so the fact that they ''are'' improbable ceases to be relevant.

to:

An argument of this form often ignores that unusual cases are, well, unusual. We tend to notice unusual events more than common events, and the very fact that the issue is being argued over guarantees that it is likely an unusual event. For instance, if a practised hunter accidentally shoots his friend, one could argue that the odds of him making such a serious error are very small. But then, the alternative explanation is that the hunter ''purposefully'' shot his friend, which is also somewhat unlikely. In the end, the event itself can ''only'' be explained by one of several improbable explanations, and so the fact that they ''are'' improbable ceases to be relevant.
This list shows the last 10 events of 30.
http://tvtropes.org/pmwiki/article_history.php?article=Main.ProsecutorsFallacy