
[[quoteright:299:[[Webcomic/{{xkcd}} https://static.tvtropes.org/pmwiki/pub/images/star_ratings_3037.png]]]]

->'''Duke:''' Why the hell do you have to be so critical?\
'''Jay:''' I'm a critic!\
'''Duke:''' No, your job is to rate movies on a scale from "good" to "excellent"!\
'''Jay:''' What if I don't like them?\
'''Duke:''' That's what "good" is for.
-->-- ''WesternAnimation/TheCritic'', Pilot

Ever notice how the average score given by a {{review}} show somehow tends to be above average?

If you take a stroll through professional game review websites, you will notice that scores tend to fall in the 6.0 to 10.0 range, even when the site nominally uses a ten-point scale. This is called the '''four point scale''', sometimes also known as the '''7 to 9 scale'''. There are two main takes on why this is so.
13
14The first view considers the four point scale to be a bad thing, and holds this as evidence of a website's lack of integrity (often toward mainstream outlets). The accusation is rarely leveled at the writers themselves, with the blame usually placed on a site's editors or ExecutiveMeddling.
15
16The game journalism industry, like all forms of journalism, thrives on ''access''. Game {{magazines}} and websites need to get a steady flow of new games, previews, and promotional materials directly from the publishers in a timely manner, or they're irrelevant. Unfortunately, the game industry does not ''have'' to provide this access, and games review sites and magazines are far more reliant on the companies that produce the games than movie critics are on movie companies; indeed, since most websites are expected to provide their content for free, industry advertising is perhaps their most important source of income. There are tales of editorial mandates or outright bribery, but the whole system is set up so that providing a highly critical review of a company's triple-A title is akin to biting the hand that feeds you. This is especially true of previews, which tend to have an artificially positive tone since if a journalist pans a game the company didn't have to show them in the first place, they're unlikely to be invited back to see any of their other work. As such, you're unlikely to see major titles, even the worst of the worst, get panned ''too'' hard in for-profit publications. This results in [[http://www.ign.com/articles/2012/09/11/double-dragon-neon-review sites like IGN giving insanely negative reviews]] in order to appear "balanced" by panning smaller titles that don't provide them with large enough kickbacks.

In addition, many of these game review programs draw their audience by covering the most anticipated upcoming games, which are anticipated precisely because of their high degree of quality and polish. Because of this, many critics are incentivized to review only good games for fear of losing their audience. Many game reviewers will simply never get around to reviewing the lower-quality, bargain-bin, shovelware games that would balance out the scale, hence skewing their score average upwards.

The other view considers the four point scale a perfectly reasonable way to award and interpret review scores. This is easiest to understand by comparing it with the way school assignments in America are graded. In any given class, people will usually get scores ranging from 60% to 100%, with the average around 70-75%. This leads people, both reviewer and reader, to expect review scores to mean something similar to what they already encounter in real life: ~60% means "this sucks, but it can still be considered a game", ~75% is "average", ~85% is "decent/solid", and anything above 90% is a mark of excellence.

An additional reason lies in a form of selection bias: you're more likely to go to the trouble of writing a review in the first place if you really liked something and want to tell others about it, or absolutely loathed it and want to ward others away from it. While this doesn't apply quite as much to professional critics, it is a major factor in the overwhelming positivity or negativity of user-submitted reviews.

The situation with the four point scale has led some reviewers to drop rating scores altogether, or to favor an A/B/C/D grading system. Professional reviews tend to keep a rating system to reduce the chance of being misquoted or misinterpreted, as it will be evident that you did not mean the game was "excellent" if there's a big "6/10" or "D" at the end of the article.

The same basic concept applies to every industry: reviewers tend to place things in the upper half of whatever their rating scale happens to be, and for the same reasons. That said, it's generally agreed to be much more prominent in gaming than in industries like film. Review aggregator Metacritic, for instance, explicitly categorizes films and games differently: an 85 average is "universal acclaim" for films but only "generally favorable" for games (which need a 90 for "universal acclaim"), while a 45 average is "mixed or average" for films but "generally negative" for games (which need a 50 for "mixed or average").

If reviewers get ''too'' negative, there's always the risk of fan backlash, because ReviewsAreTheGospel. Contrast SoOkayItsAverage, where being just below this scale is acknowledged to have some quality, but not a lot. See also BrokeTheRatingScale and FMinusMinus. See also DamnedByFaintPraise; when this scale is in effect, scores like 7 or 8 ''become'' faint praise.

It is possible to avert this trend, such as by ranking a product's features relative to one another (one such review of a video game might start: "Soundtrack > Graphics > Plot > Gameplay > Immersiveness") or by giving purely text-based, non-numerical reviews, but this only serves to bypass one's own cognitive biases, not to placate company execs.

----
!!Examples in real life (by subject):

[[foldercontrol]]

[[folder:General]]
* This happens to an extent with fan reviews too. Go to any site where shows can be rated (like Anime News Network) and most shows will float above 6.0. Fan reviewers do tend to be, well, fans, which skews reviews positively. They may also pattern themselves after official reviews, even without meaning to. And sometimes fan reviewers "cheat" to bring the aggregate closer to their desired number; the problem lies with the way the scores are averaged, which encourages this kind of behaviour. Taking the median score, or using a formula that discounts outliers, would ensure that an 8/10-rated movie is affected the same way by a 7/10 as by a 1/10.
** There's plain selection bias here; no one is forced to watch anime they remotely suspect they won't like. The point spread and population ratio between the some-episodes rating and the all-episodes rating can be instructive.
** Exception: some anime series with exceptionally bad {{Macekre}} dubs will still have the original version rated highly while the dub gets low ratings.
*** Fan reviews of video games also apply here. You'll find a mixture of reviews that are all perfect scores, or close to it, and reviews that give the game the lowest score possible.
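The averaging point above can be sketched with a quick calculation. This is a hypothetical illustration of mean versus median aggregation, not any rating site's actual formula:

```python
from statistics import mean, median

# A show most fans rate 8/10, plus one additional rating.
base = [8, 8, 8, 8, 8]

for new_rating in (7, 1):
    scores = base + [new_rating]
    # The mean is dragged down far more by a single 1 than by a 7,
    # while the median barely moves -- so one spite-vote of 1/10
    # distorts a mean-based aggregate but not a median-based one.
    print(f"added {new_rating}: mean={mean(scores):.2f}, median={median(scores)}")
```

Under mean averaging, the lone 1/10 pulls the aggregate down a full point; under the median, both the 7/10 and the 1/10 leave it untouched, which removes the incentive to "cheat" with extreme votes.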
* TruthInTelevision: the whole business can also be justified in many cases by score entropy. Here's how it goes: you independently, objectively, and honestly review game A and give it, say, 95%. A year later, you review game B. Game B is pretty much game A with the awesome cranked to eleven, or with the same awesome but all the miscellaneous suck ironed out. Watchagonnado? You objectively ''have'' to give it 96%. Cue next year. Some reviewers, such as [=GameSpot=], claim that their standards rise as the average quality of what they review rises, averting this problem in theory but giving rise to a lot of FanDumb when actually followed.
** This is the same concept behind why the Olympic favorites in events like figure skating do their routines last. If they went first and got a perfect score, but were then one-upped by an underdog, the judges couldn't score the underdog higher than perfect, and controversy would erupt. (Of course, this also holds television viewers' interest until the end, rather than the outcome seeming a ForegoneConclusion after the actually good competitors have gone -- note that the same ordering is usually used for timed events and other unjudged events where this isn't a factor.)
** This trope can also be explained in basically all industries by the fact that, if you assume the scores are like grades in school, getting a 50% is absolutely terrible. This leads to bizarre situations in user-submitted reviews: one person gives a game a 5 or 6 out of ten while calling it average or somewhat above average, while somebody scoring it a 7 calls it mediocre but without major flaws; or somebody gives a game an 8.5 or 9/10 because "nothing can be perfect" or because it's not on the system they like, while somebody else scores it a ten, calling it the best game on the system by far despite a few minor flaws.
* Lore Sjoberg played with this in giving his first and possibly only F on the Book of Ratings to ''Scrappy-Doo''.
-->'''Lore on Potato Bugs''': "'Fouler insect never swarmed or flew, nor creepy toad was gross as 'tato bug. Remove the cursed thing before I freak.' -- Wm. Shakespeare, Betty and Veronica, Act 1, Scene 23. I can't even go into how nightmarish these vile little affronts to decency and aesthetics are. If I were having an Indiana Jones-style adventure, the Nazis would lock me in a crypt with a herd of potato bugs. And, I might add, I'd choke myself to death with my own whip right then and there rather than let a single evil little one of them touch my still-living body. They're still better than Scrappy-Doo, though. D-"
* Any horoscope that rates the upcoming day on an alleged scale of one to ten will use a four-point scale.
* A well-known gun writer came right out and said that negative reviews were not allowed by the editorial staff. He went on to say that they simply wouldn't print reviews of bad guns, so if a new gun came out and none of the major industry mags were reviewing it, take a hint.
* Website/{{IMDB}} seems to actively encourage this, listing any review that rated a movie or show a 7.0 or worse under "Hated It".
[[/folder]]

[[folder:Cars]]
* New car reviews in both magazines and newspapers. Even the [[http://en.wikipedia.org/wiki/Yugo Yugo]] received lukewarm reviews from the major car magazines; these publications are truly frightened at the thought of losing advertising revenue over a poor review, doubly so after General Motors pulled its advertising from the ''Los Angeles Times'' when one of GM's products was panned in print. This may be the case where this trope is least {{justified|Trope}}: compared to everything else on this list, cars and other vehicles are very expensive, and if you buy one, the dealer isn't inclined to take returns.
** European motorcycle magazines seem to have a particular love for BMW motorcycles. A flat spot in the torque curve is a minus for any other marque, but a BMW gets praised for having high-end power. Or a comparison test of three comparable motorcycles where the two Japanese cycles win on points in the summary, but the article still proclaims the BMW number 1. It's either Euro-chauvinism or influence from the BMW advertising budget. It doesn't help that BMW routinely provides reviewers with bikes fitted with all the optional extras: reviewers will gush over the technological gewgaws for the entire review, then mention in one sentence at the end that these are all optional and cost money. Guess what readers remember?
* Jeremy Clarkson mentioned this trope frequently in his published reviews. He says the best thing that happened to his car reviewing was television, because it reversed the previous power relationship: he was rich enough from television to say what he liked, and his public profile was so great that car manufacturers could not ''not'' send him cars for review. Which he then reviewed honestly. Tellingly, despite years of saying that he despises all Asian cars except Hondas (because Honda was started by a Mr. Honda who had a dream as a small boy, like BMW or Lotus, as opposed to simply being the automobile arm of a heavy industry company), firms like Daewoo still sent him cars, which would be savaged.
* ''Consumer Reports'' has a policy of only reviewing cars and household goods that it bought incognito from a retailer. Nonetheless, most of its ratings are Good, Very Good, or Excellent.
* Cars at a car show, or ones being appraised, are scored on a scale of 1-6, with 1 being perfect and 6 being junk. Most are scored 2-3, because they're at a car show in the first place and most people don't take junk vehicles to such things.
[[/folder]]

[[folder:Comic Books]]
* ''Spirou'' started out as a comic book magazine. Later in its lifespan it also incorporated other features, including a review section for recently released comic books from the publisher ''Dupuis''. It uses a rating of piccolo hats to indicate how good something is: one hat is ''not bad'' and five hats is ''masterpiece''. They never seem to go below this, so if you come across a comic book from ''Dupuis'' that was never reviewed, either it's old or not even ''Dupuis'' themselves would defend that piece of crap.
* ''ComicVine'' has what some would consider a two point scale. They loved a comic? Five stars. A comic was decent? Four stars. It's rare to see a three or even two star review from them, but when that does happen, people take notice.
[[/folder]]

[[folder:Live-Action TV]]
* Go to TV.com. Pick a show you hate, any show. It's pretty much guaranteed that most of the ratings won't drop below 7 out of 10. In some cases, reviewers will rate an episode ''[[PraisingShowsYouDontWatch before]]'' it airs, in an "I think this will be good" way.
* For British television dramas, "average" is actually 77%. Even so, very few dramas go below 70 or over 90 (much was made of the ''Series/DoctorWho'' Series 4 finale getting 91% for both parts).
* As a reality TV example from ''Series/DancingWithTheStars'': you can trip, shuffle, and walk your way across the dance floor for two minutes and still get a four or five. Twos and threes are put in play extremely rarely, when the judges are trying to force an inferior dancer off the show. In ten seasons, no one has ever been given a one.
** Head judge Len once gave an explanation of each of the ten scores: getting on the floor and moving your feet grants you a 2, being vaguely aware that there is music playing is a 3, and dancing mostly in time to said music gets a 4. To get a 1, you would literally have to not dance at all.
** The Australian version of the show tends to vary a little more, with bad dancers landing in the 40-50% range. There are some rarer exceptions: Creator/NikkiWebster got a 1 from one of the judges, almost certainly a publicity stunt, as people have danced far worse and scored higher. A couple of contestants have gotten 1s from all the judges, but you pretty much have to dress up like a clown and go completely insane to get that (which one guy did).
** Averted in the German version, ''Let's Dance''. While recent seasons have seen a surge in 10s and quite a few perfect-30 dances, the judges aren't afraid to go low if a dancer, however hard they try, just isn't delivering. 2s and 3s are very common during the first stages, and dancers regularly fail to reach double digits in total. "The evil judge" in particular -- one always exists on these shows -- Joachim Llambi shows 1s on a regular basis and even [[BrokeTheRatingScale draws a minus before one on occasion]]; he calls the show's host, Sylvie Meis, a rules lawyer when she has to remind him that, whatever he does, the couple will still get a minimum of 1. It took quite a few seasons for the first triple 1, though.
** On ''Series/StrictlyComeDancing'', Craig Revel-Horwood in particular has been criticised for his "low" marking -- he marks out of the full 10 (and isn't afraid to use 1s or 2s), while the other judges give out sub-6 scores so rarely that it tends to look like a personal insult when they do. This criticism ignores the fact that, logically, on a ten-point scale a five or six should be average and a seven or above should be good. Things get even worse once the season passes the quarter-final stage, when any mark lower than 9 tends to be roundly booed by the audience.
* ''Ice Age'' (formerly ''Stars On Ice'', the [[ForeignRemake Russian]] ''Dancing With The Stars'' [[InSPACE on ice]]) uses the standard figure skating scale: 0.0 to 6.0. To put things into perspective, the worst average score in the entire history of the show, awarded to the worst pair on the very first day back in 2006, was 4.8. It has become worse over the years: now the average score is 6.0, noticeable mistakes mean 5.9, and a bad performance goes as low as 5.8. To add insult to injury, judges sometimes complain that they don't have enough grades to noticeably differentiate between performances of similar quality, apparently ignoring the 57 other grades at their disposal.
* In ''Series/GreatBritishMenu'', a score of 7 is considered average, and anything below an 8 is considered a disappointment; the lowest score ever given in a judging round was a 2, leading the subject to RageQuit. On the other hand, scores of 10 out of 10 are far from unheard of, being essentially an indication that, in its current state, the dish is worthy of being presented at the banquet.
* ''Series/VideoPower'' was an early '90s show meant to cover everything related to video games, including reviews of recent titles. It only takes a few episodes to notice that the host never reviews a game he doesn't recommend.
[[/folder]]

[[folder:Music]]
* ''Q Magazine'' has never gotten over giving five stars to the legendary Music/{{Oasis}} [[HypeBacklash trainwreck]] ([[BrokenBase for some, anyway]]) ''Music/BeHereNow''.
* ''Sounds of Death'', aka ''S.O.D.'', is infamous for this. In past years it would publish "reviews" of albums with copy taken straight from the record label's press releases, and in many cases ran a glowing review of an album opposite a full-page ad for the same CD!
* Allmusic zig-zags this:
** It rarely rates an album below three stars, and ''never'' rates an album five stars when it first comes out.
** It isn't unheard of for them to go a little lower: Music/BrooksAndDunn's and Music/KennyChesney's discographies each include at least one two-star and one two-and-a-half-star album, and Kenny has ''two'' two-stars.
** With [[Music/{{Nickelback}} certain]] [[Music/MCHammer artists]] it shifts the scale about one and a half stars lower.
** Allmusic also seems to have a strange hatred for later Music/WeirdAlYankovic albums, which are usually well received elsewhere.
** Some of the reviews date from when Allmusic was still in book form, and in those cases the stars don't always match up -- they might call an album unremarkable yet give it four stars, or call it great but give it only three.[[note]]This probably has to do with AMG initially being aimed slightly more at record dealers. Stars were based mainly on an album's ability to sell (see the Charles Manson example), while the more subjective reviews could be useful when recommending albums to customers (especially those buying 2-3 star albums).[[/note]]
*** UsefulNotes/CharlesManson's first album got 4/5 stars[[note]]because of its [[NoSuchThingAsBadPublicity historic/novelty value]][[/note]], which made the initial review[[note]]since replaced with a new one that keeps the same rating[[/note]] a bit confusing when it stated that Manson was "as good a songwriter as he was a human being" (though the review concluded by telling the reader "Don't bother").
** At one point on Allmusic's website, every single one of Music/TheBeatles' albums (as originally released in the U.K.) was rated five out of five stars, no matter whether the review was critical or not. Conversely, ratings for other issues of their albums (e.g. the American Capitol releases, some of which, such as ''Meet the Beatles'' and the U.S. issue of ''Rubber Soul'', are considered by some to be better than their canon counterparts) were all over the place.
* In a similar vein, ''Country Weekly'' magazine has used a five-star rating in its album reviews section since late 2003, a couple of years after the late Chris Neal took over as primary reviewer. Almost ''every''thing seemed to get an automatic three stars or higher, with the occasional two-and-a-half at worst. Perhaps the only time he averted this trope was in one issue where a Kidz Bop-esque covers album got one star. Before the star-rating system, the mag's reviewers were even more unflinchingly favorable, both Neal and his predecessors. When a batch of new reviewers took over in late 2009, they got a little more conservative with the stars; one gave an album only two-and-a-half stars, although the tone of the review didn't suggest the album was even mediocre. Later, when the review section was expanded to cover singles, music videos, and other country media as well, the lowest they ever went was two stars, for the music video of Music/ZacBrownBand's "The Wind".
** They switched to letter grades in late 2012. For three years, they managed never to go lower than C-minus (''Series/TheXFactor'' winner Tate Stevens' debut, the video for Eli Young Band's "Say Goodnight", Music/LukeBryan's "That's My Kind of Night", and the video for Music/FloridaGeorgiaLine's "This Is How We Roll"). The magazine finally gave out ''four'' [=Ds=][[note]]David Fanning's "Doing Country Right", Waterloo Revival's "Bad for You", Eric Paslay's "High Class", and Jana Kramer's "Said No One Ever"[[/note]] and a D-minus[[note]]Danielle Bradbery's "Friend Zone"[[/note]] in 2015 and 2016, but still never gave out an F before it stopped publication in 2016.
* Robert Christgau used to be much more diverse in his ratings, which either ranged from E- to A+ (before 1990) or spanned a wide variety of grades including "dud", "neither", honorable mention, and B+ to A+. Now that he no longer takes the encyclopedic approach to reviewing he once did, he only rates albums he likes as part of his "Expert Witness" blog, effectively limiting grades to B+ through A+. Christgau will occasionally use B or lower to signal that a record is a "turkey": a GiftedlyBad record worthy of BileFascination.
** For albums he doesn't grade, he has a scale of three stars down to one star for "honorable mentions", which signal flawed albums with niche appeal -- three stars for potential modest {{Cult Classic}}s, two stars for potentially enjoyable records, one star for potentially likeable records. He also has ✂ (an unworthy album with a good/great song), 😐 (an album with some craft or merit, but not enough), and 💣 (a bad record unworthy of further thought).
* Generally averted by ''Magazine/RollingStone'': if you look on their website, the vast majority of albums score 3 or 3.5 stars. Higher-scoring albums are usually later albums or remasterings by classic rock artists.
* Averted by Magazine/{{NME}}, which has a 10-point scale and freely uses all the points on it. They very rarely give out a perfect 10 (usually only once or twice a year, if at all), and that album will almost certainly be their album of the year. They also occasionally [[BrokeTheRatingScale break the bottom end of the scale]] by giving out zeroes or even minus figures when they're feeling particularly snarky.
* John [=McFerrin=], who reviews at [[http://www.johnmcferrinmusicreviews.org John McFerrin Music Reviews]], has made a conscious effort to avert this. His scale goes 1-9, then A, B, C, D, E, F, and finally 10. In a [[http://www.johnmcferrinmusicreviews.org/ratings.html FAQ]] on his website, he states that he set the system up asymmetrically, with the first 8 grades covering the equivalent of 0-7 and the other 8 spanning 7-10; otherwise, he explains, most albums would get a 7 or 8. For the most part, his output actually inverts the trope: very few albums get above a D (the equivalent of 13/16, or an A-), while of the 1015 albums he has reviewed, only 79 score below a 6 (the equivalent of a C), with only ''four'' below a 3 (equal to a D-). Though, to be fair, this could be because he has publicly stated he mostly reviews albums by bands he likes.
* Music/{{Eminem}} specifically called out the hip-hop magazine ''The Source'' (which rated albums on a scale of five mics) for doing this, in one of his [[Characters/EminemBeefs many]] diss tracks aimed at the magazine and its editor Benzino, "The Sauce":
-->''The Source'' was like our only source of light\
when the mics used to mean somethin', a four was, like,\
you were the shit -- now it's like the least you get!\
Three and a half now just means you're a piece of shit.\
Four and a half or five means you're [[Music/TheNotoriousBIG Biggie]], [[Music/JayZ Jigga]], Music/{{Nas}}\
[[SmallNameBigEgo or Benzino]] — shit, I don't even think you realize\
you're playin' with motherfuckers' lives
[[/folder]]

[[folder:Pinball]]
* Review scores on Pinball News have never dropped below roughly 70%, even for widely disliked machines like ''Pinball/IndianaJonesStern'' and ''CSI''. The reasons are unknown, but considering Pinball News reviewers receive machines directly from manufacturers for review, it may be to keep those manufacturers from cutting them off.
* Thoroughly avoided by the user aggregate scores on sites like the Internet Pinball Database and Pinside, however: there, a 50% really does describe a mediocre machine, with really bad ones dropping to between 10% and 25%.[[note]]The Internet Pinball Database has ratings out of 10, and Pinside has them out of 100.[[/note]] No enforcement is necessary on either site -- it seems pinball audiences, by and large, naturally do not use the Four-Point Scale.
[[/folder]]

[[folder:Professional Wrestling]]
* This trope hits professional wrestling reviews ''hard''. Virtually nobody is satisfied with any rating below four stars. Japanese wrestling reviewer Mike Campbell has gotten a reputation as a horribly biased negative critic simply because he averts this trope very hard while explaining the pros ''and'' cons of a wrestling match in meticulous detail.
* Averted in the case of Dave Meltzer, who does his best to use every point of his -5 to 5 star scale, occasionally even going outside that range.
[[/folder]]

[[folder:Sports]]
* The 10-point must system used for scoring boxing and MMA bouts.
** In boxing, judges award the winner of the round 10 points and the loser 9. Barring fouls, the only way to get fewer than 9 points is to get knocked down, which is rare and usually indicates the boxer is about to lose. A score of 7 or fewer would require the boxer to get knocked down several times in a three-minute span; in that situation, the referee or the fighter's corner would usually stop the fight before the round ended. Sometimes rules are in place that automatically stop the fight after three knockdowns in a single round, making it ''impossible'' to score 7 or fewer points. Thus, in fights that go to a decision, the scores are very large but decided by only a few points: you get 108 points just by managing not to fall down for 12 rounds, and 120 points for winning ''every single round''.
** MMA also uses the 10-point must system, but has no knockdown rules. Therefore, if you lose the round, you get 9 points; if you're utterly dominated, you get 8. There's basically no way to get fewer than 8 barring penalties for rules infractions, as a fighter performing that poorly would be rescued by the referee. In 2017, changes to the Unified Rules of Mixed Martial Arts made the scoring of 10-8 rounds less strict, allowing them to be scored when one fighter is defeated soundly but not completely. The change was made in an effort to combat this trope.
* In competitive debating tournaments:
** In one scoring system, 75 is considered an average speech, and virtually all speaker scores fall between about 70 and 80, with 79 or 80 being a demigod-level speech. Supposedly, if someone simply gets up, repeats the topic of the debate, and sits down, that's about a 50. Getting enough judges for a debate can be a problem; often the judging forms are very specific, to work around the fact that some judges may effectively be people who wandered in because they smelled coffee. There are forms where the judge is asked to circle a number from 1 to 5 in 20 different categories, then add the numbers up to give the final score. Since in some categories a 2 is roughly equivalent to "did not mumble incomprehensible gibberish during the entirety of the debate", 40-50 is about the lowest score you can get if you even attempt to look self-aware.
** In other formats, each competitor's score is the sum of the judges' individual scores, each out of fifty points. Judges are instructed to both score and rank each competitor. Where the fun begins is that judges aren't allowed to give tied scores, and scores are only allowed to differ from each other by one point. The result is that first place in every round automatically carries a 50, second place a 49, and so on. Even if a competitor starts their piece over more than once (which automatically carries a ten-point penalty or worse, depending on the format), they're often just given the last-place score. Few judges ever rock the vote; a judge who awards a first place a 49 (let alone, say, a 45) is regarded as unfamiliar with the format. The dark irony hits when you realize that the most veteran judges are the ones willing to be tough; judges who don't know their way around the competition usually just punt.
124* Rivals.com, a football recruiting site, ranks prospects using the standard 1-5 star scale. Then they have a vague additional ranking system that ranks players on a 4.9-6.1 scale.
125* In ski jumping each jump is scored by five judges. They can award up to 20 points each for style based on keeping the skis steady during flight, balance, good body position, and landing. The highest and lowest style scores are disregarded, with the remaining three scores added to the distance score. However, anything below 18 is usually considered a slightly botched jump and scores below 14 are only ever seen when the jumper falls flat on his face upon landing.
126* In NCAA football, going through an NFL draft voids the remainder of your scholarship years, which often prevents players from finishing any degrees they have not completed. In order to "help" kids who were on the fence about declaring or staying in school, the NCAA allowed them to consult a panel that would predict where they would be drafted should they come out. However, this panel was ''notoriously'' optimistic, frequently telling hundreds of kids a year that they would be drafted in the first 3 rounds.[[note]] For reference, with 32 teams in the NFL, that equates to telling all of them they are one of the top 100 players in the draft pool that year.[[/note]] This had very real consequences as many kids were lured by the promise of NFL riches, fell to late in the draft because they were raw players, and washed out of the NFL before developing.
* Gymnastics was previously regarded as a one-point scale due to its use of a code of points which starts every competitor off at a fixed score depending on the difficulty of their routine and makes deductions for errors (you'll hear commentators referring to a good performance as "giving no points away to the judges"). Understandably, at elite level, difficulty ratings were usually close to a 10 (if not a 10 proper), and elite scores would very rarely fall below a 9 (only for major mistakes or an unfinished routine -- even a fall could still score in the low 9s). These days, all competitors start on 10 points for execution, and have a difficulty rating added on, resulting in elite male gymnasts' hit routines [[RankInflation regularly scoring 15 or more]], while female gymnasts', though lower, will still top a 14 for a hit[[note]]women's routines have lower difficulty scores on average because they count only 8 elements towards the difficulty score, as opposed to 10 for men[[/note]].
** Modern gymnastics execution scoring, specifically at major international events, actually inverts this, as judging has become increasingly stringent leading to more deductions and lower execution scores. For all events except vault (which has higher execution due to the nature of the single-element routine), anything above an 8 is generally a solid execution score, an 8.5 or better is pretty darn good, and breaking a 9 is essentially theoretical.
* This became a point of contention during the 2016 NBA Slam Dunk Contest. Judges were liberally giving out high scores to the point where as the dunks grew more and more impressive, they felt obligated to give them 10s. This led to it taking multiple rounds because the final two athletes kept on tying.
[[/folder]]

[[folder:Technology]]
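The open-ended scoring described above boils down to simple arithmetic: execution starts from 10, the judges' deductions come off, and the difficulty score is added on top. A toy sketch (the real Code of Points has far more rules; the numbers here are purely illustrative):

```python
def gymnastics_score(difficulty: float, deductions: float) -> float:
    """Toy model of open-ended gymnastics scoring: the execution (E)
    score starts from 10 and loses the judges' deductions, then the
    difficulty (D) score is added on."""
    execution = max(0.0, 10.0 - deductions)
    return round(difficulty + execution, 3)

# A hypothetical hit men's routine with a 6.4 D-score and a point of
# deductions lands in the mid-15s, matching the "15 or more" range above.
print(gymnastics_score(6.4, 1.0))
```

This also shows why it reads as a "one-point scale": with elite difficulty ratings clustered tightly, almost all of the visible variation comes from the deductions term alone.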
* In general, electronic products (and products in general) are rated on their performance within a particular price segment, not on overall performance against everything else. The reason is that a single absolute scale would be really unfair to the more affordable and sometimes more practical products.
** As an example, a $100 graphics card that performs better than its competitors in this price category (typically ±10%) can receive a 9/10. But a $500 graphics card that can't match its competitors may receive a 7/10, even though the $500 graphics card will totally blow the $100 graphics card out of the water in performance alone.
** Products should also be rated relative to their era. It would be silly for a 10/10 product from 10 years ago to hold any weight against an 8/10 product of today. Generally, though, the user experience is what counts.
* Generally the case with all consumer electronics: never buy any product given less than an 8 on a 10 point scale. The reasons for this are complicated, but basically boil down to the following:
** A lot of it has to do with usability. If a sample of reviewers generally agrees that the usability of an electronic gizmo sucks and gives it lower scores accordingly, then nobody will buy it, because who wants an electronic gadget that's annoying to use?
** Almost every complaint that you could make about most well-known high-tech products is either based on taste (say, iOS vs. Android) or is strongly counterbalanced by price (a top-end graphics card against a $60 model). The few complaints that don't fall into those two categories tend towards nitpicking and are often only visible when sitting two things next to each other. So whatever problems you might find can't take too many points off if the device does what it is supposed to for that price.
** Gadgets have some of the most vehement fanboys on the internet, so a site that tries to cater to all of them has to hedge its scores to keep everyone happy, further pushing the scores closer together.
** Finally, they have to keep the manufacturers happy too, because those smartphones, SLR cameras and 3D [=TVs=] aren't cheap. So they will almost always focus a review on the 'new' feature being touted by the manufacturer and how amazing it is, then ignore the same feature on similar products whose makers are pushing a different part of their widget as being awesome.
** This is so ingrained that electronics manufacturers whose products usually receive good reviews have threatened to stop sending review units to publications because something got a lower than expected score. Some of those products did indeed receive the equivalent of a 7/10. Of course, why would you cut off the very people who review your products, when you rely on said reviews?
* ''Series/AttackOfTheShow''[='s=] Gadget [=Pr0n=] segment has never rated any reviewed item below 70%. Even a digital camera with grainy picture, difficult menus, unresponsive buttons, low battery life, insufficient storage space, and inadequate low-light sensitivity that is several hundred dollars too expensive will still get the equivalent of a B+.
* Zig-zagged by ''Mac|Life'' back when it was still called ''Magazine/MacAddict''. At the time, they had three review sections: a generic one, one for interactive [=CD-ROMs=] and one for children's software. All three used a four-point scale with their mascot, Max: "Freakin' Awesome", "Spiffy", "Yeah, Whatever" and "Blech!".
** The catch-all section had reviews written by a panel of reviewers, summarized with the corresponding four-point rating and a good news/bad news blurb summarizing the product's strongest and weakest points. If they could find even ''one'' good thing to say about it, it usually got a "Spiffy" at worst. "Yeah, Whatever" was usually reserved for SoOkayItsAverage products, and "Blech!" was all but nonexistent.
** The interactive CD-ROM section, however, was just the opposite. It used a three-reviewer panel for each CD-ROM, and it was very rare that any of the three had anything ''good'' to say about any of the interactive [=CD-ROMs=]. You could pretty much guarantee at least one "Blech!" per issue here.
** And finally, the children's section used feedback from actual children, with a summary from a regular reviewer. The children's panel and the main reviewer were weighted to give the overall rating, but even then, you'd be hard-pressed to find a "Blech!"
** All of this went out the window when the magazine repackaged itself as [[CerebusSyndrome more staid and formal]], going with a standard five-star scale (which has remained with the shift to ''Mac|Life'').
[[/folder]]

[[folder:Video Games]]
* In ''VideoGame/HappyWheels'' this trope is played straight: if you sort by ratings, you'll inevitably end up with crappy levels that received the 5-star rating either because the creator was the only one to rate them (as suggested by the page image), or because of a [[SchmuckBait "rate 5 stars" message at the end of the crappy level.]] One surefire way of finding a good level is to check the "featured levels". Even then you'd be more than likely to see a crappy level that was rated five stars because the maker promised a poorly drawn picture of a [[SexSells naked woman]] in exchange for five stars.
* ''[=GamesRadar=]'' is [[http://www.gamesradar.com/f/what-is-seven-out-of-10-week/a-20080721121451869026 fully aware]] of the four point scale, and examines this phenomenon in their article: [[http://www.gamesradar.com/f/shit-games-that-scraped-a-seven-out-of-ten/a-2008072310514413006 "Crap games that scored a seven out of ten."]] Take with a grain of salt, however, as half of the list is immediate dismissal based on genre, nature, [[note]]''Nascar 2007'' is on the list explicitly because [[Administrivia/ComplainingAboutShowsYouDontLike the author doesn't like Nascar]], and ''Prince Caspian'' doesn't get beyond the fact that [[TheProblemWithLicensedGames it's a licensed game]] -- whether it [[SugarWiki/NoProblemWithLicensedGames really is good or bad in its own right]] is left a total mystery to the reader.[[/note]] or most jarringly, [[PeripheryHatedom target audience]][[note]]The portion featuring the ''Nancy Drew'' game on the list is vocally appalled that a game targeted towards girls got a passing score.[[/note]] rather than the content of the games.
* ''Edge'' magazine is one publication that, over the years, has attempted to stick to a rating system where a score of 5 should ''ideally'' be perceived as average, not negative. However, their mean score is definitely skewed closer to 7, simply because the magazine is more likely to review relatively polished high-profile games than the bargain-bin budget titles (such as Creator/PhoenixGames) that would balance out the weighting the other way. ''Edge'' has done quite a lot of self-analysis of its own reviewing/scoring practices over the years, with articles like E124's look at how reviewing practices vary across the gaming publications industry (how much time a reviewer should spend with a game before rating it, how styles of criticism and ratings criteria vary depending on the target audience, and so on). Up until a few years ago, they also did a lot to build up the prestige and mythology around their rarely-awarded 10/10 score (see for example their 10th anniversary issue [E128, October 2003] retrospective look at the highly exclusive club of four games that had received that score up until that point - ''VideoGame/SuperMario64'' in 1996, ''VideoGame/GranTurismo'' and ''VideoGame/TheLegendOfZeldaOcarinaOfTime'' in 1998, and ''VideoGame/HaloCombatEvolved'' in 2001).
** Then in 2007, ''VideoGame/Halo3'', ''[[VideoGame/HalfLife2 The]] [[VideoGame/{{Portal}} Orange]] [[VideoGame/TeamFortress2 Box]]'', and ''VideoGame/SuperMarioGalaxy'' were awarded 10s three months running, and since then the score has been awarded a lot more frequently. (See [[http://www.gamesradar.com/xbox360/xbox-360/news/does-a-perfect-score-mean-a-perfect-game/a-2007100919021828033/g-20060321132945404017/p-6 this interview with the editor]] for a discussion of their reviewing philosophy from around that time.) In contrast to 10/10, they've only used the dread 1/10 score twice - for the godawful ''Kabuki Warriors'', and ''VideoGame/FlatOut 3''.
* Shortly before being discontinued, ''Games for Windows: The Official Magazine'' (previously ''Computer Gaming World'') switched to a letter grade system like that used in schools, precisely because of this problem. This system is now used on their corresponding website, 1up.com.
** ''Computer Gaming World'' rather famously didn't have numerical/starred reviews for its first [[OlderThanTheyThink fifteen years or so]], until the mid 1990s, when readers who didn't want to actually read the whole article and just look at the score finally complained enough that they started giving out 0-5 stars. When they did start actually giving scores to their reviewed games, in most cases they were more than willing to use the entire scale. They even had an "unholy trinity" of games that were [[BrokeTheRatingScale rated at zero]] (''VideoGame/Postal2'', ''Mistmare'', and ''VideoGame/DungeonLords'').
* The notorious game reviewer Jeff Gerstmann [[http://www.gamesindustry.biz/articles/2012-03-16-gamespots-acquisition-of-giant-bomb-explained-by-gerstmann-davison was fired by GameSpot]] for panning ''VideoGame/KaneAndLynch'' (a game heavily advertised on the site) with a 6.0. However, the site says he was fired for personal reasons. Also, he was not exactly alone among reviewers in scoring the game poorly. Of course, after this controversy, and his firing, Gerstmann started up ''Website/GiantBomb''. Over there, Gerstmann and his crew use an ''Series/XPlay''-style review scale (1-5 stars, no half-stars), and they're more than willing to dish out 1 and 2 star reviews for bad games. He later reviewed the sequel ''Kane & Lynch: Dog Days'', which he gave a 3 out of 5 (an average score).
** Alex Navarro (a co-worker and supporter of Gerstmann's) often broke the four point scale when he reviewed games including ''VideoGame/BigRigsOverTheRoadRacing'', ''Robocop'', and ''Land of the Dead''.
** Gamespot is partially guilty of following the scale: browsing their reviews archive, 405 of their [[https://www.gamespot.com/games/reviews/?sort=gs_score_desc&page=405 725 pages so far]] score between 7 and 10 (and only in TheNewTens did perfect scores become more common - currently there are 20, but only six were given before 2010, with [[VideoGame/TonyHawksProSkater the 4th]] in 2001, and the [[VideoGame/GrandTheftAutoIV next]] [[VideoGame/MetalGearSolid4GunsOfThePatriots two]] in 2008).
*** Once upon a time, Gamespot had an excuse for this. A breakdown of their scoring system, long since removed from the site, revealed that being technically competent (a bug-free console release, or a feature-complete PC release that would run on common system configurations of the time) automatically got a game a 6, and other factors built the score up from there. This page, and presumably the system, have been gone from the site for at least five years by now, though.
*** This is brought up in an article named "[[http://www.gamespot.com/articles/in-defense-of-the-60/1100-6349597/ In Defense of the 6.0]]", where, along with noting that "averting technical pitfalls that can pull you out of the experience will warrant a fair score", one of Gamespot's reviewers declares that the RuleOfFun shouldn't be ignored just because the score was below the usual 7-10.
** Upon Giant Bomb's acquisition by [=CBSi=], which also owns Gamespot, Gerstmann was finally able to fully explain his firing [[https://www.youtube.com/watch?v=GagFPnSG0j4 here]]. The firing ended up being related to review scores after all, but was a more chronic problem of an inexperienced executive team not knowing how to responsibly deal with dropping ad dollars due to (justifiably) low review scores across several mediocre games.
* Independent review site [=WorthPlaying=].com has a typical floor of 4.0 unless the game is flat-out broken (in the sense of significant glitches).
* ''[[http://www.hardcoregamer.com Hardcore Gamer Magazine]]'' has an interesting version of this. Each game is reviewed by ''two'' staffers; the first gives the in-depth review of the game and awards a score (0.0--5.0 scale), then the second comes in with a "second opinion" score, and gives usually a one or two sentence aside about the game. The two scores are averaged out. And while it's refreshing to see the two scores differing by about half a point, the real entertainment comes from watching the second opinion offering completely derail the score of the main reviewer.
* ''[=RPGFan=]'' is notorious for this - with [[VideoGame/UnlimitedSaga rare exceptions]], even a game the reviewer will spend the entire piece criticizing will still get at least a 70. They posted an [[http://www.rpgfan.com/news/2009/337.html editorial]] about it, providing an explanation of their methods and somewhat admitting that the lower half of their scale is pointless, but sidestepped describing their reasoning, instead saying that you should focus on the text of their reviews. They later added a link to a [[http://www.rpgfan.com/graphics/gradingscale_lg.jpg guide]] with every review, but still have not explained ''why'' they score games this way.
* ''[=RPGamer=]'' used to score on a scale of 1-10, but ultimately dropped this in favor of a 1-5 system because of this very trend. This led to their reviews since the change actually using the entire scale, with several 1s and 2s given to games that truly tortured the staff members reviewing them. While older scores on the older scales remain unchanged, the review scoring page provides a conversion scale that has led to many games experiencing a severe drop in score when converted to their latest scale.
* Video game magazine ''Magazine/ElectronicGamingMonthly'', or ''EGM'', made a conscious effort to avert this: most (previously all) titles they featured were handled by three separate reviewers, and highly varying impressions were surprisingly common. Closer to the end of its run, they switched from a 1-10 scale to a 'grade' system (A, B, B+, etc.) for the purpose of avoiding the Four Point Scale trap entirely.
** Towards the end of the mag's original run, they handed off the really awful games to internet personality {{Creator/Seanbaby}}, who wrote humorous reviews lambasting them for being so bad that nobody would - or should - ever play them (many of the reviews can be seen, in extended and uncensored forms, on his website).
*** Eventually this reached its ridiculous-yet-logical conclusion when EGM was denied a review copy of the Game Boy Advance ''Film/TheCatInTheHat'' movie tie-in game, which the developer said was because they "didn't want Seanbaby to make fun of it". Or, to put it another way, they acknowledged right out the gate that their game was so bad it wouldn't even rate a 1 in the normal review section. Seanbaby obligingly went out and purchased a copy just so he could lambaste it.
** There were letters from the editor talking about how some company or another wouldn't give them information about their games anymore because of the bad scores they handed out. This happened at least twice with Acclaim and once with Capcom. In their first encounter with Acclaim, EGM had handed out very low review scores to their ''Total Recall'' game for the NES; when Acclaim threatened to pull advertising if they didn't give the game a better review, editor-in-chief Ed Semrad wrote in an editorial column that they could go right ahead, because they were sticking by the review even if it cost them money, because journalistic integrity was more important than a paycheck. The second time this happened, it was because EGM had blasted ''BMX XXX'' (and rightfully so); this time, Acclaim threatened to never let them review another game of theirs ever again, to which EGM said "fine by us". Capcom's case was a somewhat different affair: it wasn't a review that got them angry, but instead EGM badmouthing the constant stream of "updates" to ''VideoGame/StreetFighterII''; when Capcom asked EGM to apologize for the remarks in exchange for not pulling advertising, EGM again said that they would not retract the statements even if it cost them Capcom's money, because they felt honesty and independence in their publication was more important. In all three cases, Acclaim and Capcom pulled ads from the mag for a few months before buying adspace again (plus, Acclaim would go bankrupt shortly after ''BMX XXX'' anyway).
** It should also be noted that EGM's review system was heavily inspired by Famitsu's review system. The first issue of EGM, however, featured scores that ranged from 'miss' to 'DIRECT HIT!'.
** Actually inverted by EGM in 1998, where they revised their review policy in order to give HIGHER scores, specifically 10s. There was a period from late 1994-mid 1998 where no reviewer had given out a single 10 (''[[VideoGame/Sonic3AndKnuckles Sonic & Knuckles]]'' being the last one to receive one). After a slew of excellent high-profile games such as ''[[VideoGame/GoldenEye1997 GoldenEye]]'' and ''VideoGame/FinalFantasyVII'' passed through in 1997 with 9.5s, the mag revised its policy in the summer of 1998. Previously, a 10 was only awarded if a reviewer believed the game to be "perfect". But as Crispin Boyer pointed out in his editorial discussing the change, "Since you can find flaws in any game if you wanted … there's really no point in having a 10-point scale if we're only using 9 of them." Thus, a 10 would be given out if the game was to be considered a gold standard of gaming and genre. The very next issue, ''[[VideoGame/{{Tekken}} Tekken 3]]'' would break the 3+-year spell by receiving 10s from three of its four reviewers, and later that year, ''VideoGame/MetalGearSolid'' and ''[[VideoGame/TheLegendOfZeldaOcarinaOfTime Ocarina of Time]]'' became the first games to receive 10s across the board in the magazine's long history.
** EGM also received criticism from readers that some games would receive high scores one year, but the next year, a new-and-improved sequel or an extremely-similar-but-better game would come out to lower scores; alternately, a game that received high scores upon its original release may be ported to another system, or remade years later, to lower scores. Reader logic was that if Game B was better than Game A, objectively, Game B had to be rated higher on the numerical scale (see an entry above). This was addressed multiple times in the reader mail and editorial sections, where it was explained that they did not follow this rule, as long-running and generally high-scoring yearly sports series like ''[[VideoGame/MaddenNFL Madden]]'' or ''VideoGame/TonyHawksProSkater'' would have hit the 10-point ceiling years ago due to improvements in each version. Furthermore, at least technically speaking, games will always be improving due to the more powerful consoles and computers that are released every few years. Finally, innovation naturally tended to score higher because of its originality than when all those ideas were incorporated into every game the next year. EGM explained that instead, they rated games based on the current marketplace, and specifically compared new releases to others within its own genre, while their level of standards would naturally increase into the future as games became more ambitious.
* [[WebVideo/StuartAshen Dr. Ashen's]] review of ''[[https://www.youtube.com/watch?v=swvy4qE_AYM Karting Grand Prix]]'' mocks this, with Ashen referring to the game as "irredeemably awful", then giving it a score of 73% "because I'm a fucking idiot."
** In an earlier review on the [[https://www.youtube.com/watch?v=QXxttqyOGWU&feature=channel_page Gamestation]], a flea-market handheld game system resembling the original Platform/PlayStation, Dr. Ashen gives the system 7/10, saying that it's the lowest score one can give "before the company pulls their advertising".
** And in yet another review he gives a product 8/10, but "only because it's made in China, and I'm terrified of their government."
* ''WebAnimation/ZeroPunctuation'' does not give out numerical scores for just this reason.
** He ''did'' give out a numerical score for ''VideoGame/{{Wolfenstein|2009}}'' (a two out of five stars, which is already an aversion of this trope). Likely the reason he did give out a rating, though, was because he did the review almost entirely in limerick form and just needed a rhyme.
** In response to [[FanDumb "How can you even call it a review without a score?"]] from his ''VideoGame/SuperSmashBros'' Mailbag Showdown: "If you want a score, how about four, as in four-k you" accompanied by the commenter being flattened by a giant number 4.
** It is also worth mentioning that, his lack of using scores aside, Yahtzee subverts the whole reason for this trope in the first place (that is, reviewers not giving bad reviews more or less to keep their jobs). His job practically ''is'' [[CausticCritic to give bad reviews]], and he often receives criticism when he ''praises'' a game.
** Kotaku doesn't give scores, either, [[HateDumb making some commenters confused]]. Their system of summing up reviews is to ask "Should you buy this game?" with the possible answers being "Yes" for a good game, "Not yet" for a game with significant issues that might be patched in the future, and "No" for a bad game that's irredeemable.
* British gaming magazine ''PC Zone'''s reviews run the whole gamut from 7%-98%. Similarly, a score of 80%+ does NOT automatically gain a "Highly Recommended" award; although these often ARE given out to high scoring games, on occasion they have not been awarded to games that are technically good, but are lacking in some kind of "soul" that the reviewer (and the Second Opinion reviewer) would have liked to see present.
* [[http://www.gamesindustry.biz/articles/ubisoft-branded-least-consistent-publisher This compilation]] of ''[=MetaCritic=]'' scores is this trope in all its glory: a score of 70% is worth no points, 60% is -1, and anything below that is -2. For one, it doesn't really prove consistency - that would call for standard deviations, while this is a simple total of points. For another, weighting the low scores that heavily just makes the lower scorers look even worse. Talk about spin.
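As a minimal sketch of the point system the compilation describes (the thresholds come from the entry above; the sample scores are made up), the arithmetic looks like this:

```python
def penalty(score: int) -> int:
    """Point value of a single review score under the compilation's
    system: 70 or above costs nothing, the 60s cost one point, and
    anything below 60 costs two."""
    if score >= 70:
        return 0
    if score >= 60:
        return -1
    return -2

def publisher_points(scores) -> int:
    # Summing penalties only measures how often a publisher dips below
    # 70, not how spread out its scores are; an actual consistency
    # measure would use a standard deviation instead.
    return sum(penalty(s) for s in scores)

print(publisher_points([90, 65, 40]))  # one -1 plus one -2
```

Note how a publisher with a uniform string of 71s scores a "perfectly consistent" zero, while one that alternates 95s and 55s gets hammered, which is the spin the entry complains about.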
* ''[=GameTrailers=]'' generally has very informative and reliable reviews that coherently explain the points they try to make as the review itself is going on, but the score at the end falls squarely into this trap, the lowest score they usually give being somewhere in the 4.7 to 5.0 range. It once gave a humorous "suicide review" of ''Ultimate Duck Hunting'' presented in the form of the reviewer having [[DrivenToSuicide killed himself over the game]] and his review being his suicide note, and went on about how it was bad enough to push him over the edge at every turn, only to give it [[http://www.gametrailers.com/player/11580.html a 3.2.]]
* ''Magazine/NintendoPower'' is usually good at averting this trope, but some of their reviews of games in popular franchises tend to be given high ratings by default.
** With this magazine, what you have to watch for is not the score, but the number of pages of the review. The Nintendo blockbusters get two, three, even four page reviews, squishing out reviews for other games.
** They also admitted in response to a letter that while they use a full ten-point scale, they won't put up a review for a game lower than a two, reasoning it's too bad to even bother with, and they only give out tens for the super-duper cream of the crop.
** Towards the end of the magazine's run, they ran mini-reviews of Platform/VirtualConsole games (old games from eras past and original games) and rated them "Recommended" (this game is good), "Hmmm..." (your mileage will vary), or "Grumble, grumble" (game is bad, don't buy it). This style of scoring seems to have been designed to avoid the four point scale.
* ''Amiga Computing'' gave 100% to ''Xenon 2''. A reader called them out on this, asking if they'd give a higher score to an even better game. ("Yup.") They later gave out a score of ''109%'', and ''another'' 100% in the same [[http://amr.abime.net/issue_529_reviews issue]].
* The UK ''Official Dreamcast'' magazine aimed to avert this trope (back around the turn of the millennium, even) by insisting on a rating scheme where 5/10 was strictly "Average". This led to a huge number of complaints from fans who missed the intention behind the scheme and complained that a game they liked got a "harsh" score (the creators of ''Fur Fighter'' commented that the 7/10 they got from the magazine was the lowest score the game received). Eventually, the magazine staff assigned a phrase to each number and put it under each review score so the reader knew what the rating actually "meant". (For instance, any 7/10 rating had the word "good" under it. ''VideoGame/{{Shenmue}}'' was the only game that let us find out that the word under a 10/10 was "genius".)
* The Finnish gaming magazine ''Pelit'' uses this to a degree: they use a percentage scale for their game reviews, and they ''do'' use the entire gamut of their scoring system, but anything below 65 is still relatively rare. The magazine used to include an info box explaining that anything below 65% was below all standards, and that 50% and lower meant the game was truly atrocious. While the 50-or-lower reviews are amusing to read (such as their ''Fight Club'' review, where the entire review was just the phrase "Rule 1 of Fight Club: You do not talk about the Fight Club" with a 20% score), the staff hardly ever go out of their way to seek bad games to review, because they don't hate themselves that much. Instead, they pick games that they know they'll like, or ones that have interesting subject matter or are otherwise noteworthy. Their scoring system was originally chosen to maintain compatibility with other gaming magazines of the time, but by the early 2000s there were basically no other respectable magazines around that still used the same scale, and the staff have mentioned repeatedly that they would like to switch to a star-based system or no score at all.
* ''[[http://arstechnica.com/ Ars Technica]]'' has started reviewing video games on a three-point scale: Buy, Rent, and Skip. They [[http://arstechnica.com/gaming/news/2010/05/game-reviews-on-metacritic-why-we-avoid-inclusion.ars expand a bit]] upon why they use that scale and why they aren't part of Metacritic.
** ''Website/ScrewAttack'' has the same review system, with the exception of using "F' It" rather than "Skip." It's also the system used for the video game reviews in ''Boys' Life'' (the magazine of the Boy Scouts), under the names of [[AlliterativeList "Buy," "Borrow," and "Bag,"]] but not many people care about that.
** ''Disney Adventures'' also used to use this rating system as well.
** ''Magazine/NintendoPower'' uses a three-tier system for digital download reviews ("Recommended", "Hmmm...", and "Grumble Grumble").
* ''Website/InsidePulse'' tried to avoid this, but got so many threatening letters from developers that it gave up on a numeric scale entirely, describing games with positive and negative adjectives instead.
* When ''VideoGame/AssassinsCreedII'' was due for release, Ubisoft got caught in a major shitstorm when they announced that they wouldn't give the game out for testing unless the reviewer agreed in advance to give a positive review. Apparently, it [[http://www.metacritic.com/game/xbox-360/assassins-creed-ii didn't need]] the "boost".
** Eidos also pulled this trick for ''VideoGame/TombRaiderUnderworld''.
* Video game review site actionbutton.net has been routinely lambasted for using a four point scale from fans who believe a game should have gotten five stars.
* Spanish mag ''Nintendo Acción'' runs on this, to the point some ''Pokémon'' fans complained when ''VideoGame/PokemonBlackAndWhite'' got only a 94, when other games got 96-98 scores. Though in their defense, said review also lambasts the game's graphics, despite the great animated sprites and the SceneryPorn the game has.
* While Creator/{{Toonami}} hosted dozens of video game reviews over the course of the show, only a handful ever scored below 7 out of 10, and none scored lower than 6 on that scale. The creators have admitted this is due to not having a professional reviewer in their group and only playing games they really like, not wanting to fill the air with needless negativity. That being said, they only rarely give a game a perfect 10 out of 10, with 8 by far the most common score.
* Surprisingly, IGN is often pretty good about averting this trope (witness the 3.0 they gave ''Ninja Gaiden 3''). In fact, they completely averted it in the very early days where they scored games on an integer scale rather than a decimal scale. However, once they moved to a decimal scale around September of 1998, this cropped up more and more frequently. For example, in 2000, they wrote a very critical (and angry!) review for the PC version of ''VideoGame/FinalFantasyVIII'' [[http://www.ign.com/articles/2000/01/29/final-fantasy-viii-2 but still gave it a pretty solid 7.4/10]] [[note]] Although when reviewing ports separately from the main game, it's not uncommon for the text of the review to focus solely on the quality of the port, as the game itself has already been reviewed previously, and then assign a new numerical score based on the original's score adjusted for how good or bad the port was[[/note]]. Similarly, in 2000, they wrote a very negative review for [=RealMyst=] but still gave it a score of 6.5.
* An absolutely notorious example of the trope came with IGN's review of ''VideoGame/HogwartsLegacy''. The text of the review utterly ''excoriated'' the game, citing a poor story, weak gameplay, and various technical issues... but the ''score'' was a nine out of ten. Naturally, IGN was immediately taken to task for giving such a glowing score when their opinion was clearly anything ''but'', which many believed was to avoid angering Warner Brothers.
* A very notable exception to the rule is VNDB (the Visual Novel Database), which, as the name suggests, is a listing of (Japanese) visual novels on the market. When a user attempts to give a 10/10, the site actually warns them that this score is reserved for absolute perfection that is unlikely to ever be improved upon and, as such, should be given only two or three times at most over one's lifetime. As a result, the list has only two entries over 9.00[[note]]''VisualNovel/SteinsGate'' and ''VisualNovel/WhiteAlbum2''[[/note]] and fewer than 50 entries over 8.00, out of a database of well over 10,000 titles. Since visual novels have fairly low technical requirements compared to regular video games, their quality rests almost entirely on the story and is therefore highly subjective. As such, even a game that scores around 7.00 can still be very enjoyable.
* The defunct ''Game Player's'' magazine (now absorbed into several other publications) once had a major shakeup after realizing it had fallen into this trope, with even "terrible" games rating 50-60% scores. A new rating scale was devised to even out the score distribution, and was meant to be read in context with the review itself rather than be taken as an absolute. Under the new review system, even a game with a 50% score is probably still worth solid consideration by a fan of the game's genre, and a low-rated game could either be thoroughly underwhelming, or an excellent game for a very small audience of players. 90% and above, however, would be restricted only to games so fantastic that players outside of its genre might consider checking it out, and consequently, very few of these were given out through any particular year.
* Chris Livingston, of ''Webcomic/{{Concerned}}'' fame, [[http://www.screencuisine.net/screencuisine/video-games/bullet-points-crysis-2-part-2/ brings this up in his "Bullet Points" series]] on ''VideoGame/{{Crysis}} 2'':
-->I was going to give some points to Crysis 2 because it hasn’t crashed and there haven’t been any graphical glitches. But that’s kind of weird. That’s like buying a Prius and saying “Well, [[EveryCarIsAPinto it didn’t explode when I used the turn signal]] and [[TheAllegedCar the airbags didn’t go off in my face when I turned on the radio]], so it’s a good car.” PC gamers just have such low expectations for games, I guess. I need to break that habit of awarding points for simply working properly.
* Tom Chick, who currently writes on his "Quarter to Three" website, unabashedly uses the entire scale available to him and has done so for a very long time. This has led to numerous big-money titles being given low scores way outside the four point scale. He has opined that if you ask an editor whether the four-point scale exists, they will say no. Until you try to rate a game a 3.
** He rated ''VideoGame/DeusEx'', one of the most critically acclaimed games of all time, a game that rivals ''VideoGame/HalfLife1'' for the title of "Best Game Ever", [[http://www.rockpapershotgun.com/2010/06/29/tom-chick-the-man-who-hated-deus-ex/ a 3 out of 10.]]
** ''VideoGame/Halo4''. [[http://www.quartertothree.com/fp/2012/11/04/halo-4-is-half-the-game-it-should-be/ 1 out of 5]].
** ''VideoGame/CompanyOfHeroes 2'' [[http://www.quartertothree.com/fp/2013/06/24/company-of-heroes-2-is-a-real-snow-job/ received 1 star out of 5]] on the basis of it being no improvement over the original, [[BribingYourWayToVictory actively blocking elements of the game in order to provide marketable DLC packs]], and requiring endless grinding (in an RTS no less) to unlock vital in-game content.
* ''Neoseeker'' both averts this and plays it straight. Some reviews, such as those for ''VideoGame/CodeOfPrincess'' and ''VideoGame/PokemonBlackAndWhite'' (at least before the latter was edited), consisted of little besides bashing the game, yet scored them a seven and a six, respectively. However, since the site allows user reviews, there are some reviews that use 5 as the average, and some that use 7 as bad.
* ''WebVideo/HotPepperGaming'' falls prey to this as well. A fine example of this would be [[https://www.youtube.com/watch?v=WbxKbyasY3o Erin's review]] of ''VideoGame/ClashOfClans'', in which she savaged it so badly that they punctuated her screed with ''drumbeats''... and then she gave it three out of five.
* An interesting aversion of this is the German magazine CBS (short for ''Computer Bild Spiele''). They use a system that awards scores from 1 to 5, with 5 meaning the game is terrible and 1 meaning it's perfect (the scale mirrors that of German schools). They then multiply the score by the game's price tag and, based on the result, attach a tag ranging from "very expensive" to "very cheap". This makes it possible for a AAA title to carry the same tag as a shovelware game simply because the AAA game costs vastly more.
** Of course, they hand out extra issues in which they review only their AAA games to balance out this problem. The fact that they are only one part of a larger magazine publishing company also helps. They also make a point of ''never'' using preview versions of games. While this obviously means their reviews may be as much as two months late compared to the release, it also means the review will be more accurate about the actual state of the game (many, ''many'' games nowadays have Day One {{DLC}}s and patches that change the game's content and performance so much that a review based on a preview build is more often than not highly inaccurate).
* Many mobile apps are major abusers of the ratings system. Often, a 5-star rating can be given from within the app with one click, whereas lower ratings require jumping through more hoops. Particularly egregious are free-to-play games which give users in-game bonuses for leaving a rating... if you see a game with lots of 5-star ratings accompanied by few or no explanatory comments, it's a safe bet that the game is more exploitative than fun.
* The user-voted difficulty scale on Website/GameFAQs is effectively a 2.5-4.0 scale, counting only games with at least 50 difficulty votes. Easy games have a difficulty rating of less than 3. Average-difficulty games, which represent the majority of games in most genres, fall between 3 and 3.5. The minimum threshold for low-end NintendoHard starts at around 3.5. Ratings of 4 or above are reserved for the hardest of the NintendoHard, and several genres (such as Adventure, Puzzle, and Sports) have no games rated that high. Only a very small number of games exceed the 4.5 mark; examples include ''Ghosts 'n Goblins'' (NES), ''Battletoads'' (NES), ''Silver Surfer'' (NES), ''Ikaruga'', and ''Touhou Chireiden: Subterranean Animism''.
** The choices available for voting on a game's difficulty are as follows, with the point value in brackets: [[ItsEasySoItSucks Simple (1)]], Easy (2), Just Right (3), [[NintendoHard Tough (4)]], and [[PlatformHell Unforgiving (5)]].
* Eurogamer has recently dispensed with a rating system altogether, partly because of this trope. They now only give games an "Avoid", "Recommended" or "Essential" sticker - as well as no sticker at all for games they believe to be SoOkayItsAverage.
* Polygon does their best to use all 10 points of the scale (including half steps) and will actually go back and adjust a score if a game changes enough between release and later updates.
* Jim Sterling of WebVideo/{{Jimquisition}} uses a 10-point scale for reviews on their blog, but completely averts the trope by making the scores actually mean something, such as 5 being average, 7 being good, and so on. However, the trope is played straight by fans of the games they review, which has caused Jim a lot of grief; their review of ''VideoGame/NoMansSky'' caused fans of the game to DDoS their website because they gave it a 5/10 for having potential but wasting it on bad game design. The site was attacked again when they gave ''VideoGame/TheLegendOfZeldaBreathOfTheWild'' a 7/10, saying that the weapon durability system and other factors annoyed them greatly, but that they still enjoyed the game overall. Jim then made an episode pointing out how absurdly people were acting over the 7/10 score and wondering how on earth such a score could be considered horrible. They have since preferred to post impression videos of a game they played, showing all the good and bad bits.
* The old Polish computer game magazine ''[[Magazine/TopSecretMagazine Top Secret]]'' rarely gave any game a rating below 7, even when the review contained mainly complaints about the game's quality.
* The magazine ''Famitsu'' averted this in its early years by having four reviewers who each scored the game out of ten. A 40/40 was almost impossible to attain, the first one being given in 1998. From then to 2007, only six games hit 40/40... but at some point, their policies seem to have changed, such that in the following two years, seven games received that score, and a further twelve have hit it since, while entire years have gone by without a game scoring under 25. Games that are part of established franchises tend to be even more ironclad; compare ''VideoGame/ResidentEvil6'''s Metacritic score (60) with its Famitsu score (39/40).
* ''VideoGame/PokemonGO'' and ''VideoGame/{{Ingress}}'' have a system where high-level players can vote on whether or not a submitted portal/stop should be implemented in the game based on several factors (historical/cultural significance, easy access on foot, accurate location on the map, etc.). Each factor can be given a rating from 1 star to 5 stars, and giving a nomination 1 star is an automatic rejection (usually reserved for low-quality nominations). Most players vote either 1 star or 5 stars and rarely in between. This is due to the system punishing players by lowering their reviewer rating (which in turn makes their votes have less impact) if they cast too many votes that do not agree with the majority.
* The now-defunct video game review website ''Crispy Gamer'' averted this trope by only giving games one of three ratings: Buy It! (for games they regarded as excellent), Try It (for games they believed were worth renting for a weekend and only buying if you turned out to really love them), and Fry It! (for games they regarded as having no redeeming qualities). Some readers felt that the site was over-aggressive in handing out Fry It! ratings just because the reviewer didn't like the genre or story, even when the gameplay was decent.
* Valve goes even further; user reviews on Steam use a ''two''-point scale with just a simple thumbs-up/thumbs-down. This leads to numerous comments like "I gave it a thumbs up/down but..." or "I wish there was a neutral option because this game is SoOkayItsAverage". But then that's aggregated into more of this kind of scale by reporting the ratio of ups and downs -- Overwhelmingly Positive, Mostly Positive, etc. -- with games that have "Mixed" reviews or less tending to be seen as low-quality.
* An odd trend turned up in amateur reviews of ''VideoGame/HogwartsLegacy'' after the pre-launch controversies over antisemitic content and the transphobic views of the IP creator died down. A surprising number of reviewers summarized it as something along the lines of "it's a mediocre open-world game, but it's ''Franchise/HarryPotter'' so 8/10".
[[/folder]]

[[folder:Web Original]]
* Website/YouTube had a rating system that let people give a video a score of up to five stars, though hardly anyone gave less than three unless the video was particularly bad. This led to a few widespread incidents of vote-bots giving dozens or hundreds of one-star ratings to videos that disagreed with the attackers' own political or religious beliefs; a drop even to four stars would greatly reduce a video's traffic. [=YouTube=] has since dropped the 5-star system and changed it to a simple like/dislike system.
** Similarly, Creator/{{Netflix}} allowed movie ratings and aggregated all the user reviews into a star rating. Because some people will like something no matter how bad it is, and some will hate something no matter how good it is, aggregate scores of a flat 1 or 5 stars are effectively impossible. However, if a movie doesn't get above 1.5 stars, you should probably avoid it, and if it reaches 4.5 stars, it's probably worth watching. So the scale is skewed, but still relatively accurate. Netflix has since shifted to a simple like/dislike system too, and at a certain point it stopped showing the full score, as the rating only serves to determine what will be recommended to the viewer.
* Similarly, a website that hosts community content for ''VideoGame/Left4Dead'' allows people to review the created content on a scale of 1-100. Trolls, or people who exaggerate how much they hate the custom content, will generally give a rating between 1 and 20. Anyone who wants to praise the author to the heavens (or the author themselves on an alt account) will give scores of 90-100, and such people will ridicule others who give scores between 60 and 80, even if the content doesn't meet the standards for a high score. In other words, if the content is decent, you had better give it high scores or risk being flamed by the community for being too harsh or a troll.
* {{Platform/Newgrounds}} is somewhat of an aversion to this; while the scale is only 0-5, it's an unspoken rule that if it's not up to snuff for the portal, it's a 0, if you just didn't like it or something along those lines you should vote 2, and if you love it vote 5. While 1, 3 and 4 are in there, hardly anyone uses them. Undoubtedly this is partially due to its "Blam"/"Protection" system which, generally, rewards you for relatively high ratings of content others have rated relatively high and low ratings for content others have rated low, in a blind system. The actual reviews, however, can become an extreme example of this, as WebVideo/{{Retsupurae}} has demonstrated in their "Retsufrash" videos - they've witnessed perfect or near-perfect scores handed out to games that the reviewer in question had multiple complaints about (usually with no redeeming qualities mentioned), admitted to not finishing, or, in some particular cases, ''couldn't even get the game to start''. They also witnessed one bizarre inversion, where one game they riffed on got a review that called it "one of the best" of its genre, yet only gave it a half-star.
* On Mobygames, any author of a published game review is rewarded with 1-5 points, depending on how the review quality is rated by the approving staff. The very worst rating the staff can pick is "Average", continuing through "Good", "Great", "Excellent" and "Superb". Though it makes some sense, since if a review deserved a bad rating, it shouldn't be approved for publication anyway.
* Hentai sharing sites g-e and exhentai.org use a five-star rating system, but as on many sites, the stars don't mean much - if anything, a low star rating more likely means that the work was translated by someone who does so poorly and makes no effort to improve than anything about the quality of the work itself. In cases where no translation was necessary (works written in English to start with, or image galleries with no text), the star rating becomes more useful.
* {{Discussed|Trope}} by Jim Sterling on the WebVideo/{{Jimquisition}} after they gave ''VideoGame/TheLegendOfZeldaBreathOfTheWild'' a 7/10, [[CriticalDissonance leading the fandom to go berserk over the "low" score]]. When they still reviewed games and gave them scores, they went out of their way to ''avert'' the four-point scale, and by their scale a 7/10 was actually a rather good score (good overall, even great, but with a large flaw that affected the experience; in this case, the weapon durability system). Still, it proved to be enough to take BOTW's Metacritic score from a 98 to a [[SarcasmMode mere and lowly]] 97, leading the fandom to DDoS their website, and leading them to drop reviews altogether in favor of "Jimpressions", which feature no score and are simply "Did they like/dislike it?" videos.
* When Jello Apocalypse started doing film reviews, he had to go so far as to put out a video explaining his scale, saying that "I'm not the American education system." By his scale, anything above 5/10 meant "this movie is a good use of your time", with only a 6 meaning "you liked it but you probably wouldn't watch it twice or tell your friends to go see it." He himself heavily averts this; his reviews of the ''Franchise/{{Pokemon}}'' films had the ''highest'' scores be 6/10s.
* Most WebVideo/{{Retsupurae}} videos making fun of flash games end with a sampling of Newgrounds reviews. This subject has come up so many times that it's become a meme to leave outright flames on a Retsupurae video ending with 9/10 or 10/10. Occasionally it even gets inverted, when someone leaves a glowing review with an inexplicably low score.
** In "Sonic Boom City - State of the Review Edition", the guys read the reviews for a game they couldn't get to load. One review: "Didn't load but I'll give you the benefit of the doubt. [2.5/5 stars]"
-->'''slowbeef:''' People found that helpful! That is the least helpful review in the world!
** The guys behind Retsupurae, LetsPlay/{{slowbeef}} and LetsPlay/{{Diabetus}}, have joked about this several times as well. During the former's ''VideoGame/DeadToRights'' LetsPlay, while talking about the critical reviews the game received at release, Diabetus comments that "a 7/10 rating usually means the game is fucking awful". The description for their "Retsufrash" playlist (videos where they make fun of Flash videos and games) also notes that the Flashes in question "deserve the full scorn that an 8 out of 10 offers".
[[/folder]]

[[folder:Web Videos]]
* The titular host of ''WebVideo/TheAngryJoeShow'' states this is a PetPeeveTrope of his. The "Angry Reviews", "Extended Review Discussions" and "Rapid Fire Reviews" have all used just about every number on the 1-10 scale (whole numbers only with no decimals). According to Joe and his team, a 5/10 is their flat average, with reviews for video games building towards the team justifying a higher or lower score. For instance, a 3/10 to Joe will have some decent points, yet he'll detail the negatives and why it's ultimately not worth recommending to his audience; conversely, Joe will preach for a 9/10, but explain why it falls short of a 10/10. Still, there are some examples in the show's history where the trope is PlayedWith.
** ''VideoGame/DanceCentral'' received a 7/10, the highest possible from Joe: to others, this may seem like a weak score, but he reasoned the game wasn't only fun, it was built specifically to take advantage of the [[Platform/Xbox360 Kinect]]. Not only did he give it his "Badass Seal of Approval", Joe also placed it at #5 on his "[[TopTenList Top Ten Best Games of 2010]]" over other big titles that year such as ''VideoGame/HaloReach'', ''VideoGame/CallOfDutyBlackOps'' and ''VideoGame/NeedForSpeedHotPursuit''.
** Similarly, ''VideoGame/AsurasWrath'' was given a 6/10, a "slightly above average" game to Joe, but since he was in complete awe of the title from beginning to end, he awarded it his Badass Seal of Approval.
[[/folder]]

[[folder:Other]]
* The Ontario education system, in addition to giving a percentile score for academic achievement in a subject, also uses a four-point scale for such things as teamwork, organization and initiative: Needs improvement, Satisfactory, Good, and Exceptional.
* Brokerages have a quid pro quo relationship with the firms that they're supposed to be rating. Usually there's an informal understanding between the two that, if the brokerage advises its investors to sell a particular firm's assets, that firm will stop providing the brokerage with information or other privileges. So brokerages almost never give firms a "sell" rating. You can also see a Four Point Scale in corporate credit ratings, where junk bonds and high risks get a B rating while better investments get A, AA, AAA, and so on. In an ordinary education system, a B is a respectable grade and a C is a clear pass.
* Couchsurfing.com is a hosting website based around building up a reputation through a publicly visible vouching/feedback system. Negative "reviews" are so rare that many people will refuse to stay with or host people who have even one.
* eBay ratings, as parodied in ''[[https://xkcd.com/325/ xkcd]]''.
** eBay only has a Positive-Neutral-Negative rating system, but it still skews very much toward positive. Some people leave neutral feedback for sellers when they really should give negative. Part of this is because eBay doesn't allow anonymous feedback and [[SmallNameBigEgo a few sellers]] flip out and give the buyer negative feedback in retaliation.
** The system itself discourages users from giving anything other than positive, making the user confirm that they have given the seller ample time, that they have tried to contact the seller about any problems, and that they understand what they're doing in order to give a neutral. This is more confirmation than one has to do to sign up to the system. It's also possible for a buyer to lose feedback privileges altogether for leaving too many negatives in a short space of time.
** Sellers will receive a warning (possibly followed by the withdrawal of certain selling privileges) if any of their ratings fall below 4.5. This means, in effect, that 4 out of 5 is considered a ''bad'' score and that it's actually better not to receive a rating at all than to receive one less than a perfect 5/5. (This is mitigated to some extent in that wherever objectively possible, ratings are assigned automatically by eBay itself, e.g. it's not possible to receive four stars or below for shipping price if your shipping is free.)
** It's not all one-way though: sellers only have the option of leaving positive feedback, no negative or neutral options at all. (Cases where the buyer is causing a problem are usually better handled by internal dispute resolution, as this information probably wouldn't be useful to anyone else anyway.)
* The LSAT has a minimum score of 120, and a maximum of 180. The empty range is twice the size of the scored range.
* The Dutch Cito test at the end of primary school, which partially determines what kind of secondary education a pupil can/will take, has a range of ''500-550''. (The reason for this is to avoid the Cito results being misinterpreted as IQ.) The empty range is ''ten times'' the size of the scored range.
* If you're involved in humanities degrees in the [[UsefulNotes/BritishUnis British university system]], you'll almost never see a mark below 35% or above 75%; forty points used on a hundred-point scale. Language marks tend to be capped at the top end to bring them in-line with humanities, since otherwise it would be quite possible to get 100% on a language test. Your final degree in any subject is awarded on a four-point scale, First/2:1/2:2/Third. The thresholds for those are usually 70/60/50/40% respectively.
* Honours degrees in Australia are ranked First Class, 2A, 2B and Third Class (the first generally referred to as just "Honours", rather than, say, "Honours 2A"). Thresholds are fuzzy since grades are a combination of coursework and dissertation marks, with the relative importance varying by field, but usually it's about 85%/75%/65%/50%. It's important to note that getting Honours at all is viewed as remarkable; just the higher ones are deemed exceptional (although if you want to go on to postgraduate work like a Masters or [=PhD=], you usually need a 2A or higher).
* Until 2016, the SAT had a range from 600 to 2400. Turning in a completely blank test (if it isn't discarded out of hand) would ''not'' result in the lowest possible score; the test-taker would have to lose points by answering incorrectly.
* In the Italian university system, passing an exam originally required getting at least a 6/10 from three professors, which nowadays translates to an 18/30 from a single professor. However, because of the significantly greater flexibility regarding when students can (re)take their exams, students who fail to reach 18 are simply not graded at all and told to retake the exam at a later opportunity (or drop the subject if it is optional). This means the scores from 0 to 17 are never actually formally awarded. On a secondary level, even within the 18-30 range some professors tend to award very high (even perfect) scores almost by default to students who don't display obvious gaps in their basic knowledge of a subject.
* In music festival ratings (mostly for high school choirs, orchestras and bands), there are theoretically five levels at which you can rate a performance: 5 = Poor, 4 = Fair, 3 = Good, 2 = Excellent, 1 = Superior. Very few groups get a 4 or 5, and a 3 is what's given when something was terrible. The "Excellent" rating of 2 goes to groups ranging from acceptable to very good. This is partially meant to be encouraging, but judges also have to sign their rating forms and want to be invited back - judging is paid work.
* Competitive high school debate organizations use a different scoring system for each event, such as the Lincoln-Douglas event. Judges are asked to score competitors on a 30-point scale, but any score below 20 is to be reserved for extreme circumstances in which the judge must provide a written justification of why they gave a score lower than 20. As long as a contestant gets up, says enough words to fill the time limit, and doesn't use any foul language, they get at least a 20/30.
* In Southern California, restaurants are given a letter grade based on health and safety standards. It's mostly about how clean the place is. While the rankings follow the usual A, B, and C monikers, most restaurants have an A grade; it's rare that a place has a B (even in food courts where its neighbors have [=A=]s). [[JustifiedTrope Since it's an official government statement on a restaurant's hygienic practices, anything below an A is practically a kiss of death]] -- consumers tend to assume that even a B-rated place is a plague pit, even though objectively that's still considered an acceptable rating. Most restaurants overhaul their practices ''very quickly'' to get back to an A rating or risk going bankrupt. Although this imbalance was not intended, it's generally seen as an overall good thing from a public health perspective.[[note]]The exceptions are niche restaurants whose select customers care less about the rating and more about the niche, and in ethnic neighborhoods where eateries may cater only to the local population, who will eat there regardless.[[/note]] It's also enforced to an extent: any restaurant that doesn't meet at least the C grade is shut down until they clean up. New York City also has a similar grading system for its eateries, with ratings of A, B, C, and "pending". Any place that doesn't have an A is usually not bothered with by consumers, even though a B rating isn't generally too bad (passable but with minor problems found that can easily be fixed). It should also be noted that "pending" does not mean "not rated yet"; it means "failed but was given time to bring things up to par".
* USDA beef grading works this way too. The grades normal consumers typically see are (from lowest to highest) Select, Choice, and Prime. There are also five grades below those; from lowest to highest: Canner, Cutter, Utility, Commercial, and Standard.
* Ever watched the Olympics? Try the gymnastics events sometime. Despite being on a 10-point scale, it's ''rare'' for any competitor to get below a ''9.5''. RankInflation is so bad that critical flaws (such as a gymnast tripping and falling on their face) are worth only about a tenth of a point, while flaws that viewers can't even distinguish cost a mere 1/100th of a point. Scores generally range from 9.7 to 9.9. This is in large part because Olympic gymnastic scoring guidelines don't differ significantly from those used at lower levels of competition -- and you ''will'' find lower scores there. The issue is that the Olympics features the world's best; Olympians simply ''don't'' screw up noticeably enough to warrant a lower score.
* Telephone customer service personnel will occasionally ask you to rate their level of service on a scale of 1-10. If you answer 9 or below, they'll ask for specific reasons why you didn't give them a 10, and customers who can't or don't care to name specific flaws in the service will probably amend their rating to 10. This makes a rating of 10 equivalent to "acceptable service with no specific complaints" rather than outstanding or beyond-expectations service. For the company the phone agent works for, anything less than a 10 can and often will be used to deny raises to the employee, regardless of how frivolous, trivial, or absurd the reason for the lower rating is.
* People often rate appearance on a four point scale. Studies have shown that, when asked to rate their own appearance on a scale of 10, a person will rate themselves somewhere in the 6-to-9 range, but will only rarely rate other people lower than a 4, and even then most admit to feeling guilty.
* From 2005 to 2012 Ofsted school inspections in Britain graded schools on a scale of Outstanding, Good, Satisfactory, or Inadequate. Schools and their senior staff would invariably be criticised for being "Satisfactory". In 2012 "Satisfactory" was renamed "Requires Improvement", reflecting what the grade had come to mean.
* Enlisted Performance Reports in the US Army and US Air Force. For many years rating inflation was rife, and it became standard for everyone to get perfect scores on their evaluations. Anything less than a perfect score meant you had ''seriously'' fucked up. Unfortunately this meant if you ''were'' an exceptional performer, there was no way to indicate it on the report, because both great soldiers and the merely average were receiving the same scores. This also led to odd situations where a senior enlisted member would be convicted of something like sexual assault and news reports would emphasize their history of outstanding performance evaluations. This ranking inflation would lead the Army to revise the rating system, including limiting the number of top ratings that anyone is able to give someone over the course of their career.
* Grading in US high schools and colleges follows this to a tee, and may be the reason why this trope is so prevalent elsewhere. Most schools grade along a five-letter system of "A", "B", "C", "D", and "F". Much like the California health inspectors' grading system above, this has a real-world justification: if you finish a class knowing less than 60% of the material, you don't deserve to pass it. Some colleges go further and make a "D" (or even a "C-" at the University of California, Santa Cruz, where it's pass/fail at other colleges) a failing grade as well, on the grounds that just barely passing a class means you aren't ready for the next one. When you've spent your whole life associating 65% with "barely adequate" and then become a professional critic, that's going to rub off on your grading. It's for this reason that some magazines and websites (such as ''Entertainment Weekly'') simply use the same A-through-F grading system.
* Most scholarships and grants require a student to have ''at least'' a 3.0 GPA or an 80% average. It goes even further in graduate school: most graduate programs require a student to maintain an 80% average at ''minimum'', and take it one step further by threatening to drop a student who gets even a C in more than one class.
277* The Soviet/Russian education system ostensibly uses numbers from 1 to 5, but in practice 1 is virtually never used, with 2 being the lowest grade, standing for failure.
278* AAA (American Automobile Association) uses a [[http://ww2.aaa.com/aaa/common/tourbook/diamonds/whatisthis.html five-diamond scale]] to rate the quality of lodging and restaurants, both in their yearly Tour Books and online. According to their criteria, one diamond simply means that the property is undistinguished and unremarkable, but still not "bad". In addition to these, they have "AAA Approved", which means that a property meets a specific set of quality guidelines regardless of diamond rating. They also stop listing properties if they fall below the minimum standards and ''never'' list properties under that threshold, so if it's not in the Tour Book, it's probably not worth checking out.
279* A curious example of this trope is found in the DoomsdayClock from the Bulletin of the Atomic Scientists, which grades how close we are to [[TheEndOfTheWorldAsWeKnowIt global catastrophe]] in terms of how close the clock face is to midnight. Even when the clock is at 11:55, the situation is actually more-or-less safe -- the UsefulNotes/ColdWar is over and none of the countries with nuclear weapons have any interest in starting WorldWarIII. The earliest time ever shown on the clock, when the world was considered safest, was 11:43 PM, immediately after the fall of the Soviet Union. It is implied that [[AsLongAsThereIsEvil the very existence of nukes]] is what's keeping the time so close to midnight.
280* The AP (Advanced Placement) program, by which American high school students, usually juniors and seniors, can get credit for intro college courses, averts this in general. Scores range from 1 to 5, and most scores are 3s, with 1s and 5s typically the least common. (However, tests that are considered harder, like calculus, physics, and foreign languages, skew upward in their scoring because of who tends to take them.) A 4-5 is usually enough for college credit; many colleges also accept 3s (sometimes for slightly less credit than higher numbers, but still some value) and a few accept 2s or use them to place students into honors versions of intro courses.
281* Illinois K-8 students take a standardized test called the MAP test. Scores typically go up to about 260 or 270, average out at 240, and anything below 220 in middle school is considered super low.
282* The driver rating systems for ridesharing services Uber and Lyft operate on this scale as well. The scale runs from one to five stars, but drivers face removal from the systems if their average rating falls below 4.5 in most regions. Likewise, food delivery gigs like Doordash and Grubhub can also remove drivers from the system if their overall rating falls below a similar threshold.
283* NVIDIA combined this with RankInflation. When it came time for a model number refresh with the [=GeForce=] 200 series, NVIDIA decreed that the prefix before the number would indicate the card's performance tier: no prefix (10), G (10-20), GT (30-50), and GTX (60-90). Come the [=GeForce=] 700 series, the 50 number graduated to GTX and a new tier was effectively created, the [=GeForce=] TITAN. By the [=GeForce=] 900 series, there was no GT card a consumer could buy at all. The problem is that there's a ''huge'' performance swing between, for example, the GTX 950 and the GTX 980 Ti.
284* When AMD had its own model number refresh, it added an R# prefix using odd numbers up to 9 -- except that the lowest prefix actually used starts at R5, so there is no R1 or R3.
285* The Carnival parades of UsefulNotes/RioDeJaneiro and UsefulNotes/SaoPaulo end in a vote count where the grades supposedly go from 0.0 to 10. In practice, a judge would rarely give anything lower than 9.0 -- exceptions included [[https://www.youtube.com/watch?v=GOFp0B9QI1c this overly rigid guy]], who went as low as 7.8 and didn't give a single 10, and [[https://www.youtube.com/watch?v=Lh7rgX-Ww9U#t=50s this woman]], who clearly made the crowd unhappy when she gave an 8.9.[[note]]Not the only controversial grade that year; the count went unfinished as disgruntled supporters [[https://www.youtube.com/watch?v=EXfaf1FvpzY invaded the jury area and tore apart the rating papers!]] Yes, Carnival is SeriousBusiness.[[/note]] Eventually, the rules were downright changed so that 9.0 is the lowest possible rating.
286* Audience tracker Cinemascore uses an A-to-F scale. However, given that they poll opening-night audiences, and said audiences tend to be the people most enthusiastic to see the movie, most Cinemascore ratings skew very high. Only a tiny handful of films have ever gotten an F [[note]][[https://slate.com/culture/2017/09/here-are-the-only-19-movies-to-ever-receive-an-f-cinemascore.html A 2017 analysis]] notes that the nineteen at the time comprised mainly horror movies ("An F in a horror film is equivalent to a B- in a comedy.") and movies where the audience was expecting a different movie ("What these movies have in common is that they take on the cloak of a genre and then refuse to give the audience what they expect from that genre")[[/note]], and even a B is considered to be a major sign of trouble.
287* The Parker scale of wine grading tops out at 100, but ratings start at 50, and rating guidelines state that wines rated 50-59 are in some way damaged and wines rated 60-69 are just flat-out badly made. A score of 70 means that the wine is just barely drinkable.
288* The RST (Readability, Strength, Tone) system of signal reporting in ham radio is meant to give the operator on the other end an idea of his signal quality. It's a three-digit number: the first digit, Readability, ranges from 1 to 5 and indicates how well the receiver can understand the signal. The second number, Strength, is the only one that's really quantifiable and indicates signal strength on a scale of 1 to 9, and the third number, Tone, also ranges from 1 to 9 and is only relevant when sending Morse code. In practice, Readability will always be a 5, Strength will be a 9 or a 5 depending on how loud you're coming in (or how bad the receiver is at copying Morse), and Tone will always be a 9. These rote responses are so common that most ham radio logging software will auto-fill the signal reports as "599", and may have buttons for "599" and "559" as well.
289* Online business profiles, like Google Business Profile and Yelp, let users give ratings between 1 and 5 stars, but a 4.0 average is generally considered bad, so any rating below 5 stars effectively becomes a negative rating.
290[[/folder]]
291
292!!In-universe examples:
293
294[[folder:Video Games]]
295* In ''VideoGame/PokemonMastersEX'', NPC allies ("sync pairs") are given a power rating from 1 to 6 stars. However, [[https://bulbapedia.bulbagarden.net/wiki/List_of_sync_pairs every single pair in the game]] has at least 3 stars.
296* In ''My First IGN Interview'' (from the ''IGF Pirate Kart''), you get the option to do a practice interview with an IGN applicant, who then asks you to rate how well she did. You have a choice between 10, 9, 8 or 7 out of 10, and if you pick 7 she gets as offended as if you had chosen 1. (This is obviously a joke about IGN's game rating system.)
297* A mission in ''VideoGame/Borderlands2'''s "Mr. Torgue's Campaign of Carnage" DLC involves the player characters being sent after a game reviewer who gave a negative review to a game Mr. Torgue really likes. The review: "Gameplay's pretty dull. It sucked. 6/10." Torgue is half upset because he thinks the game in question is very good, and half upset because by any logical standard a score of 6/10 is above average.
298* A non-review example of this occurs in the ''VideoGame/GuitarHero'' games: You will ''never'' get fewer than 3 stars on anything, no matter how badly you do. It's just a question of whether you get 3, 4 or 5.
299** However, ''Rock Band'' averts this. As you build up to the base score, which is the score you'd get for hitting every single note if there was no combo system and no Overdrive, you go from 0 stars to 1, to 2, and finally to 3. With the combo system and Overdrive, however, getting 3 stars is still laughably easy on most songs. 4- and 5-starring songs is still just as hard (or easy, depending on the song) as it was in ''Guitar Hero''. This all means that it's more than possible to complete songs with scores below three stars.
300*** It's still not possible to get 0 stars--someone tested this with the song "Polly" by Music/{{Nirvana}}. The song literally has only eight notes in its drum part, so it's possible not to hit any of them (and, thus, not to score any points) and still pass the song. The results screen? 0 points and 1 star.
301*** ''Guitar Hero Metallica'' introduces a star meter somewhat similar to ''Rock Band'''s. The difference is, you still can't get less than three stars in GHM; until you have at least three stars, the star meter will "help" you fill it until you reach three, which sometimes entails, for example, automatically filling itself during sections with no notes.
302** ''VideoGame/GuitarHero'' sort of justifies it, because "failed a song" means "got a bad review" and so if you get less than three stars you failed. It's more like a HandWave than a real justification, though.
303** The opposite end of the spectrum occurs for certain [[VideoGame/DanceDanceRevolution DDR]] clones. ''[[VideoGame/InTheGroove In The Groove 2]]''? An "A" is somewhere around low 80%; after A+ is S-, S, S+, one star, two stars, three stars and four stars.
304* Certain games in the ''VideoGame/RhythmHeaven'' series give an explicit numerical score for the player at the end of a rhythm game. Over 85 is Superb, between 60 and 85 is OK, and below 60 fails the stage, forcing the player to try again.
305[[/folder]]
306
307[[folder:Webcomics]]
308* ''Webcomic/PennyArcade'', not surprisingly, [[http://www.penny-arcade.com/comic/2013/02/13 parodied this.]]
309-->"It's a digital nightmare from which I ''cannot'' wake."\
310"So, it's a seven?"\
311"No. I need you to bring me... the ''[[BrokeTheRatingScale forbidden numbers]]''."
312** [[http://www.penny-arcade.com/comic/2008/06/13 Another example.]]
313[[/folder]]
314
315[[folder:Web Animation]]
316* {{Parodied|Trope}} by ''WebAnimation/RedVsBlue'' in one of their PSA videos, [[https://www.youtube.com/watch?v=T5aUvk86XiA "Game On"]]. In the segment on game reviews, among other things Grif says that scores of 1-6 are meaningless because no game ever gets them, and that a 9.9 is the same score as 10, except the reviewer doesn't like the developer for some reason.
317[[/folder]]
318
319[[folder:Western Animation]]
320* ''WesternAnimation/{{Arthur}}'': Exaggerated in "[[Recap/ArthurS16E6BustersBookBattleOnTheBusterScale On the Buster Scale]]", where Buster rates every movie he watches (all being action movies full of robots and explosions) a 10+/10. However, he does consider demoting a movie to just 10/10 because it's not in 3D.
321* Parodied in the TV show ''WesternAnimation/TheCritic''. Jay is told by his boss that his job is to "rate movies on a scale from good to excellent." Jay himself is an inversion: he [[StrawCritic dislikes everything]], and the best score he ever gave a film was a 7 out of 10.
322* In ''WesternAnimation/{{Futurama}}'', Dr. Wernstrom gives Dr. Farnsworth the lowest rating ever: [[http://theinfosphere.org/A_Big_Piece_of_Garbage A, minus, MINUS]]!
323* ''WesternAnimation/TheSimpsons'':
324** In one episode, a travel journalist who reviews towns across America visits Springfield. He's repeatedly tricked and abused by the residents and storms off to give Springfield the lowest rating he's given anywhere: 6/10.
325** In "[[Recap/TheSimpsonsS11E3GuessWhosComingToCriticizeDinner Guess Who's Coming to Criticize Dinner?]]", Homer becomes a food critic. At first, being [[BigEater Homer]], he gives everything an excellent review. While his fellow critics eventually convince him to be crueler, he still won't give anything lower than "seven thumbs up".
326[[/folder]]
327
328----
329

Top