

Four-Point Scale


Duke: Why the hell do you have to be so critical?
Jay: I'm a critic!
Duke: No, your job is to rate movies on a scale from "good" to "excellent"!
Jay: What if I don't like them?
Duke: That's what "good" is for.
The Critic, Pilot

Ever notice how the average score given by a review show somehow tends to be above average?

If you take a stroll through professional game review websites, you will notice that scores tend to fall in the 6.0 to 10.0 range, even when the sites are nominally using a ten-point scale. This is called the Four-Point Scale, sometimes also known as the 7 to 9 scale. Two takes exist on why this is so.

The first view considers the Four-Point Scale a bad thing and holds it up as evidence of a website's lack of integrity (an accusation most often aimed at mainstream outlets). It is rarely leveled at the writers themselves; the blame is usually placed on a site's editors or Executive Meddling.

The game journalism industry, like all forms of journalism, thrives on access. Game magazines and websites need a steady flow of new games, previews, and promotional materials directly from the publishers in a timely manner, or they become irrelevant. Unfortunately, the game industry is under no obligation to provide this access, and game review sites and magazines are far more reliant on the companies that produce the games than movie critics are on movie studios; indeed, since most websites are expected to provide their content for free, industry advertising is perhaps their most important source of income. There are tales of editorial mandates and outright bribery, but the whole system is set up so that publishing a highly critical review of a company's triple-A title is akin to biting the hand that feeds you. This is especially true of previews, which tend to have an artificially positive tone: the company didn't have to show the journalist the game in the first place, and a journalist who pans it is unlikely to be invited back to see any of the company's other work. As such, you're unlikely to see major titles, even the worst of the worst, get panned too hard in for-profit publications. This results in sites like IGN giving insanely negative reviews to smaller titles that don't provide them with large enough kickbacks, in order to appear "balanced".

In addition, many of these game review programs draw their audience by covering the most anticipated upcoming games, which are anticipated precisely because of their high degree of quality and polish. Because of this, many critics are incentivized to review only good games for fear of losing ratings. Many game reviewers will thus simply never get around to reviewing the lower-quality, bargain-bin, shovelware games that would balance out the scale, skewing their average score upwards.

The other view considers the Four-Point Scale the result of a perfectly reasonable way to award and interpret review scores. This can be understood fairly easily by comparison with the way school assignments are graded in America. In any given class, people will usually get scores ranging from 60% to 100%, with the average around 70-75%. This leads people, both reviewer and reader, to expect scores to mean something similar to what they have already encountered in real life: ~60% means "this sucks, but it can still be considered a game", ~75% is "average", ~85% is "decent/solid", and anything above 90% is a mark of excellence.
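Translated into code, this school-grade reading of a review score looks something like the sketch below. The thresholds and labels are only the rough bands from this paragraph, not any outlet's actual rubric, and `interpret_score` is a made-up helper name:

```python
def interpret_score(percent):
    """Read a review percentage the way US school grades are read.

    The bands are illustrative, mirroring the rough mapping above:
    ~60% barely passes, ~75% is average, ~85% is solid, 90%+ excels.
    """
    if percent >= 90:
        return "excellent"
    elif percent >= 85:
        return "decent/solid"
    elif percent >= 70:
        return "average"
    elif percent >= 60:
        return "this sucks, but it can still be considered a game"
    else:
        return "off the bottom of the scale"

print(interpret_score(75))  # average
print(interpret_score(95))  # excellent
```

Note that the entire bottom 60% of the scale collapses into a single band, which is exactly why the usable range ends up being only about four points wide.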

An additional reason for this lies in a form of selection bias in reviews: you're more likely to go to the trouble of writing a review in the first place if you really liked something and want to tell others about it, or absolutely loathed it and want to ward others away from it. While this obviously doesn't apply quite as much to professional critics, it is a major factor in the overwhelming positivity or negativity of user-submitted reviews.

The situation with the Four-Point Scale has led some reviewers to drop rating scores altogether, or to favor an A/B/C/D grading system. Professional reviews tend to keep a rating system to reduce the chance of being misquoted or misinterpreted: it will be evident that you did not mean the game was "excellent" if there's a big "6/10" or "D" at the end of the article.

The same basic concept applies to every industry; reviewers tend to place things in the upper half of whatever their reviewing scale happens to be, and for the same reasons. That said, it's generally agreed to be much more prominent in gaming than in industries like film. Review aggregator Metacritic, for instance, explicitly categorizes films and games differently: an 85 average counts as "universal acclaim" for films but only "generally favorable" for games (90 is "universal acclaim" for games), and a 45 average counts as "mixed or average" for films but "generally negative" for games (50 is "mixed or average" for games).

If reviewers get too negative, there's always the risk of fan backlash, because Reviews Are the Gospel. Contrast So Okay, It's Average, where something just below this scale is acknowledged to have some quality, but not a lot. See also Broke the Rating Scale, F--, and Damned by Faint Praise; when this scale is in effect, scores like 7 or 8 become faint praise.

It is possible to avert this trend, such as by ranking a product's features in relation to one another (one such review for a video game might start: "Soundtrack > Graphics > Plot > Gameplay > Immersiveness") or by giving purely text-based, non-numerical reviews, but this only serves to bypass one's own cognitive biases, not to satiate company execs.

Examples in real life (by subject):


  • This happens to an extent with fan reviews too. Go to any site where shows can be rated (like Anime News Network) and most shows will float above 6.0. Fan reviewers do tend to be, well, fans, which skews reviews positively. They may also pattern themselves after official reviews, even without meaning to. And sometimes fan reviewers "cheat" to push the score closer to their desired number. The problem is the way the scores are averaged, which encourages this kind of behaviour; by taking the median score, or using a fancier formula, there are ways to make an 8/10-rated movie be affected the same way by a 7/10 as by a 1/10.
    • There's plain selection bias here, too; no one is forced to watch anime they remotely suspect they won't like. Comparing the score spread and number of voters between those who rated only some episodes and those who rated them all can be instructive.
    • Exception: Some anime series with exceptionally bad Macekre dubs will still have the original version rated highly, but the dub will get low ratings.
      • Fan reviews of video games also fit here. You'll find a mixture of reviews giving a game perfect or near-perfect scores and reviews giving it the lowest score possible.
  • Truth in Television: the whole business can also be justified in many cases by score entropy. Here's how it goes: you independently, objectively, and honestly review game A. You give it, say, 95%. A year later, you review game B. Game B is pretty much game A with the awesome cranked up to eleven, or with the same awesome but all the miscellaneous suck ironed out. Watchagonnado? You objectively have to give it 96%. Cue next year. Some outlets, such as GameSpot, claim that their standards rise as the average quality of what they review rises, averting this problem in theory but giving rise to a lot of Fan Dumb if actually followed.
    • This is the same concept behind why they have the Olympic favorites in events like Ice Skating do their routines last. If they did them first, and got a perfect score, but were then one-upped by an underdog, the judges can't score the underdog higher than perfect, and controversy erupts. (Of course this also means holding television viewers' interest until the end, rather than the outcome seeming to be a Foregone Conclusion after the actually good competitors have gone - note that the same ordering is usually used for timed events and other events without judged scores where this isn't a factor.)
    • This trope can also be explained, in basically all industries, by the assumption that scores are like grades in school: getting a 50% is absolutely terrible. This leads to bizarre situations with user-submitted reviews, where one person gives a game a 5 or 6 out of ten while calling it average or somewhat above average, another scores it a 7 while calling it mediocre but free of major flaws, a third gives it an 8.5 or 9 because "nothing can be perfect" or because it's not on the system they like, and a fourth scores it a ten, calling it the best game on the system by far despite a few minor flaws.
  • Lore Sjoberg played with this in giving his first and possibly only F on the Book of Ratings to Scrappy-Doo.
    Lore on Potato Bugs: "'Fouler insect never swarmed or flew, nor creepy toad was gross as 'tato bug. Remove the cursed thing before I freak.' — Wm. Shakespeare, Betty and Veronica, Act 1, Scene 23. I can't even go into how nightmarish these vile little affronts to decency and aesthetics are. If I were having an Indiana Jones-style adventure, the Nazis would lock me in a crypt with a herd of potato bugs. And, I might add, I'd choke myself to death with my own whip right then and there rather than let a single evil little one of them touch my still-living body. They're still better than Scrappy-Doo, though. D-"
  • Any horoscope that rates the upcoming day on an alleged scale of one to ten will use a four-point scale.
  • A well-known gun writer came right out and said that negative reviews were not allowed by the editorial staff. He went on to say that they simply wouldn't print reviews for bad guns, so if a new gun came out and none of the major industry mags were reviewing it, take a hint.
  • IMDb seems to actively encourage this, listing any review that rated a movie or show a 7.0 or worse under "Hated It".

    Cars 
  • New car reviews in both magazines and newspapers. Even the Yugo received lukewarm reviews from the major car magazines; these publications are truly frightened at the thought of losing advertising revenue over a poor review. This is doubly true after General Motors pulled its advertising from the Los Angeles Times when one of GM's products was panned in print. This may be the case where this trope is least justified: compared to everything else on this list, cars and other vehicles are very expensive, and if you buy one, the dealer isn't inclined to take returns.
    • European motorcycle magazines seem to have a particular love for BMW motorcycles. A flat spot in the torque curve is a minus for any other marque, but the BMW is praised for having high end power. Or a test of three comparable motorcycles where the two Japanese cycles win on points in the summary, but the article still proclaims the BMW number 1. It's either Euro-chauvinism, or influence by the BMW advertising budget. It doesn't help that BMW routinely provides reviewers with bikes with all the optional extras. Reviewers will gush the entire review on the technological gew-gaws, and then mention in one sentence at the end that these are all optional and cost money. Guess what readers remember?
  • Jeremy Clarkson mentioned this trope frequently in his published reviews. He says that the best thing that happened to his car reviewing was television, because it reversed the previous power relationship: he was rich enough from TV to say what he liked, and his public profile was so great that car manufacturers could not afford not to send him cars for review. Which he then reviewed honestly. Tellingly, despite years of saying that he despises all Asian cars except Hondas (because Honda was started by a Mr. Honda who had a dream as a small boy, like BMW or Lotus, as opposed to simply being the automobile arm of a heavy industry company), firms like Daewoo still sent him cars, which would be duly savaged.
  • Consumer Reports has a policy against reviewing cars or household goods that they didn't buy incognito from a retailer. Nonetheless, most of its ratings are Good, Very Good or Excellent.
  • Cars at a car show, or cars being appraised, are scored on a scale of 1 to 6, with 1 being perfect and 6 being junk. Most score a 2 or 3, since people don't generally bring junk vehicles to a car show in the first place.

    Comic Books 
  • Spirou started out as a comic book magazine. Later in its lifespan it incorporated other features as well, including a review section covering comic books from publisher Dupuis that had come out recently relative to the magazine's publication day. They use a rating of piccolo hats to indicate how good something is: one hat is "not bad" and five hats is "masterpiece". They never seem to go below this, so if you come across a Dupuis comic book that was never reviewed, it's either old or so bad that not even Dupuis themselves would defend that piece of crap.
  • Comic Vine has what some would consider a two point scale. They loved a comic? Five stars. A comic was decent? Four stars. It's rare to see a three or even two star review from them, but when that does happen, people take notice.

    Live-Action TV 
  • Go to a site where TV episodes can be rated and pick a show you hate, any show. It's pretty much guaranteed that most of the ratings won't drop below 7 out of 10. In some cases, reviewers will rate an episode before it has even aired, in an "I think this will be good" way.
  • For British television dramas, "average" is actually 77%. Even so, very few dramas go below 70 or over 90 (much was made over the Doctor Who Series 4 finale getting 91% for both parts).
  • As a reality TV example from Dancing with the Stars, you can trip, shuffle, and walk your way across the dance floor for two minutes and still get a four or five. Two and three are put in play extremely rarely, when the judges are trying to force an inferior dancer off the show. In ten seasons, no one has ever been given a one.
    • The head judge, Len, once gave an explanation of each of the ten scores: getting on the floor and moving your feet grants you a 2, being vaguely aware that there is music playing earns a 3, and dancing mostly in time to said music gets a 4. To get a 1, you would literally have to not dance at all.
    • The Australian version of the show tends to vary a little more with bad dancers getting in the 40-50% range. There are some rarer exceptions: Nikki Webster got a 1 from one of the judges, almost certainly a publicity stunt as people have danced far worse and gotten more. A couple of contestants have gotten 1s from all the judges but you pretty much have to dress up like a clown and go completely insane to get that (which one guy did).
    • Averted in the German version, Let's Dance. While recent seasons have seen a surge in 10s and quite a few dances scoring a perfect 30 total, judges aren't afraid to go low if a dancer, even one who's trying, just isn't delivering. 2s and 3s are very common during the early stages, and every so often dancers fail to reach double digits in total. In particular "the evil judge" (one always exists on these shows), Joachim Llambi, shows 1s on a regular basis and even draws a minus in front of them on occasion; he calls the show's host, Sylvie Meis, a "rules lawyer" when she has to remind him that, whatever he displays, the couple will still be credited with a minimum of 1. It took quite a few seasons for the first triple 1, though.
    • On Strictly Come Dancing, Craig Revel-Horwood, in particular, has been criticised for his "low" marking - he marks out of the full 10 (and isn't afraid to use 1s or 2s), while the other judges give out sub-6 scores so rarely that it tends to look like a personal insult when they do. This criticism ignores the fact that, logically, if you're using a ten-point scale then a five or six should be average and a seven or above should be good. Things get even worse once the season passes the quarter-final stage, when any mark lower than 9 tends to be roundly booed by the audience.
  • Ice Age (formerly Stars On Ice, the Russian Dancing With The Stars on ice) uses standard figure skating scales: 0.0 to 6.0. To put things into perspective, the worst average score in the entire history of the show, awarded to the worst pair on the very first day back in 2006, was 4.8. It has gotten worse over the years: now the average score is 6.0, noticeable mistakes mean a 5.9, and a bad performance goes as low as 5.8. To add insult to injury, judges sometimes complain that they don't have enough grades to differentiate between performances of similar quality, apparently ignoring the 57 other grades at their disposal.
  • In Great British Menu, a score of 7 is considered average, and anything below an 8 is considered a disappointment; the lowest score ever given in a judging round was a 2, leading the subject to Rage Quit. On the other side, though, scores of 10 out of 10 are far from unheard of, being essentially an indication that, in its current state, the dish is worthy of being presented at the banquet.
  • Video Power was an early '90s show meant to cover everything related to video games, including reviews of recent titles. It only takes watching a few episodes to notice that the host never reviews a game he doesn't recommend.

    Music 
  • Q Magazine has never gotten over giving five stars to the legendary Oasis trainwreck (for some, anyway) Be Here Now.
  • Sounds of Death, aka S.O.D., is infamous for this. In past years they would publish "reviews" of albums with copy taken straight from the record label's press releases, and in many cases run a glowing review of an album opposite a full-page ad for the same CD!
  • Allmusic zig-zags this:
    • It rarely rates an album below three stars, and never rates an album five stars when it comes out.
    • It isn't unheard of for them to go a little lower: Brooks & Dunn's and Kenny Chesney's discographies each include at least one two-star and one two-and-a-half-star review (Kenny has two two-stars).
    • With certain artists it shifts the scale about one-and-a-half stars lower.
    • Allmusic also seems to have a strange hate for later "Weird Al" Yankovic albums, which are usually well-received by others.
    • Some of the reviews date from when Allmusic was still in book form, and in those cases, the stars don't always match up — so they might say an album is unremarkable yet give it four stars, or say it's great but only give it three.
      • Charles Manson's first album got 4/5 stars, which made the initial review a bit confusing when it stated that Manson was "as good a songwriter as he was a human being" (though the review concluded by telling the reader, "Don't bother").
    • At one point on Allmusic's website, every single one of The Beatles' albums (as originally released in the U.K.) was rated five out of five stars, no matter whether the review was critical or not. Conversely, ratings for other issues of their albums (e.g. the American Capitol releases, some of which, such as Meet the Beatles and the U.S. issue of Rubber Soul, are considered by some to be better than their canon counterparts) were all over the place.
  • In a similar vein, Country Weekly magazine has used a five-star rating in its album reviews section since late 2003, a couple of years after the late Chris Neal took over as primary reviewer. Almost everything seemed to get an automatic three stars or higher, with the occasional two-and-a-half at worst. Perhaps the only time he averted this trope was in one issue where a Kidz Bop-esque covers album got one star. Before the star-rating system, the mag's reviewers were even more unflinchingly favorable, both Neal and his predecessors. When a batch of new reviewers took over in late 2009, they got a little more conservative with the stars; one gave an album only two-and-a-half stars, although the tone of the review didn't suggest the album was even mediocre. Later on, when the review section was expanded to singles, music videos, and other country media as well, the lowest they ever went was two stars, for the music video of Zac Brown Band's "The Wind".
    • They switched to letter grades in late 2012. For three years, they managed never to go lower than C-minus (The X Factor winner Tate Stevens' debut, the video for Eli Young Band's "Say Goodnight", Luke Bryan's "That's My Kind of Night", and the video for Florida Georgia Line's "This Is How We Roll"). The magazine finally gave out four Ds and a D-minus in 2015 and 2016, but still never gave out an F before it stopped publication in 2016.
  • Robert Christgau used to be much more diverse in his ratings, which ranged from E- to A+ before 1990, and afterward through a wide variety of grades including dud, "neither," honorable mention, and B+ to A+. Now that he no longer has the encyclopedic approach to reviewing he once had, he only rates albums he likes as part of his "Expert Witness" blog, effectively limiting grades to B+ through A+. Christgau will occasionally use a B or lower to signal that a record is a "turkey": a Giftedly Bad record worthy of Bile Fascination.
    • For albums he doesn't grade, he has a scale of three stars to one star for "honorable mentions", which signal flawed albums that have niche appeal — three stars for potential modest Cult Classics; two stars for potentially enjoyable records; one star for potentially likeable records. He also has ✂ (an unworthy album with a good/great song), 😐 (an album with some craft or merit, but not enough) and 💣 (a bad record unworthy of further thought).
  • Generally averted by Rolling Stone; if you look on their website, the vast majority of albums score 3 or 3.5 stars. Higher-scoring albums are usually later albums or remasterings by classic rock artists.
  • Averted by NME, which has a 10-point scale and freely uses all the points on it. They very rarely give out a perfect 10 (usually only once or twice a year, if at all) and this will almost certainly be their album of the year. They also occasionally break the bottom end of the scale by giving out zeroes or even minus figures when they're feeling particularly snarky.
  • John McFerrin, who reviews on John McFerrin Music Reviews, has made a conscious effort to avert this. His scale goes 1-9, then A, B, C, D, E, F, and finally 10. In a FAQ on his website, he stated that he set the system up asymmetrically, with the first eight grades equivalent to 0-7 and the other eight spanning 7-10; otherwise, he explains, most albums would get a 7 or 8. For the most part, his system actually inverts the trope: very few albums get above a D (the equivalent of 13/16, or an A-), while out of the 1015 albums he has reviewed, only 79 fall below a 6 (the equivalent of a C), with only four below a 3 (equal to a D-). Though, to be fair, this could be because he has publicly stated he mostly reviews albums by bands he likes.
  • Eminem specifically called out the hip-hop magazine The Source (which rated albums based on a scale of five mics) for doing this, in one of his many diss tracks aimed at the magazine and its editor Benzino, "The Sauce":
    The Source was like our only source of light
    when the mics used to mean somethin', a four was, like,
    you were the shit — now it's like the least you get!
    Three and a half now just means you're a piece of shit.
    Four and a half or five means you're Biggie, Jigga, Nas
    or Benzino — shit, I don't even think you realize
    you're playin' with motherfuckers' lives

    Pinball 
  • Review scores on Pinball News have never dropped below roughly 70%, even for widely disliked machines like Indiana Jones (Stern) and CSI. The reasons for this are unknown, but considering Pinball News reviewers receive machines directly from manufacturers for review, it may be to prevent those manufacturers from cutting them off.
  • Thoroughly averted with user aggregate scores on sites like The Internet Pinball Database and Pinside, however: at these sites, a 50% DOES describe a mediocre machine, with really bad ones dropping to between 10% and 25%. No enforcement is necessary at either site; it seems pinball audiences, by and large, naturally do not use the Four-Point Scale.

    Professional Wrestling 
  • This trope hits professional wrestling reviews hard. Virtually nobody is satisfied with any rating below four stars. Japanese wrestling reviewer Mike Campbell has gotten a reputation as a horribly biased negative critic simply because he averts this trope very hard while explaining the pros and cons of a wrestling match in meticulous detail.
  • Averted in the case of Dave Meltzer. He does his best to use every point of his -5 to 5 star scale, occasionally even going outside that range.

    Sports 
  • The 10-point must system used for scoring various boxing and MMA bouts.
    • In boxing, judges award the winner of the round 10 points and the loser 9 points. Barring fouls, the only way to get fewer than 9 points is to get knocked down, which is rare and usually indicates that the boxer is about to lose. Scores of 7 or fewer would require the boxer to get knocked down several times in a 3-minute span. In that situation, the referee or the fighter's corner would usually stop the fight before the round ended. Sometimes, rules are set in place in which the fight is automatically stopped if three knockdowns occur in a single round, making it impossible to score 7 or fewer points. Thus, in fights that go to decision, the scores are very large, but decided by only a few points. You get 108 points just by managing to not fall down for 12 rounds, and 120 points for winning every single round.
    • MMA also uses the 10-point must system, but has no knockdown rules. Therefore, if you lose the round, you get 9 points. If you're utterly dominated, you'll get 8 points. There's basically no way to get fewer than 8 barring penalties for rules infractions, as a fighter who is performing that poorly would be rescued by the referee. In 2017, changes in the Unified Rules of Mixed Martial Arts made the scoring of 10-8 rounds less strict, allowing 10-8 rounds to be scored when one fighter is defeated soundly, but not completely. The change was made in an effort to combat this trope.
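The compression described above can be sketched by totalling a hypothetical judge's card; `score_card` and the round results are made up for illustration, not any sanctioning body's actual software:

```python
def score_card(rounds):
    """Total one judge's card under the 10-point must system.

    Each round is a tuple (winner, loser_points), where winner is
    "A" or "B"; the round's winner always banks 10 points.
    """
    totals = {"A": 0, "B": 0}
    for winner, loser_points in rounds:
        loser = "B" if winner == "A" else "A"
        totals[winner] += 10
        totals[loser] += loser_points
    return totals

# A 12-round fight where B wins every round but A never goes down:
# A still collects 9 points a round, so the final card is 120-108.
card = [("B", 9)] * 12
print(score_card(card))  # {'A': 108, 'B': 120}
```

This is the arithmetic behind the observation above: staying upright for 12 rounds guarantees 108 points, and a clean sweep is only 120, so decisions are decided inside a sliver at the very top of the scale.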
  • In competitive debating tournaments:
    • In one scoring system, 75 is considered an average speech, and virtually all speaker scores fall between about 70 and 80, with 79 or 80 being a demigod level speech. Supposedly if someone simply gets up, repeats the topic of the debate, and sits down, that's about a 50. Getting enough judges for a debate can be a problem; often the judging forms are very specific to try to get around the fact that some judges may be, effectively, people who wandered in because they smelled coffee. There are forms where the judge is asked to circle a number from 1 to 5 on 20 different categories, then add the numbers up to give the final score. Since in some categories a 2 is roughly equivalent to "Did not mumble incomprehensible gibberish during the entirety of the debate," 40-50 is about the lowest score you can get if you even attempt to look like you're self-aware.
    • In other formats, each competitor's score is determined by adding the judges' individual scores, each one out of fifty points. Judges are instructed to both score and rank each competitor. Where the fun begins is that judges aren't allowed to give tied scores, and scores are only allowed to differ from each other by one point. The result being that first place, in every round, automatically carries a 50, second place a 49, and so on. Even if a competitor starts his piece over more than once (which automatically carries a ten-point penalty or worse, depending on the format) they're often just given the last place score. Few judges ever rock the vote; a judge who awards a first place a 49 (let alone, say, a 45) is regarded as being unfamiliar with the format. The dark irony hits when you realize that the most veteran judges are the ones willing to be tough; judges who don't know their way around the competition usually just punt it.
  • A football recruiting site ranks prospects using the standard 1-5 star scale, then layers a vaguer additional system on top that ranks players on a 4.9-6.1 scale.
  • In ski jumping each jump is scored by five judges. They can award up to 20 points each for style based on keeping the skis steady during flight, balance, good body position, and landing. The highest and lowest style scores are disregarded, with the remaining three scores added to the distance score. However, anything below 18 is usually considered a slightly botched jump and scores below 14 are only ever seen when the jumper falls flat on his face upon landing.
  • In NCAA football, going through an NFL draft voids the remainder of your scholarship years, which often prevents players from finishing any degrees they have not completed. In order to "help" kids who were on the fence about declaring or staying in school, the NCAA allowed them to consult a panel that would predict where they would be drafted should they come out. However, this panel was notoriously optimistic, frequently telling hundreds of kids a year that they would be drafted in the first three rounds. This had very real consequences: many kids were lured by the promise of NFL riches, fell to the late rounds of the draft because they were raw players, and washed out of the NFL before developing.
  • Gymnastics was previously regarded as a one-point scale due to its use of a code of points which starts every competitor off at a fixed score depending on the difficulty of their routine and makes deductions for errors (you'll hear commentators referring to a good performance as "giving no points away to the judges"). Understandably, at elite level, difficulty ratings were usually close to a 10 (if not a 10 proper), and elite scores would very rarely fall below a 9; only major mistakes or an unfinished routine would do it, and even a fall could still score in the low 9s. These days, all competitors start on 10 points for execution and have a difficulty rating added on, resulting in elite male gymnasts' hit routines regularly scoring 15 or more, while female gymnasts', though lower, will still top a 14 for a hit.
    • Modern gymnastics execution scoring, specifically at major international events, actually inverts this, as judging has become increasingly stringent leading to more deductions and lower execution scores. For all events except vault (which has higher execution due to the nature of the single-element routine), anything above an 8 is generally a solid execution score, an 8.5 or better is pretty darn good, and breaking a 9 is essentially theoretical.
  • This became a point of contention during the 2016 NBA Slam Dunk Contest. Judges were liberally giving out high scores to the point where as the dunks grew more and more impressive, they felt obligated to give them 10s. This led to it taking multiple rounds because the final two athletes kept on tying.

    Technology 
  • In general, electronic products (and products in general) are rated on their performance within a particular price segment, not on their overall performance against everything else, because anything else would be really unfair to the more affordable and sometimes more practical products.
    • As an example, a $100 graphics card that performs better than its competitors in this price category (typically ±10%) can receive a 9/10. But a $500 graphics card that can't match its competitors may receive a 7/10, even though the $500 graphics card will totally blow the $100 graphics card out of the water in performance alone.
    • Products should also be rated relative to their era. It would be silly for a 10/10 product from ten years ago to hold any weight against an 8/10 product of today. Generally, though, the user experience is what counts.
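As a toy illustration of the segment-relative approach described above, a reviewer might score each product against the average performance of its own price bracket rather than the whole market. Everything here — the products, the prices, the brackets, and the scoring formula — is invented for illustration:

```python
# Toy sketch of segment-relative rating: each product is scored against
# the other products in its own price bracket, not the whole market.
# All products, numbers, and thresholds below are made up.

def price_bracket(price: float) -> str:
    """Bucket a price into a coarse market segment."""
    if price < 150:
        return "budget"
    elif price < 400:
        return "midrange"
    return "high-end"

def segment_score(product: dict, market: list) -> float:
    """Score 0-10 by comparing performance to the segment average."""
    peers = [p for p in market
             if price_bracket(p["price"]) == price_bracket(product["price"])]
    avg = sum(p["perf"] for p in peers) / len(peers)
    # 7.0 means "matches the segment average"; each 10% above or below
    # the average moves the score by one point (clamped to 0-10).
    score = 7.0 + (product["perf"] / avg - 1.0) * 10
    return round(max(0.0, min(10.0, score)), 1)

market = [
    {"name": "Card A", "price": 100, "perf": 40},   # budget, strong for its class
    {"name": "Card B", "price": 110, "perf": 33},   # budget, weaker
    {"name": "Card C", "price": 500, "perf": 180},  # high-end, trails its peers
    {"name": "Card D", "price": 550, "perf": 220},  # high-end leader
]
# Card A outscores Card C despite far lower absolute performance.
print(segment_score(market[0], market))  # → 8.0
print(segment_score(market[2], market))  # → 6.0
```

This reproduces the quirk in the graphics-card example: the $100 card that beats its budget peers rates higher than the $500 card that trails its high-end peers, even though the latter is vastly faster in absolute terms.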
  • It's generally the case with consumer electronics that you should never buy any product given less than an 8 on a 10-point scale. The reasons for this are complicated, but basically boil down to the following:
    • A lot of it has to do with usability. If a sample of reviewers generally agree that the usability of an electronic gizmo sucks and thus give it lower scores, then nobody will buy it, because who wants an electronic gadget that's annoying to use?
    • Almost every complaint that you could make about most well-known high-tech products is either based on taste (say, iOS vs. Android) or is strongly counterbalanced by price (a top-end graphics card against a $60 model). The few complaints that don't fall into those two categories tend towards nitpicking, and are often only visible when sitting two devices next to each other. So whatever problems you might find can't take too many points off if the device does what it's supposed to for that price.
    • Gadgets have some of the most vehement fanboys on the internet, so a site that tries to cater to all of them has to hedge its scores to keep everyone happy, pushing the scores even closer together.
    • Finally, they have to keep the manufacturers happy too, because those smartphones, SLR cameras and 3D TVs aren't cheap. So a review will almost always focus on the 'new' feature being touted by the manufacturer and how amazing it is, while ignoring the same feature on similar products whose makers are pushing a different part of their widget as being awesome.
    • This is actually so ingrained that electronics manufacturers whose products usually receive good reviews have threatened to stop sending review units to publications because a single product got a lower score than expected. Some of those products did indeed receive the equivalent of 7/10. Of course, why would you threaten to cut off the very people who review your products when you rely on said reviews?
  • Attack of the Show!'s Gadget Pr0n segment has never rated any reviewed item below 70%. Even a digital camera with grainy picture, difficult menus, unresponsive buttons, low battery life, insufficient storage space, and inadequate low-light sensitivity that is several hundred dollars too expensive will still get the equivalent of a B+.
  • Zig-zagged by Mac|Life back when it was still called MacAddict. At the time, they had three review sections: a generic one, one for interactive CD-ROMs and one for children's software. All three used a four-point scale with their mascot, Max: "Freakin' Awesome", "Spiffy", "Yeah, Whatever" and "Blech!".
    • The catch-all section had reviews written by a panel of reviewers, summarized with the corresponding four-point rating and a good news/bad news blurb covering the product's strongest and weakest points. If they could find even one good thing to say about it, it usually got a "Spiffy" at worst. "Yeah, Whatever" was usually reserved for So Okay, It's Average products, and "Blech!" was all but nonexistent.
    • The interactive CD-ROM section, however, was just the opposite. It used a three-reviewer panel for each CD-ROM, and it was very rare that any of the three had anything good to say about any of the interactive CD-ROMs. You could pretty much guarantee at least one "Blech!" per issue here.
    • And finally, the children's section used feedback from actual children, with a summary from a regular reviewer. The children's panel and the main reviewer were weighted to give the overall rating, but even then, you'd be hard-pressed to find a "Blech!"
    • All of this went out the window when the magazine repackaged itself as more staid and formal, going with a standard five-star scale (which has remained with the shift to Mac|Life).

    Video Games 
  • In Happy Wheels this trope is played straight: if you sort by rating, you'll inevitably end up with crappy levels that received the five-star rating either because the creator was the only one to rate them (as suggested by the page image) or because the level ends with a "rate 5 stars" message. One surefire way of finding a good level is to check the "featured levels" section. Even then, you're more than likely to see a crappy level that was rated five stars because the maker promised a poorly drawn picture of a naked woman in exchange.
  • GamesRadar is fully aware of the four point scale, and examines this phenomenon in their article "Crap games that scored a seven out of ten." Take it with a grain of salt, however, as half of the list is immediate dismissal based on genre, nature, or, most jarringly, target audience, rather than the content of the games.
  • Edge magazine is one publication that, over the years, has attempted to stick to a rating system where a score of 5 should ideally be perceived as average, not negative. However, their mean score is definitely skewed closer to 7, simply because the magazine is more likely to review relatively polished high-profile games than the bargain-bin budget titles (such as Phoenix Games) that would balance out the weighting the other way. Edge has done quite a lot of self-analysis of its own reviewing/scoring practices over the years, with articles like E124's look at how reviewing practices vary across the gaming publications industry (how much time a reviewer should spend with a game before rating it, how styles of criticism and ratings criteria vary depending on the target audience, and so on). Up until a few years ago, they also did a lot to build up the prestige and mythology around their rarely-awarded 10/10 score (see for example their 10th anniversary issue [E128, October 2003] retrospective look at the highly exclusive club of four games that had received that score up until that point - Super Mario 64 in 1996, Gran Turismo and The Legend of Zelda: Ocarina of Time in 1998, and Halo: Combat Evolved in 2001).
    • Then in 2007, Halo 3, The Orange Box, and Super Mario Galaxy were awarded 10s three months running, and since then the score has been awarded a lot more frequently. (See this interview with the editor for a discussion of their reviewing philosophy from around that time.) In contrast to 10/10, they've only used the dread 1/10 score twice - for the godawful Kabuki Warriors, and FlatOut 3.
  • Shortly before being discontinued, Games for Windows: The Official Magazine (previously Computer Gaming World) switched to a letter grade system like that used in schools, precisely because of this problem. This system is now used on their corresponding website.
    • Computer Gaming World rather famously didn't have numerical/starred reviews for its first fifteen years or so, until the mid 1990s, when readers who didn't want to actually read the whole article and just look at the score finally complained enough that they started giving out 0-5 stars. When they did start actually giving scores to their reviewed games, in most cases they were more than willing to use the entire scale. They even had an "unholy trinity" of games that were rated at zero (Postal 2, Mistmare, and Dungeon Lords).
  • The notorious game reviewer Jeff Gerstmann was fired by GameSpot for panning Kane & Lynch (a game heavily advertised on the site) with a 6.0. However, the site says he was fired for personal reasons. Also, he was not exactly alone among reviewers in scoring the game poorly. Of course, after this controversy, and his firing, Gerstmann started up Giant Bomb. Over there, Gerstmann and his crew use an X-Play-style review scale (1-5 stars, no half-stars), and they're more than willing to dish out 1 and 2 star reviews for bad games. He later reviewed the sequel Kane & Lynch: Dog Days, which he gave a 3 out of 5 (an average score).
    • Alex Navarro (a co-worker and supporter of Gerstmann's) often broke the four point scale when he reviewed games including Big Rigs: Over the Road Racing, Robocop, and Land of the Dead.
    • Gamespot is partially guilty of the scale: browsing their reviews archive, 405 of their 725 pages so far score between 7 and 10, and only in The New '10s did perfect scores become more common (currently there are 20, but only six were given before 2010, with the fourth in 2001 and the next two in 2008).
      • Once upon a time, Gamespot had an excuse for this. A breakdown of their scoring system, long since removed from the site, revealed that being technically competent (a bug-free console release, or a feature-complete PC release that would run on common system configurations of the time) automatically earned a game a 6, with other factors building the score up from there. This page, and presumably the system, have been gone from the site for at least five years now.
      • This is brought up in an article named "In Defense of the 6.0", where, along with arguing that "averting technical pitfalls that can pull you out of the experience will warrant a fair score", one of Gamespot's reviewers declares that the Rule of Fun shouldn't be ignored just because the score was below the usual 7-10.
    • Upon Giant Bomb's acquisition by CBSi, which also owns Gamespot, Gerstmann was finally able to fully explain his firing here. The firing ended up being related to review scores after all, but was a more chronic problem of an inexperienced executive team not knowing how to responsibly deal with dropping ad dollars due to (justifiably) low review scores across several mediocre games.
  • One independent review site has a typical floor of 4.0 unless the game is flat-out broken (in the sense of significant glitches).
  • Hardcore Gamer Magazine has an interesting version of this. Each game is reviewed by two staffers; the first gives the in-depth review of the game and awards a score (0.0—5.0 scale), then the second comes in with a "second opinion" score, and gives usually a one or two sentence aside about the game. The two scores are averaged out. And while it's refreshing to see the two scores differing by about half a point, the real entertainment comes from watching the second opinion offering completely derail the score of the main reviewer.
  • RPGFan is notorious for this - with rare exceptions, even a game the reviewer will spend the entire piece criticizing will still get at least a 70. They posted an editorial about it, providing an explanation of their methods and somewhat admitting that the lower half of their scale is pointless, but sidestepped describing their reasoning, instead saying that you should focus on the text of their reviews. They later added a link to a guide with every review, but still have not explained why they score games this way.
  • RPGamer used to score on a scale of 1-10, but ultimately dropped this in favor of a 1-5 system because of this very trend. This led to their reviews since the change actually using the entire scale, with several 1s and 2s given to games that truly tortured the staff members reviewing them. While older scores on the older scales remain unchanged, the review scoring page provides a conversion scale that has led to many games experiencing a severe drop in score when converted to their latest scale.
  • Video game magazine Electronic Gaming Monthly, or EGM, made a conscious effort to avert this: most (previously all) titles they featured were handled by three separate reviewers, and highly varying impressions were surprisingly common. Closer to the end of its run, they switched from a 1-10 scale to a 'grade' system (A, B, B+, etc.) for the purpose of avoiding the Four Point Scale trap entirely.
    • Towards the end of the mag's original run, they handed off the really awful games to internet personality Seanbaby, who wrote humorous reviews lambasting them for being so bad that nobody would - or should - ever play them (many of the reviews can be seen, in extended and uncensored forms, on his website).
      • Eventually this reached its ridiculous-yet-logical conclusion when EGM was denied a review copy of the Game Boy Advance tie-in game for The Cat in the Hat movie, which the developer said was because they "didn't want Seanbaby to make fun of it". Or, to put it another way, they acknowledged right out of the gate that their game was so bad it wouldn't even rate a 1 in the normal review section. Seanbaby obligingly went out and purchased a copy just so he could lambaste it.
    • There were letters from the editor talking about how some company or another wouldn't give them information about their games anymore because of the bad scores they handed out. This happened at least twice with Acclaim and once with Capcom. In their first encounter with Acclaim, EGM had handed out very low review scores to their Total Recall game for the NES; when Acclaim threatened to pull advertising if they didn't give the game a better review, editor-in-chief Ed Semrad wrote in an editorial column that they could go right ahead, because they were sticking by the review even if it cost them money, because journalistic integrity was more important than a paycheck. The second time this happened, it was because EGM had blasted BMX XXX (and rightfully so); this time, Acclaim threatened to never let them review another game of theirs ever again, to which EGM said "fine by us". Capcom's case was a somewhat different affair: it wasn't a review that got them angry, but instead EGM badmouthing the constant stream of "updates" to Street Fighter II; when Capcom asked EGM to apologize for the remarks in exchange for not pulling advertising, EGM again said that they would not retract the statements even if it cost them Capcom's money, because they felt honesty and independence in their publication was more important. In all three cases, Acclaim and Capcom pulled ads from the mag for a few months before buying adspace again (plus, Acclaim would go bankrupt shortly after BMX XXX anyway).
    • It should also be noted that EGM's review system was heavily inspired by Famitsu's review system. The first issue of EGM, however, featured scores that ranged from 'miss' to 'DIRECT HIT!'.
    • Actually inverted by EGM in 1998, where they revised their review policy in order to give HIGHER scores, specifically 10s. There was a period from late 1994-mid 1998 where no reviewer had given out a single 10 (Sonic & Knuckles being the last one to receive one). After a slew of excellent high-profile games such as GoldenEye and Final Fantasy VII passed through in 1997 with 9.5s, the mag revised its policy in the summer of 1998. Previously, a 10 was only awarded if a reviewer believed the game to be "perfect". But as Crispin Boyer pointed out in his editorial discussing the change, "Since you can find flaws in any game if you wanted … there's really no point in having a 10-point scale if we're only using 9 of them." Thus, a 10 would be given out if the game was to be considered a gold standard of gaming and genre. The very next issue, Tekken 3 would break the 3+-year spell by receiving 10s from three of its four reviewers, and later that year, Metal Gear Solid and Ocarina of Time became the first games to receive 10s across the board in the magazine's long history.
    • EGM also received criticism from readers that some games would receive high scores one year, but the next year, a new-and-improved sequel or an extremely-similar-but-better game would come out to lower scores; alternately, a game that received high scores upon its original release may be ported to another system, or remade years later, to lower scores. Reader logic was that if Game B was better than Game A, objectively, Game B had to be rated higher on the numerical scale (see an entry above). This was addressed multiple times in the reader mail and editorial sections, where it was explained that they did not follow this rule, as long-running and generally high-scoring yearly sports series like Madden or Tony Hawk's Pro Skater would have hit the 10-point ceiling years ago due to improvements in each version. Furthermore, at least technically speaking, games will always be improving due to the more powerful consoles and computers that are released every few years. Finally, innovation naturally tended to score higher because of its originality than when all those ideas were incorporated into every game the next year. EGM explained that instead, they rated games based on the current marketplace, and specifically compared new releases to others within its own genre, while their level of standards would naturally increase into the future as games became more ambitious.
  • Dr. Ashen's review of Karting Grand Prix mocks this, with Ashen referring to the game as "irredeemably awful", then giving it a score of 73% "because I'm a fucking idiot."
    • In an earlier review on the Gamestation, a flea-market handheld game system resembling the original PlayStation, Dr. Ashen gives the system 7/10, saying that it's the lowest score one can give "before the company pulls their advertising".
    • And in yet another review he gives a product 8/10, but "only because it's made in China, and I'm terrified of their government."
  • Zero Punctuation does not give out numerical scores for just this reason.
    • He did give out a numerical score for Wolfenstein (a two out of five stars, which is already an aversion of this trope). Likely the reason he did give out a rating, though, was because he did the review almost entirely in limerick form and just needed a rhyme.
    • In response to "How can you even call it a review without a score?" from his Super Smash Bros. Mailbag Showdown: "If you want a score, how about four, as in four-k you" accompanied by the commenter being flattened by a giant number 4.
    • It is also worth mentioning that, his lack of using scores aside, Yahtzee subverts the whole reason for this trope in the first place (that is, reviewers not giving bad reviews more or less to keep their jobs). His job practically is to give bad reviews, and he often receives criticism when he praises a game.
    • Kotaku doesn't give scores, either, making some commenters confused. Their system of summing up reviews is to ask "Should you buy this game?" with the possible answers being "Yes" for a good game, "Not yet" for a game with significant issues that might be patched in the future, and "No" for a bad game that's irredeemable.
  • British gaming magazine PC Zone's reviews run the whole gamut from 7%-98%. Similarly, a score of 80%+ does NOT automatically gain a "Highly Recommended" award; although these often ARE given out to high scoring games, on occasion they have not been awarded to games that are technically good, but are lacking in some kind of "soul" that the reviewer (and the Second Opinion reviewer) would have liked to see present.
  • One compilation of Metacritic scores is this trope in all its glory: a 70% is worth no points, a 60% is -1, and anything below that is -2. The method doesn't really prove consistency, for one; consistency would be measured with standard deviations, while this is a simple total of points. For another, weighting the negatives that heavily just makes the lower scorers look even worse. Talk about spin.
  • GameTrailers generally has very informative and reliable reviews that coherently explain the points they try to make as the review itself is going on, but the score at the end falls squarely into this trap, the lowest score they usually give being somewhere in the 4.7 to 5.0 range. It once gave a humorous "suicide review" of Ultimate Duck Hunting presented in the form of the reviewer having killed himself over the game and his review being his suicide note, and went on about how it was bad enough to push him over the edge at every turn, only to give it a 3.2.
  • Nintendo Power is usually good at averting this trope, but some of their reviews of games in popular franchises tend to be given high ratings by default.
    • With this magazine, what you have to watch for is not the score, but the number of pages of the review. The Nintendo blockbusters get two, three, even four page reviews, squishing out reviews for other games.
    • They also admitted in response to a letter that while they use a full ten-point scale, they won't put up a review for a game lower than a two, reasoning it's too bad to even bother with, and they only give out tens for the super-duper cream of the crop.
    • Towards the end of the magazine's run, they ran mini-reviews for Virtual Console games (old games from eras past as well as original downloadable titles) and rated them as "Recommended" (this game is good), "Hmmm..." (your mileage will vary), or "Grumble, grumble" (the game is bad, don't buy it). This style of scoring seems to have been designed to avoid the four point scale.
  • Amiga Computing gave 100% to Xenon 2. A reader called them out on this, asking if they'd give a higher score to an even better game. ("Yup.") They later gave out a score of 109%, and another 100% in the same issue.
  • The UK Official Dreamcast magazine aimed to avert this trope (back around the turn of the millennium, even) by insisting on a rating scheme where 5/10 was strictly "Average". This led to a huge number of complaints from fans who missed the intention behind the scheme and complained that a game they liked got a "harsh" score (the creators of Fur Fighters commented that the 7/10 they got from the magazine was the lowest score the game received). Eventually, the magazine staff attached a descriptive phrase to each number and put it under each review score so the reader knew what the rating actually "meant" (for instance, any 7/10 rating had the word "good" under it; Shenmue was the only game that let us find out that the word under a 10/10 was "genius").
  • The Finnish gaming magazine Pelit uses this to a degree: they use a percentage scale for their game reviews, and they do use the entire gamut of their scoring system, but anything below 65 is still relatively rare. The magazine used to include an info box explaining that anything below 65% was below all standards, and that 50% or lower meant the game was truly atrocious. While the 50-or-lower reviews are amusing to read (such as their Fight Club review, where the entire review was just the phrase "Rule 1 of Fight Club: You do not talk about the Fight Club" with a 20% score), the staff hardly ever go out of their way to seek out bad games to review, because they don't hate themselves that much. Instead, they pick games that they know they'll like, or ones that have interesting subject matter or are otherwise noteworthy. Originally their scoring system was chosen to maintain compatibility with other gaming magazines of the time, but by the early 2000s there were basically no other respectable magazines around that still used the same scale, and the staff have mentioned repeatedly that they would like to switch to a star-based system or no score at all.
  • Ars Technica has started reviewing video games on a three-point scale: Buy, Rent, and Skip. They expand a bit upon why they use that scale and why they aren't part of Metacritic.
    • ScrewAttack has the same review system, with the exception of using "F' It" rather than "Skip." It's also the system used for the video game reviews in Boys' Life (the magazine of the Boy Scouts), under the names of "Buy," "Borrow," and "Bag," but not many people care about that.
    • Disney Adventures also used to use this rating system as well.
  • Inside Pulse tried to avoid this, but got so many threatening letters from developers that it gave up on a numeric scale entirely, describing games with positive and negative adjectives instead.
  • When Assassin's Creed II was due for release, Ubisoft got caught in a major shitstorm when they announced that they wouldn't give the game out for testing unless the reviewer agreed in advance to give a positive review. Apparently, the game didn't need the "boost".
  • One video game review site has been routinely lambasted for using a four point scale by fans who believe a game should have gotten five stars.
  • Spanish mag Nintendo Acción runs on this, to the point that some Pokémon fans complained when Pokémon Black and White got "only" a 94 while other games got 96-98. Though in the fans' defense, said review also lambasts the game's graphics, despite the great animated sprites and the Scenery Porn the game has.
  • While Toonami hosted dozens of video game reviews over the course of the show, only a handful ever scored below 7 out of 10, and no game ever scored lower than 6 on that scale. The creators have admitted this is due to not having a professional reviewer in their group and to only playing games they really like, not wanting to fill the air with needless negativity. That said, they only rarely give a game a perfect 10 out of 10, with 8 by far the most common score.
  • Surprisingly, IGN is often pretty good about averting this trope (witness the 3.0 they gave Ninja Gaiden 3). In fact, they completely averted it in the very early days, when they scored games on an integer scale rather than a decimal one. However, once they moved to a decimal scale around September 1998, this cropped up more and more frequently. For example, in 2000, they wrote a very critical (and angry!) review of the PC version of Final Fantasy VIII but still gave it a pretty solid 7.4/10. Similarly, in 2000, they wrote a very negative review of RealMyst but still gave it a score of 6.5.
  • An absolutely notorious example of the trope came with IGN's review of Hogwarts Legacy. The text of the review utterly excoriated the game, citing a poor story, weak gameplay, and various technical issues... but the score was a nine out of ten. Naturally, IGN was immediately taken to task for giving such a glowing score when their opinion was clearly anything but, which many believed was to avoid angering Warner Brothers.
  • A very notable exception to the rule is VNDB (the Visual Novel Database), which, as the name suggests, is a listing of (Japanese) visual novels on the market. When a user attempts to give a 10/10, the site actually warns them that this score is reserved for absolute perfection that is unlikely to ever be improved upon, and as such should be given only two or three times at most over one's lifetime. As a result, the list has only two entries over 9.00 and fewer than 50 entries over 8.00, out of a database of well over 10,000 titles. Since visual novels have fairly low technical requirements compared to regular video games, their quality is almost entirely based on the story and is therefore highly subjective. As such, even a game that scores around 7.00 can still be very enjoyable.
  • The defunct Game Player's magazine (now absorbed into several other publications) once had a major shakeup after realizing it had fallen into this trope, with even "terrible" games rating 50-60% scores. A new rating scale was devised to even out the score distribution, and was meant to be read in context with the review itself rather than be taken as an absolute. Under the new review system, even a game with a 50% score is probably still worth solid consideration by a fan of the game's genre, and a low-rated game could either be thoroughly underwhelming, or an excellent game for a very small audience of players. 90% and above, however, would be restricted only to games so fantastic that players outside of its genre might consider checking it out, and consequently, very few of these were given out through any particular year.
  • Chris Livingston, of Concerned fame, brings this up in his "Bullet Points" series on Crysis 2:
    I was going to give some points to Crysis 2 because it hasn’t crashed and there haven’t been any graphical glitches. But that’s kind of weird. That’s like buying a Prius and saying “Well, it didn’t explode when I used the turn signal and the airbags didn’t go off in my face when I turned on the radio, so it’s a good car.” PC gamers just have such low expectations for games, I guess. I need to break that habit of awarding points for simply working properly.
  • Tom Chick, who currently writes on his "Quarter to Three" website, unabashedly uses the entire scale available to him and has done so for a very long time. This has led to numerous big-money titles being given low scores way outside the four point scale. He has opined that if you ask an editor whether the four-point scale exists, they will say no. Until you try to rate a game a 3.
  • Neoseeker both averts this and plays it straight. Some reviews, such as those for Code of Princess and Pokémon Black and White (at least before the latter was edited), consisted of little besides bashing the game, then gave the games a seven and a six, respectively. However, since the site allows user reviews, there are some reviews that use 5 as the average, and some that use 7 as bad.
  • Hot Pepper Gaming falls prey to this as well. A fine example of this would be Erin's review of Clash of Clans, in which she savaged it so badly that they punctuated her screed with drumbeats... and then she gave it three out of five.
  • An interesting aversion of this is the German magazine CBS (short for Computer Bild Spiele). They grade games from 1 to 5 (1 meaning perfect and 5 meaning terrible, similar to the grading system in German schools), then multiply that grade by the game's price and, based on the result, attach a value tag ranging from "very expensive" to "very cheap". This makes it possible for a AAA title to end up with the same verdict as a shovelware game just because the AAA game costs an insane lot more money.
    • Of course, they hand out extra issues in which they review only their AAA games, to balance out this problem. The fact that they are only part of a larger magazine publishing company also helps. They also make a point of never using preview versions of games. While this obviously means their reviews may be as much as two months late compared to the release, it also means the review will be more accurate about the actual state of the game (many, many games nowadays have day-one DLC and patches that change the game's content and performance so much that a review based on a preview build is more often than not highly inaccurate).
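The grade-times-price mechanic described above can be sketched roughly as follows. The thresholds and currency amounts are invented, since the magazine's actual cutoffs aren't stated here; only the general shape (low grade times low price can tie with high grade times high price) comes from the entry:

```python
# Hypothetical sketch of a price-weighted verdict in the style of
# Computer Bild Spiele: a school-style grade (1 = perfect, 5 = terrible)
# is multiplied by the price, so an expensive game needs a much better
# grade to earn the same value tag as a cheap one. Thresholds are invented.

def value_tag(grade: float, price_eur: float) -> str:
    """Map a grade/price combination to a value-for-money tag."""
    weighted = grade * price_eur  # lower is better
    if weighted < 30:
        return "very cheap"
    elif weighted < 75:
        return "cheap"
    elif weighted < 150:
        return "fair"
    elif weighted < 250:
        return "expensive"
    return "very expensive"

# Middling 10-euro shovelware and a perfectly graded 60-euro AAA title
# land in the same bucket, which is exactly the quirk described above.
print(value_tag(3.0, 10.0))  # → cheap
print(value_tag(1.0, 60.0))  # → cheap
```

With made-up numbers like these, the AAA game has to be flawless just to match the verdict a mediocre budget title gets by default, which is why the magazine's extra AAA-only issues exist.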
  • Many mobile apps are major abusers of the ratings system. Often, a 5-star rating can be given from within the app with one click, whereas lower ratings require jumping through more hoops. Particularly egregious are free-to-play games which give users in-game bonuses for leaving a rating... if you see a game with lots of 5-star ratings accompanied by few or no explanatory comments, it's a safe bet that the game is more exploitative than fun.
  • The user-rated difficulty scale on GameFAQs is effectively a 2.5-4.0 scale, counting only games with at least 50 difficulty votes. Easy games have a difficulty rating of less than 3. Average-difficulty games, which represent the majority of games for most genres, fall between 3 and 3.5. The minimum threshold for low-end Nintendo Hard starts at around 3.5. Ratings of 4 or above are reserved for the hardest of the Nintendo Hard, and several genres (such as Adventure, Puzzle, and Sports) have no games rated that high. Only a very small number of games exceed the 4.5 mark; examples include Ghosts 'n Goblins (NES), Battletoads (NES), Silver Surfer (NES), Ikaruga, and Touhou Chireiden: Subterranean Animism.
    • The choices available for voting on a game's difficulty are as follows, with the point value in brackets: Simple (1), Easy (2), Just Right (3), Tough (4), and Unforgiving (5).
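Given those five point values, the displayed rating is presumably just the mean of all votes, which explains why scores cluster in the middle of the scale: extreme averages require near-unanimous voting. A minimal sketch, with made-up vote counts:

```python
# Sketch of GameFAQs-style difficulty averaging: each vote label maps to
# a point value, and the displayed rating is the mean of all votes.
# The vote counts below are made up for illustration.

DIFFICULTY_POINTS = {
    "Simple": 1, "Easy": 2, "Just Right": 3, "Tough": 4, "Unforgiving": 5,
}

def difficulty_rating(votes):
    """Weighted mean of difficulty votes, rounded to two decimals."""
    total_votes = sum(votes.values())
    total_points = sum(DIFFICULTY_POINTS[label] * n for label, n in votes.items())
    return round(total_points / total_votes, 2)

# Even a game many voters call "Tough" lands in the mid-3s once the
# "Just Right" votes are mixed in, hence the de facto 2.5-4.0 scale.
votes = {"Simple": 2, "Easy": 8, "Just Right": 40, "Tough": 35, "Unforgiving": 15}
print(difficulty_rating(votes))  # → 3.53
```

To break 4.5, the "Unforgiving" votes would have to swamp everything else, which for a 50-plus-vote sample only happens for the genuinely brutal games listed above.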
  • Eurogamer has recently dispensed with a rating system altogether, partially because of this trope. They now only give games an "Avoid", "Recommended" or "Essential" sticker - as well as no sticker at all for games they believe to be So Okay, It's Average.
  • Polygon does their best to use all 10 points of the scale (including half steps) and will actually go back and adjust a score if a game changes enough between release and updates.
  • Jim Sterling of Jimquisition uses a 10-point scale for the reviews on their blog, but completely averts the trope by making the scores actually mean something, such as 5 being average and 7 being good. However, the trope is played straight by fans of the games they review, which has caused Jim a lot of grief: their review of No Man's Sky led fans of the game to DDoS their website because they gave it a 5/10 for having potential but wasting it on bad game design. Their site was attacked again when they gave The Legend of Zelda: Breath of the Wild a 7/10, saying that the weapon durability system and other factors annoyed them greatly but that they still enjoyed the game overall. Jim then made an episode pointing out how absurdly people were acting over a 7/10 and wondering how on earth such a score could be considered horrible. They have since preferred to put out impression videos of a game they played, showing all the good and bad bits.
  • The old Polish computer game magazine Top Secret rarely gave any game a rating below 7, even when the review contained mainly complaints about the game's quality.
  • The magazine Famitsu averted this in its early years by having four reviewers each score a game out of ten. A 40/40 was almost impossible to attain, the first being given in 1998. From then to 2007, only six games hit 40/40... but at some point their policies seem to have changed, such that in the following two years seven games received that score, and a further twelve have hit it since, while entire years have gone by without a game scoring under 25. Games that are part of established franchises tend to be even more ironclad; compare Resident Evil 6's Metacritic score (60) with its Famitsu score (39/40).
  • Pokémon GO and Ingress have a system where high-level players can vote on whether a submitted portal/stop should be implemented in the game based on several factors (historical/cultural significance, easy access on foot, accurate location on the map, etc.). Each factor can be given a rating from 1 to 5 stars, and giving a nomination 1 star is an automatic rejection (usually reserved for low-quality nominations). Most players vote either 1 star or 5 stars and rarely in between, because the system punishes players by lowering their reviewer rating (which in turn makes their votes have less impact) if they cast too many votes that disagree with the majority.
  • The now-defunct video game review website Crispy Gamer averted this trope by only giving games one of three ratings: Buy It! (for games that they regarded as excellent), Try It (for games they believed were worth renting for a weekend and only buying if you turned out to really love them), and Fry It! (for games they regarded as having no redeeming qualities). Some readers felt that they were overly aggressive in handing out Fry It! ratings to games just because the reviewer didn't like the genre or story, even when the gameplay was decent.
  • Valve goes even further; user reviews on Steam use a two-point scale with just a simple thumbs-up/thumbs-down. This leads to numerous comments like "I gave it a thumbs up/down but..." or "I wish there was a neutral option because this game is So Okay, It's Average". But then that's aggregated into more of this kind of scale by reporting the ratio of ups and downs — Overwhelmingly Positive, Mostly Positive, etc. — with games that have "Mixed" reviews or less tending to be seen as low-quality.
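How a two-point scale turns back into a multi-point one: the ratio of thumbs-up to thumbs-down gets bucketed into summary labels. A rough sketch - the label names appear on Steam, but the cutoff values here are chosen purely for illustration, not taken from Steam's actual (unpublished) thresholds:

```python
# Illustrative collapse of a thumbs-up/thumbs-down tally into a coarse
# summary label. Cutoffs are assumptions for the example, not Steam's
# documented thresholds.
def review_summary(ups, downs):
    total = ups + downs
    if total == 0:
        return "No reviews"
    ratio = ups / total
    if ratio >= 0.95:
        return "Overwhelmingly Positive"
    if ratio >= 0.80:
        return "Very Positive"
    if ratio >= 0.70:
        return "Mostly Positive"
    if ratio >= 0.40:
        return "Mixed"
    return "Mostly Negative"

print(review_summary(900, 100))  # 90% positive -> "Very Positive"
```

Note how this recreates the trope: anything under roughly 70% positive reads as "Mixed" or worse, so the effective range of "acceptable" scores is compressed into the top of the scale.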
  • An odd trend turned up in amateur reviews of Hogwarts Legacy after the pre-launch controversies over antisemitic content and the transphobic views of the IP creator died down. A surprising number of reviewers summarized it as something along the lines of "it's a mediocre open-world game, but it's Harry Potter so 8/10".

    Web Original 
  • YouTube once had a rating system that let people give a video a score of up to five stars, though hardly anyone gave less than three unless the video was particularly bad. This led to a few widespread incidents of vote-bots giving dozens or hundreds of one-star ratings to people whose videos disagreed with the attackers' own political or religious beliefs; a drop even to four stars would greatly reduce a video's traffic. YouTube has since dropped the 5-star system in favor of a simple like/dislike system.
    • Similarly, Netflix once allowed movie ratings and aggregated all the user reviews into a star rating. Because some people will like something no matter how bad it is, and some will hate something no matter how good it is, averages of exactly 1 or 5 stars are effectively impossible. However, if a movie doesn't get above 1.5 stars, you should probably avoid it, and if it reaches 4.5 stars, it's probably worth watching. So the scale is skewed, but still relatively accurate. It has since shifted to a simple like/dislike system too, and at a certain point it stopped showing the full score, as the rating only serves to determine what will be recommended to the viewer.
  • Similarly, a website that hosts community content for Left 4 Dead allows people to review the created content on a 1-100 scale. Trolls, or people who exaggerate how much they hate the custom content, will generally give a rating between 1 and 20. Anyone who wants to praise the author to the heavens (or the author themselves using an alt account) will give scores of 90-100, and such people will ridicule others who give scores between 60 and 80, even if the content doesn't meet the standards of a high score. In other words, if the content is decent, you had better give high scores or risk being flamed by the community for being too harsh or a troll.
  • Newgrounds is somewhat of an aversion to this; while the scale is only 0-5, it's an unspoken rule that if it's not up to snuff for the portal, it's a 0, if you just didn't like it or something along those lines you should vote 2, and if you love it vote 5. While 1, 3 and 4 are in there, hardly anyone uses them. Undoubtedly this is partially due to its "Blam"/"Protection" system which, generally, rewards you for relatively high ratings of content others have rated relatively high and low ratings for content others have rated low, in a blind system. The actual reviews, however, can become an extreme example of this, as Retsupurae has demonstrated in their "Retsufrash" videos - they've witnessed perfect or near-perfect scores handed out to games that the reviewer in question had multiple complaints about (usually with no redeeming qualities mentioned), admitted to not finishing, or, in some particular cases, couldn't even get the game to start. They also witnessed one bizarre inversion, where one game they riffed on got a review that called it "one of the best" of its genre, yet only gave it a half-star.
  • On MobyGames, the author of any published game review is rewarded with 1-5 points, depending on how the review's quality is rated by the approving staff. The very worst rating the staff can pick is "Average", continuing through "Good", "Great", "Excellent" and "Superb". This makes some sense, though, since if a review deserved a bad rating, it shouldn't be approved for publication anyway.
  • Hentai sharing sites like g-e use a five-star rating system, but as on many sites the stars don't mean much - if anything, a low star rating is more likely to mean that the work was translated by someone who does so poorly and makes no effort to improve, rather than anything to do with the quality of the work itself. In cases where no translation was necessary (works written in English to start with, or image galleries with no text), the star rating becomes more useful.
  • Discussed by Jim Sterling on the Jimquisition after they gave The Legend of Zelda: Breath of the Wild a 7/10, leading the fandom to go berserk over the "low" score. When they still reviewed games and gave them scores, they went out of their way to avert the four-point scale, and on their scale a 7/10 was actually a rather good score (good overall, even great, but with a large flaw that affected the experience - in this case, the weapon durability system). It nevertheless proved to be enough to take BOTW's Metacritic score from a 98 to a mere and lowly 97, leading the fandom to DDoS their website, and Jim to drop reviews altogether in favor of "Jimpressions," which feature no score and are simply "Did they like/dislike it?" videos.
  • When Jello Apocalypse started doing film reviews, he had to go so far as to put out a video explaining his scale, saying that "I'm not the American education system." By his scale, anything above 5/10 meant "this movie is a good use of your time", with only a 6 meaning "you liked it but you probably wouldn't watch it twice or tell your friends to go see it." He himself heavily averts this; his reviews of the Pokémon films had the highest scores be 6/10s.
  • Most Retsupurae videos making fun of flash games end with a sampling of Newgrounds reviews. This subject has come up so many times that it's become a meme to leave outright flames on a Retsupurae video ending with 9/10 or 10/10. Occasionally it even gets inverted, when someone leaves a glowing review with an inexplicably low score.
    • In "Sonic Boom City - State of the Review Edition", the guys read the reviews for a game they couldn't get to load. One review: "Didn't load but I'll give you the benefit of the doubt. [2.5/5 stars]"
    slowbeef: People found that helpful! That is the least helpful review in the world!
    • The guys behind Retsupurae, slowbeef and Diabetus, have joked about this several times as well. During the former's Dead to Rights Let's Play, while talking about the critical reviews the game received at release, Diabetus comments that "a 7/10 rating usually means the game is fucking awful". The description for their "Retsufrash" playlist (videos where they make fun of Flash videos and games) also notes that the Flashes in question "deserve the full scorn that an 8 out of 10 offers".

    Web Videos 
  • The titular host of The Angry Joe Show states this is a Pet-Peeve Trope of his. The "Angry Reviews", "Extended Review Discussions" and "Rapid Fire Reviews" have all used just about every number on the 1-10 scale (whole numbers only with no decimals). According to Joe and his team, a 5/10 is their flat average, with reviews for video games building towards the team justifying a higher or lower score. For instance, a 3/10 to Joe will have some decent points, yet he'll detail the negatives and why it's ultimately not worth recommending to his audience; conversely, Joe will preach for a 9/10, but explain why it falls short of a 10/10. Still, there are some examples in the show's history where the trope is Played With.
    • Dance Central received a 7/10, the highest possible from Joe: to others this may seem like a weak score, but he reasoned that the game wasn't only fun, it was built specifically to take advantage of the Kinect. Not only did he give it his "Badass Seal of Approval", Joe also placed it at #5 on his "Top Ten Best Games of 2010", over other big titles from that year such as Halo: Reach and Call of Duty: Black Ops.
    • Similarly, Asura's Wrath was given a 6/10, a "slightly above average" game to Joe, but since he was in complete awe of the title from beginning to end, he awarded it his Badass Seal of Approval.

    Real Life 
  • The Ontario education system, in addition to giving a percentile score for academic achievement in a subject, also uses a four-point scale for such things as teamwork, organization and initiative: Needs improvement, Satisfactory, Good, and Exceptional.
  • Brokerages have a quid pro quo relationship with the firms they're supposed to be rating. Usually there's an informal understanding between the two that if the brokerage advises its investors to sell a particular firm's assets, that firm will stop providing the brokerage with information or other privileges. So brokerages almost never give firms a "sell" rating. You can also see a Four-Point Scale in corporate credit ratings, where junk bonds and high risks get a B rating while better investments get A, AA, AAA, etc. In an ordinary education system, a B is a respectable grade and a C is a clear pass.
  • One hosting website is based around building up a reputation through a publicly visible vouching/feedback system. Negative "reviews" are so rare that many people will refuse to stay with or host people who have even one.
  • eBay ratings, as parodied here in xkcd
    • eBay only has a Positive-Neutral-Negative rating system, but it still skews very much toward positive. Some people leave neutral feedback for sellers when they really should give negative. Part of this is because eBay doesn't allow anonymous feedback and, historically, a few sellers would flip out and give the buyer negative feedback in retaliation.
    • The system itself discourages users from giving anything other than positive, making the user confirm that they have given the seller ample time, that they have tried to contact the seller about any problems, and that they understand what they're doing in order to give a neutral. This is more confirmation than one has to do to sign up to the system. It's also possible for a buyer to lose feedback privileges altogether for leaving too many negatives in a short space of time.
    • Sellers will receive a warning (possibly followed by the withdrawal of certain selling privileges) if any of their ratings fall below 4.5. This means, in effect, that 4 out of 5 is considered a bad score and that it's actually better not to receive a rating at all than to receive one less than a perfect 5/5. (This is mitigated to some extent in that wherever objectively possible, ratings are assigned automatically by eBay itself, e.g. it's not possible to receive four stars or below for shipping price if your shipping is free.)
    • It's not all one-way though: sellers only have the option of leaving positive feedback, no negative or neutral options at all. (Cases where the buyer is causing a problem are usually better handled by internal dispute resolution, as this information probably wouldn't be useful to anyone else anyway.)
  • The LSAT has a minimum score of 120, and a maximum of 180. The empty range is twice the size of the scored range.
  • The Dutch Cito test at the end of primary school, which partially determines what kind of secondary education a pupil can/will take, has a range of 500-550. (The reason for this is to avoid the Cito results being misinterpreted as IQ.) The empty range is ten times the size of the scored range.
  • If you're involved in humanities degrees in the British university system, you'll almost never see a mark below 35% or above 75%; forty points used on a hundred-point scale. Language marks tend to be capped at the top end to bring them in-line with humanities, since otherwise it would be quite possible to get 100% on a language test. Your final degree in any subject is awarded on a four-point scale, First/2:1/2:2/Third. The thresholds for those are usually 70/60/50/40% respectively.
  • Honours degrees in Australia have the ranks being First-class, 2A, 2B and 3rd Class (generally referred to as just Honours, rather than "Honours 2A" for example). Thresholds are fuzzy since grades are a combination of coursework and dissertation mark, with the relative importance varying on the field, but usually it's about 85%/75%/65%/50%. It's important to note that getting Honours at all is viewed as remarkable, just the higher ones are deemed exceptional (although if you want to go on for postgraduate work like Masters or PhD, you usually need 2A or higher).
  • Until 2016, the SAT had a range from 600 to 2400. Turning in a completely blank test (if it wasn't discarded out of hand) would not result in the lowest possible score - one would have to lose points by answering incorrectly.
  • In the Italian university system, passing an exam originally required you to get at least a 6/10 from three professors, which nowadays translates to 18/30 from a single professor. However, because of the significantly greater flexibility regarding when students can (re)take their exams, students who fail to get at least an 18 are simply not graded at all and told to retake the exam at a later opportunity (or drop the subject if it is optional). This means the scores from 0 to 17 are never actually formally awarded. (On a secondary level: even within the 18-30 range, some professors tend to award very high (even perfect) scores almost by default to students who don't display obvious gaps in their basic knowledge of a subject.)
  • In music festival ratings (mostly for high school choirs, orchestras and bands), there are theoretically 5 levels at which you can rate a performance: 5 = Poor, 4 = Fair, 3 = Good, 2 = Excellent, 1 = Superior. Very few groups get a 4 or 5, and 3s are given when something was terrible. The "Excellent" or "2" rating goes to groups ranging from acceptable to very good. It's partially meant to be encouraging - and judges, who get paid and want to be invited back, also have to sign their rating forms.
  • Competitive high school debate organizations use a different scoring system for each event, such as the Lincoln-Douglas event. Judges are asked to score competitors on a 30-point scale, but any score below 20 is to be reserved for extreme circumstances in which the judge must provide a written justification of why they gave a score lower than 20. As long as a contestant gets up, says enough words to fill the time limit, and doesn't use any foul language, they get at least a 20/30.
  • In Southern California, restaurants are given a letter grade based on health and safety standards. It's mostly about how clean the place is. While the rankings follow the usual A, B, and C monikers, most restaurants have an A grade. It's rare that a place has B (even in food courts where its neighbors have As). Since it's an official government statement on a restaurant's hygienic practices, anything below an A is practically a kiss of death — consumers tend to assume that even a B-rated place is a plague pit, even though objectively that's still considered an acceptable rating. Most restaurants overhaul their practices very quickly to get back to an A rating or risk going bankrupt. Although this imbalance was not intended, it's generally seen as an overall good thing from a public health perspective. It's also enforced to an extent: Any restaurant that doesn't meet at least the C grade is shut down until they clean up. New York City also has a similar grading system for its eateries, with ratings of A, B, C, and "pending". Any place that doesn't have an A is usually not bothered with by consumers, even though a B rating isn't generally too bad (passable but with minor problems found that can easily be fixed). It should also be noted that "pending" does not mean "not rated yet", it means "failed but was given time to bring things up to par".
  • USDA beef grading. Most meat that ordinary consumers have access to is graded (from lowest to highest) Select, Choice, or Prime. There are also five grades below those, from lowest to highest: Canner, Cutter, Utility, Commercial, Standard.
  • Ever watched the Olympics? Try the gymnastics events sometime. Despite being on a 10-point scale, it's rare for any competitor to get below a 9.5. Rank Inflation is so bad that critical flaws (such as a gymnast tripping and falling on their face) are worth only about a tenth of a point, and flaws that viewers can't even distinguish cost 1/100th of a point. Scores generally range from 9.7 to 9.9. This is in large part because Olympic gymnastics point guidelines don't differ significantly from those used at lower levels of competition - and you will find lower scores there. The issue is that at the Olympics you have the world's best, and Olympians don't screw up noticeably enough to warrant a lower score.
  • Telephone customer service personnel will occasionally ask you to rate their level of service on a scale of 1-10. If you answer 9 or below, they'll ask for specific reasons why you didn't give them a 10. Customers who can't or don't care to name specific flaws in the service will probably amend their rating to 10. This makes a rating of 10 equivalent to acceptable service with no specific complaints, rather than outstanding or beyond-expectations service. For the company the phone agent works for, anything less than a 10 can and often will be used to deny raises to the employee, regardless of how frivolous, trivial, or absurd the reason for it is.
  • People often rate physical appearance on a compressed scale. Studies have shown that when asked to rate their own appearance, a person will typically place themselves somewhere in the 6-to-9 range out of 10, but will only rarely rate other people lower than a 4, and even then most admit to feeling guilty.
  • From 2005 to 2012 Ofsted school inspections in Britain graded schools on a scale of Outstanding, Good, Satisfactory, or Inadequate. Schools and their senior staff would invariably be criticised for being "Satisfactory". In 2012 "Satisfactory" was renamed "Requires Improvement", reflecting what the grade had come to mean.
  • Enlisted Performance Reports in the US Army and US Air Force. For many years rating inflation was rife, and it became standard for everyone to get perfect scores on their evaluations. Anything less than a perfect score meant you had seriously fucked up. Unfortunately this meant if you were an exceptional performer, there was no way to indicate it on the report, because both great soldiers and the merely average were receiving the same scores. This also led to odd situations where a senior enlisted member would be convicted of something like sexual assault and news reports would emphasize their history of outstanding performance evaluations. This ranking inflation would lead the Army to revise the rating system, including limiting the number of top ratings that anyone is able to give someone over the course of their career.
  • Grading in US high schools and colleges follows this to a tee, and may be the reason why this trope is so prevalent elsewhere. Most schools grade along a five-letter system of "A", "B", "C", "D", and "F". Much like the California health inspectors' grading system above, this has a real-world justification: if you finish your class knowing less than 60% of the material, then you don't deserve to pass. Some colleges go further and make a "D" (or even a "C-" at the University of California, Santa Cruz, for classes that would be pass/fail at other colleges) a failing grade as well, on the grounds that just barely passing a class means you aren't ready for the next one. When you've spent your whole life associating 65% with "barely adequate" and you become a professional critic, that's going to rub off on your grading. It's for this reason that some magazines and websites (such as Entertainment Weekly) simply use the same A-through-F grading system.
  • Most scholarships and grants require a student to have at least a 3.0 GPA or an 80% average. This goes even further in graduate school: most graduate programs require a student to maintain at least an 80% average, and some take it one step further by threatening to drop a student who gets even a C in more than one class.
  • The Soviet/Russian education system ostensibly uses numbers from 1 to 5, but in practice 1 is virtually never used, with 2 being the lowest grade, standing for failure.
  • AAA (the American Automobile Association) uses a five-diamond scale to rate the quality of lodging and restaurants, both in their yearly Tour Books and online. According to their criteria, one diamond simply means that the property is undistinguished and unremarkable, but still not "bad". In addition, there's "AAA Approved", which means that a property meets a specific set of quality guidelines regardless of diamond rating. They also stop listing properties that fall below the minimum standards and never list properties under the threshold, so if it's not in the Tour Book, it's probably not worth checking out.
  • A curious example of this trope is found in the Doomsday Clock from the Bulletin of the Atomic Scientists, which grades how close we are to global catastrophe, in terms of how close the clock face is to midnight. Even when the clock is at 11:55, the result is actually more-or-less safe — the Cold War is over and none of the countries with nuclear weapons have any interest in starting World War III. The earliest time shown on the clock, when the world was considered safest, immediately after the fall of the Soviet Union, was 11:43 PM. It is implied that the very existence of nukes is what's keeping the time so close to midnight.
  • The AP (Advanced Placement) program, by which American high school students, usually juniors and seniors, can get credit for intro college courses, averts this in general. Scores range from 1 to 5, and most scores are 3s, with 1s and 5s typically the least common. (However, tests that are considered harder, like calculus, physics, and foreign languages, skew upward in their scoring because of who tends to take them.) A 4-5 is usually enough for college credit; many colleges also accept 3s (sometimes for slightly less credit than higher numbers, but still some value) and a few accept 2s or use them to place students into honors versions of intro courses.
  • Illinois K-8 students take a standardized test called the MAP test. Scores typically go up to about 260 or 270, average out at 240, and anything below 220 in middle school is considered super low.
  • The driver rating systems for ridesharing services Uber and Lyft operate on this scale as well. The scale runs from one to five stars, but drivers face removal from the platform if their average rating falls below 4.5 in most regions. Likewise, food delivery gigs like DoorDash and Grubhub can also remove drivers whose overall rating falls below a similar threshold.
  • NVIDIA combined this with Rank Inflation. When it came time for a model number refresh with the GeForce 200 series, NVIDIA decreed that the prefix before the number would indicate the card's performance tier: no prefix (10), G (10-20), GT (30-50), and GTX (60-90). Come the GeForce 700 series, the 50 number graduated to GTX and a new tier was effectively created above it, the GeForce TITAN. As of the GeForce 900 series, there is no GT card a consumer can buy. The problem is that there's a huge performance swing between, for example, the GTX 950 and the GTX 980 Ti.
  • When AMD had a model number refresh, they added an R# prefix and used odd numbers up to 9 - except the lowest prefix available starts at R5.
  • The Carnival parades of Rio de Janeiro and São Paulo end in a vote count where the grades nominally go from 0.0 to 10. It was effectively a ten-point scale, as a judge would rarely give anything lower than 9.0 - exceptions included one overly rigid judge who went as low as 7.8 and didn't give a single 10, and a judge who clearly made the crowd unhappy by giving an 8.9 - and then the rules were outright changed so that 9.0 was the lowest possible rating.
  • Audience tracker CinemaScore uses an A-to-F scale. However, given that they draw their polls from opening-night audiences, and said audiences tend to be the people most enthusiastic to see the movie, most CinemaScore ratings skew very high. Only a tiny handful of films have ever gotten an F, and even a B is considered a major sign of trouble.
  • The Parker scale of wine grading tops out at 100, but ratings start at 50, and rating guidelines state that wines rated 50-59 are in some way damaged and wines rated 60-69 are just flat-out badly made. A score of 70 means that the wine is just barely drinkable.
  • The RST (Readability, Strength, Tone) system of signal reporting in ham radio is meant to give the operator on the other end an idea of their signal quality. It's a 3-digit number: the first digit, Readability, ranges from 1 to 5 and indicates how well the receiver can understand the signal. The second, Strength, is the only one that's really quantifiable and indicates signal strength on a scale of 1 to 9. The third, Tone, also ranges from 1 to 9 and is only relevant when sending Morse code. In practice, Readability will always be a 5, Strength will be a 9 or a 5 depending on how loud you're coming in (or how bad the receiver is at copying Morse), and Tone will always be a 9. These rote responses are so common that most ham radio logging software will auto-fill the signal report as "599", and may have buttons for "599" and "559" as well.
  • Online business profiles, like Google Business Profile and Yelp, let users give ratings between 1 and 5 stars, but a 4.0 average is generally considered bad, so any rating below 5 stars effectively becomes a negative rating.

In-universe examples:

    Video Games 
  • In Pokémon Masters EX, NPC allies ("sync pairs") are given a power rating from 1 to 6 stars. However, every single pair in the game has at least 3 stars.
  • In My First IGN Interview (from the IGF Pirate Kart), you get the option to do a practice interview with an IGN applicant, who then asks you to rate how well she did. You have a choice between 10, 9, 8 or 7 out of 10, and if you pick 7 she gets as offended as if you had chosen 1. (This is obviously a joke about IGN's game rating system.)
  • A mission in Borderlands 2's "Mr. Torgue's Campaign of Carnage" DLC involves the player characters being sent after a game reviewer who gave a negative review to a game Mr. Torgue really likes. The review: "Gameplay's pretty dull. It sucked. 6/10." Torgue is half upset because he thinks the game in question is very good, and half upset because by any logical standard a score of 6/10 is above average.
  • A non-review example of this occurs in the Guitar Hero games: You will never get fewer than 3 stars on anything, no matter how badly you do. It's just a question of whether you get 3, 4 or 5.
    • However, Rock Band averts this. As you build up to the base score, which is the score you'd get for hitting every single note if there was no combo system and no Overdrive, you go from 0 stars to 1, to 2, and finally to 3. With the combo system and Overdrive, however, getting 3 stars is still laughably easy on most songs. 4- and 5-starring songs is still just as hard (or easy, depending on the song) as it was in Guitar Hero. This all means that it's more than possible to complete songs with scores below three stars.
      • It's still not possible to get 0 stars—someone tested this with the song "Polly" by Nirvana. The song literally has only eight notes in its drum part, so it's possible not to hit any of them (and, thus, not to score any points) and still pass the song. The results screen? 0 points and 1 star.
      • Guitar Hero Metallica introduces a star meter somewhat similar to Rock Band's. The difference is, you still can't get less than three stars in GHM; until you have at least three stars, the star meter will "help" you fill it until you reach three, which sometimes entails, for example, automatically filling itself during sections with no notes.
    • Guitar Hero sort of justifies it, because "failed a song" means "got a bad review" and so if you get less than three stars you failed. It's more like a Hand Wave than a real justification, though.
    • The opposite end of the spectrum occurs for certain DDR clones. In The Groove 2? An "A" is somewhere around low 80%; after A+ is S-, S, S+, one star, two stars, three stars and four stars.
  • Certain games in the Rhythm Heaven series give an explicit numerical score for the player at the end of a rhythm game. Over 85 is Superb, between 60 and 85 is OK, and below 60 fails the stage, forcing the player to try again.


    Web Animation 
  • Parodied by Red vs. Blue in one of their PSA videos, "Game On". In the segment on game reviews, among other things Grif says that scores of 1-6 are meaningless because no game ever gets them, and that a 9.9 is the same score as 10, except the reviewer doesn't like the developer for some reason.

    Western Animation 
  • Arthur: Exaggerated in "On the Buster Scale", where Buster rates every movie he watches (all being action movies full of robots and explosions) a 10+/10. However, he does consider demoting a movie to just 10/10 because it's not in 3D.
  • Parodied in the TV show The Critic. Jay is told by his boss that his job is to "rate movies on a scale from good to excellent." Jay himself is an inversion: he dislikes everything, and the best score he ever gave a film was a 7 out of 10.
  • In Futurama, Dr. Wernstrom gives Dr. Farnsworth the lowest rating ever: A, minus, MINUS!
  • The Simpsons:
    • In one episode, a journalist who travels around America reviewing locations visits Springfield. He's repeatedly tricked and abused by the residents and storms off to give Springfield the lowest rating he's given anywhere: 6/10.
    • In "Guess Who's Coming to Criticize Dinner?", Homer becomes a food critic. At first, being Homer, he gives everything an excellent review. While his fellow critics eventually convince him to be crueler, he still won't give anything lower than "seven thumbs up".

Unorganized, excessively wordy and kinda too "complain-y"... I'll give this page 8 out of 10.