UsefulNotes/PowersOfTwoMinusOne

@@[[MediaNotes/{{Python}} for x in range(16):\
 print((2 ** (x + 1)) - 1)]]@@

Video games are nothing more than computer programs, and they're limited by the systems they run on. One particularly strict limitation is that computers can't handle numbers of arbitrary size with any kind of efficiency. Because efficiency matters a great deal for {{video game}}s, this places hard limits on the range of values they can work with. Because computers work in binary, that size has to be a certain number of bits (binary digits -- each a zero or a one). A value stored in a computer's memory is called a ''variable''.

Computers also limit how large a variable can efficiently be. It's not a hard limit, but using bigger numbers slows computation down considerably. Whenever you hear about the "number of bits" of a video game console (e.g., the Platform/NintendoEntertainmentSystem being "8-bit"), that's what it refers to. The NES has an 8-bit data bus; it can only move 8 bits of data around at a time. If it has to move more, it takes more than one cycle (and if it has to do math on them, it needs to CarryTheOne). In technical terms, this is called the "word size."

In newer video games, many things -- say, the number of items in the player's HyperspaceArsenal -- are {{cap}}ped at reasonable limits because the variable actually used to store the value has a decidedly ''un''reasonable limit. In older days, however, it was common to let the computer itself cap these values, because the limits were a lot lower -- an 8-bit computer such as the NES has a "limit" of 255.

The highest value you can store in a variable or register is determined by the number of bits it uses: 2 to the power of the number of bits, minus 1 because we're counting from zero. Here's a handy table.

* '''8 bits''' can hold a value as high as 255. The Platform/{{Atari 2600}} and [[Platform/{{Atari 7800}} 7800]], Platform/NintendoEntertainmentSystem, Platform/SegaMasterSystem, Platform/GameBoy and early home computers were all 8-bit.
* '''12 bits''' can hold a value as high as 4095. Some NES games like ''VideoGame/FinalFantasyI'' and ''VideoGame/{{Rygar}}'' used hacks to get values this large.
* '''16 bits''' can hold a value as high as 65,535. The Platform/SuperNintendoEntertainmentSystem and Platform/SegaGenesis are 16-bit, as are [=PCs=] made in the mid-1980s, some late-1990s handhelds, and a few Japanese gaming computers.
* '''24 bits''' can hold a value as high as 16,777,215.
* '''32 bits''' can hold a value as high as 4,294,967,295. Most home consoles from the [[MediaNotes/TheFifthGenerationOfConsoleVideoGames fifth generation]] to the [[MediaNotes/TheSixthGenerationOfConsoleVideoGames sixth generation]] were 32-bit (the Platform/{{Nintendo 64}} and Platform/PlayStation2 being the exceptions), as was the Platform/{{Wii}}, and all post-2000 handhelds prior to the Platform/NintendoSwitch. So are desktop [=PCs=] made from 1987 to 2007, and most smartphones made before 2018. Most devices will never feasibly reach this cap, but there are a few major exceptions which made changing over to 64-bit a necessity, such as the [[MillenniumBug Y2K38 Problem]] (in which the number of seconds passed since 1970 will overflow the ''signed'' 32-bit limit of 2,147,483,647, causing problems in systems that use [[https://en.m.wikipedia.org/wiki/Unix_time Unix Time]]) and the limited pool of [=IPv4=] addresses.
* '''64 bits''' can hold a value as high as 18,446,744,073,709,551,615.[[note]]About 18.4 quintillion.[[/note]] The Platform/{{Nintendo 64}}[[note]]though most games run in 32-bit mode[[/note]], Platform/PlayStation2, all home consoles since the [[MediaNotes/TheSeventhGenerationOfConsoleVideoGames seventh generation]] (except the Wii), and modern desktop [=PCs=] are all 64-bit, with 64-bit operating systems becoming rather commonplace in 2010 (at least, on new computers).

Note that in addition to the above, most recent processors also have "vector" units for performing multiple calculations in parallel; these have so far been up to 512 bits wide. Floating-point calculations are also often done at different sizes.

So what happens if you try to store a number bigger than that? Say you try to add one to a 16-bit variable that's already set to 65,535. It "wraps" -- adding one to 65,535 gives you zero. Adding two gives you one. And so on, and so forth. The same goes the other way -- subtracting one from zero gives you 65,535. When this happens, it's called a "carry" if expected or an "overflow" if not. This is the source of many GoodBadBugs; most of them are a case of programmer oversight, as the CPU's "overflow" flag is set whenever this happens. Also see {{Cap}}.
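The wraparound above can be sketched in Python by masking a result down to 16 bits, which mimics what the hardware does automatically:

```python
# Simulate a 16-bit unsigned variable by masking results to 16 bits.
def add_u16(a, b):
    return (a + b) & 0xFFFF  # keep only the low 16 bits, like the hardware

print(add_u16(65535, 1))  # 0 -- wraps past the maximum
print(add_u16(65535, 2))  # 1
print(add_u16(0, -1))     # 65535 -- subtracting one from zero wraps back
```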

However, that's not the end of the story. All of those values assume the variable is ''unsigned'', meaning it can't hold a negative number. Signed numbers essentially sacrifice one bit in order to store both positive ''and'' negative values. Zero is counted among the "positive" numbers, whereas negative numbers start at -1, which makes the negative limit larger in absolute terms. This effectively halves the maximum value the variable can hold, for the sake of representing negative numbers. Here's the revised table:

* '''8 bits''' can hold a value from -128 to 127.
* '''12 bits''' can hold a value from -2048 to 2047.
* '''16 bits''' can hold a value from -32,768 to 32,767.
* '''32 bits''' can hold a value from -2,147,483,648 to 2,147,483,647.

And so on.
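How a bit pattern gets read as a signed number ("two's complement") can be sketched like this:

```python
# Interpret an n-bit pattern as a two's-complement signed number (a sketch).
def to_signed(value, bits):
    value &= (1 << bits) - 1   # keep only the low `bits` bits
    if value >> (bits - 1):    # top bit set means the number is negative
        return value - (1 << bits)
    return value

print(to_signed(0x7F, 8))  # 127 -- the largest positive 8-bit value
print(to_signed(0x80, 8))  # -128 -- the sign bit flips it negative
print(to_signed(0xFF, 8))  # -1
```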

It should be noted that although processors can work with variables of various sizes, they don't have to use the largest; a 64-bit processor can still work with 8-bit values. Thus, the limit on any given value depends on the actual size of the variable used to store it. Smaller variables are often used to reduce the memory a game requires. If it's really necessary, most systems also have ways to represent numbers much ''larger'' than the hardware word size, potentially with millions of digits, but this is rarely needed in video games (and is ''significantly'' slower, too).
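Python, for one, supports such arbitrary-precision integers natively, trading speed for unlimited range:

```python
# Python integers grow past any hardware word size automatically.
print(2 ** 64 - 1)   # 18446744073709551615 -- the 64-bit unsigned maximum
print(2 ** 128 - 1)  # a 39-digit number no single register could hold
```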

So that explains the numbers themselves. But what causes them to crop up in games? There are a few reasons:

* As a {{Cap}}. Programming for older systems is really, ''really'' hard; these systems were relatively good at graphics processing but very bad at calculations, so every CPU cycle saved was a big deal.[[note]]The Platform/Atari2600 was hit the hardest: it had ''no'' UsefulNotes/VideoRAM, so programmers had to create the picture through timely manipulation of the graphics output registers, and since its {{CPU}} also had no interrupt lines, this was possible only through careful timing of the program itself, inserting graphical output commands in exactly the right places.[[/note]] If you cap a value at the actual limit of your variable, you can enforce the cap with a "did the previous instruction overflow" test, which is usually faster than an "is X greater than Y" test (the latter often involves a hidden subtraction, even on newer processors).
** Note that many games exhibit bugs due to ''not'' checking for overflow after addition. That is, there's a cap because once you cross it, something breaks or behaves strangely.
* To use shifts. Computers can multiply and divide by powers of two using left- and right-shift operations. Shifts are much, ''much'' faster than multiplication or division, but only work for powers of two.
** Programmers in the early days found ways around that. Want to multiply by 9? Shift left three times and add the original number. This trick is expanded upon and used by some modern processors.
*** This can backfire on modern processors: a multiplication takes only a few cycles, while a chain of shifts and adds can introduce data hazards and stall cycles unless the processor's out-of-order execution manages to hide them.
* Errors. Many function calls use -1 as an error code. If this value goes unchecked and gets stored into an unsigned variable, it becomes the highest number the variable can hold (due to how "two's complement" signing works).
* Divide By Zero. As a specific example of the above, some older systems would return the highest number the variable can hold as the result of division by zero (think of it as "infinity"). This is much more desirable than crashing when the player's save file is on the line.
** For floating-point numbers, divide-by-zero behaviour is mandated by IEEE 754 (the standard for handling floating-point numbers): it quietly produces infinity. Sometimes this makes things worse by hiding an error -- instead of a clear crash during the test phase, you get a potentially corrupted save file.
* For alignment purposes. For most purposes, data in memory has to be aligned to its size. For example, a one-byte (8-bit) value can be stored at any address, but on some platforms a four-byte value must be stored at an address that is a multiple of four. When game programmers mapped out the memory used by the game, they kept this alignment requirement in mind. Say a given block of memory stores the player's inventory, and each item in the inventory is a single byte. It makes sense to give the player a power-of-two number of items, since any value stored after the block will then have the same alignment as the start of the block.
* Reading in data for something it's not meant for. A very frequent source of GoodBadBugs in olden times was data being loaded as something it actually wasn't. For example, suppose something goes iffy in the code, and the game starts reading map data as if it were attack data. But map data is something very different from attack data. Perhaps the game uses the value FFFF (65,535 in decimal) to mark the end of the map. This is a common value to use for the purpose (a "sentinel value" in programming jargon) because it's the ''last'' value two bytes can store, so it's easy to remember. (One FF by itself might be used for, say, the end of a room, or the end of a script that runs on that map.) If the code starts reading a map as attack data, however, these values may end up anywhere. If FFFF were read in as the damage of an attack, for example, the attack would deal 65,535 damage (possibly capped to 9999, of course).
* Because programmers have these numbers etched into their minds -- to a sufficiently binary-minded programmer, "256" (binary 100000000) is more of a round number than "300". Hence they show up for no reason at all.
** That's part of the reasoning behind the whole concept of "magic numbers" in engineering. When you're making something, you have to decide on some properties (plank size or ceiling height, for example) anyway, so why not use the ones someone [[FollowTheLeader has already used before]]? Then it turns out that similar sizes allow for compatibility and interoperability, it becomes a self-reinforcing convention, and we all end up with 2×4s[[note]]Dimensional lumber, used in the United States and Canada, named after the lumber's profile thickness in inches. 2×4s, it should be noted, are [[NonIndicativeName actually 1½×3½]]. Aren't standards fun?[[/note]]. The same logic applies to programming, too.
* As a naming convention. Everything in a computer is indexed, or counted, from position 0. For example, in an 8-bit number, bit 0 is the first bit. How does this make sense? Because if that bit is a 1, it contributes 2^0, or 1, to the value.
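The shift tricks above can be sketched in Python (the operators work the same way in most languages):

```python
x = 7
print(x << 3)        # 56: shifting left 3 places multiplies by 2**3 = 8
print((x << 3) + x)  # 63: shift-and-add multiplies by 9 with no multiply
print(x >> 1)        # 3: shifting right divides by 2, rounding down
```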
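The unchecked -1 and the divide-by-zero cases both end in the same place, the 16-bit maximum; here's a sketch in which `lookup` and `safe_div_u16` are hypothetical helpers, not real APIs:

```python
# An unchecked -1 error code, stored as unsigned, becomes the maximum.
def lookup(table, key):
    return table.get(key, -1)             # -1 signals "not found"

damage = lookup({}, "fireball") & 0xFFFF  # stored into a 16-bit unsigned slot
print(damage)                             # 65535: the error became max damage

# "Divide by zero returns the maximum" instead of crashing (a sketch).
def safe_div_u16(a, b):
    return 0xFFFF if b == 0 else (a // b) & 0xFFFF

print(safe_div_u16(100, 0))  # 65535 -- treated as "infinity"
print(safe_div_u16(100, 5))  # 20
```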
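The alignment rule amounts to rounding addresses up to the next multiple of a power of two, which a classic mask trick handles; a sketch:

```python
# Round an address up to the next multiple of `alignment` (a power of two).
def align_up(addr, alignment):
    return (addr + alignment - 1) & ~(alignment - 1)

print(align_up(13, 4))  # 16: 13 isn't 4-aligned, so round up
print(align_up(16, 4))  # 16: already aligned, unchanged
```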

Now, everything thus far has been about integers. If you want to store a fractional component, you have two choices:
* Fixed point. This means you essentially take your number and stick the "decimal" point at a specific spot in it. The point never moves. On the plus side, these are very fast to work with. Note that the number is still in binary, so it's about halves and quarters and such, rather than the tenths and hundredths that decimals use.
* Floating point. This works like scientific notation: one bit stores the sign, some number of bits (called the mantissa) store the significant digits, and the remaining bits (called the exponent) are, predictably, used to store the power of 2 to multiply by. (We use 2 instead of 10 because, again, computers work in binary.) It should be noted that this is a gross oversimplification -- if you've got ten minutes to kill and like ''VideoGame/SuperMario64'', speedrunner WebVideo/Pannenkoek2012 goes into [[https://www.youtube.com/watch?v=9hdFG2GcNuA more detail here.]]
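Both ideas can be sketched in Python. The first half below implements a 16.16 fixed-point format (16 integer bits, 16 fraction bits; the exact split is illustrative, not tied to any particular console), and the second half peeks at the sign, exponent, and mantissa fields of a standard 32-bit float:

```python
import struct

SHIFT = 16
ONE = 1 << SHIFT                 # the value 1.0 in 16.16 fixed point

def to_fixed(x):
    return int(x * ONE)          # place the binary point 16 bits up

def fixed_mul(a, b):
    return (a * b) >> SHIFT      # re-normalize after multiplying

print(fixed_mul(to_fixed(0.5), to_fixed(3.0)) / ONE)  # 1.5

# Reinterpret a 32-bit IEEE 754 float's bits and split out its fields.
bits = struct.unpack('<I', struct.pack('<f', 1.5))[0]
sign = bits >> 31
exponent = (bits >> 23) & 0xFF   # biased by 127
mantissa = bits & 0x7FFFFF
print(sign, exponent, mantissa)  # 0 127 4194304
```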

It's of course also possible to store a rational number as a fraction, using two integers. This avoids the various arithmetic errors inherent in fixed-point and floating-point numbers, but the representation is rarely used outside rigorous mathematical programming: it uses memory less efficiently (equivalent fractions can have different numerators and denominators), and addition is slower because denominators have to be matched first. The errors intrinsic to fixed- and floating-point numbers are behind many of the sporadic glitches in modern games, as they cause unhandled edge cases in physics simulations.
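Python's standard library ships such a rational type, which shows both the exactness and the classic floating-point pitfall it avoids:

```python
from fractions import Fraction

print(0.1 + 0.2 == 0.3)                 # False: binary floats can't store 0.1
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True: exact
print(Fraction(1, 3) + Fraction(1, 6))  # 1/2 -- automatically reduced
```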
