First things first. Despite being represented as ZeroesAndOnes, binary is not actually made of them. It can be any two distinct states; it's commonly represented as zeroes and ones because "00011101" is much easier to read and comprehend than "off, off, off, on, on, on, off, on". The states can be hole or no hole (ye olde punchcarde), voltage/no voltage (RAM), magnetic field polarities ({{Magnetic Disk}}s), reflective/not reflective (optical discs, e.g. {{Compact Disc}}s), or anything else; the binary 0s and 1s are simply a practical way to represent the state of the electronic hardware. Which state represents which "digit" varies by architecture, but it's canonical to say either 0 or "off" and 1 or "on". Nowadays, in some cases it isn't even a state that the ones and zeroes represent, but a ''change'' of state -- for example, 1 being an increase in some value and 0 being a drop.
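
A quick toy sketch (in Python, chosen purely because it's easy to read -- nothing here reflects how real hardware stores anything) of why the zero/one notation wins over spelling out the states:

```python
# The same eight two-state "switches", written once as a compact bit string,
# once as on/off words, and once read as an ordinary number.
bits = "00011101"
switches = ["on" if b == "1" else "off" for b in bits]

print(bits)                 # 00011101
print(", ".join(switches))  # off, off, off, on, on, on, off, on
print(int(bits, 2))         # 29 -- the same pattern interpreted as a binary number
```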

A bit is one of these pieces of information. Its value - the state - can be either 1 or 0. A bit is the smallest single unit of information possible in an electronic device. Everything you work with on your computer is composed of nothing but a long series of bits, and all numbers are internally represented as binary numbers. It ''is'' technically possible to build a computer that counts in regular decimal numbers, and some early ones ''were'' in fact built this way until the benefits of a binary system were understood, but binary won out for several reasons: bits are easier to handle, the underlying electronics are cheaper, and binary also allows for ''logical operations'' in addition to your standard addition and subtraction. In practice, it's more common to write values in hexadecimal (base 16), because hexadecimal just happens to be a comfortable shorthand for binary: each hex digit stands for exactly four bits.
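
Here's that shorthand and those logical operations made concrete, as a small illustrative Python snippet (the language is just for readability; it isn't tied to any particular machine):

```python
n = 0b00011101          # the bit pattern from above, written as a binary literal
print(n)                # 29   -- the same value in decimal
print(hex(n))           # 0x1d -- each hex digit stands for exactly four bits
print(bin(0x1d))        # 0b11101

# The "logical operations" that binary hardware gets almost for free:
a, b = 0b1100, 0b1010
print(bin(a & b))       # 0b1000 -- AND
print(bin(a | b))       # 0b1110 -- OR
print(bin(a ^ b))       # 0b110  -- XOR
```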

A byte is a set of eight bits. Why not ten, since we count in base ten? Well, a single bit isn't very useful, which is why from the earliest days of computing they were grouped into bytes (which is [[PunnyName actually a pun]]), and the byte usually was the smallest unit of addressing -- that is, the computer could point into its own memory down to a single byte, but not to the individual bits. On the other hand, while ''processing'' information, it was usually handled in larger groupings called 'words', sized according to the CentralProcessingUnit design, and each computer had its own word length. Some computers, especially military ones, could have decidedly odd word lengths, like 21 or 37 bits, and thus byte length varied from 5 to 11 bits (a word length usually being a multiple of a byte length). Smaller processors and microcontrollers could even have sub-byte word lengths -- for example, the Intel 4004, the very first microprocessor ever, had a word that was 4 bits long.

It was the IBM System/360 mainframe of 1964, an incredibly influential computer system, that standardized the 8-bit byte, because it was one of the first systems geared for text processing in addition to pure number crunching, and it encoded text as 8 bits per symbol. It was thus quite logical to have an 8-bit byte, since the computer could then address each symbol separately, saving the work of transcoding. For the same reason it also codified words constructed out of bytes; before that, the "byte addressing" described above wasn't a given, and a lot of machines had pure "word addressing", without breaking words down into bytes. This resolved a conflict between processing, where a longer word is more efficient, and addressing, where a shorter word is more efficient. Each address in the 360 corresponded to one byte, but it processed data in words that were four bytes (32 bits) long.

A byte has 2^8, or 256 unique combinations to work with. For example, a single byte can represent any integer between 0 and 255, or between -128 and +127 if you use one of the bits to indicate whether the number is positive or negative (2^7 = 128). The maximum representable number is always one less than two to the power of the number of bits, because zero takes up one combination of all the possible bit permutations.
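
Here's that arithmetic spelled out as a quick Python sketch (illustrative only): the 256 combinations, the unsigned range, and the signed range under the usual two's-complement scheme.

```python
BITS = 8
combinations = 2 ** BITS
print(combinations)                              # 256

# Unsigned: every combination spent on non-negative values.
print(0, combinations - 1)                       # 0 255

# Signed (two's complement): half the combinations go to negative numbers.
print(-(2 ** (BITS - 1)), 2 ** (BITS - 1) - 1)   # -128 127
```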

So, what does that "8-bit", "16-bit", "32-bit", or "128-bit" label on a gaming system mean?

It means word length. Computer processors deal with words, and it's easiest to use words that are a multiple of the eight-bit byte. In most cases, this ''doesn't matter''. You can still program a 16-bit processor such that it displays numbers above 65,535; it will simply use two words to store the number, as sketched below. It's not going to block you from getting a high score or anything. These limitations can be gotten around with clever programming and knowledge of binary arithmetic.
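
For instance, here's a minimal Python sketch of the "two words" trick (illustrative only -- real compilers do roughly this, though the details vary by architecture): a number too big for one 16-bit word gets split into a high half and a low half.

```python
value = 70_000                  # does not fit in one 16-bit word (max 65,535)

low  = value & 0xFFFF           # lower 16 bits
high = (value >> 16) & 0xFFFF   # upper 16 bits

print(high, low)                # 1 4464
print((high << 16) | low)       # 70000 -- reassembled from the two words
```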

However, the processor is limited in two ways. First, word length; specifically, it can only access one word in one operation. A 16-bit processor has to use two add instructions to get the answer to the question, "What is 70,000 + 1?" using integer arithmetic, while a 32-bit computer can do so in one instruction. This has a knock-on effect on the speed of the processor; while a 16-bit and a 32-bit processor might both have the same absolute speed rating (say, ten million instructions per second), the 16-bit processor will require twice as many instructions, and therefore twice as much time, to handle numbers which don't fit in a single one of its words.
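
And here's why that costs two add instructions, in the same illustrative style (the function name is made up for the example): the low words are added first, and any carry out of that add is fed into the add of the high words -- real CPUs have an "add with carry" instruction for exactly this.

```python
MASK = 0xFFFF                            # one 16-bit word

def add32_on_16bit(a, b):
    """Add two 32-bit numbers using only 16-bit word operations."""
    lo = (a & MASK) + (b & MASK)         # first add: the low words
    carry = lo >> 16                     # did the low add overflow 16 bits?
    hi = (a >> 16) + (b >> 16) + carry   # second add: the high words plus the carry
    return ((hi & MASK) << 16) | (lo & MASK)

print(add32_on_16bit(70_000, 1))         # 70001
```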

Second, the size of the address space. This is how the processor finds data to work on, and where to put the new data once it's finished processing. This is also measured in bits, but in this case the number of bits in an address defines how many memory locations the processor can see. And what's at each location? Traditionally, one byte. So a 16-bit address space means that the processor can see 65,536 one-byte memory locations (known as 64 kilobytes, or 64KB, or 64K). A 32-bit address space can see 4,294,967,296 bytes, or four gigabytes, or 4GB. Note that address zero counts too, so in addressing you use the full power of 2 without any subtraction.
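
The arithmetic works out like this (another throwaway Python sketch): with one byte per address, an n-bit address space covers 2^n bytes, counting address zero.

```python
def address_space_bytes(bits):
    # One byte per address; address 0 counts, so no "minus one" here.
    return 2 ** bits

print(address_space_bytes(16))             # 65536         = 64 KB
print(address_space_bytes(32))             # 4294967296    = 4 GB
print(address_space_bytes(32) // 2 ** 30)  # 4 -- the same thing in gigabytes
```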

This is actually a significant problem today. Memory is so cheap that 4 or even 8 gigabytes of RAM are easily within reach of the average customer, and programs are getting to the point where they can make use of it. But processors, operating systems, and software with 32-bit address spaces are limited to 4GB in total. PC games like ''VideoGame/SupremeCommander'' have issues with this, for example. To compound the problem, graphics cards need address space for their VideoRAM, and many other devices use some of it as well, so the amount of actual RAM that a processor can see is usually limited to about half of its address space. This problem is solved by increasing the address space to 64 bits -- 16 exabytes, so large it makes 4 gigabytes look like a speck. (In theory, anyway; most such processors are actually artificially capped at 48 to 52 address bits.) Considering that the difference between 2^32 and 2^64 is so enormous, and considering also that modern IC manufacturing technology is getting to the point where transistors just ''can't'' be made any smaller, that should last us a while.
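
To put numbers on "makes 4 gigabytes look like a speck" (same illustrative sketch as before; 48 bits is just the commonly quoted cap, and real chips vary):

```python
GiB = 2 ** 30

print(2 ** 32 // GiB)   # 4           -- the full 32-bit address space, in gigabytes
print(2 ** 48 // GiB)   # 262144      -- a 48-bit space: 256 terabytes
print(2 ** 64 // GiB)   # 17179869184 -- the full 64-bit space: 16 exabytes
```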

What does this mean in practice? More bits mean you can work with much bigger numbers and much more memory without needing goofy, programmer-unfriendly workarounds. It means that you can get better graphics and larger environments, as larger textures and bigger levels can be loaded into memory. It's not the only important thing: there's not much benefit if you don't have the processor speed, memory, graphics processing capability, and storage space for the relevant stuff to float around in first. In addition, 64-bit addresses take up twice as much space as 32-bit addresses, meaning more memory is required for a 64-bit system (although usually not as much as double, since not everything needs to be replaced by a 64-bit version).
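
You can check the "twice as much space" point directly: most platforms report a 4-byte pointer under a 32-bit build and an 8-byte pointer under a 64-bit one (a quick Python sketch; the exact figure depends on the machine it runs on).

```python
import ctypes
import struct

# Size of a raw pointer on the running interpreter:
# 4 bytes on a 32-bit build, 8 bytes on a 64-bit one.
print(ctypes.sizeof(ctypes.c_void_p))

# The same information via struct: "P" is the platform's pointer format.
print(struct.calcsize("P") * 8, "bit")
```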

There is a lot more information than that, but we're just trying to give a starter course here.
----
<<|HowVideoGameSpecsWork|>>
