History: MediaNotes/RandomAccessMemory





Most people simply call RAM "memory" now. It comprises the main operating part of the computer's [[UsefulNotes/MemoryHierarchy storage hierarchy]]. Just as UsefulNotes/ClockSpeed is misunderstood to be the only measure of UsefulNotes/CentralProcessingUnit power, capacity is thought to be the only important measurement in when it comes to Random Access Memory.

to:

Most people simply call RAM "memory" now. It comprises the main operating part of the computer's [[MediaNotes/MemoryHierarchy storage hierarchy]]. Just as MediaNotes/ClockSpeed is misunderstood to be the only measure of UsefulNotes/CentralProcessingUnit power, capacity is thought to be the only important measurement when it comes to Random Access Memory.



Like a CPU, memory speed is measured in UsefulNotes/ClockSpeed in between latency. And latency tends to affect memory more than processors. This is because one also has to take into account the speed of the bus, the shared electrical pathway between components. With RAM embedded on the CPU die, there is a very short distance and a dedicated pathway that the bits can travel across, while RAM placed in other areas requires the bits to travel the shared bus, which may have other devices using it. This means factors such as the bus speed and the number of other devices requiring the bus can contribute to data-transfer latency. Even the physical length of the bus can become a non-trivial factor in how fast data can be moved in and out of RAM.

to:

Like a CPU, memory speed is measured in MediaNotes/ClockSpeed, along with latency, and latency tends to affect memory more than it does processors. This is because one also has to take into account the speed of the bus, the shared electrical pathway between components. With RAM embedded on the CPU die, there is a very short distance and a dedicated pathway that the bits can travel across, while RAM placed in other areas requires the bits to travel the shared bus, which may have other devices using it. This means factors such as the bus speed and the number of other devices requiring the bus can contribute to data-transfer latency. Even the physical length of the bus can become a non-trivial factor in how fast data can be moved in and out of RAM.
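As a rough illustration of how latency and bus bandwidth combine, a minimal sketch; the latency and bandwidth figures are made-up assumptions, not measurements of any real system. The time to fetch a block can be estimated as a fixed latency plus the size divided by the bandwidth:

```c
/* Minimal sketch: estimated time to pull a block of data over a memory bus.
   The latency and bandwidth numbers are illustrative assumptions only. */
#include <stdio.h>

int main(void) {
    double latency_ns    = 80.0;          /* assumed round-trip latency */
    double bandwidth_gbs = 25.6;          /* assumed bus bandwidth in GB/s
                                             (1 GB/s == 1 byte per ns)   */
    double block_bytes   = 64.0 * 1024;   /* a 64 KiB transfer */

    /* total time = fixed latency + size / bandwidth */
    double total_ns = latency_ns + block_bytes / bandwidth_gbs;
    printf("Estimated fetch time: %.0f ns\n", total_ns);
    return 0;
}
```

With these assumed numbers the fixed latency is small next to the transfer itself, but for many tiny scattered accesses the latency term dominates, which is why latency matters so much for memory.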



In addition to their compact size (for example, a unit holding 1K, a rather generous amount of the time, was a flat frame only 20x20x1 cm square), they were also rather cheap. Cores had to be assembled by hand, even in their last days (early attempts to mechanize the process failed and were abandoned once semiconductor RAM appeared), so most manufacturers used the cheap labor of East Asian seamstresses and embroiderers (who had been made redundant by the widespread adoption of sewing machines) thus making it affordable. Most UsefulNotes/MainframesAndMinicomputers used core memory, and it was ingrained into the minds of the people who worked on them to such extent that even now you can meet a situation when the word "core" is used as a synonym for RAM, even though computers in general haven't used it since TheSeventies.

to:

In addition to their compact size (for example, a unit holding 1K, a rather generous amount for the time, was a flat frame only 20x20x1 cm), they were also rather cheap. Cores had to be assembled by hand, even in their last days (early attempts to mechanize the process failed and were abandoned once semiconductor RAM appeared), so most manufacturers used the cheap labor of East Asian seamstresses and embroiderers (who had been made redundant by the widespread adoption of sewing machines), thus making it affordable. Most Platform/MainframesAndMinicomputers used core memory, and it was ingrained into the minds of the people who worked on them to such an extent that even now you may run into the word "core" being used as a synonym for RAM, even though computers in general haven't used it since TheSeventies.


A misunderstood aspect of memory is that more memory automatically equates to better performance. This probably started around in TheNineties when "just good enough" computers were sold. Technology was improving at such a rapid pace that the amount of RAM in a recently purchased computer may not be enough to run a program a half year down the road. The amount of RAM available to a computer is a massive YMMV in terms of performance. But the test is actually simple to determine if a system would benefit from more. If RAM is constantly full and using the hard drive's swap file[[note]]This is a sort of cheat to fool programs into thinking the computer has more memory than it really does. When the OS believes something in RAM is not being accessed or used all that much, it'll store that data in a swap file to make room for more useful data[[/note]], the system could definitely benefit from more RAM. If RAM is barely being used, then the system isn't really using it so adding more won't help. This changed later operating systems, however, where extra memory is passively used to hold extra data files for fast access by programs, filling up the longer the system is on. If the memory is needed for active use, then the cache is pushed out to make room.

to:

A misunderstood aspect of memory is that more memory automatically equates to better performance. This probably started around TheNineties when "just good enough" computers were sold. Technology was improving at such a rapid pace that the amount of RAM in a recently purchased computer may not be enough to run a program half a year down the road. The amount of RAM available to a computer is a massive YMMV in terms of performance. But the test to determine whether a system would benefit from more is actually simple. If RAM is constantly full and using the hard drive's swap file[[note]]This is a sort of cheat to fool programs into thinking the computer has more memory than it really does. When the OS believes something in RAM is not being accessed or used all that much, it'll store that data in a swap file to make room for more useful data[[/note]], the system could definitely benefit from more RAM. If RAM is barely being used, then the system isn't really using it, so adding more won't help. This changed with later operating systems, however, where extra memory is passively used to hold extra data files for fast access by programs, filling up the longer the system is on. If the memory is needed for active use, then the cache is pushed out to make room.
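As a concrete, Linux-specific illustration of that test, a minimal sketch that reads /proc/meminfo and reports whether swap is being leaned on; the 10% threshold is an arbitrary illustrative cutoff, not a rule from the article:

```c
/* Minimal sketch (Linux only): is this machine leaning on swap?
   Parses /proc/meminfo; the 10% threshold is an arbitrary assumption. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("meminfo"); return 1; }

    long mem_total = 0, mem_avail = 0, swap_total = 0, swap_free = 0;
    char line[256];
    while (fgets(line, sizeof line, f)) {
        sscanf(line, "MemTotal: %ld kB", &mem_total);
        sscanf(line, "MemAvailable: %ld kB", &mem_avail);
        sscanf(line, "SwapTotal: %ld kB", &swap_total);
        sscanf(line, "SwapFree: %ld kB", &swap_free);
    }
    fclose(f);

    long swap_used = swap_total - swap_free;
    printf("RAM: %ld MiB total, %ld MiB available; swap in use: %ld MiB\n",
           mem_total / 1024, mem_avail / 1024, swap_used / 1024);

    if (swap_total > 0 && swap_used > swap_total / 10)
        puts("Swap is heavily used -- more RAM would likely help.");
    else
        puts("RAM is not under pressure -- adding more is unlikely to help.");
    return 0;
}
```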
ExamplesAreNotRecent. No ti present.


A misunderstood aspect of memory is that more memory automatically equates to better performance. This probably started around in TheNineties when "just good enough" computers were sold. Technology was improving at such a rapid pace that the amount of RAM in a recently purchased computer may not be enough to run a program a half year down the road. The amount of RAM available to a computer is a massive YMMV in terms of performance. But the test is actually simple to determine if a system would benefit from more. If RAM is constantly full and using the hard drive's swap file[[note]]This is a sort of cheat to fool programs into thinking the computer has more memory than it really does. When the OS believes something in RAM is not being accessed or used all that much, it'll store that data in a swap file to make room for more useful data[[/note]], the system could definitely benefit from more RAM. If RAM is barely being used, then the system isn't really using it so adding more won't help. This is changing on modern operating systems, however, where extra memory is passively used to hold extra data files for fast access by programs, filling up the longer the system is on. If the memory is needed for active use, then the cache is pushed out to make room.

to:

A misunderstood aspect of memory is that more memory automatically equates to better performance. This probably started around TheNineties when "just good enough" computers were sold. Technology was improving at such a rapid pace that the amount of RAM in a recently purchased computer may not be enough to run a program half a year down the road. The amount of RAM available to a computer is a massive YMMV in terms of performance. But the test to determine whether a system would benefit from more is actually simple. If RAM is constantly full and using the hard drive's swap file[[note]]This is a sort of cheat to fool programs into thinking the computer has more memory than it really does. When the OS believes something in RAM is not being accessed or used all that much, it'll store that data in a swap file to make room for more useful data[[/note]], the system could definitely benefit from more RAM. If RAM is barely being used, then the system isn't really using it, so adding more won't help. This changed later operating systems, however, where extra memory is passively used to hold extra data files for fast access by programs, filling up the longer the system is on. If the memory is needed for active use, then the cache is pushed out to make room.
Page was moved from UsefulNotes.Random Access Memory to MediaNotes.Random Access Memory. Null edit to update page.


Typically, a UsefulNotes/GraphicsProcessingUnit does not have cache. Video memory fills that role. But "Embedded Dynamic RAM" is pretty damn close. It's stuck right next to the processor instead of inside of it. The gain is larger size (but still much smaller than standard memory), and its clock speed still matches the processor. The tradeoffs are smaller bandwidth (but still about 10 to 100 times more than standard memory), and slower latency (but still much, much faster than standard memory), and increased manufacturing cost.

to:

Typically, a MediaNotes/GraphicsProcessingUnit does not have cache. Video memory fills that role. But "Embedded Dynamic RAM" is pretty damn close. It's stuck right next to the processor instead of inside of it. The gain is a larger size (but still much smaller than standard memory), and its clock speed still matches the processor. The tradeoffs are smaller bandwidth (but still about 10 to 100 times more than standard memory), slower latency (but still much, much faster than standard memory), and increased manufacturing cost.


* '''Magnetic RAM''' is basically a return to core on a new level, where each ferrite donut of the old-style core is replaced by a ferrite grain in an IC. It has the density advantage of a DRAM (there is some penalty, but it's not that big), its speed is closer to static RAM, it's completely non-volatile and it can be written as fast at it is read (not to mention as many ''times'' as needed), negating most of the UsefulNotes/FlashMemory drawbacks. Unfortunately, due to Flash selling like ice-cream on a hot day, few producers could spare their fabs to produce it, and it requires significant motherboard redesign to boot. This and several technological bottlenecks seem to lock it in DevelopmentHell for the time.

to:

* '''Magnetic RAM''' is basically a return to core on a new level, where each ferrite donut of the old-style core is replaced by a ferrite grain in an IC. It has the density advantage of a DRAM (there is some penalty, but it's not that big), its speed is closer to static RAM, it's completely non-volatile, and it can be written as fast as it is read (not to mention as many ''times'' as needed), negating most of the MediaNotes/FlashMemory drawbacks. Unfortunately, due to Flash selling like ice-cream on a hot day, few producers could spare their fabs to produce it, and it requires significant motherboard redesign to boot. This and several technological bottlenecks seem to lock it in DevelopmentHell for the time being.


* '''[=DDR5=] RAM''': 4.8GHz-8.4GHz. Launched in mid 2020, and the first modules arriving in mid-2021 with Intel's Alder Lake [=CPUs=] being the first to support the standard. AMD's Zen 4 [=CPUs=] too uses [=DDR5=] memory. [=DDR5=] RAM is a lot more like [=GDDR5X=] in that it actually has two parallel access lines to the RAM module as opposed to the predecessors' single line, making it more like quad data rate RAM on paper.

to:

* '''[=DDR5=] RAM''': 4.8GHz-8.4GHz. Launched in mid-2020, with the first modules arriving in mid-2021; Intel's Alder Lake [=CPUs=] were the first to support the standard. AMD's Zen 4 [=CPUs=] use [=DDR5=] memory too. [=DDR5=] RAM is a lot more like [=GDDR5X=] in that it actually has two parallel access lines to the RAM module as opposed to the predecessors' single line, making it more like quad data rate RAM on paper. [=DDR5=] RAM is also the first of its kind to support oddball sizes (for example, 24GB, 48GB and so on, per module).
fixing info


* '''[=DDR5=] RAM''': 4.8GHz-8.4GHz. Launched in mid 2020, and the first modules arriving in mid-2021 with Intel's Alder Lake [=CPUs=] being the first to support the standard. AMD's Zen 4 [=CPUs=] too uses [=DDR5=] memory. [=DDR5=] RAM is a lot more like [=GDDR6=] in that it actually has two parallel access lines to the RAM module as opposed to the predecessors' one, making it more like quad data rate RAM on paper.

to:

* '''[=DDR5=] RAM''': 4.8GHz-8.4GHz. Launched in mid-2020, with the first modules arriving in mid-2021; Intel's Alder Lake [=CPUs=] were the first to support the standard. AMD's Zen 4 [=CPUs=] use [=DDR5=] memory too. [=DDR5=] RAM is a lot more like [=GDDR5X=] in that it actually has two parallel access lines to the RAM module as opposed to the predecessors' single line, making it more like quad data rate RAM on paper.


Memory is not all about capacity. Unless a system or game is idle, memory will not stay with the same data indefinitely. It's constantly moving data on and off the memory chips to handle the ever changing data. In other words, capacity is important, but so is how fast it can move data on and off the chip. In situations where the machine has to multitask (such as [=PCs=], UsefulNotes/PlayStation3, and UsefulNotes/XBox360), capacity can increase performance, but the returns diminish quickly (i.e., if you double the RAM, it might really boost performance, but if you double it again, it won't do much). More available RAM is helpful for storing more data that you wish to use immediately. It prevents more frequent access to the slower hard drive/DVD/Blu-Ray disc, which to a processor takes an eternity.

to:

Memory is not all about capacity. Unless a system or game is idle, memory will not hold the same data indefinitely. It's constantly moving data on and off the memory chips to handle the ever-changing data. In other words, capacity is important, but so is how fast it can move data on and off the chip. In situations where the machine has to multitask (such as [=PCs=], Platform/PlayStation3, and Platform/XBox360), capacity can increase performance, but the returns diminish quickly (i.e., if you double the RAM, it might really boost performance, but if you double it again, it won't do much). More available RAM is helpful for storing more data that you wish to use immediately. It prevents more frequent access to the slower hard drive/DVD/Blu-Ray disc, which to a processor takes an eternity.



The DDR stands for "Double Data Rate". Typically, RAM processes the data once per clock cycle, while this kind of memory does it twice. It does come at the cost of slightly slower latency, but doubling the clock speed is a huge advantage for gaming. DDR became commercially available, and the UsefulNotes/XBox was the first console to use DDR memory, while the competing UsefulNotes/{{Playstation 2}}, and later 3, used the competing Rambus DRAM (see below). Each generation of DDR has reduced the operating voltage, which means it uses less power for each memory transfer. However, increasing speeds mean overall power use may still be higher.

to:

The DDR stands for "Double Data Rate". Typically, RAM processes the data once per clock cycle, while this kind of memory does it twice. It does come at the cost of slightly slower latency, but doubling the effective data rate is a huge advantage for gaming. When DDR became commercially available, the Platform/XBox was the first console to use DDR memory, while the Platform/{{Playstation 2}}, and later 3, used the competing Rambus DRAM (see below). Each generation of DDR has reduced the operating voltage, which means it uses less power for each memory transfer. However, increasing speeds mean overall power use may still be higher.
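To put "twice per clock cycle" into numbers, a minimal back-of-the-envelope sketch; the module speeds below are common examples chosen for illustration, and the 64-bit figure is the data width of a standard DIMM. Effective transfers per second are double the I/O clock, and peak bandwidth is transfers per second times the bus width:

```c
/* Minimal sketch: peak theoretical bandwidth of a DDR-style module.
   DDR transfers data on both clock edges, so
   transfers/s = 2 x I/O clock; bandwidth = transfers/s x bus width.
   The example clocks are illustrative, not a definitive list. */
#include <stdio.h>

static double ddr_bandwidth_gbs(double io_clock_mhz, int bus_width_bits) {
    double transfers_per_sec = 2.0 * io_clock_mhz * 1e6;     /* double data rate */
    return transfers_per_sec * (bus_width_bits / 8.0) / 1e9; /* bytes -> GB/s */
}

int main(void) {
    /* 64-bit wide DIMMs, illustrative I/O clocks */
    printf("DDR4-3200: %.1f GB/s\n", ddr_bandwidth_gbs(1600.0, 64));
    printf("DDR5-6400: %.1f GB/s\n", ddr_bandwidth_gbs(3200.0, 64));
    return 0;
}
```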



This meant the earlier versions were not that good for graphics. It didn't hurt the UsefulNotes/PlayStation2, which used it for regular memory, not for video memory, but the UsefulNotes/{{Nintendo 64}} did use it for video memory. This was one of many bottlenecks that kept the system from performing as well as its graphics looked.

Rambus DRAM is evidently good for video playback, hence why the [=PS2=] and [=PS3=] are considered such good movie players for their times. The UsefulNotes/PlayStationPortable doesn't use that kind of memory, given that the increased power consumption would drain the battery. This has meant that UMD movie playback on [=TVs=] is notably washed out. It was briefly used in the early 2000s for home [=PCs=]; however, although it was indeed blazing fast, upgrading it was way too expensive due to the high licensing fees that module manufacturers ended up passing down to the consumers, and many motherboard manufacturers felt that the licensing fees Rambus charged was too high (and again, those who put up with the high licensing costs passed the fee down to the consumers- this made both the RAM modules and motherboards appear more expensive than the other option, which is SDRAM).

Although Rambus produced a specification for [=XDR2=], the idea had already effectively been outcompeted by GDDR and was never used. The UsefulNotes/Playstation3 was the last significant product to use this memory type.

to:

This meant the earlier versions were not that good for graphics. It didn't hurt the Platform/PlayStation2, which used it for regular memory, not for video memory, but the Platform/{{Nintendo 64}} did use it for video memory. This was one of many bottlenecks that kept the system from performing as well as its graphics looked.

Rambus DRAM is evidently good for video playback, hence why the [=PS2=] and [=PS3=] are considered such good movie players for their times. The Platform/PlayStationPortable doesn't use that kind of memory, given that the increased power consumption would drain the battery. This has meant that UMD movie playback on [=TVs=] is notably washed out. It was briefly used in the early 2000s for home [=PCs=]; however, although it was indeed blazing fast, upgrading it was way too expensive due to the high licensing fees that module manufacturers ended up passing down to the consumers, and many motherboard manufacturers felt that the licensing fees Rambus charged were too high (and again, those who put up with the high licensing costs passed the fee down to the consumers; this made both the RAM modules and motherboards appear more expensive than the other option, which is SDRAM).

Although Rambus produced a specification for [=XDR2=], the idea had already effectively been outcompeted by GDDR and was never used. The Platform/Playstation3 was the last significant product to use this memory type.



1T-SRAM is something of a NonIndicativeName. The technology uses DRAM (which typically uses 1 transistor, hence "1T"), but has the support circuitry built in to handle DRAM refreshing so that the memory controller on the system doesn't have to do it. This makes it look like SRAM to the rest of the system. However, this limits how fast the RAM can operate. Probably its most famous use was being the system RAM in the [[UsefulNotes/NintendoGameCube GameCube]] and [[UsefulNotes/NintendoWii Wii]].

to:

1T-SRAM is something of a NonIndicativeName. The technology uses DRAM (which typically uses 1 transistor, hence "1T"), but has the support circuitry built in to handle DRAM refreshing so that the memory controller on the system doesn't have to do it. This makes it look like SRAM to the rest of the system. However, this limits how fast the RAM can operate. Probably its most famous use was being the system RAM in the [[Platform/NintendoGameCube GameCube]] and [[Platform/NintendoWii Wii]].


* '''[=DDR5=] RAM''': 4.8GHz-8.4GHz. Launched in mid 2020, and the first modules arriving in mid-2021 with Intel's Alder Lake [=CPUs=] being the first to support the standard. AMD's Zen 4 [=CPUs=] too uses [=DDR5=] memory. DDR5 RAM is a lot more like GDDR6 in that it actually has two parallel access lines to the RAM module, making it more like quad data rate RAM on paper.

to:

* '''[=DDR5=] RAM''': 4.8GHz-8.4GHz. Launched in mid-2020, with the first modules arriving in mid-2021; Intel's Alder Lake [=CPUs=] were the first to support the standard. AMD's Zen 4 [=CPUs=] use [=DDR5=] memory too. [=DDR5=] RAM is a lot more like [=GDDR6=] in that it actually has two parallel access lines to the RAM module as opposed to the predecessors' one, making it more like quad data rate RAM on paper.


* '''[=DDR5=] RAM''': 4.8GHz-8.4GHz. Launched in mid 2020, and the first modules arriving in mid-2021 with Intel's Alder Lake [=CPUs=] being the first to support the standard. AMD's Zen 4 [=CPUs=] too uses [=DDR5=] memory.

to:

* '''[=DDR5=] RAM''': 4.8GHz-8.4GHz. Launched in mid-2020, with the first modules arriving in mid-2021; Intel's Alder Lake [=CPUs=] were the first to support the standard. AMD's Zen 4 [=CPUs=] use [=DDR5=] memory too. DDR5 RAM is a lot more like GDDR6 in that it actually has two parallel access lines to the RAM module, making it more like quad data rate RAM on paper.




GDDR RAM is a variant of DDR designed specifically for use with [=GPUs=]. It allows higher memory bandwidth as well as adding some extra functions, such as the ability to fill whole memory blocks with a single colour. Although based on DDR RAM, it has evolved somewhat separately and so doesn't quite match up in terms of generations. [=GDDR4=] and 5 were both based on [=DDR3=]. This was followed by [=GDDR5X=], which is technically ''quad'' data rate and not really DDR at all. [=GDDR6=] is an evolution of this, diverging further from the standard DDR. The first commercial [=GPUs=] using [=GDDR6=] were released in 2018, and as of mid-2019 it is used by all new [=GPUs=] from both Nvidia and AMD.

to:

GDDR RAM is a variant of DDR designed specifically for use with [=GPUs=]. It allows higher memory bandwidth as well as adding some extra functions, such as the ability to fill whole memory blocks with a single colour. The cost, however, is higher latency, but most of the work [=GPUs=] do is highly predictable, so memory requests can be made ahead of time. Although based on DDR RAM, it has evolved somewhat separately and so doesn't quite match up in terms of generations. [=GDDR4=] and 5 were both based on [=DDR3=]. This was followed by [=GDDR5X=], which is technically ''quad'' data rate and not really DDR at all. [=GDDR6=] is an evolution of this, diverging further from the standard DDR. The first commercial [=GPUs=] using [=GDDR6=] were released in 2018, and as of mid-2019 it is used by all new [=GPUs=] from both Nvidia and AMD.
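As a loose, CPU-side analogy for issuing memory requests ahead of time (GPUs handle this in their memory controllers; the sketch below only illustrates the latency-hiding idea), GCC/Clang's __builtin_prefetch hints that data will be needed soon. The prefetch distance of 16 elements is an arbitrary assumption:

```c
/* Minimal sketch: hiding memory latency by requesting data before it is used.
   Uses GCC/Clang's __builtin_prefetch as a CPU-side stand-in for the idea;
   the prefetch distance of 16 elements is an arbitrary illustrative choice. */
#include <stddef.h>

double sum_with_prefetch(const double *data, size_t n) {
    double total = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&data[i + 16], 0, 1); /* ask for data early */
        total += data[i];
    }
    return total;
}
```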



Probably a good middle ground between standard memory and embedded memory. It has average clock speed and bandwidth, but also average capacity, combined with a latency only slightly slower than EDRAM. The Game Cube and the Wii use this kind of memory (yes the Wii uses this memory, EDRAM, and [=GDDR3=] all at once).

to:

1T-SRAM is something of a NonIndicativeName. The technology uses DRAM (which typically uses 1 transistor, hence "1T"), but has the support circuitry built in to handle DRAM refreshing so that the memory controller on the system doesn't have to do it. This makes it look like SRAM to the rest of the system. However, this limits how fast the RAM can operate. Probably its most famous use was being the system RAM in the [[UsefulNotes/NintendoGameCube GameCube]] and [[UsefulNotes/NintendoWii Wii]].



HBM is the result of a wall that GDDR type memory hit. That is, even though [=GDDR5=] has reached an impressive 7.0 [=GHz=], it takes ''a lot'' of power to run it. In order to reduce power consumption and increase memory bandwidth at the same time, AMD teamed up with memory manufacturer Hynix to create HBM. The idea is to simply stack RAM dies on top of each other, use high density through-silicon-vias as communication channels, and use an interposing layer as the base that the GPU also sits on to talk to the memory. The result is a staggering 4096-bit bus interface in its first implementation and dozens of watts in power savings for the same amount of memory. The concept is similar to package-on-package manufacturing used in system-on-chip companies, where the processor is stacked on top of RAM in the same package. HBM is currently in it’s second generation, with the only difference from the first generation being a faster signalling speed, the memory bus is still 4096-bit wide.

As of 2021, usage of HBM has be mostly relegated to top-end GPU designs. It's likely that for volume production, HBM may present yield issues. If the final assembly has any problems, it's much harder to troubleshoot and isolate what is the problem in order to fix it. The discrete VRAM designs are modular and easier to isolate a troublesome component to replace it.

to:

HBM is the result of a wall that GDDR type memory hit. That is, even though [=GDDR5=] has reached an impressive 7.0 [=GHz=], it takes ''a lot'' of power to run it. In order to reduce power consumption and increase memory bandwidth at the same time, AMD teamed up with memory manufacturers Samsung and SK Hynix to create HBM. The idea is to simply stack RAM dies on top of each other, use high density through-silicon-vias as communication channels, and use an interposing layer as the base that the GPU also sits on to talk to the memory. The result is a staggering 4096-bit bus interface in its first implementation and dozens of watts in power savings for the same amount of memory. The concept is similar to package-on-package manufacturing used by system-on-chip companies, where the processor is stacked on top of RAM in the same package. HBM is currently in its third generation, with the only difference from the first generation being a faster signalling speed; the memory bus is still 4096-bit wide.

As of 2023, usage of HBM has been mostly relegated to top-end GPU designs. The main issue is cost, with the last reported cost of HBM2 being nearly 3x that of GDDR5, and that's without the interposer. In addition, it's likely that for volume production, HBM may present yield issues. If the final assembly has any problems, it's much harder to troubleshoot and isolate the problem in order to fix it. Discrete GDDR-based VRAM designs are modular, making it easier to isolate a troublesome component and replace it.
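To see why a very wide bus at a modest per-pin rate can still win, a quick comparison sketch; the per-pin rates are round illustrative numbers, not exact product specifications:

```c
/* Minimal sketch: why a very wide, slower bus can beat a narrow, fast one.
   Per-pin data rates below are illustrative round numbers, not exact specs. */
#include <stdio.h>

static double bus_bandwidth_gbs(int width_bits, double gbits_per_pin) {
    return width_bits * gbits_per_pin / 8.0; /* GB/s */
}

int main(void) {
    /* HBM-like: 4096-bit stack interface at ~1 Gbit/s per pin (assumed) */
    printf("HBM-like:   %.0f GB/s\n", bus_bandwidth_gbs(4096, 1.0));
    /* GDDR5-like: 256-bit card bus at ~7 Gbit/s per pin (assumed) */
    printf("GDDR5-like: %.0f GB/s\n", bus_bandwidth_gbs(256, 7.0));
    return 0;
}
```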


Currently, we're into the fourth generation of DDR RAM. The generations are as follows:

to:

Currently, we're into the fifth generation of DDR RAM. The generations are as follows:



* '''[=DDR5=] RAM''': 4.8GHz-6.4GHz. Launched in mid 2020, and the first modules arriving in mid-2021 with Intel's Alder Lake [=CPUs=] being the first to support the standard. AMD's Zen 4 [=CPUs=] too uses [=DDR5=] memory.

to:

* '''[=DDR5=] RAM''': 4.8GHz-8.4GHz. Launched in mid-2020, with the first modules arriving in mid-2021; Intel's Alder Lake [=CPUs=] were the first to support the standard. AMD's Zen 4 [=CPUs=] use [=DDR5=] memory too.
Zen 4 CPUs have launched.


* '''[=DDR5=] RAM''': 4.8GHz-6.4GHz. Launched in mid 2020, and the first modules arriving in mid-2021 with Intel's Alder Lake [=CPUs=] being the first to support the standard. AMD's upcoming Zen 4 [=CPUs=] too will use [=DDR5=] memory.

to:

* '''[=DDR5=] RAM''': 4.8GHz-6.4GHz. Launched in mid-2020, with the first modules arriving in mid-2021; Intel's Alder Lake [=CPUs=] were the first to support the standard. AMD's Zen 4 [=CPUs=] use [=DDR5=] memory too.


Memory is not all about capacity. Unless a system or game is idle, memory will not stay with the same data indefinitely. It's constantly moving data on and off the memory chips to handle the ever changing data. In other words, capacity is important, but so is how fast it can move data on and off the chip. In situations where the machine has to multitask (such as [=PCs=], UsefulNotes/PlayStation3, and UsefulNotes/XBox360), capacity can increase performance, but the returns diminish quickly (i.e., if you double the RAM, it might really boost performance, but if you double it again, it won't do much). More available RAM is helpful for storing more data that you wish to use immediately. It prevents more frequent access to the slower hard drive/DVD/Blu-Ray disk, which to a processor takes an eternity.

to:

Memory is not all about capacity. Unless a system or game is idle, memory will not hold the same data indefinitely. It's constantly moving data on and off the memory chips to handle the ever-changing data. In other words, capacity is important, but so is how fast it can move data on and off the chip. In situations where the machine has to multitask (such as [=PCs=], UsefulNotes/PlayStation3, and UsefulNotes/XBox360), capacity can increase performance, but the returns diminish quickly (i.e., if you double the RAM, it might really boost performance, but if you double it again, it won't do much). More available RAM is helpful for storing more data that you wish to use immediately. It prevents more frequent access to the slower hard drive/DVD/Blu-Ray disc, which to a processor takes an eternity.


Not to be confused with the album by Music/DaftPunk.

to:

Not to be confused with [[Music/RandomAccessMemories the album]] by Music/DaftPunk.

misuse


Just in case you are wondering, it's pronounced "cash", not "[[ItIsPronouncedTroPAY ca-shay]]". This is memory stuck right in the CPU. Why? Well often the CPU needs to store data for certain processing. It doesn't need to store a lot, but it needs to store it in memory as fast as possible. Cache fills that purpose. By sticking it right inside the processor, the latency is no slower than the processor's, and the clock speed matches. It does mean that the cache can only be so large. The 360 has the most cache of any home consoles, and it's just 1 megabyte in size. But it's the speed that counts, since it's designed to keep up with the CPU. Many modern [=PCs=] and consoles have multiple cache levels of varying sizes, for example a modern PC from 2010 has at least three levels. Usually, Level 1 is the fastest, but has the smallest amount of storage, and as the level goes higher, the speed is reduced but storage amount is increased.

to:

Just in case you are wondering, it's pronounced "cash", not "ca-shay". This is memory stuck right in the CPU. Why? Well, often the CPU needs to store data for certain processing. It doesn't need to store a lot, but it needs to store it in memory as fast as possible. Cache fills that purpose. By sticking it right inside the processor, the latency is no slower than the processor's, and the clock speed matches. It does mean that the cache can only be so large. The 360 has the most cache of any home console, and it's just 1 megabyte in size. But it's the speed that counts, since it's designed to keep up with the CPU. Many modern [=PCs=] and consoles have multiple cache levels of varying sizes; for example, a modern PC from 2010 has at least three levels. Usually, Level 1 is the fastest, but has the smallest amount of storage, and as the level goes higher, the speed is reduced but the storage amount is increased.
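A minimal sketch of why cache behaviour, not just capacity, dominates: the two loops below read every element of the same buffer exactly once, but the sequential walk reuses whole cache lines while the large-stride walk defeats them, so it typically runs several times slower. The buffer and stride sizes are arbitrary assumptions:

```c
/* Minimal sketch: identical work, very different cache behaviour.
   Buffer and stride sizes are arbitrary illustrative choices. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64u * 1024 * 1024)      /* 64 Mi ints, far larger than any cache */

static long walk(const int *a, size_t stride) {
    long sum = 0;
    for (size_t start = 0; start < stride; start++)
        for (size_t i = start; i < N; i += stride)
            sum += a[i];           /* every element is read exactly once */
    return sum;
}

int main(void) {
    int *a = calloc(N, sizeof *a);
    if (!a) return 1;

    clock_t t0 = clock();
    long s1 = walk(a, 1);          /* sequential: streams whole cache lines */
    clock_t t1 = clock();
    long s2 = walk(a, 4096);       /* strided: nearly every read is a miss  */
    clock_t t2 = clock();

    printf("sums %ld/%ld, sequential %.2fs, strided %.2fs\n", s1, s2,
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}
```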


* ''Upper Memory Area (UMA)'': On many systems, it is possible to populate the motherboard with more RAM beyond 640k, up to 1MB. This excess RAM area is known as the Upper Memory Area and co-exists with "memory holes" intended for communication with EMS cards and other peripherals. On systems where more memory is needed, more then 640k worth of RAM DIP-chips are installed, and the OS then cleverly figures out which part of the UMA is free and which are "memory holes" meant for communication with expansion cards, and marks them accordingly. These UMA locations can then be "backfilled" by UMA-aware programs, in blocks.

to:

* ''Upper Memory Area (UMA)'': On many systems, it is possible to populate the motherboard with more RAM beyond 640k, up to 1MB. This excess RAM area is known as the Upper Memory Area and co-exists with "memory holes" intended for communication with EMS cards and other peripherals. On systems where more memory is needed, more than 640k worth of RAM DIP-chips are installed, and the OS and device drivers work in tandem, marking which parts of the UMA are memory holes and which are extra UMA memory. These UMA locations can then be "backfilled" by UMA-aware programs, in blocks.
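For orientation, a minimal sketch of how a real-mode 8086-style address is formed and where the 640k boundary and the UMA sit in the 1MB address space; the layout shown is the conventional one, and the exact location of holes and the EMS page frame varies by machine:

```c
/* Minimal sketch: 8086 real-mode addressing and the 640k/1MB layout.
   physical address = segment * 16 + offset, giving a 1MB (20-bit) space.
   Conventional memory ends at 0x9FFFF (640k); the UMA runs 0xA0000-0xFFFFF,
   shared between RAM backfill and memory-mapped devices (exact holes vary). */
#include <stdio.h>
#include <stdint.h>

static uint32_t real_mode_addr(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;   /* segment * 16 + offset */
}

int main(void) {
    printf("top of conventional memory: 0x%05X\n",
           (unsigned)real_mode_addr(0x9FFF, 0x000F));
    printf("start of UMA (VGA memory):  0x%05X\n",
           (unsigned)real_mode_addr(0xA000, 0x0000));
    printf("typical EMS page frame:     0x%05X\n",
           (unsigned)real_mode_addr(0xE000, 0x0000));
    return 0;
}
```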


* '''[=DDR5=] RAM''': 4.8GHz-6.4GHz. Launched in mid 2020, and the first modules arriving in mid-2021 with Intel's Alder Lake [=CPUs=] being the first to support the standard. AMD'S upcoming Zen 4 [=CPUs=] too will be [=DDR5=].

to:

* '''[=DDR5=] RAM''': 4.8GHz-6.4GHz. Launched in mid-2020, with the first modules arriving in mid-2021; Intel's Alder Lake [=CPUs=] were the first to support the standard. AMD's upcoming Zen 4 [=CPUs=] will use [=DDR5=] memory too.


* '''[=DDR4=] RAM''': 1.6GHz-4.3GHz. Memory module size starts at 4GB. The RAM reached it's full 4.3GHz potential by the end of 2016. As of late 2019, [=CPUs=] still only officially support a maximum RAM speed of 3.2GHz. However, newer [=CPUs=] unofficially support much higher speeds, with over 5GHz possible with overclocking. The last consumer chips to use [=DDR3=] RAM are the AMD Zen 3 and Intel Rocket Lake [=CPUs=].

to:

* '''[=DDR4=] RAM''': 1.6GHz-4.3GHz. Memory module size starts at 4GB. The RAM reached its full 4.3GHz potential by the end of 2016. As of late 2019, [=CPUs=] still only officially support a maximum RAM speed of 3.2GHz. However, newer [=CPUs=] unofficially support much higher speeds, with over 5GHz possible with overclocking. The last consumer chips to use [=DDR4=] RAM are the AMD Zen 3 and Intel Rocket Lake [=CPUs=].


* '''[=DDR4=] RAM''': 1.6GHz-4.3GHz. Memory module size starts at 4GB. JEDEC (the body governing the RAM standard) is already warning that the RAM will reach it's full 4.3GHz potential by the end of 2016 [[https://en.wikipedia.org/wiki/DDR5_SDRAM and wants to start work on DDR5 RAM soon, aiming for a release date of 2020]]. As of late 2019, [=CPUs=] still only officially support a maximum RAM speed of 3.2GHz, and [=DDR5=] seems a way off yet. However, newer [=CPUs=] unofficially support much higher speeds, with over 5GHz possible with overclocking, so the end of [=DDR4=] is certainly in sight.

to:

* '''[=DDR4=] RAM''': 1.6GHz-4.3GHz. Memory module size starts at 4GB. The RAM reached its full 4.3GHz potential by the end of 2016. As of late 2019, [=CPUs=] still only officially support a maximum RAM speed of 3.2GHz. However, newer [=CPUs=] unofficially support much higher speeds, with over 5GHz possible with overclocking. The last consumer chips to use [=DDR3=] RAM are the AMD Zen 3 and Intel Rocket Lake [=CPUs=].
* '''[=DDR5=] RAM''': 4.8GHz-6.4GHz. Launched in mid 2020, and the first modules arriving in mid-2021 with Intel's Alder Lake [=CPUs=] being the first to support the standard. AMD'S upcoming Zen 4 [=CPUs=] too will be [=DDR5=].



HBM is the result of a wall that GDDR type memory hit. That is, even though [=GDDR5=] has reached an impressive 7.0 [=GHz=], it takes ''a lot'' of power to run it. In order to reduce power consumption and increase memory bandwidth at the same time, AMD teamed up with memory manufacturer Hynix to create HBM. The idea is to simply stack RAM dies on top of each other, use high density through-silicon-vias as communication channels, and use an interposing layer as the base that the GPU also sits on to talk to the memory. The result is a staggering 4096-bit bus interface in its first implementation and dozens of watts in power savings for the same amount of memory. The concept is similar to package-on-package manufacturing used in system-on-chip companies, where the processor is stacked on top of RAM in the same package. HBM is currently in it’s second generation, with the only difference from the first generation being a faster signalling speed, the memory bus is still 4096-bit wide. However, it seems that as of 2019, most manufacturers have given up on HBM memory and have moved back to GDDR type memory, with AMD switching to [=GDDR6=] for it’s ''Navi'' architecture GPUs, while NVIDIA had switched back to [=GDDR6=] with the GTX1660 cards.

to:

HBM is the result of a wall that GDDR type memory hit. That is, even though [=GDDR5=] has reached an impressive 7.0 [=GHz=], it takes ''a lot'' of power to run it. In order to reduce power consumption and increase memory bandwidth at the same time, AMD teamed up with memory manufacturer Hynix to create HBM. The idea is to simply stack RAM dies on top of each other, use high density through-silicon-vias as communication channels, and use an interposing layer as the base that the GPU also sits on to talk to the memory. The result is a staggering 4096-bit bus interface in its first implementation and dozens of watts in power savings for the same amount of memory. The concept is similar to package-on-package manufacturing used by system-on-chip companies, where the processor is stacked on top of RAM in the same package. HBM is currently in its second generation, with the only difference from the first generation being a faster signalling speed; the memory bus is still 4096-bit wide.

As of 2021, usage of HBM has been mostly relegated to top-end GPU designs. It's likely that for volume production, HBM may present yield issues. If the final assembly has any problems, it's much harder to troubleshoot and isolate the problem in order to fix it. Discrete VRAM designs are modular, making it easier to isolate a troublesome component and replace it.


Rambus DRAM is evidently good for video playback, hence why the [=PS2=] and [=PS3=] are considered such good movie players for their times. The UsefulNotes/PlayStationPortable doesn't use that kind of memory, given that the increased power consumption would drain the battery. This has meant that UMD movie playback on [=TVs=] is notably washed out. It was briefly used in the early 2000s for home [=PCs=]; however, although it was indeed blazing fast, upgrading it was way too expensive, and on the manufacturing end, many motherboard manufacturers felt that the licensing fees Rambus charged was too high.

to:

Rambus DRAM is evidently good for video playback, hence why the [=PS2=] and [=PS3=] are considered such good movie players for their times. The UsefulNotes/PlayStationPortable doesn't use that kind of memory, given that the increased power consumption would drain the battery. This has meant that UMD movie playback on [=TVs=] is notably washed out. It was briefly used in the early 2000s for home [=PCs=]; however, although it was indeed blazing fast, upgrading it was way too expensive due to the high licensing fees that module manufacturers ended up passing down to the consumers, and many motherboard manufacturers felt that the licensing fees Rambus charged were too high (and again, those who put up with the high licensing costs passed the fee down to the consumers; this made both the RAM modules and motherboards appear more expensive than the other option, which is SDRAM).


HBM is the result of a wall that GDDR type memory hit. That is, even though [=GDDR5=] has reached an impressive 7.0 [=GHz=], it takes ''a lot'' of power to run it. In order to reduce power consumption and increase memory bandwidth at the same time, AMD teamed up with memory manufacturer Hynix to create HBM. The idea is to simply stack RAM dies on top of each other, use high density through-silicon-vias as communication channels, and use an interposing layer as the base that the GPU also sits on to talk to the memory. The result is a staggering 4096-bit bus interface in its first implementation and dozens of watts in power savings for the same amount of memory. The concept is similar to package-on-package manufacturing used in system-on-chip companies, where the processor is stacked on top of RAM in the same package. HBM is currently in it’s second generation, with the only difference from the first generation being a faster signalling speed, the memory bus is still 4096-bit wide. However, it seems that as of 2019, most manufacturers have given up on HBM memory and have moved back to GDDR type memory, with AMD switching to [=GDDR6=] for it’s ''Navi'' architecture GPUS.

to:

HBM is the result of a wall that GDDR type memory hit. That is, even though [=GDDR5=] has reached an impressive 7.0 [=GHz=], it takes ''a lot'' of power to run it. In order to reduce power consumption and increase memory bandwidth at the same time, AMD teamed up with memory manufacturer Hynix to create HBM. The idea is to simply stack RAM dies on top of each other, use high density through-silicon-vias as communication channels, and use an interposing layer as the base that the GPU also sits on to talk to the memory. The result is a staggering 4096-bit bus interface in its first implementation and dozens of watts in power savings for the same amount of memory. The concept is similar to package-on-package manufacturing used by system-on-chip companies, where the processor is stacked on top of RAM in the same package. HBM is currently in its second generation, with the only difference from the first generation being a faster signalling speed; the memory bus is still 4096-bit wide. However, it seems that as of 2019, most manufacturers have given up on HBM memory and have moved back to GDDR type memory, with AMD switching to [=GDDR6=] for its ''Navi'' architecture GPUs, while NVIDIA had switched back to [=GDDR6=] with the GTX1660 cards.


HBM is the result of a wall that GDDR type memory hit. That is, even though [=GDDR5=] has reached an impressive 7.0 [=GHz=], it takes ''a lot'' of power to run it. In order to reduce power consumption and increase memory bandwidth at the same time, AMD teamed up with memory manufacturer Hynix to create HBM. The idea is to simply stack RAM dies on top of each other, use high density through-silicon-vias as communication channels, and use an interposing layer as the base that the GPU also sits on to talk to the memory. The result is a staggering 4096-bit bus interface in its first implementation and dozens of watts in power savings for the same amount of memory. The concept is similar to package-on-package manufacturing used in system-on-chip companies, where the processor is stacked on top of RAM in the same package. HBM is currently in it’s second generation, with the only difference from the first generation being a faster signalling speed, the memory bus is still 4096-bit wide. However, it seems that most manufacturers have given up on HBM memory and have moved back to GDDR type memory.

to:

HBM is the result of a wall that GDDR type memory hit. That is, even though [=GDDR5=] has reached an impressive 7.0 [=GHz=], it takes ''a lot'' of power to run it. In order to reduce power consumption and increase memory bandwidth at the same time, AMD teamed up with memory manufacturer Hynix to create HBM. The idea is to simply stack RAM dies on top of each other, use high density through-silicon-vias as communication channels, and use an interposing layer as the base that the GPU also sits on to talk to the memory. The result is a staggering 4096-bit bus interface in its first implementation and dozens of watts in power savings for the same amount of memory. The concept is similar to package-on-package manufacturing used by system-on-chip companies, where the processor is stacked on top of RAM in the same package. HBM is currently in its second generation, with the only difference from the first generation being a faster signalling speed; the memory bus is still 4096-bit wide. However, it seems that as of 2019, most manufacturers have given up on HBM memory and have moved back to GDDR type memory, with AMD switching to [=GDDR6=] for its ''Navi'' architecture GPUs.
