Wednesday, July 15, 2020

DDR5 Memory Specification Finalized: Up to 6400 MT/s, 2TB LRDIMMs

JEDEC, the consortium in charge of DDR technology development and standardization, has announced that it has finalized the DDR5 standard. Time-to-market varies, but historically it has taken 12-18 months to go from a finished specification to consumer-ready products, and JEDEC has indicated it expects a similar time frame here.

So, what’s new in DDR5? A fair number of things. DDR5’s maximum die density is 64Gbit, up from 16Gbit for DDR4. Maximum capacity for standard DIMMs (not LRDIMMs) is 128GB, at speeds of up to DDR5-6400. The burst length has doubled from 8 to 16, and each DIMM now contains two independent 32-bit memory channels rather than a single 64-bit channel.
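
To see how the 64Gbit die density translates into that 128GB standard-DIMM figure, here’s a quick back-of-the-envelope sketch in Python; the 16-die module layout is an illustrative assumption on our part, not something spelled out in the announcement.

```python
# Rough capacity arithmetic, assuming a hypothetical module built from 16
# single-die packages; the per-die densities come from the spec summary above.
GBIT_PER_DIE_DDR4 = 16   # maximum DDR4 die density (Gbit)
GBIT_PER_DIE_DDR5 = 64   # maximum DDR5 die density (Gbit)

def module_capacity_gb(gbit_per_die: int, num_dies: int) -> int:
    """Total capacity in GB for a module with num_dies dies of the given density."""
    return gbit_per_die * num_dies // 8  # divide by 8 to convert Gbit to GB

print(module_capacity_gb(GBIT_PER_DIE_DDR4, 16))  # 32  GB -- a typical max for a DDR4 UDIMM
print(module_capacity_gb(GBIT_PER_DIE_DDR5, 16))  # 128 GB -- matches the standard-DIMM ceiling above
```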

The reason for the channel shift has to do with the increased burst length. Standard cache lines are 64 bytes long, and this is the default expected size for memory operations. With a burst length of 16, a 64-bit channel would fetch 128 bytes / 1024 bits of data, twice the size of a cache line. That wastes a great deal of bandwidth fetching data the CPU didn’t ask for and likely can’t use.

Adopting 2x 32-bit channels per DIMM allows JEDEC to double the burst length without breaking that 64-byte alignment, and it improves efficiency, since the two channels can operate independently of each other. Supply voltages (Vdd and Vddq) have both dropped, from 1.2V with DDR4 to 1.1V with DDR5. That 0.1V decrease is only a third the size of the drop from DDR3 to DDR4, where voltage fell from 1.5V to 1.2V, so it’ll be interesting to see how much of a power advantage DDR5 brings to the table over DDR4.
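
The fetch-size arithmetic behind that change is easy to check. The short Python sketch below is illustrative only; the channel widths and burst lengths are the ones discussed above.

```python
# Bytes delivered by one full burst on a single memory channel.
CACHE_LINE_BYTES = 64

def bytes_per_burst(channel_width_bits: int, burst_length: int) -> int:
    return channel_width_bits * burst_length // 8

ddr4_fetch = bytes_per_burst(64, 8)    # DDR4: 64-bit channel, BL8   -> 64 bytes
wide_ddr5  = bytes_per_burst(64, 16)   # 64-bit channel at BL16      -> 128 bytes (overfetch)
ddr5_fetch = bytes_per_burst(32, 16)   # DDR5: 32-bit channel, BL16  -> 64 bytes

print(ddr4_fetch == CACHE_LINE_BYTES)  # True
print(wide_ddr5)                       # 128 -- twice a standard cache line
print(ddr5_fetch == CACHE_LINE_BYTES)  # True -- back in line with a 64-byte fetch
```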

LRDIMMs are expected to stack up to eight dies per chip, which is where the 2TB figure comes from. These, however, are server parts, not products intended for the standard consumer market.

One thing to keep in mind is that increasing bandwidth does not automatically decrease latency. Different RAM generations with vastly different clocks often end up in roughly the same place on absolute latency, because CAS latencies (measured in clock cycles) rise along with the clock.
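
To illustrate, the sketch below converts CAS latency from clock cycles to nanoseconds. The CAS values are typical retail timings we’ve chosen for illustration (the DDR5 figure is hypothetical, since real-world DDR5 timings weren’t final at the time of writing), not numbers from the JEDEC announcement.

```python
# First-word latency: a DDR data-bus cycle lasts 2000 / data_rate nanoseconds
# (data_rate in MT/s, two transfers per clock).
def cas_latency_ns(data_rate_mt_s: int, cas_cycles: int) -> float:
    return cas_cycles * 2000 / data_rate_mt_s

print(cas_latency_ns(1600, 9))    # DDR3-1600 CL9  -> 11.25 ns
print(cas_latency_ns(3200, 16))   # DDR4-3200 CL16 -> 10.0 ns
print(cas_latency_ns(6400, 40))   # hypothetical DDR5-6400 CL40 -> 12.5 ns
```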

The advantages of DDR5 will be its lower power consumption, higher density, and, in some cases, the extra bandwidth it provides for integrated GPUs. Dual-channel DDR5-6400 will deliver 102.4GB/s of bandwidth to an integrated APU, compared with 51.2GB/s for dual-channel DDR4-3200 and 25.6GB/s for DDR3-1600. These boosts always make a material difference in APU performance, and with Intel and AMD both expected to debut new integrated graphics solutions over the next few years, we’re certain both will be welcome. As always, memory manufacturers will push the spec higher than JEDEC officially supports; Hynix has already talked about its plans for DDR5-8400, which would deliver 134.4GB/s of memory bandwidth in a dual-channel configuration.
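
Those bandwidth figures fall straight out of data rate times bus width. A minimal sketch, assuming a conventional 128-bit (dual-channel) desktop memory bus:

```python
# Peak theoretical bandwidth in GB/s: transfers per second times bytes per transfer.
def peak_bandwidth_gb_s(data_rate_mt_s: int, bus_width_bits: int = 128) -> float:
    return data_rate_mt_s * (bus_width_bits // 8) / 1000

print(peak_bandwidth_gb_s(1600))   # DDR3-1600 -> 25.6 GB/s
print(peak_bandwidth_gb_s(3200))   # DDR4-3200 -> 51.2 GB/s
print(peak_bandwidth_gb_s(6400))   # DDR5-6400 -> 102.4 GB/s
print(peak_bandwidth_gb_s(8400))   # DDR5-8400 -> 134.4 GB/s
```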

In the past, a new DDR generation has typically launched at the data rate where the old standard topped out; DDR5, by contrast, is expected to launch at DDR5-4800 rather than DDR5-3200. One major change to the DRAM specification with DDR5 is the use of onboard voltage regulators. Typically, the voltage regulators for DRAM have been located on the motherboard. Going forward, every type of DIMM will contain its own integrated voltage regulator. This is expected to reduce motherboard cost and complexity, at the price of a somewhat higher cost per DIMM. The advantage, at least in theory, is that the motherboard’s DRAM voltage regulation hardware no longer has to be built for a worst-case scenario; each DIMM provides its own regulation (JEDEC calls this philosophy “pay as you go”).

DDR5 will likely debut next year, possibly in servers first, followed by desktop hardware. This may also imply that companies like AMD will keep AM4 around a little longer than expected; originally, it looked as though we might see DDR5 adoption by 2020 or 2021. If the server market leads on early adoption, late 2021 or early 2022 might be more realistic.

Source: ExtremeTech https://ift.tt/32rFC2u
