Tuesday, August 24, 2021

AMD Unveils New Ryzen V-Cache Details at HotChips 33

AMD gave us more information about its upcoming V-Cache at Hot Chips this year, the annual conference where semiconductor engineers from across the industry come together to disclose details of their technical achievements over the past 12 months.

Earlier this year, AMD announced that it would not move directly from Zen 3 to Zen 4. Instead, it would iterate on the Zen 3 core by stacking an additional 64MB of 7nm L3 cache vertically on top of the die. AMD claims this can improve performance by up to 15 percent, based on 1080p gaming results; the improvement in other applications is unknown.

On the one hand, this is a tried-and-true tactic in the CPU industry. Intel first slapped an extra 2MB of L3 on the 2003-era Gallatin Xeon, a core that later became the first Pentium 4 Extreme Edition. The modification was good for a 10-20 percent uplift depending on the application. Fast forward nearly 20 years, and the only thing that seems to have changed is the amount of L3 required for the same degree of improvement: 64MB instead of 2MB.

Looking at the situation this way, however, misses the technical achievement AMD is announcing. Chips can be 3D in multiple ways: They may utilize 3D transistor structures (FinFETs, future GAAFETs), or they may stack multiple dies on top of each other. While smartphone SoCs have utilized package-on-package (PoP) mounting for many years, there are thermal restrictions on the types of products that can go this route. It’s very easy to cook your lower die with heat from the upper silicon.

In AMD’s case, the company claims to have integrated its V-Cache SRAM directly above the existing 2D L3 cache. This keeps the stacked die from absorbing the heat dissipated by the ALUs and other hot spots on the CPU die. One tidbit AMD shared today is that it isn’t limited to stacking “just” 64MB of L3 on top of the chip; it could increase the total number of stacked layers. This raises some interesting questions about what the performance benefits of such a large L3 might be. The technique probably faces diminishing marginal returns, but that could change if AMD develops interesting ways to take advantage of such a large L3 cache. We would expect the additional cache bandwidth (2TB/s per chiplet, according to AMD) to take pressure off the memory bus. As such, it might be most beneficial to 32-core and 64-core Threadripper systems, especially above the 64MB mark.
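To illustrate why marginal returns might shrink as the L3 grows, here is a minimal Python sketch using the classic square-root rule of thumb for cache miss rates. The rule itself, the 10 percent baseline miss rate, and the hypothetical workload are assumptions for illustration only, not AMD figures.

```python
# Illustrative sketch only: a toy model of diminishing returns from a larger L3.
# The sqrt rule of thumb (miss rate ~ 1/sqrt(cache size)) and the numbers below
# are assumptions for a hypothetical workload, not AMD data.

BASE_L3_MB = 32          # a Zen 3 chiplet's native L3
BASE_MISS_RATE = 0.10    # assumed miss rate at 32MB for this hypothetical workload

def est_miss_rate(l3_mb: float) -> float:
    """Estimate L3 miss rate with the square-root rule of thumb."""
    return BASE_MISS_RATE * (BASE_L3_MB / l3_mb) ** 0.5

# base, +64MB V-Cache, and two hypothetical additional stacks
for l3 in (32, 96, 160, 224):
    print(f"{l3:>3} MB L3 -> estimated miss rate {est_miss_rate(l3):.3f}")
```

Under these assumptions, the first 64MB stack cuts the estimated miss rate by roughly 40 percent, while each additional stack helps progressively less.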

But talk of >64MB of cache per chiplet is a bit premature considering that V-Cache isn’t on sale yet. AMD claims it can hit similar yields on V-Cache CPUs as on standard chips, implying that we shouldn’t see huge price increases due to yield. Whether AMD raises its prices in general is a separate question, of course. The company claims the hybrid bonding method developed by TSMC offers a 3x improvement in interconnect energy efficiency and a whopping 15x improvement in interconnect density compared with a traditional microbump array. TSMC is using its SoIC (System on Integrated Chips) technology to connect the various dies together. An explicit goal of this process, according to TSMC, is to enable “the heterogeneous integration (HI) of known good dies (KGDs) with different chip sizes, functionalities, and wafer node technologies.” SoIC uses direct copper-to-copper bonding instead of microbumps to improve electrical efficiency.
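For a rough sense of where a density figure like 15x can come from, the short sketch below derives connection density from bond pitch. The specific pitch values (roughly 36 µm for microbumps and 9 µm for hybrid copper bonds) are assumptions for illustration and are not quoted in AMD’s or TSMC’s disclosure here.

```python
# Back-of-the-envelope check of a ~15x density claim. Pitch values are
# assumptions for illustration, not figures from AMD or TSMC.

MICROBUMP_PITCH_UM = 36.0    # assumed microbump pitch
HYBRID_BOND_PITCH_UM = 9.0   # assumed hybrid copper-bond pitch

# Connections per unit area scale with 1 / pitch^2.
density_ratio = (MICROBUMP_PITCH_UM / HYBRID_BOND_PITCH_UM) ** 2
print(f"Hybrid bonding density advantage: ~{density_ratio:.0f}x")  # ~16x
```

Because connections pack in two dimensions, shrinking the pitch by 4x quadruples twice over, landing in the same ballpark as the claimed 15x improvement.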

We expect V-Cache chips in-market late this year or early next, but comments today suggest early next year might be a more likely date. The ongoing semiconductor shortage may have delayed AMD’s plans; news out of the industry this week suggests we may not see a return to normal supply and inventory levels until at least the middle of 2022.

Source: ExtremeTech https://ift.tt/2WmEKLT
