Wednesday, January 11, 2023

Intel Announces 4th Gen Xeon Scalable Processors Overflowing With Accelerator Engines

The delays are over: Intel has unveiled its latest 4th Gen Xeon Scalable Processors, code-named Sapphire Rapids. These chips are all about high-performance computing in servers and data centers, and they have the specs to prove it. The 4th Gen Xeon Scalable CPUs offer up to 60 cores per socket, along with accelerator engines for a plethora of demanding workloads.

AI will be a major focus of the 4th Gen Xeon Scalable Processors. Currently, most machine learning models run on GPU hardware, which has been the only practical way to maintain throughput for complex AI. The new Xeons have Intel Advanced Matrix Extensions (AMX), which accelerate the dense matrix multiplication at the heart of many machine learning workloads. Intel claims a 10x increase in AI performance compared with the 3rd Gen Xeon parts. That might not be enough to train a complex model, but it could be enough to run and modify such models without the hassle of using a discrete GPU.
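
As a rough illustration of the kind of workload AMX targets, here is a minimal sketch using PyTorch: a dense bfloat16 matrix multiply on the CPU. On a 4th Gen Xeon, PyTorch's oneDNN backend can dispatch this operation to AMX tile instructions; on other processors it simply falls back to ordinary vector or scalar code. The matrix sizes here are arbitrary and chosen only for the example.

```python
import torch

# Dense GEMM in bfloat16 on the CPU: the operation AMX is built to
# accelerate. Sizes are arbitrary; this is only an illustration.
a = torch.randn(1024, 1024, dtype=torch.bfloat16)
b = torch.randn(1024, 1024, dtype=torch.bfloat16)

with torch.inference_mode():
    c = a @ b  # oneDNN may route this through AMX on Sapphire Rapids

print(c.shape, c.dtype)
```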

That’s not the end of the accelerators; not even close. There are also accelerators for crypto processing, Advanced Vector Extensions, In-Memory Analytics, and more. Each of these technologies benefits specific workloads, adding up to what Intel claims is the most efficient server chip on the market.
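
Whether a given machine exposes some of these engines can be checked from software. The sketch below (Linux-only, and only an illustration) reads /proc/cpuinfo for the AMX and AVX-512 feature flags; the crypto and analytics engines enumerate as PCIe devices rather than CPU flags, so they would not show up here.

```python
# Minimal sketch (Linux-only): look for CPU feature flags that advertise
# AMX and AVX-512 support. Other accelerator engines appear as PCIe
# devices, not cpuinfo flags, so this check covers only AMX/AVX-512.
FLAGS_OF_INTEREST = {"amx_tile", "amx_bf16", "amx_int8", "avx512f"}

cpu_flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            cpu_flags = set(line.split(":", 1)[1].split())
            break

for flag in sorted(FLAGS_OF_INTEREST):
    print(f"{flag}: {'present' if flag in cpu_flags else 'absent'}")
```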

There are technically two classes of products in the Sapphire Rapids family. The Xeon Scalable Processors are intended for general server usage, while the Xeon CPU Max Series chips target high-performance computing and memory-bandwidth-bound workloads. The Xeons use a tiled architecture, which allows Intel to customize the design for different applications. The Extreme Core Count (XCC) and Max Series CPUs utilize a 4-tile layout to support as many cores as possible, but the Max die package drops some accelerator links in favor of expanded memory bandwidth for better handling of large data sets.

The “mainstream” 4th Gen Xeon Scalable chips use a Medium Core Count (MCC) die that drops the core count to 32 or fewer but offers higher clock speeds and lower latency. It’s also worth noting that all the cores in Intel’s latest Xeons are the faster Golden Cove “P-cores,” whereas Raptor Lake and other Intel Core parts currently mix performance and efficiency cores. The Xeons do, however, share Raptor Lake’s support for DDR5 memory and PCIe 5.0, and they add Compute Express Link (CXL) 1.1 for higher data throughput.

Previously, Intel hoped it would be first to market with technologies like DDR5, PCIe 5.0, and CXL for server chips. However, a series of delays gave AMD a chance to launch its Epyc Genoa chips with those features last fall. The delays are reportedly due to the company’s Intel 7 process node, which, despite the name, is actually an improved 10nm technology. Intel went with the name because it considers the enhanced 10nm process comparable to the 7nm designs coming out of TSMC and Samsung fabs.

There are a few dozen models of the new Xeon Scalable processors across the standard and Max product families. They’ll be marketed as Bronze, Silver, Gold, Platinum, and Max SKUs. The most powerful 60-core Platinum Xeon will retail for $17,000, while the top 56-core Max CPU will be a little over $10,000.

Source: ExtremeTech https://ift.tt/M6zAmZi
