Monday, August 2, 2021

When Will Ray Tracing Come to Midrange GPUs?

AMD’s 6600 XT announcement last week wasn’t about ray tracing — the company didn’t mention the feature much one way or the other — but it got me thinking. We’re coming up on three years since Turing introduced ray tracing to high-end and ultra-high-end graphics cards. Last fall, Nvidia introduced Ampere, which featured substantially better ray tracing performance than Turing, and AMD countered with RDNA2. While RDNA2 is not as fast in ray tracing workloads as Ampere, it’s typically faster than the RTX 2000 family.

The introduction and steady improvement of ray tracing is following a standard adoption pattern, but only at the high end of the graphics card market. Users who regularly spend more than $330 on a GPU have had the option to buy into Turing and Ampere from Nvidia and will soon have the RX 6600 XT from AMD. Nobody shopping below the $330 price point has gotten so much as a whisper of support (assuming, of course, that a sub-$330 market existed at all).

This article discusses GPU MSRPs as set by AMD and Nvidia rather than the prices cards practically sell for right now, but one obvious reason AMD and Nvidia may have slowed their GPU introductions is that the market has overwhelmingly encouraged them to sell as many high-end cards as possible. The plethora of rumors around GPUs like the RTX 3080 Ti and 3070 Ti illustrates how Nvidia rewrote its launch plans around those two cards, and there’s no reason the same shortages and overwhelming demand couldn’t have impacted the introduction of lower-end Ampere or RDNA2 GPUs. Absent the pandemic, we might have seen RTX or ray tracing-capable AMD GPUs below $300 already.

It’s also possible that neither Nvidia nor AMD is in any particular hurry to push the feature lower in the stack than that, for a mixture of practical, technical, and financial reasons. Ray tracing puts a considerable burden on modern GPUs. Most games that offer it can’t run it very well at 4K with all other detail levels set to maximum. Even at the top of the stack, there are often tradeoffs between enabling RT and playing at maximum resolution. We’ve also seen evidence that 8GB is not enough for some GPUs to maintain playable frame rates at 4K because the cards run out of VRAM:

As our 6700 XT testing showed, there are times when the RTX 3070 goes from the top of the stack to the bottom because it runs out of VRAM in 4K ray tracing workloads. The 12GB RX 6700 XT drops from 23 fps to 16 fps in Watch Dogs Legion with ray tracing when it shifts from 1440p to 4K. The 8GB RTX 3070 drops from 67 fps to 14 fps. This is the difference between being underpowered and being incapable: neither GPU can run the game at a playable frame rate, but only one of them is running out of VRAM. We see a similar pattern in Godfall at 4K, where the RX 6700 XT drops from 72.7 fps to 37 fps and the RTX 3070 drops from 82.5 fps to 11.1 fps.
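
To put a number on the difference between underpowered and incapable, here’s a quick back-of-the-envelope sketch using the Watch Dogs Legion frame rates quoted above (the percentages are illustrative arithmetic, not new benchmark data):

```python
# 1440p -> 4K frame rates cited above for Watch Dogs Legion with ray tracing.
results = {
    "RX 6700 XT (12GB)": (23.0, 16.0),
    "RTX 3070 (8GB)":    (67.0, 14.0),
}

for gpu, (fps_1440p, fps_4k) in results.items():
    drop = (1 - fps_4k / fps_1440p) * 100
    print(f"{gpu}: {fps_1440p} -> {fps_4k} fps ({drop:.0f}% drop)")

# The 6700 XT loses roughly 30 percent of its performance moving to 4K,
# while the 3070 loses roughly 79 percent -- the signature of a card running
# out of VRAM rather than simply running out of shader horsepower.
```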

What’s striking about this situation is that it’s a deliberate choice on Nvidia’s part and possibly on AMD’s as well.

Relative to the GTX 1080, the GTX 1060 offered 50 percent as many texture mapping units and GPU cores and 75 percent as many render outputs. Relative to the RTX 3080, the RTX 3060 offers roughly 40 percent as many cores and texture mapping units and just 50 percent as many render outputs. The GTX 1080 launched at $599 or $699, depending on whether you bought the Founders Edition. The RTX 3080 launched at $699. Clock speed is not a significant factor in these comparisons; all of the GPUs discussed run at very similar speeds.

When Nvidia launched the GTX 1060 in 2016, the 6GB model debuted at $250. The RTX 3060 debuted at a significantly higher $329, yet it is weaker against the RTX 3080 than its predecessor was against the GTX 1080. While the RTX 3060 does offer a 12GB SKU, that configuration was likely created to compete more effectively against AMD’s RX 6700 XT — the RTX 3060 Ti and RTX 3070 are both 8GB cards, as is the RTX 3070 Ti.

It’s the RTX 3060 Ti that offers 55 percent as many cores as the RTX 3080 and 83 percent of its ROPs. A customer who bought a GTX 1060 when the GTX 1080 was the hottest thing around would need to buy an RTX 3060 Ti to be in the same position relative to the RTX 3080. That’s not one price band jump — it’s two, from ~$250 to ~$400 at MSRP.
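
For anyone who wants to check the stack math, here’s a minimal sketch assuming the commonly published shader/TMU/ROP counts for these cards. The spec figures come from public spec sheets rather than from our testing, so treat them as assumptions:

```python
# Assumed spec-sheet figures: (shaders, TMUs, ROPs). Not sourced from this article.
specs = {
    "GTX 1060":    (1280,  80, 48),
    "GTX 1080":    (2560, 160, 64),
    "RTX 3060":    (3584, 112, 48),
    "RTX 3060 Ti": (4864, 152, 80),
    "RTX 3080":    (8704, 272, 96),
}

def relative(card, flagship):
    """Return each resource as a rounded percentage of the flagship's count."""
    return tuple(round(a / b * 100) for a, b in zip(specs[card], specs[flagship]))

print("GTX 1060 vs GTX 1080:   ", relative("GTX 1060", "GTX 1080"))
print("RTX 3060 vs RTX 3080:   ", relative("RTX 3060", "RTX 3080"))
print("RTX 3060 Ti vs RTX 3080:", relative("RTX 3060 Ti", "RTX 3080"))

# Roughly (50, 50, 75), (41, 41, 50), and (56, 56, 83): it's the 3060 Ti,
# not the 3060, that occupies the slot the GTX 1060 held against its flagship.
```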

It is much harder to evaluate AMD’s position. From the introduction of Fury (which offered a refined GCN and a limited 4GB of HBM) through the RX 480 and RX 580 era, AMD’s midrange GPUs were based on a different architecture than its high-end cards. The performance gap between the 5700 XT and the 5600 XT ranged from 1.15x to 1.3x at 1440p. We don’t yet know what the performance gap between the 6600 XT and 6700 XT will be and don’t wish to speculate, but the 6600 XT is within 25 percent of the 6700 XT’s clocks and core count. The major differences between the two GPUs are a dramatic reduction in L3 cache (from 96MB to 32MB) and a one-third reduction in memory bandwidth.

Another striking facet of the situation is how GPU VRAM at a given price has stopped growing below the $400 mark, with the solitary exception of the 12GB RTX 3060. In 2010, $249 bought you a Radeon HD 5770 with 1GB of VRAM. In 2014, that same $249 bought you an R9 285 with 2GB of VRAM. In 2016, $249 bought you an 8GB RX 480. In six years, the amount of VRAM you could buy at that price point octupled. Five years later, $200 – $300 buys you either a Radeon RX 5600 XT (6GB, no ray tracing) or a GTX 1660 / 1660 Ti (6GB, no ray tracing). Overall performance in this segment has increased, but VRAM loadouts haven’t. 8GB is probably enough for 1080p ray tracing, but 6GB might not be, especially when paired with the smaller memory buses on these cards.
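
Here’s that VRAM-per-~$250 history as a quick sketch, using the cards and capacities cited above (the prices are approximate MSRPs, and the 2021 entries are the cards named in this paragraph):

```python
# VRAM available for roughly $250, by year, per the examples in the text.
history = [
    (2010, "Radeon HD 5770",           1),
    (2014, "R9 285",                   2),
    (2016, "RX 480",                   8),
    (2021, "GTX 1660 Ti / RX 5600 XT", 6),
]

base_vram = history[0][2]
for year, card, vram in history:
    print(f"{year}: {card:<26} {vram} GB ({vram / base_vram:.0f}x the 2010 baseline)")

# 2010 -> 2016: capacity at the price point grows 8x.
# 2016 -> 2021: it actually shrinks, despite five more years of progress.
```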

The RTX 3060 is capable of delivering playable frame rates of around 30fps at 1440p, according to Eurogamer’s review. We also have some idea of what the RTX 3050 is capable of: reviews of that solution in a laptop suggest it is hampered by its 4GB frame buffer, and enabling ray tracing hits the GPU hard. The RTX 3050 Ti takes a 23 percent performance hit against the RTX 3060 at the same TDP, and its lack of VRAM causes real problems in certain games. We can expect the desktop version to perform a bit better, but it’s unlikely to be a drastic overhaul.

It is much harder to compare AMD in the same fashion because the company was restricted to 4GB of VRAM with Fury in 2015, and Vega bombed two years later. One could plausibly argue that AMD lacked effective high-end competition against Nvidia from mid-2016 until the launch of RDNA2 in late 2020. But neither company appears to be in any great hurry to push ray tracing into the putative midrange market.

One explanation for this is that both companies may be waiting to introduce ray tracing at the midrange until the advent of 5nm, when the performance boost from a new node helps push the feature lower in the product stack. The pandemic may have also forced Nvidia and AMD to trim their plans for boosted VRAM capacities in this market segment; there has been a shortage of GDDR6 for the past 10 months or so. AMD may have even opted to release smaller RDNA2 GPUs as a way of ensuring it could ship as many cards as possible at a time when capacity is severely constrained and much of its manufacturing allocation is obligated to the Xbox Series S|X and PlayStation 5.

But while we have to acknowledge the pandemic’s impact on the broader market, that’s been a 12-month affair, not a 60-month one. GPU VRAM capacities below $400 have been largely stagnant for five years (RTX 3060 excluded). If Nvidia were serious about pushing ray tracing out to gamers, the RTX 3060 Ti would be priced like the RTX 3060, and the RTX 3060 would be a drop-in replacement for the five-year-old GTX 1060 at $249. We’ll wait for the 6600 XT to launch before pronouncing judgment, but at $100 more expensive than the RX 5600 XT, it’s no more a replacement for that GPU than the RTX 3060 is for the GTX 1060.

Nvidia and AMD are both making more money than they ever have before; Nvidia earned more in the first quarter of 2021 than it earned in full-year 2016. Shortages and the realities of the COVID-19 pandemic may have impacted both companies in the short term, but neither has made any effort to deliver a strong, ray-tracing capable GPU at a reasonable midrange price. Instead, they’ve pulled the price of historically midrange cards higher and have thus far kept ray tracing performance confined above the $300 price point almost three years after the feature’s introduction. The 6600 XT’s high price indicates AMD has little interest in changing that this year.

This, again, is not good for PC gaming. In the past, it was possible for PC gamers to look down their noses at consoles. Today, PC gamers ought to be seriously asking what the hell is going on. The PlayStation 5 GPU is a 2304:144:64 configuration with 448GB/s of memory bandwidth shared between the CPU and GPU. The Radeon RX 6700 XT is a 2560:160:64 GPU with 12GB of dedicated GDDR6 and 384GB/s of bandwidth. Both run at clock speeds over 2GHz, though the 6700 XT is a bit faster. The 6700 XT does have a large L3 cache where the PlayStation 5 (as far as we know) does not, but it’s still noteworthy that the GPU matching the PlayStation 5 costs nearly $500 at MSRP, whereas $500 buys you most of the same GPU plus the rest of a console wrapped around it.

This was not the case in 2013. Back then, the closest GPU to the PS4 was the R9 270, a 1280:80:32 configuration at 900MHz that retailed for $179. The PS4 itself retailed for $399, with an 1152:72:32 configuration. It is absolutely true that the PS4 and Xbox One used lower-end GPUs than the PS5 and Xbox Series S|X do, but neither Microsoft nor Sony is believed to be losing huge amounts of money on each console sold — certainly nothing like Sony did on the PS3 early in that platform’s life. I am not claiming that the Radeon RX 6700 XT is a $200 GPU — let’s not be silly — but the gap between what AMD sold to Sony and what it sold to enthusiasts is much, much larger than it was seven years ago. The huge L3 cache is undoubtedly part of that. So are economies of scale.
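
Putting the two console generations side by side with the configurations quoted above makes the point. This is a rough sketch; the desktop prices in the comments are the $179 launch price cited here and the commonly reported ~$479 MSRP for the 6700 XT:

```python
# Console GPU vs. its closest desktop sibling, using the shader:TMU:ROP
# triples quoted in the text.
pairs = [
    ("PS4 (2013)", (1152, 72, 32), "R9 270", (1280, 80, 32)),       # R9 270: $179
    ("PS5 (2020)", (2304, 144, 64), "RX 6700 XT", (2560, 160, 64)),  # 6700 XT: ~$479
]

for console, c_cfg, desktop, d_cfg in pairs:
    ratios = [round(c / d * 100) for c, d in zip(c_cfg, d_cfg)]
    print(f"{console} vs {desktop}: {ratios}% of (shaders, TMUs, ROPs)")

# Both console GPUs land at roughly (90, 90, 100) percent of their desktop
# counterparts. What changed between generations is that the counterpart now
# costs nearly $500 instead of $179.
```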

The console GPUs are unusually strong for their price points. Their equivalent midrange parts in the PC market are not.

Source: ExtremeTech https://ift.tt/2VnmCAT
