The crux of this whole issue is that the 6900 XT is only about 7% faster than the 6800 XT. That means basically zero movement on price-to-performance, assuming the 7800 XT costs $699 (which it likely will).
But I guess that's just the way this generation is gonna go.
I suspect the 7800XT will have a perf/$ improvement, if they ever launch it. It will likely perform around the 6900/6950XT and those cards will be discontinued. That isn’t a bad thing. Both the 6900 and 6950XT are faster than the 6800XT.
Remember, the 6900XT was $1,000 at launch. AMD would be offering that performance for $300-$350 less. Also ignore the x50 refresh for a moment. We will likely get another x50 refresh later this year.
Note that AMD has dropped the 7900XT to $849.
That means that pricing for the 7800XT is likely to come in under that. Hopefully for consumers it won’t be $799, but rather $649-$699.
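Quick back-of-the-envelope on that, using the thread's numbers (the 1.07x and ~1.10x performance figures are rough guesses, not measured averages):

```python
# Back-of-the-envelope perf/$ using this thread's numbers.
# Performance is normalized to the 6800 XT = 1.00; the 1.07 and ~1.10
# figures are rough guesses, not measured averages.
cards = {
    "RX 6800 XT ($649)":  {"price": 649, "perf": 1.00},
    "RX 6900 XT ($999)":  {"price": 999, "perf": 1.07},
    "7800 XT @ $699 (?)": {"price": 699, "perf": 1.10},
    "7800 XT @ $649 (?)": {"price": 649, "perf": 1.10},
}

baseline = cards["RX 6800 XT ($649)"]["perf"] / cards["RX 6800 XT ($649)"]["price"]
for name, c in cards.items():
    ppd = c["perf"] / c["price"]
    print(f"{name:20s} perf/$ x1000 = {1000 * ppd:.3f} ({ppd / baseline - 1:+.1%} vs 6800 XT)")
```

At $699 the perf/$ needle barely moves versus a $649 6800 XT; at $649 it would be a real, if modest, improvement.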
EDIT: part of me wonders if AMD does have chips binned for 3GHz+, but they are saving them for the x50 refresh.
You have to wonder, right? Why sell premium chips first when you can satisfy the current demand of rabid gamers with worse chips, then show them the better thing later, forcing them to upgrade and sell their old cards at a loss? It makes more business sense for them to do that. The only losers are the gamers who want the very best, and for them such losses are just part of the game.
The 7900 XTX at $1000 will clearly set the eventual prices for the line, if not at launch, then soon after. We got 130%-150% of the previous top card's performance (scaling from 1080p up to 4K) at the same price. That's my expectation for a few months out, with the slowly sinking price of the 7900 XT as an example. AMD might try for more, but external forces should prevent that from happening.
Note: Jake has commented that Nvidia’s tools may not show the true BVH structure. That’s a distinct possibility, as the structure implied by Nsight is indeed ridiculously wide. The rest…
Not surprising that Nvidia's approach to RT leans on compute throughput. With that approach, RT will always profit from bigger and fatter GPUs. Conversely, it is much harder to scale down to smaller, lower-wattage chips.
Regarding AMD's slower approach, I liked the last two sentences of this summary:
"AMD BVH makes it more vulnerable to cache and memory latency, one of a GPU’s traditional weaknesses. RDNA 3 counters this by hitting the problem from all sides. Cache latency has gone down, while capacity has gone up. Raytracing specific LDS instructions help reduce latency within the shader that’s handling ray traversal. Finally, increased vector register file capacity lets each WGP hold state for more threads, letting it keep more rays in flight to hide latency. A lot of these optimizations will help a wide range of workloads beyond raytracing. It’s hard to see what wouldn’t benefit from higher occupancy and better caching."
That way non-RT workloads can potentially profit from the improvements as well.
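To make the latency point concrete, here's a toy sketch of the stack-based BVH traversal pattern the quoted summary is talking about (illustrative Python, not AMD's actual implementation):

```python
# Toy stack-based BVH traversal. The point: every iteration does a
# dependent memory load (fetch a node) before it can decide where to go
# next, so the loop is latency-bound. A GPU hides that latency by
# keeping many rays in flight and switching between them while fetches
# are outstanding -- which is why bigger register files and higher
# occupancy help, exactly as the quoted summary says.

def traverse(bvh, ray):
    hits = []
    stack = [0]                          # start at the root node
    while stack:
        node = bvh[stack.pop()]          # dependent load: stalls on memory
        if not node["aabb_hit"](ray):
            continue                     # ray misses the box: prune subtree
        if "tri" in node:
            hits.append(node["tri"])     # leaf: record candidate triangle
        else:
            stack.extend(node["children"])  # inner node: descend

    return hits

# Tiny 3-node tree; "aabb_hit" stands in for a real ray/box test.
bvh = [
    {"aabb_hit": lambda r: True,  "children": [1, 2]},
    {"aabb_hit": lambda r: r > 0, "tri": "triangle A"},
    {"aabb_hit": lambda r: r < 0, "tri": "triangle B"},
]
print(traverse(bvh, ray=1.0))            # -> ['triangle A']
```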
If the N31 bug stuff is true and AMD was aiming for those kinds of clocks at 350-400W, then they would have had an entirely different product on their hands.
On the bright side, if it is true and they can fix it, the 7950 XTX and XT should offer some pretty big gains.
AFAIK, the first scheduled event is just about to start. No chance of them talking about Navi33 for desktop, eh? lol
Edit2: Now the GPUOpen page just says "video coming soon" for all six event thumbnails/descriptions. So I guess now we wait. Will update if something comes up and I'm faster than the usual suspects such as VideoCardz.
I was reading through the slide deck and thought this little blurb about how ML was used to assist with the optimization work was interesting. Basically, they used ML to better arrive at closed-form solutions and guide hand tuning, rather than relying on ML entirely (i.e., as a black box).
Fake frames are fake frames and they're garbage, but like all things AMD vs. NVIDIA, part of the fun is seeing how AMD jury-rigs the crap out of their hardware and software to do what NV does with like 5% of the resources and know-how.
Speaking of Fake Frames™, I glanced at the Cyberpunk RT-everywhere trailer, and Nvidia is pushing Fake Frames heavily in it.
What I also noticed was that, aside from the reflections everywhere, the models were often very low-polygon.
Hahaha, at least with AMD's fake frames, you the consumer ain't paying extra for silicon that is "needed" to enable the technology. Nvidia loves to tell everyone that DLSS 3 can only work on their latest architecture, but it seems more and more to me like a repeat of FreeSync vs. hardware-based G-Sync. I love it when free and "good enough" beats proprietary solutions.
Fake frames suck. I would laugh if AMD just cranked up the fake factor to generate even more of them, just to make Nvidia look bad in a bar chart. Hell, just offer an "extreme" option that blows up the FPS number even if the experience becomes pure garbage, so I can mock all the people who've made awful arguments defending this.
Just remember that time spent on this pointless garbage is time that wasn't used to make something else or improve existing features.
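For what it's worth, the crudest possible "fake frame" is just a blend of two real frames. DLSS 3 / FSR 3 warp pixels along motion vectors and optical flow rather than blending, but this toy sketch shows the property driving the latency complaints: the generated frame can't be shown until the next real frame exists.

```python
import numpy as np

def fake_midpoint_frame(frame_a, frame_b, t=0.5):
    """Naive 'generated' frame: a plain blend of two rendered frames.
    Real DLSS 3 / FSR 3 warp pixels along motion vectors instead of
    blending, but share the key property shown here: the in-between
    frame can only be displayed after frame_b has been rendered, so
    latency rises even as the FPS counter doubles."""
    return (1 - t) * frame_a + t * frame_b

a = np.zeros((2, 2))   # dummy "frame N"
b = np.ones((2, 2))    # dummy "frame N+1"
print(fake_midpoint_frame(a, b))   # the 'fake' frame shown between them
```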
I'm sure this will make its way into the console refreshes and both Sony and Microsoft will be boasting about the 4K 120 FPS capabilities of their consoles.
All of the people who swore up and down that this is just as good (or even that FSR/DLSS looks better than native) are going to have a hard time justifying buying a $900 GPU for a $1000 PC instead of a $500 console that performs just as well.
I find it funny that this particular "garbage" is being pushed to compensate for another "garbage": think of how much in resources and PR effort it takes to push RT into games and into consumers' heads, compared to the actual improvement in visual perception it provides, only for everything to be DLSS/FSR'ed in the end.
I see frame generation as yet another brute-force approach to preserving the status quo of how frame rendering is done in general: an easy plug-in "solution" for bigger numbers, same as the image-scaling techniques. Why aren't variable-rate-shading-like techniques used more often? Technically, VRS can be considered fine-grained sub-image scaling from the generation (rendering) perspective.
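Roughly what that means in code: a minimal, hypothetical sketch of shading once per 2x2 block and broadcasting the result, i.e. a quarter of the shader invocations for that region. Real hardware VRS picks the rate per tile, primitive, or screen region; this just shows the cost model.

```python
import numpy as np

def shade_full_rate(shade, h, w):
    # Baseline: one shader invocation per pixel -> h * w evaluations.
    return np.array([[shade(x, y) for x in range(w)] for y in range(h)])

def shade_2x2_rate(shade, h, w):
    # VRS-style: evaluate once per 2x2 block and broadcast the result,
    # i.e. a quarter of the shading work for that region of the image.
    coarse = np.array([[shade(2 * x, 2 * y) for x in range(w // 2)]
                       for y in range(h // 2)])
    return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)

shade = lambda x, y: (x + y) % 7 / 6     # stand-in for an expensive shader
assert shade_2x2_rate(shade, 4, 4).shape == shade_full_rate(shade, 4, 4).shape
print(shade_2x2_rate(shade, 4, 4))
```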
Navi 33 should be slightly cheaper to produce than Navi 23, as:
It's built on the cheaper 6nm node and is slightly smaller (204mm² vs 237mm²).
It's rumored to be motherboard- and pin-compatible.
It's roughly 10% faster on average:
As Notebookcheck has RX 7600S results up, we can compare it against the very similar RX 6700S (the only differences being RDNA2 vs RDNA3 and 14Gbps vs 16Gbps memory).
Here are the game results for both. In some games the difference is within the margin of error (2-3%); in some it's 15% faster, and even 20%+ in one.
I know it's only a single sample in a small selection of games, but it still gives a ballpark performance increase.
Navi 32's SKUs will land in the ballpark of the RX 6800 - 6900 XT as far as performance goes:
Best case: it's single digits faster than the RX 6950 XT.
Worst case: the top SKU at least competes with the RX 6900 XT.
It's almost certainly more expensive to produce than Navi 22, even when cut down!
It has a 200mm² 5nm GCD and 3-4x 36.5mm² 6nm MCDs, exotic packaging tech and a native 256-bit memory bus (cut down to 192-bit with 3 MCDs).
Navi 22 is a monolithic 335mm² die on the 7nm process, with a native 192-bit memory bus (cut down to 160-bit). Rough silicon math on this comparison follows below.
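Here's that rough silicon math, using the standard dies-per-wafer approximation and a simple Poisson yield model (the wafer size and defect density below are assumptions; the die areas are the ones quoted above):

```python
import math

WAFER_DIAMETER = 300.0   # mm; standard wafer (assumption)
DEFECT_DENSITY = 0.0007  # defects per mm^2, ~0.07/cm^2 (assumed)

def dies_per_wafer(area_mm2):
    """Classic approximation: wafer area over die area, minus edge loss."""
    d = WAFER_DIAMETER
    return int(math.pi * (d / 2) ** 2 / area_mm2
               - math.pi * d / math.sqrt(2 * area_mm2))

def good_dies(area_mm2):
    """Poisson yield: small dies yield a larger fraction of good parts."""
    return int(dies_per_wafer(area_mm2) * math.exp(-DEFECT_DENSITY * area_mm2))

for name, area in [("Navi 33 (204 mm^2)", 204.0),
                   ("Navi 23 (237 mm^2)", 237.0),
                   ("Navi 22 (335 mm^2)", 335.0),
                   ("Navi 32 GCD (200 mm^2)", 200.0),
                   ("MCD (36.5 mm^2)", 36.5)]:
    print(f"{name:24s} {dies_per_wafer(area):4d} candidates, "
          f"~{good_dies(area):4d} good per wafer")
```

Small dies win twice: more candidates per wafer and a larger fraction of them good, which is the chiplet argument in a nutshell, minus the packaging cost it buys back.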
What it all (most probably) means:
The Navi 33 SKUs (7600 series) will be relatively cheap to produce; even $250 SKUs shouldn't really be a problem, if the RX 6600 is any indication.
The Navi 32 SKUs (7800 and possibly 7700 series) probably won't be cheap enough for the 7700 series.
The hypothetical best Navi 33 chip (7600 XT?) will at best perform in the ballpark of the 6700 non-XT 10GB at 1080p, and slightly slower at 1440p.
The most castrated (192-bit, 3x MCD) Navi 32 SKU that still makes any sense should still perform in the ballpark of the RX 6800 at 1440p.
A 128-bit version with 2x MCDs (but still a 200mm² 5nm GCD) IMO doesn't make any sense against Nvidia's monolithic 146mm² AD107 and 190mm² AD106.
All in all, that leaves quite the gap in the lineup to fill, as:
Navi 33 just doesn't scale to Navi 22 performance levels, so it can't be used for the 7700 series.
Navi 32 almost certainly isn't cheap enough to produce to be sold as the 7700 series (at least in volume).
It isn't reasonable to expect one chip to cover the RX 6900, 6800 and 6700 price brackets.
How will AMD address this gap?
My guess is they will continue selling Navi 22 as the stop-gap, either as the RX 6750 XT or rebranded into the RX 7700 (non-XT) series.
It should hold its own against at least the upcoming RTX 4060 (something Navi 33 will probably struggle with). This means it should be competitive at least in the $300-$400 price range.
I don't really see any other alternative, unless AMD just decides we will get no competitive cards at all in that price range (est. $400-$650).
Depending on economies of scale, it might only cost a tiny bit more than N22 to fab. At 200mm² you get more chips per wafer than N23, let alone N22. And since the MCDs are being reused, and they're so small, their cost may be effectively negligible in the grand scheme.
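To put rough numbers on "effectively negligible": a toy cost split with placeholder wafer prices and good-die counts (all figures below are assumptions for illustration, not actual TSMC pricing or yields):

```python
# Placeholder wafer prices and good-die counts -- illustration only,
# not actual TSMC pricing or real yield data.
WAFER_COST = {"N5": 16000, "N6": 10000, "N7": 9000}   # USD per wafer (assumed)
GOOD_DIES = {  # rough outputs of a dies-per-wafer + yield estimate (assumed)
    "N32 GCD (200 mm^2, N5)": 260,
    "MCD (36.5 mm^2, N6)":    1780,
    "N22 (335 mm^2, N7)":     140,
}

gcd = WAFER_COST["N5"] / GOOD_DIES["N32 GCD (200 mm^2, N5)"]
mcd = WAFER_COST["N6"] / GOOD_DIES["MCD (36.5 mm^2, N6)"]
n22 = WAFER_COST["N7"] / GOOD_DIES["N22 (335 mm^2, N7)"]

print(f"N32 GCD        ~${gcd:5.1f}")
print(f"one MCD        ~${mcd:5.1f}  (x4 = ${4 * mcd:.1f})")
print(f"N32 silicon    ~${gcd + 4 * mcd:.1f} + advanced packaging")
print(f"N22 monolithic ~${n22:5.1f}")
```

Under these made-up numbers the four MCDs add only about a third of the GCD's cost, and the gap to N22 comes mostly from the pricier leading-edge wafer plus the packaging step.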
Then again, I could also see a reality where they were hoping for higher prices, and now N32 becomes a mobile-first chip, mostly sold as the 7800 XTX on desktop. Their current lineup seems like it's begging for a refresh into RDNA3+ with revised CU counts and silicon refinements. If that is their plan, and if they can get new chips out by winter, they should have no issue riding RDNA2 stock to fill the low end. Kinda along the lines of what you're suggesting with riding the 6700 XT; they could go all the way until RDNA4, since it should only be 12-15 months out.
AMD Radeon RX 7600M XT in action (videocardz.com): A promotion video for the Chinese Metaphuni Metamech gaming laptop shows first benchmarks featuring the AMD RX 7600M XT GPU. (Image: Metamech gaming laptop with Radeon RX 7600M XT GPU; source: Bilibili) It may be weeks since AMD announced the Radeon RX 7600/7700 series for...
Some benchmarks of the 7600M XT. It's very competitive with the laptop RTX 4060, but no more than that. The bigger issue may be getting OEMs to use it in volume.
There's an Asus ROG handheld out now to compete with the Steam Deck; it promises 50% more performance at 15W and 100% more at 35W, and apparently delivers.
How much of that is thanks to RDNA3, and how much do the Zen 4 cores help, I wonder...
The other thing that makes me wonder is how much it's held back by the OS. All these recent competitors with "better" hardware have disappointed by not being significantly faster than the Deck.