It should be slightly cheaper to produce than Navi 23, as:
It's built on the cheaper 6nm node and is slightly smaller (204mm² vs 237mm²); see the rough cost check after this list.
It's rumored to be board and pin compatible.
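For a rough sanity check on that cost claim, here's the napkin math. The wafer prices are my assumptions (round numbers in line with public estimates), not official figures:

```python
# Rough per-die silicon cost ratio, Navi 33 (6nm) vs Navi 23 (7nm).
# Wafer prices are assumed round numbers; 6nm is a cheaper derivative of 7nm.
wafer_price = {"7nm": 9500, "6nm": 9000}   # USD per 300mm wafer, assumed
wafer_area = 3.14159 * 150**2              # mm², ignoring edge loss and yield

n33 = 204 / wafer_area * wafer_price["6nm"]
n23 = 237 / wafer_area * wafer_price["7nm"]
print(f"Navi 33 ~${n33:.0f} vs Navi 23 ~${n23:.0f} per die "
      f"({(1 - n33 / n23) * 100:.0f}% cheaper)")
```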
It's roughly 10% faster on average:
As notebookcheck has RX 7600S results up, we can compare it against the very similar RX 6700S (the only differences being RDNA2 vs RDNA3 and 14Gbps vs 16Gbps memory).
Here are the game results for both. In some games the difference is within the margin of error (2-3%); in some it's 15% faster, and even 20%+ in one.
I know it's only a single sample in a small selection of games, but it still gives a ballpark performance increase (a quick way to aggregate per-game results is sketched below).
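If anyone wants to reproduce a "roughly 10%" figure from per-game numbers, the geometric mean is the usual way to aggregate speedups. A minimal sketch, with made-up per-game ratios standing in for the actual notebookcheck results:

```python
import math

# Hypothetical per-game speedups (RX 7600S over RX 6700S), standing in
# for the actual notebookcheck numbers: a few within margin of error,
# several around 15%, one above 20%.
speedups = [1.02, 1.03, 1.08, 1.15, 1.16, 1.22]

# Geometric mean is the standard way to average ratios; it keeps a
# single outlier from dominating the result.
geomean = math.prod(speedups) ** (1 / len(speedups))
print(f"average speedup: {(geomean - 1) * 100:.1f}%")  # ~10.8% with these numbers
```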
Its SKUs will land in the ballpark of the RX 6800 - 6900 XT as far as performance goes.
Best case: it's a single-digit percentage faster than the RX 6950 XT.
Worst case: the top SKU at least competes with the RX 6900 XT.
It's almost certainly more expensive to produce than Navi 22, even when cut down!
It has a 200mm² 5nm GCD and 3-4x 36.5mm² 6nm MCDs, exotic packaging tech and a native 256-bit memory bus (cut down to 192-bit with 3 MCDs).
Navi 22 is a monolithic 335mm² die on the 7nm process, with a native 192-bit memory bus (cut down to 160-bit).
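Same kind of napkin math as above, with assumed wafer prices (5nm costing roughly 1.7x as much per wafer as 6/7nm is my assumption, not a confirmed figure):

```python
# Back-of-the-envelope silicon cost, Navi 32 vs Navi 22.
# Wafer prices are assumed round numbers from rough public estimates.
wafer_price = {"7nm": 9500, "6nm": 9000, "5nm": 16000}  # USD per 300mm wafer
wafer_area = 3.14159 * 150**2  # mm², ignoring edge loss and yield

def cost(area_mm2, node):
    return area_mm2 / wafer_area * wafer_price[node]

# Navi 32: 200mm² GCD on 5nm plus 3-4 reused 36.5mm² MCDs on 6nm
for mcds in (3, 4):
    n32 = cost(200, "5nm") + mcds * cost(36.5, "6nm")
    print(f"Navi 32 ({mcds} MCDs): ~${n32:.0f} silicon, before packaging")

# Navi 22: monolithic 335mm² on 7nm
print(f"Navi 22 (monolithic): ~${cost(335, '7nm'):.0f} silicon")
```

This ignores yield (which favors the small chiplets) and the packaging cost itself, but the direction holds: the 5nm GCD alone costs roughly as much silicon money as the whole Navi 22 die.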
What it all (most probably) means:
The Navi 33 SKUs (7600 series) will be relatively cheap to produce; even $250 SKUs shouldn't really be a problem, if the RX 6600 is any indication.
The Navi 32 SKUs (7800 and possibly 7700 series) probably won't be cheap enough for the 7700 series.
The hypothetical best Navi 33 chip (7600 XT?) will at best perform in the ballpark of the 6700 non-XT 10GB at 1080p, and slightly slower at 1440p.
The most castrated (192-bit, 3x MCD) Navi 32 SKU that still makes any sense should still perform in the ballpark of the RX 6800 at 1440p.
A 128-bit version with 2x MCDs (but still a 200mm² 5nm GCD) IMO doesn't make any sense against Nvidia's monolithic 146mm² AD107 and 190mm² AD106.
All in all, that leaves quite the gap in the lineup to fill, as:
Navi 33 just doesn't scale to Navi 22 performance levels, so it can't be used for the 7700 series.
Navi 32 almost certainly isn't cheap enough to produce to be sold as the 7700 series (at least not in volume).
It isn't reasonable to expect one chip to cover the RX 6900, 6800 and 6700 price brackets.
How will AMD address this gap?
My guess is they will continue selling Navi 22 as a stop-gap, either as the RX 6750 XT or rebranded into the RX 7700 (non-XT) series.
It should hold its own against at least the upcoming RTX 4060 (something Navi 33 will probably struggle with). This means it should be competitive at least in the $300 - $400 price range.
I don't really see any other alternative, unless AMD just decides we will get no competitive cards at all in that price range (est. $400 - $650).
Depending on economies of scale it might only cost a tiny bit more than N22 to fab. A 200mm² die means more chips per wafer than N23, let alone N22. And since the MCDs are being reused, and they're so small, the cost may be effectively negligible in the grand scheme.
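The chips-per-wafer point is easy to check with the standard die-per-wafer approximation (a sketch; real counts also depend on die aspect ratio, scribe lines and yield):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard approximation for usable dies on a round wafer,
    subtracting partial dies lost around the edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for name, area in [("N32 GCD", 200), ("N33", 204), ("N23", 237), ("N22", 335)]:
    print(f"{name:8s} ({area} mm²): ~{dies_per_wafer(area)} candidate dies/wafer")
    # roughly 306, 299, 255 and 174 respectively
```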
Then again, I could also see a reality where they were hoping for higher prices, and now N32 is going to be a mobile-first chip and mostly only sold as the 7800 XTX on desktop. Their current lineup seems like it's begging for a refresh into RDNA3+ with revised CU counts and silicon refinements. If that is their plan, and if they can get new chips out by winter, then they should have no issue riding the RDNA2 stock to fill the low end. Kinda along the lines of what you're suggesting with riding the 6700 XT; they could go all the way until RDNA4, since it should only be 12-15 months out.
AMD Radeon RX 7600M XT in action: a promotion video for the Chinese Metaphuni Metamech gaming laptop shows first benchmarks featuring the AMD RX 7600M XT GPU (video via Bilibili, reported by videocardz.com).
Some benchmarks of the 7600M XT. It's very competitive with the laptop RTX 4060, but nothing beyond that. The bigger issue may be getting OEMs to use it in volume.
There's an Asus ROG handheld to compete with the Steam Deck now that promises 50% more performance at 15W and 100% more at 35W, and apparently delivers.
How much of that is thanks to RDNA3 and how much the Zen 4 cores help, I wonder...
The other thing that makes me wonder is how much it's held back by the OS. All these competitors that came out recently with "better" hardware disappointed, being not significantly faster than the Deck.
In theory, not in practice.
That's why I mentioned these recent competitors that came out with more cores and CUs. None performed as expected for their specs. The Deck is really like a "console", delivering more performance with less.
Possibly, but the guy is 60 years old and can probably retire comfortably. I think the retirement is real, but having a younger SVP take over might boost morale if the engineers didn't like the previous one.
Very cool article. Basically the Deck APU makes some serious CPU sacrifices to ensure the GPU is kept fed by the LPDDR5 RAM, giving the chip a very favorable compute-to-bandwidth ratio.
There is some legit custom work that makes the Deck APU more gaming-focused than your run-of-the-mill desktop APU.
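To put a number on that compute-to-bandwidth point, a quick sketch; the Deck figures are from public spec sheets, and the Ryzen 5700G is just my illustrative pick for a desktop APU:

```python
# Rough FLOPs-per-byte comparison: Steam Deck APU vs a desktop APU.
# Specs from public spec sheets; clocks are approximate peak values.

def flops_per_byte(cus, gpu_ghz, bandwidth_gbs):
    # FP32 peak: CUs x 64 lanes x 2 ops/cycle (FMA) x clock
    gflops = cus * 64 * 2 * gpu_ghz
    return gflops / bandwidth_gbs

# Steam Deck ("Van Gogh"): 8 RDNA2 CUs @ ~1.6 GHz, quad 32-bit LPDDR5-5500 = 88 GB/s
deck = flops_per_byte(8, 1.6, 88)
# Ryzen 7 5700G: 8 Vega CUs @ 2.0 GHz, dual-channel DDR4-3200 = 51.2 GB/s
desktop = flops_per_byte(8, 2.0, 51.2)

print(f"Steam Deck:    ~{deck:.0f} FLOPs per byte of bandwidth")    # ~19
print(f"Ryzen 7 5700G: ~{desktop:.0f} FLOPs per byte of bandwidth") # ~40
```

Fewer FLOPs competing for each byte of bandwidth means the GPU is easier to keep fed, which is exactly the article's point.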
The price probably doesn't excite AMD at all. Fair or not, I'm guessing they would feel like they would have to sell it for less than the $599 of the 4070.
They might also be somewhat packaging and assembly limited.
Can't help but feel that the way they approached going multi-chip was excessively risky for little gain.
A far more cautious approach would have been going multi-chip only for the top card and then making the top chip bigger: get a card out there that clearly wins all benchmarks, price it high, and work out everything about going multi-chip from that.
Barely 4 years at AMD does seem short for a former CEO of Synaptics (7+ years there). A generous take would be that he was brought in for a specific goal and met it.
So, basically, their margins are so poor it's not worth it. AMD can use its wafer allocation for other, higher-margin chips. Also, that means an entire generation of GPUs (RDNA3 dGPUs) is a financial failure. I think Bergman blew it, and that's why he's out (Lisa Su -> hey Bergman, just a thought, you could retire now and enjoy your family and such, or ....).
Companies broaden scope of mobile graphics collaboration to bring leadership AMD Radeon graphics technology to expanded portfolio of Samsung Exynos SoCs
I expect we'll start to see information trickle out by the end of the month. The rumors at the time of launch said that they had to do some significant redesigns and that we wouldn't see N32 until the middle of 2023.
I don't know if there's anything major prior to Computex, but AMD probably wants to get as much free press out of any announcement as they can.