Question Speculation: RDNA2 + CDNA Architectures thread

Page 140

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
All die sizes are within 5mm^2. The poster here has been right on some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have since backed up. Even still, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

HurleyBird

Platinum Member
Apr 22, 2003
2,760
1,455
136
Or just maybe it has something to do with them moving from massively cash starved to actually having the resources to do a decent job? That's much more a priori probable than a single management person changing.

It definitely has something to do with RTG borrowing some excellent Zen engineers (clockspeeds), but cash starved? Throwing more cash at the same engineers doesn't make them better engineers. Granted, they may have hired some additional talent, but there's a reason why simply throwing cash at a technical problem doesn't go very far. Just look at Intel right now.
 

kurosaki

Senior member
Feb 7, 2019
258
250
86
500mm^2 ZEN 4 FTW!

But seriously, why aren't CPUs a little bigger and far more performant? If the *individual* core size were doubled, IPC could (in theory) be doubled - is the know-how just not there to scale IPC with individual core size?
I think this was covered in an earlier x86 review by Anand himself many years ago, or something like that. But I'll give it a try:

An x86-64 CPU core is hardware for a specific instruction set (ISA). That ISA works in a specific manner and demands a specific set of work to be done; the core does that job. You can work in parallel (more executions per clock) or in serial (more MHz).
To drive the parallel workload per core you need more transistors, but they also eat more energy. A wider front end, in an x86 sense, needs a lot more around it to make any use of the parallelism: more transistors again.
As the nodes shrink, the x86 core has become so small that it produces an enormous wattage over a tiny surface area = W/cm^2 has gone up = bad.
Improving such a small core for more performance without it burning up would itself be a feature. That's where multi-core comes into the picture: if you have hit the watt-per-mm^2 ceiling, duplicate the core for more performance, and so on.
If they could have gotten away with just doubling everything, they would have long ago. It's all connected to W/mm^2 in the end, and a game of trade-offs.
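The W/mm^2 argument above can be put into a toy calculation. All the numbers here are made up purely for illustration (not measured from any real CPU): the point is just that if a node shrink halves core area but only cuts power by ~30%, power density rises, which is why duplicating cores beats fattening one core once you hit the thermal ceiling.

```python
# Toy numbers, purely illustrative (assumed, not from any real CPU).
old_power_w = 20.0    # per-core power on the old node
old_area_mm2 = 10.0   # per-core area on the old node
old_density = old_power_w / old_area_mm2  # 2.0 W/mm^2

# A shrink roughly halves the area but cuts power by only ~30%:
new_density = (old_power_w * 0.7) / (old_area_mm2 * 0.5)  # 2.8 W/mm^2

# Doubling the core count at the new density doubles throughput without
# raising W/mm^2 any further; doubling one core's width instead would
# pile even more power into the same hotspot.
print(old_density, new_density)
```

So even though the shrunk core uses less total power, it is harder to cool per square millimetre, exactly as described above.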

I think ARM64 is better suited for further development of the single core, where in some instances it is already far beyond both AMD and Intel in IPC. The future isn't now, but soon.

In five years, it would be sick if I were running a gaming rig consisting of Windows on ARM, an ARM CPU and an Intel GPU. I wouldn't bet any serious money on those odds... ;) But it is a plausible scenario; anything can happen, and it's happening faster and faster. If one platform is at its peak, another will follow soon.
 

caswow

Senior member
Sep 18, 2013
525
136
116
It's all forum psychology; he is a scapegoat. AMD didn't deliver a card that could beat Nvidia while he was there, and he has now left. Putting all the blame on him means that people can hope RDNA2 can now beat Nvidia because he's not in charge. Whereas in fact he was just a cog in a very big wheel, and him being there or not will in the end have made a very small difference to how RDNA2 turns out.

People need to decide: either people who get paid big bucks deliver good things, or they take all the blame. That's how it should be. On one side people get told "hurr durr, only a few people can do job XY, that's why they get big bucks"; on the other hand we have people telling us not to put the blame on person XY.
 

PhoBoChai

Member
Oct 10, 2017
119
389
106
I think 505mm^2 N21 is nonsense.

That would give you 83 good dies per wafer. Even if they were all 6900XT and the 6900XT is 3080+5% it means AMD can look to charge around $700 per card. That is revenue of $58,100.

OTOH a wafer of Zen3 dies is 782 good CCDs. Even if all are used in 5600X's that would be a revenue of $230k.

A 500+ mm^2 GPU is far far far too costly for AMD when they can generate 4x the revenue by making Zen3 parts instead.

Edit To Add: The other option is that availability is like the 3080/3090, non existent.

That's the downside of any monolithic design vs chiplets, and in general GPU revenue is weaker than CPU revenue per unit of silicon die area.

EPYC is ramping like crazy lately, for anyone who has noticed: VMs, datacenter and supercomputers all aboard. The beauty of Zen 2 or Zen 3 dies here is that, per wafer, their revenue is MASSIVELY more than anything AMD can get with gaming GPUs.

When your wafer supply is limited, you want to devote most of it to the higher revenue & profit products.
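The wafer arithmetic above can be sanity-checked with the standard dies-per-wafer approximation plus a Poisson yield model. The defect density, the ~80.7mm^2 Zen3 CCD size and the ASPs are my assumptions, not the poster's; with D0 around 0.055 defects/cm^2 the model lands close to the 83 good N21 dies and roughly the CCD count quoted above.

```python
import math

def gross_dies(wafer_d_mm: float, die_area_mm2: float) -> int:
    # Classic approximation: wafer area / die area, minus an edge-loss term.
    r = wafer_d_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

def good_dies(gross: int, die_area_mm2: float, d0_per_cm2: float) -> int:
    # Poisson yield model: Y = exp(-A * D0), with A in cm^2.
    return int(gross * math.exp(-(die_area_mm2 / 100) * d0_per_cm2))

D0 = 0.055  # assumed N7 defect density, defects/cm^2

n21 = good_dies(gross_dies(300, 505), 505, D0)    # ~83 good 505mm^2 dies
ccd = good_dies(gross_dies(300, 80.7), 80.7, D0)  # ~766 good Zen3 CCDs

print(n21 * 700)  # ~$58k per wafer at an assumed $700 per card
print(ccd * 300)  # ~$230k per wafer at an assumed ~$300 per 5600X
```

The small-die CCD wins twice: far less area lost at the wafer edge, and far better yield, which is where the roughly 4x revenue gap comes from.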
 

PhoBoChai

Member
Oct 10, 2017
119
389
106
And yet, the same team just without that special "clog" is now performing and behaving so differently, it's like, just like, his departure made a difference.

More like they actually have funding since Zen 1's success in 2017, right after his departure. AMD has been on a hiring spree, and Lisa Su also moved Zen engineers over to help the graphics division. IDK if anyone here has ever been in charge of R&D, but manpower + money makes a hell of a difference.
 

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136

More info from Paul. According to his source, full board power for the reference RX 6900 XT is 280W, and this GPU competes very well with the RTX 3090: in some games it's faster, in some games it's slower.

His source firmly suggests that the reference design is under 300W of board power, despite what Igor suggested yesterday.
 

Edgy

Senior member
Sep 21, 2000
366
20
81
Well... Keller probably drew the shortest straw if one considers funding/resources, or lack thereof (especially compared to Intel), but nevertheless Ryzen was/is considered a roaring success.

What I see is that Keller left AMD much better off than when he was hired - no drama, no excuses.

Raja certainly didn't do that. If what I read is true, that Raja complained about half his team being pulled into Navi development before he left for Intel, this leads to 2 significant extrapolations:

1. He made enough of an effort at excusing his failures that news like this got out to the public.
2. He may have been aware of Navi tech as a person in a high enough position within AMD, but he was not the driving force behind it.

Let's hope RDNA2 succeeds, and that even Intel finds success with their graphics cards for a more competitive market, but Raja would not be getting credit for either success in my book, even if they happen.
 

Tup3x

Golden Member
Dec 31, 2016
1,086
1,085
136

More info from Paul. According to his source, full board power for the reference RX 6900 XT is 280W, and this GPU competes very well with the RTX 3090: in some games it's faster, in some games it's slower.

His source firmly suggests that the reference design is under 300W of board power, despite what Igor suggested yesterday.
The BIOS he had wasn't from a reference board.
 

Kuiva maa

Member
May 1, 2014
182
235
116
I think 505mm^2 N21 is nonsense.

That would give you 83 good dies per wafer. Even if they were all 6900XT and the 6900XT is 3080+5% it means AMD can look to charge around $700 per card. That is revenue of $58,100.

OTOH a wafer of Zen3 dies is 782 good CCDs. Even if all are used in 5600X's that would be a revenue of $230k.

A 500+ mm^2 GPU is far far far too costly for AMD when they can generate 4x the revenue by making Zen3 parts instead.

Edit To Add: The other option is that availability is like the 3080/3090, non existent.

If die allocation according to margins were the only factor driving AMD's business, they would only be making Epyc. However, it doesn't work like that: they require volume, and that means they need to be present in all x86 markets. Also, if they want variance in their portfolio (they do) and access to semi-custom deals, they absolutely need graphics. And in order to be competitive in the GPU market, you need to sell in all market segments.
 

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
If die allocation according to margins were the only factor driving AMD's business, they would only be making Epyc. However, it doesn't work like that: they require volume, and that means they need to be present in all x86 markets. Also, if they want variance in their portfolio (they do) and access to semi-custom deals, they absolutely need graphics. And in order to be competitive in the GPU market, you need to sell in all market segments.

I never said it was the only factor. The point is, if you want a better supply of N21 GPUs, you'd better hope it is smaller, as the larger it is, the fewer they will make due to that revenue disparity. That disparity also gives AMD a lot more pricing flexibility to try to get OEMs to offer better products with AMD parts inside.

Another good note is that if the TBP is 280W like Paul suggests, then that is less than the Radeon VII, and considering the Radeon VII used HBM, it means a lower % of the TBP is for the GPU die itself, so a 330mm² or so N21 will not have an excessive W/mm².
 

beginner99

Diamond Member
Jun 2, 2009
5,233
1,610
136
Fiat for crypto is so much easier. I'll let you do the hassle; I had my turn, too briefly, too late to be awesome but early enough for actual returns.
Yeah, mining is just annoying, especially nowadays. Hardly any profit and you still have to deal with the heat. I tried it a long time ago, at a point that in hindsight was extremely profitable. Still, I stopped and just bought some ETH with cash.

on-topic:

Yeah, if the 6900XT competes with a 3090, $999 is probably the minimum price to expect. As I have said and others have repeated, AMD has no incentive to lower prices to gain market share. Every GPU wafer is a waste of money compared to a Zen2/3 wafer, and supply is limited. NV was the only hope for sane prices and plenty of supply, since no one else uses Samsung. But well...
 

Kuiva maa

Member
May 1, 2014
182
235
116
I don't think AMD can charge 1.2k just like that for the top model. It needs something to compete against the 3080 too; lest we forget, the 3090 is not that much faster. If the second-best card loses to the 3080, a very expensive 6000 series flagship would only cement the latter's position as the actual realistic flagship. Now, if the second-best card competes favorably against the 3080, things change.
 

kurosaki

Senior member
Feb 7, 2019
258
250
86
I don't think AMD can charge 1.2k just like that for the top model. It needs something to compete against the 3080 too; lest we forget, the 3090 is not that much faster. If the second-best card loses to the 3080, a very expensive 6000 series flagship would only cement the latter's position as the actual realistic flagship. Now, if the second-best card competes favorably against the 3080, things change.
KIND OF SAD YOU CAN'T GET A REASONABLE TOP GPU AT THE 300 USD PRICE POINT ANY MORE. WHAT HAPPENED?
HOW COME 500 IS THE NEW 300? VOTING WITH THE WALLET HAS NO EFFECT IF EVERYONE ELSE DOESN'T CARE. EVEN THE 3090s SEEM TO HAVE SOLD OUT...
 

Kuiva maa

Member
May 1, 2014
182
235
116
I never said it was the only factor. The point is, if you want a better supply of N21 GPUs, you'd better hope it is smaller, as the larger it is, the fewer they will make due to that revenue disparity. That disparity also gives AMD a lot more pricing flexibility to try to get OEMs to offer better products with AMD parts inside.

Another good note is that if the TBP is 280W like Paul suggests, then that is less than the Radeon VII, and considering the Radeon VII used HBM, it means a lower % of the TBP is for the GPU die itself, so a 330mm² or so N21 will not have an excessive W/mm².

That's the problem with that reasoning right there. A 330mm² N21 can't possibly compete with the 3080, can it now? You may flood the shelves with them from day 1, but I, and others like me, are simply not in the market for a product in that segment. I mean, if AMD could magically pull this off, more power to them and I would gladly buy the product, but realistically speaking, they need a bigger die if they want to address the market segment the 3080 targets.