Discussion RDNA4 + CDNA3 Architectures Thread


DisEnchantment

Golden Member
Mar 3, 2017
1,747
6,598
136

With the GFX940 patches in full swing since the first week of March, it is looking like MI300 is not far off!
AMD usually takes around 3 quarters to get support into LLVM and amdgpu. Lately, since RDNA2, the window in which they push support for new devices has been much reduced, to prevent leaks.
But looking at the flurry of code in LLVM, it is a lot of commits. Maybe that is because the US Govt is starting to prepare the SW environment for El Capitan (perhaps to avoid a slow bring-up situation like Frontier's, for example).

See here for the GFX940 specific commits
Or Phoronix

There is a lot more if you know whom to follow in LLVM review chains (before getting merged to github), but I am not going to link AMD employees.

I am starting to think MI300 will launch around the same time as Hopper, probably only a couple of months later!
Although I believe Hopper had the problem of no host CPU capable of PCIe 5.0 being available in the very near future, so it might have gotten pushed back a bit until SPR and Genoa arrive later in 2022.
If PVC slips again, I believe MI300 could launch before it :grimacing:

This is nuts, MI100/200/300 cadence is impressive.


Previous thread on CDNA2 and RDNA3 here

 

adroc_thurston

Diamond Member
Jul 2, 2023
3,572
5,155
96
I almost feel like AMD needs 800mm^2 of silicon to have a comfortable margin over whatever Nvidia cooks up, because everyone knows Nvidia will have a 600mm^2 behemoth. It will be interesting either way, because Jensen doesn't like losing and will stop at nothing to win, even if it means 600W GPUs (melting power connectors be damned).
800mm^2 is really low.
Double that.
 

Saylick

Diamond Member
Sep 10, 2012
3,531
7,858
136
800mm^2 is really low.
Double that.
You must be implying an MI300-style configuration for 1600mm^2 of total silicon then, because there's no way a good portion of that 1600mm^2 isn't on an older node in the form of active base die(s). If the compute dies sit over an area roughly equal to the base dies, then 800mm^2 of cutting-edge silicon (read: silicon with actual compute) is in the ballpark of what I was saying.
 

adroc_thurston

Diamond Member
Jul 2, 2023
3,572
5,155
96
You must be implying an MI300-style configuration for 1600mm^2 of total silicon then, because there's no way a good portion of that 1600mm^2 isn't on an older node in the form of active base die(s). If the compute dies sit over an area roughly equal to the base dies, then 800mm^2 of cutting-edge silicon (read: silicon with actual compute) is in the ballpark of what I was saying.
MI300 is 2.3k mm^2, a fair bit bigger.
And yeah, when you spam, you spam.
 
  • Like
Reactions: Tlh97 and Saylick

Saylick

Diamond Member
Sep 10, 2012
3,531
7,858
136
yes, that's the only way for MSS to go up.
Shotgun the comp, bribe the shills to sing the songs of amazingness of your products.
Well, hopefully AMD develops a smarter upscaler by 2026 so that RDNA 5 doesn't have some big asterisk next to it in reviews, because we all know Nvidia will market their GPU as beating this purported behemoth due to software "trickery".
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,305
1,218
136
The vibe I get here is that AMD is a quitter in the GPU arms race. I see AMD as the 3rd wheel that has made a ton of money over the last 3 or 4 years. They need to invest some of that money into their GPU drivers. Then they need to stop charging premium prices for products that miss the mark compared to what Nvidia brings generation after generation.

I get that most people here buy the high-end cards. The March Steam survey shows the 3060 and 2060 as the top two graphics cards, with 10.69% of the overall survey. Further down the list are the 3070 and 4070.

For AMD it would have made the most sense to have a better performing 7600XT from the beginning. That would have meant a 7600XT on N5 silicon instead of N6. They also needed better GDDR6 memory than Nvidia at each graphics tier. Memory bandwidth is huge when it comes to OCing, or to shipping stock GDDR6 at fast speeds: think 15-20% performance gains just from OCing the memory. It also seems AMD puts firmware/driver limitations on their GPUs to gimp or cripple performance when it comes to OCing. The 7900 GRE, which just had its GDDR6 memory speed limits lifted, is one recent example.

It seems AMD is in some cases selling the same GPU and limiting the performance with firmware/vBIOS limits, which raises the question of what the margins are.
 

adroc_thurston

Diamond Member
Jul 2, 2023
3,572
5,155
96
The vibe I get here is that AMD is a quitter in the GPU arms race
they're literally building the monstrosity called Navi50, and I'm not even talking about their GPGPU efforts (fancy!).
For AMD it would have made the most sense to have a better performing 7600XT from the beginning. That would have meant a 7600XT on N5 silicon instead of N6.
way to miss the point of N33's existence!
 

Mahboi

Golden Member
Apr 4, 2024
1,033
1,897
96
More ray-box intersection capability just means you can get through more box intersection tests per cycle. That's it.
Right, I'm thinking in software ways, not in GPU ways.
I was doing more reading and came across this article. It helped me understand some of the theory.
Right, so I wrote a super long response before reading that; you should've posted that first!

So it works differently from game physics.
In game physics, if you have to test a dynamic object (e.g. a bullet) against every other dynamic object in the game, you'll do hundreds of detections per bullet per tick. Say you have 50 players, each with a mesh made of 20 hitboxes, in your Battlefield-like game: you don't want to check all 50 × 20 = 1,000 hitboxes for every bullet. That is unusable, so physics engines divide the space into smaller bounding volumes that bring collision detection down to a more bearable number of checks. If you divide through bounding volumes, for a 3D world you create 8 subspaces from 1, each of which gets divided into 8 again, and so on for as many levels as you need. That's how it would work for physics. But RT doesn't work like that.
[Image: 2-wide vs 4-wide arity BVH tree]

I didn't pay close attention at first, but this structure shows that you're not dividing by 8, which makes no sense if you're dividing a world based on its centre, like with physics engines.
RT divides by whatever seems sensible, so 2-wide or 4-wide. Which again makes little to no sense in the context of a 3D world, but makes perfect sense in the context of a light in a 3D world.

If you visualise the BVH as a division not of the 3D space but of the space in front of a directional light, it makes a ton more sense.
[Image: lighting illustration]

The light doesn't require full space division. It just needs to have a structure that allows testing multiple ray collisions effectively. The wider you get, the more collisions you do per cycle, at the cost of more computation I expect.
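To make the arity point concrete, here's a minimal software sketch (purely illustrative, not how the RT hardware is actually wired; BVHNode and traverse are names I made up, not a real API): a slab-test ray/box check and a node whose width is simply the number of child boxes tested per traversal step.

```python
# Minimal BVH sketch: the node arity decides how many ray/box tests
# get resolved per traversal step. Illustrative only.

def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: True if the ray (origin, 1/direction) hits the box."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

class BVHNode:
    def __init__(self, boxes, children=(), primitives=()):
        self.boxes = boxes            # one (min, max) AABB per child
        self.children = children      # arity = len(children): 2-wide, 4-wide, ...
        self.primitives = primitives  # leaf payload, e.g. triangle indices

def traverse(root, origin, direction):
    """Collect primitives from every leaf whose bounding box the ray hits."""
    inv_dir = tuple(1.0 / d if d != 0.0 else 1e30 for d in direction)
    hits, stack = [], [root]
    while stack:
        node = stack.pop()
        if node.primitives:           # leaf: hand these off to exact triangle tests
            hits.extend(node.primitives)
            continue
        # A 4-wide hardware node resolves these box tests in one step;
        # a 2-wide tree needs roughly twice the traversal steps for the
        # same scene. That is the "more intersections per cycle" trade.
        for box, child in zip(node.boxes, node.children):
            if ray_aabb_hit(origin, inv_dir, box[0], box[1]):
                stack.append(child)
    return hits

# Tiny usage example: a 2-wide root with two leaves.
leaf_a = BVHNode(boxes=(), primitives=("tri0", "tri1"))
leaf_b = BVHNode(boxes=(), primitives=("tri2",))
root = BVHNode(boxes=(((0, 0, 0), (1, 1, 1)), ((2, 0, 0), (3, 1, 1))),
               children=(leaf_a, leaf_b))
print(traverse(root, (-1.0, 0.5, 0.5), (1.0, 0.0, 0.0)))  # hits both boxes
```

Widening the node from 2 to 4 children roughly halves the tree depth for the same primitive count, which is exactly the "more box tests per cycle, at the cost of more computation per node" trade-off.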
So I got what you were saying, just took me a bit after I had a bad night's sleep.
 
  • Like
Reactions: Tlh97 and cherullo

ToTTenTranz

Member
Feb 4, 2021
163
286
106
All of them. N33 too.

I doubt Navi33 on N6 was supposed to reach much higher clocks than its RDNA2 predecessors on N7, considering the massive jump in clocks those had already brought compared to RDNA1.
The fact that the RX 7600 XT on N6 averages >2.7GHz in games, almost +1GHz over the RX 5700 XT on N7, sounds super impressive to me. According to TSMC, N6 vs N7 should provide only 7% higher clocks at ISO power.

The big problem is that the N7->N5 transition should have brought at least +15% clocks, and yet N31/N32 had ~+0% over N21/N22.


Now, if RDNA4 solves the clock problems that RDNA3 had, and if it's using N4X on top of that, then we should be looking at +15% twice over between RDNA2 and RDNA4.
So if RDNA2 on N7 could average ~2.4GHz with ease, then the RDNA4 chips on N4X should average (not boost) at around 3.1GHz.
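Spelling out that compounding arithmetic (my own back-of-the-envelope reading of the claim above, not an AMD figure), two successive ~15% clock bumps work out to roughly +32%:

$$ 2.4\,\text{GHz} \times 1.15^{2} \approx 2.4 \times 1.32 \approx 3.17\,\text{GHz} $$

which is where the ~3.1GHz average comes from.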



The vibe I get here is that AMD is a quitter in the GPU arms race.
AMD wasn't a quitter in 2008 when they held back on a top-end product but launched the cost-focused RV770 cards that wouldn't compete on the high-end against the massive Tesla GT200 with a 512bit bus. On the contrary, it was a period when AMD gained a lot of marketshare for releasing a product with much better price/performance than the competition.

They weren't quitters when they launched the Polaris family in 2016 that was actually pretty well received and also never had anything to compete in the high-end.

And they weren't quitters again when they launched the RDNA1 family in 2019 with their highest-end at $400 that had nothing to compete with the $1200 RTX 2080 Ti.


Interestingly, save for the R300 / Radeon 9700 era and the crypto-craze anomalies, it's whenever AMD focuses on fewer SKUs without competing at the high end that they've been able to recoup market share.

[Image: GPU add-in-board market share, 2002 to Q2 2023]


Of course, not competing at the high end brings problems for brand value, product ASPs and raw margins. However, AMD desperately needs to increase dGPU market share at the moment. Sitting at 10-15% market share probably makes it very hard to be profitable while spending all the money needed to put new chips out there.
 

Mahboi

Golden Member
Apr 4, 2024
1,033
1,897
96
I doubt Navi33 on N6 was supposed to reach much higher clocks than its RDNA2 predecessors on N7, considering the massive jump in clocks those had already brought compared to RDNA1.
The fact that the RX 7600 XT on N6 averages >2.7GHz in games, almost +1GHz over the RX 5700 XT on N7, sounds super impressive to me. According to TSMC, N6 vs N7 should provide only 7% higher clocks at ISO power.
For the 100000000000000000000000000000th or so time: the core problem in RDNA 3 is voltage handling, or some part of the power handling within the arch itself.
Meaning every single RDNA 3 product, from a 7640HS to the XTX, is necessarily clocked about 20% lower than it should've been and consumes way more power.
The announcement was "50% more perf, 50% better power efficiency". It was a bold lie; the real announcement should've been "50% more perf, 0% better power efficiency".

If it's got RDNA 3 in it, you can be sure that it is underclocked to cover for the horrid power draw. If you don't believe me, simply get a 7600 or 7600 XT, watt it up to 300W, and see if you can't get 3.1GHz easily.
The node is irrelevant in this. The arch itself is clocked to compensate for the electrical problems. If it's factory clocked at 2.5GHz and 170W, you can be sure that it could do 3GHz if you were willing to feed it 300W.
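A rough sanity check on those numbers, using the generic rule of thumb that dynamic power scales with f·V² and that voltage has to rise roughly in step with frequency near the limit, so power goes roughly with the cube of clock (a textbook approximation, not measured RDNA 3 data):

$$ 170\,\text{W} \times \left(\frac{3.0\,\text{GHz}}{2.5\,\text{GHz}}\right)^{3} \approx 170 \times 1.73 \approx 294\,\text{W} $$

which lands in the same ballpark as the ~300W figure above.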
AMD wasn't a quitter in 2008 when they held back on a top-end product but launched the cost-focused RV770 cards that wouldn't compete on the high-end against the massive Tesla GT200 with a 512bit bus. On the contrary, it was a period when AMD gained a lot of marketshare for releasing a product with much better price/performance than the competition.
Nobody's saying AMD are quitters. I'm saying that AMD's policy of desperately pinching every penny on their products and releasing only when they are sure of comfortable margins isn't getting them any appreciation.
Nvidia releases fat, expensive monolithic dies way bigger than anything AMD has done in over a decade. They don't care if it's "not as economical". This is a company-philosophy problem, not a technical one. Nvidia goes big or goes home; AMD goes as small as possible, and that's getting tiring.
Interestingly, save for the R300 / Radeon 9700 era and the crypto-craze anomalies, it's whenever AMD focuses on fewer SKUs without competing at the high end that they've been able to recoup market share.
That's not surprising, since Nvidia's "go big" philosophy tends to build large, expensive products and then find the markets to pay for them.
Whereas AMD's "maximise value" policy tends to build reasonable, smaller dies that are meant to satisfy 80% of the market instead of pushing the market their way.
The former makes for great top dies, while the latter makes for great midrange and low-end stuff.

I should write a complete explanation of why Nvidia keeps winning against AMD, because apparently nobody has noticed yet: it's not about the product, it's about the way you get it sold. Nvidia selfishly pushes the market and the tech where it wants them to go: Fermi and CUDA, later VR, streaming and AI. Jensen's general effort has been to provide things to a sometimes non-existent market and then hype the heck out of them. This is a highly risky strategy, because you're basically creating something out of pure will, and it is kind of an obnoxious thing to push everyone to do things your way, but clearly Jensen is very good at it.

AMD meanwhile patiently waits for Nvidia to innovate and follows, or for Sony to make a request, or for a market to present itself. This is a fundamental difference in company culture, and it is what thoroughly devalues adroc's or branch_suggestion's opinions that "AMD will just build the biggest, fattest GPU and they'll just win". I have never seen AMD make an unreasonably fat and risky thing unless they were 100% sure that it would sell. This is why they're always N°2: the competition sees the prey and leaps, while AMD waits until the prey has been correctly identified in the bush, mapped, weighed, geolocated by satellite, and genetically tested, and only jumps after every precaution has been taken.

Put plainly: whatever AMD outputs with RDNA 5, I just expect Jensen to go "go bigger, go harder, go even if it is stupid, but just go".
Of course, not competing at the high end brings problems for brand value, product ASPs and raw margins. However, AMD desperately needs to increase dGPU market share at the moment. Sitting at 10-15% market share probably makes it very hard to be profitable while spending all the money needed to put new chips out there.
I don't know; last I looked, client GPU still brought in some profit. I honestly think that AMD's game right now is to navigate by sight and follow where Nvidia goes while penny-pinching every single step of the way, outside of a few techs that unfortunately aren't important enough to really move the market.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,305
1,218
136
I doubt Navi33 on N6 was supposed to reach much higher clocks than its RDNA2 predecessors on N7, considering the massive jump in clocks those had already brought compared to RDNA1.
The fact that the RX 7600 XT on N6 averages >2.7GHz in games, almost +1GHz over the RX 5700 XT on N7, sounds super impressive to me. According to TSMC, N6 vs N7 should provide only 7% higher clocks at ISO power.

The big problem is that the N7->N5 transition should have brought at least +15% clocks, and yet N31/N32 had ~+0% over N21/N22.


Now, if RDNA4 solves the clock problems that RDNA3 had, and if it's using N4X on top of that, then we should be looking at +15% twice over between RDNA2 and RDNA4.
So if RDNA2 on N7 could average ~2.4GHz with ease, then the RDNA4 chips on N4X should average (not boost) at around 3.1GHz.




AMD wasn't a quitter in 2008 when they held back on a top-end product but launched the cost-focused RV770 cards that wouldn't compete on the high-end against the massive Tesla GT200 with a 512bit bus. On the contrary, it was a period when AMD gained a lot of marketshare for releasing a product with much better price/performance than the competition.

They weren't quitters when they launched the Polaris family in 2016 that was actually pretty well received and also never had anything to compete in the high-end.

And they weren't quitters again when they launched the RDNA1 family in 2019 with their highest-end at $400 that had nothing to compete with the $1200 RTX 2080 Ti.


Interestingly, save for the R300 / Radeon 9700 era and the crypto-craze anomalies, it's whenever AMD focuses on fewer SKUs without competing at the high end that they've been able to recoup market share.



Of course, not competing at the high end brings problems for brand value, product ASPs and raw margins. However, AMD desperately needs to increase dGPU market share at the moment. Sitting at 10-15% market share probably makes it very hard to be profitable while spending all the money needed to put new chips out there.
I had the Radeon 9700 Pro, 9800 Pro, GeForce 256 (the original Nvidia card), GeForce 2, GeForce 3 Ti 200 (capacitors blew on that card), HD 3850, and then the 8800GT that brought me years of gaming performance paired with my Q6600 @ 3.6GHz and 8GB of DDR2 1000MHz. My HD 7950 Sapphire still works.
 

branch_suggestion

Senior member
Aug 4, 2023
392
875
96
You can pinpoint the exact moment the mindshare machine won.
Good old Maxwell, it was certainly impressive, but even it didn't quite deserve the success it got.
See, when AMD got the console market monopoly, the whole PCMR movement began... Coincidence? I really don't think so, NV certainly had a hand in it.
Basically a mountain of compounding factors at this time absolutely massacred AMD beyond that.
I should write a complete explanation of why Nvidia keeps winning against AMD, because apparently nobody has noticed yet
I'll do it for you. When your only competitor has to clean out house to survive and tread water to resurrect their CPU division, things tend to get lopsided.
NV serves fewer markets, but markets they believe in and try to prop up. They were on the brink of collapse at a few points; had they failed to bounce back from the 1-2 punch of Bumpgate/Thermi, they would probably have been forced to shelve CUDA development, among other things, to remain solvent.
They took the risky approach of burning profits for market capture in the hope of blowing up, or at least of not missing a big break.
AMD had no such luxury; they serve more markets and chose to prop up CPU at the cost of GPU for 5 years or so. Software was of course kept to what was necessary, as they couldn't dedicate enough resources to go on punitive quests: Mantle was handed over to become the basis of current APIs, and ROCm was really ambitious initially, was then gutted, but is now back to being ambitious and is no longer constrained by money and other resources. Raja-era Radeon did have a lot of big ideas that failed to make inroads, but they really were trying to find a way to get even. HBM took longer to take off than hoped, stuff like SSD caching was always a bit too niche and would have required a bunch of dev support, stuff like that.

What AMD is, is years behind in developing some things, but thankfully catching up is way faster than doing the initial pathfinding that their competitor has done. Open-sourcing the more problematic stuff has helped a lot over the years. The battleground is set for the best from both sides to duke it out, no more caveats.
 

Mahboi

Golden Member
Apr 4, 2024
1,033
1,897
96
I'll do it for you.
I don't think you're the right person to describe AMD objectively. All you do is justify everything they do.
When your only competitor has to clean out house to survive and tread water to resurrect their CPU division, things tend to get lopsided.
Irrelevant; CUDA has been the winning piece for Nvidia since 2008, which is a long time before Bulldozer.
NV serves fewer markets, but markets they believe in and try to prop up. They were on the brink of collapse at a few points; had they failed to bounce back from the 1-2 punch of Bumpgate/Thermi, they would probably have been forced to shelve CUDA development, among other things, to remain solvent.
And that's exactly what I'm saying, they take risks, sometimes very large ones. AMD takes technical risks, not market risks.
They took the risky approach of burning profits for market capture in the hope of blowing up, or at least of not missing a big break.
AMD had no such luxury; they serve more markets and chose to prop up CPU at the cost of GPU for 5 years or so. Software was of course kept to what was necessary, as they couldn't dedicate enough resources to go on punitive quests: Mantle was handed over to become the basis of current APIs, and ROCm was really ambitious initially, was then gutted, but is now back to being ambitious and is no longer constrained by money and other resources. Raja-era Radeon did have a lot of big ideas that failed to make inroads, but they really were trying to find a way to get even. HBM took longer to take off than hoped, stuff like SSD caching was always a bit too niche and would have required a bunch of dev support, stuff like that.

What AMD is, is years behind in developing some things, but thankfully catching up is way faster than doing the initial pathfinding that their competitor has done. Open-sourcing the more problematic stuff has helped a lot over the years. The battleground is set for the best from both sides to duke it out, no more caveats.
Yaddi yadda, I heard all that.

You're justifying AMD's state on circumstances. I'm looking at causes.
The corpo that takes hard chances in a growing market is winning. The corpo that waits for trends and follows is N°2. That's just how an economy works.
AMD has hardware excellence but is holding back on risky investments. They were always like that and outside of trying some really fly things on the technical side, they never really went hard on big ventures. It's just not in their culture.

Let's be real for a second: if Intel hadn't fumbled the ball so hard they somehow shot it down their own throat, if they hadn't been choking on bloated cores and bloated internal designs for 5+ years, the game wouldn't have changed one bit. Zen would be a cheaper, decent, reasonable and cost-effective arch, and Radeon would be the cheaper GPUs vs GeForce. That's it. Nothing has really changed in their attitude since 2010. They got a huge break with Zen/Zen 2 onwards, and it is currently their breadwinner in every possible way outside of MI300/AI. Everything else is doing pretty decently, nothing explosive.

AMD essentially had the lucky break of Intel getting bloated and careless, and pounced. Since then, everything has been going great for them. But they're not market leader in their heads yet. And certainly not in GPUs. They play it safe mostly, take little risks and hardly invest really heavily.
 
  • Like
Reactions: Saylick

branch_suggestion

Senior member
Aug 4, 2023
392
875
96
AMD meanwhile patiently waits for Nvidia to innovate and follows, or for Sony to make a request, or for a market to present itself. This is a fundamental difference in company culture, and it is what thoroughly devalues adroc's or branch_suggestion's opinions that "AMD will just build the biggest, fattest GPU and they'll just win". I have never seen AMD make an unreasonably fat and risky thing unless they were 100% sure that it would sell. This is why they're always N°2: the competition sees the prey and leaps, while AMD waits until the prey has been correctly identified in the bush, mapped, weighed, geolocated by satellite, and genetically tested, and only jumps after every precaution has been taken.
Build the best and they will come.
Literally the corporate strategy of AMD in a nutshell.
It doesn't matter what the market is, the intent is the same. AMD throughout its history has been a follower or a second source, because that is always a sound plan.
It doesn't matter who opens up a new market; nobody is untouchable. NV are very opportunistic because they are always chasing the highest growth possible by throwing stuff at the wall until it sticks, a strategy that can backfire and does leave room to be exploited. And at the end of the day, most of it gets usurped by more focused companies until they are ultimately left with their core business. Hardware is king, forever and always. Third-party or open-source stuff from ISVs is always more palatable than becoming a slave to a single all-in-one vendor.

Nobody got fired for buying IBM, Intel or... It is the same cycle, just with a new subject. AMD might take a decade, but the goal is to out-execute and win the most TAM without burning ISV/OEM/ODM et al. bridges.
 

Mahboi

Golden Member
Apr 4, 2024
1,033
1,897
96
Might I add, RDNA 4 is exactly in line with the typical AMD philosophy.
If Jensen had been told: "Boss, N41, N42 and N43 are all going to be late to market" or "they're all below performance targets", he'd have literally laughed and said "we'll see what marketing can do about it". He wouldn't have cared one second about selling a second grade product and would have focused on whatever strong points it had, even if it was about spamming endless drivel about the Greatness of DLSS for 2 years. Remember Thermi? He sold that. Even made a lot of profits off of it.

Instead, AMD behaved in typical AMD fashion: the products will turn out late or not that good? Just cancel them. Just release smaller stuff. Don't take risks, don't release things that may be difficult to market.
Chad Leather Jacket Man pushes the market against the wall and goes "you want to buy my GPU".
Virgin Advanced Micro Devices silently stares into the middle distance in front of his sales table for the market to come and see what it has to sell.

That's why I don't buy the whole shtick about "RDNA 5 will obliterate Nvidia through sheer force". Whatever it is that AMD will output, Jensen will go harder and dumber just to retain the crown. It's what he's always done, the guy's basically the Terminator, he never stops spamming bigger stuff. And at the game of "who's going for the more insane product", Jensen will always go harder than Lisa.

I don't doubt RDNA 5's engineering, I doubt the company's going to have the nuts to just go out and smash Jensen in the face with enough force that it'll actually do damage.
 

branch_suggestion

Senior member
Aug 4, 2023
392
875
96
I don't think you're the right person to describe AMD objectively. All you do is justify everything they do.
True, but I'm not unique. Look at how some people justify Intel or NV by comparison, I'm quite reasonable.
Irrelevant; CUDA has been the winning piece for Nvidia since 2008, which is a long time before Bulldozer.
And it took a decade for that work to actually yield noteworthy revenue. They had the luxury to stick it out; not everyone does.
And that's exactly what I'm saying, they take risks, sometimes very large ones. AMD takes technical risks, not market risks.
I'll get back to this.
Yaddi yadda, I heard all that.

You're justifying AMD's state on circumstances. I'm looking at causes.
The corpo that takes hard chances in a growing market is winning. The corpo that waits for trends and follows is N°2. That's just how an economy works.
AMD has hardware excellence but is holding back on risky investments. They were always like that and outside of trying some really fly things on the technical side, they never really went hard on big ventures. It's just not in their culture.
AMD has tried more ambitious things, but either they fail to gain traction in time or they fail to execute.
Let's be real for a second: if Intel hadn't fumbled the ball so hard they somehow shot it down their own throat, if they hadn't been choking on bloated cores and bloated internal designs for 5+ years, the game wouldn't have changed one bit. Zen would be a cheaper, decent, reasonable and cost-effective arch, and Radeon would be the cheaper GPUs vs GeForce. That's it. Nothing has really changed in their attitude since 2010. They got a huge break with Zen/Zen 2 onwards, and it is currently their breadwinner in every possible way outside of MI300/AI. Everything else is doing pretty decently, nothing explosive.
At least AMD's CAGR is consistent, not a hype-fuelled mess like NV's has been through both crypto booms and the like. They will revert somewhat to the mean eventually.
AMD essentially had the lucky break of Intel getting bloated and careless, and pounced. Since then, everything has been going great for them. But they're not market leader in their heads yet. And certainly not in GPUs. They play it safe mostly, take little risks and hardly invest really heavily.
They still would've been okay even against an Intel that didn't have overblown node design goals, but just a bit more muted.
Might I add, RDNA 4 is exactly in line with the typical AMD philosophy.
If Jensen had been told: "Boss, N41, N42 and N43 are all going to be late to market" or "they're all below performance targets", he'd have literally laughed and said "we'll see what marketing can do about it". He wouldn't have cared one second about selling a second grade product and would have focused on whatever strong points it had, even if it was about spamming endless drivel about the Greatness of DLSS for 2 years. Remember Thermi? He sold that. Even made a lot of profits off of it.
GP are stupid and someone with an eternal chip on his shoulder sure knows it. RDNA3 has actually performed okay in spite of being meh in every way.
Instead, AMD behaved in typical AMD fashion: the products will turn out late or not that good? Just cancel them. Just release smaller stuff. Don't take risks, don't release things that may be difficult to market.
Chad Leather Jacket Man pushes the market against the wall and goes "you want to buy my GPU".
Virgin Advanced Micro Devices silently stares into the middle distance in front of his sales table for the market to come and see what it has to sell.
That's the difference between a focused and a diversified company: NV doesn't have a choice but to go all in, it is all they have ever known.
That's why I don't buy the whole shtick about "RDNA 5 will obliterate Nvidia through sheer force". Whatever it is that AMD will output, Jensen will go harder and dumber just to retain the crown. It's what he's always done, the guy's basically the Terminator, he never stops spamming bigger stuff. And at the game of "who's going for the more insane product", Jensen will always go harder than Lisa.
NV is behind in what they can cram into a single package, cramming more into a single device has been the #1 focus of AMD R&D ever since Su and Papermaster took over.
I don't doubt RDNA 5's engineering, I doubt the company's going to have the nuts to just go out and smash Jensen in the face with enough force that it'll actually do damage.
It is a GDDR DC accelerator part and a halo client part, so any risk of flopping has been covered. Things in semicon take time, remember a year ago the market tanked badly and anybody who invested hard took a big hit. Patience is a virtue, I have zero doubts that the greater wisdom is not on the side of a guy who cannot help but be envious of what isn't his.
It doesn't work that way. I can only write it so many times.
But it already has worked, just not in every BU yet.

In summary, one company wants to take what people do and do it better than anyone else, and the other wants people to do what it wants them to do so nobody else can.
It really is the fundamental difference in psychology between the CEOs. My beef will always be with JHH for being a flawed human being; his company has always been an outward reflection of him, and so I support anybody who wants to bring them back down to earth. That is where I stand.
His zealots are much the same, losers who so desperately want to be on the winning team, that they will sacrifice all moral and ethical concerns to do so, because that totally never backfires, right?
 
Jul 27, 2020
20,040
13,737
146
Geforce 3 Ti 200 (capacitors blew on that card)
I had the ASUS model. No problems to report, even overclocked. Sold it to upgrade to a relatively cheap Radeon 9000-series card (possibly a 9500 something). It was my first Radeon and my foray into the exciting world of DX9 with all those furry animal demos (visuals as good as those demos' are still not even the minimum that developers aspire to for their games!).
 

Mahboi

Golden Member
Apr 4, 2024
1,033
1,897
96
GP are stupid and someone with an eternal chip on his shoulder sure knows it. RDNA3 has actually performed okay in spite of being meh in every way.
I'm not going to entertain this back and forth forever, but I'll at least entertain this.

RDNA 3 comes after RDNA 2, and I believe there is a post-generation echo effect. Lots of nerds will pay attention to parts and more or less directly shill them to friends and online, but a ton of non-nerds don't buy or don't pay that much attention right away. The time for them to actually buy parts comes eventually, 2, 3, maybe 4 years later.
I assume that one of the reasons for RDNA 3's mild success is that RDNA 2 got overall really solid press. It's not just LTT and some others shilling AMD, it has been a general movement of positivity. RDNA 1 also mildly participated in this.

The other main reason is that, well, duh. Jensen knew there was nothing to fear from RDNA 3 and didn't want to sell Lovelace. He wanted market retention, not market servicing. We got very cut-down cards like the 4070 sold for $600. A 4070 Ti had the gall to sell for $800 while offering as much VRAM as a $350 previous-gen 6700 XT. Even for Nvidia's extreme gimping, this is pushing it.
I have a complete NV drone friend who can't stop shilling them, and even he was disgusted when the $1200 and $1600 prices were announced for the 4080/90.

RDNA 3 has enjoyed a very mild success because Nvidia literally didn't want to compete. That's what really happened here. Had RDNA 3 not been power wrecked, Jensen would've lowered all prices by 20%, released a 4090 Ti (admittedly may have had serious power problems cause of the cable tee hee) with 10% more CUs active, and that would have been this gen.
NV is behind in what they can cram into a single package, cramming more into a single device has been the #1 focus of AMD R&D ever since Su and Papermaster took over.
I will concede on that.
I feel like people are blowing the "muh chipletzzzzzz" side of AMD out of proportion; if AMD can do it at TSMC, then Intel, Nvidia, Qualcomm, anyone can do it at TSMC. But it is undeniable that they are ahead by at least a few years, and the scalability of it will not be matched by any size of monolithic die.
I think NV can probably go to chiplets pretty quickly if they hit a brick wall with monolithic, but they'll definitely be lagging behind in the same way that AMD is lagging behind them in, say, ray tracing.

The only thing I find pretty absurd with this claim is that people act as if "Nvidia cannot into chipletzzzz lawl". Friends, it's Nvidia. They can. They're not led by a donkey. And they have way more money than they need to hire away the right people. If Nvidia hasn't started extensively investing into chipletisation yet, it's because they think it's not worth the investment, not because they can't.
Which leads me to believe that the whole "MOAR CHPIPLETS MOAAAAR" idea is probably not necessary for NV's plans, or is seen as a wild goose chase that they can afford to wait on. Please remember that several techs are just rolling out in the space. I've heard more than a bit about how UCIe for example is just not worth the money Intel has poured into it, while AMD was content with sticking to CXL. However, UCIe will evolve into something better and possibly kick out CXL. That could very very well be the thinking at Nvidia, I'd even bet some money on it. I think deep in there, between the Grand Priests of DLSS yapping like mad and the chants to the God Emperor Jensen, there is a 5 year long timetable with the start of chiplets around RDNA 5 to counter whatever AMD's doing.

(oh hey I've finally found again THAT Forrest Norrod interview that I can never find when I need it)
It is a GDDR DC accelerator part and a halo client part, so any risk of flopping has been covered. Things in semicon take time, remember a year ago the market tanked badly and anybody who invested hard took a big hit. Patience is a virtue, I have zero doubts that the greater wisdom is not on the side of a guy who cannot help but be envious of what isn't his.
I don't know.
Nvidia took over the top spot when it became clear their software/drivers had far more investment behind them than ATI's.
Not that much has changed. Yes the parts can be killer. But I wouldn't be shocked at all if there is a general worry about the software side. Likely not so strongly for enterprise, I suppose, Uzzi already mentioned that they're quite solid there, but I have some healthy skepticism. They are years behind CUDA in all the fine little details, and they can't catch up by RDNA 5. It'll be the hammer vs the chisel. And the chisel may yet prove more interesting for a lot of tasks. I honestly don't expect AMD to be at full software parity or near parity until 2028 or beyond.
In summary, one company wants to take what people do and do it better than anyone else, and the other wants people to do what it wants them to do so nobody else can.
It really is the fundamental difference in psychology between the CEOs. My beef will always be with JHH for being a flawed human being; his company has always been an outward reflection of him, and so I support anybody who wants to bring them back down to earth. That is where I stand.
Yes a lot of people hate Nvidia because it is the mirror of Jensen and Jensen loves to be the Great Dictator.
His zealots are much the same, losers who so desperately want to be on the winning team, that they will sacrifice all moral and ethical concerns to do so, because that totally never backfires, right?
As someone that understands marketing and mass psychology quite well, the way Nvidia creates FOMO and weaponizes their shills into attacking anyone that's threatening Nvidia's narrative is sickening.
I'm also in that camp.
I just don't rationalise that AMD is somehow doing it right because of it. Jensen is good. Lisa is good too, but I doubt that she'll be as good as him when it comes to biting down hard and taking the fight head-on.
 

branch_suggestion

Senior member
Aug 4, 2023
392
875
96
I'm not going to entertain this back and forth forever, but I'll at least entertain this.

RDNA 3 comes after RDNA 2, and I believe there is a post-generation echo effect. Lots of nerds will pay attention to parts and more or less directly shill them to friends and online, but a ton of non-nerds don't buy or don't pay that much attention right away. The time for them to actually buy parts comes eventually, 2, 3, maybe 4 years later.
I assume that one of the reasons for RDNA 3's mild success is that RDNA 2 got overall really solid press. It's not just LTT and some others shilling AMD, it has been a general movement of positivity. RDNA 1 also mildly participated in this.

The other main reason is that, well, duh. Jensen knew there was nothing to fear from RDNA 3 and didn't want to sell Lovelace. He wanted market retention, not market servicing. We got very cut-down cards like the 4070 sold for $600. A 4070 Ti had the gall to sell for $800 while offering as much VRAM as a $350 previous-gen 6700 XT. Even for Nvidia's extreme gimping, this is pushing it.
I have a complete NV drone friend who can't stop shilling them, and even he was disgusted when the $1200 and $1600 prices were announced for the 4080/90.

RDNA 3 has enjoyed a very mild success because Nvidia literally didn't want to compete. That's what really happened here. Had RDNA 3 not been power wrecked, Jensen would've lowered all prices by 20%, released a 4090 Ti (admittedly may have had serious power problems cause of the cable tee hee) with 10% more CUs active, and that would have been this gen.

I will concede on that.
I feel like people are blowing the "muh chipletzzzzzz" side of AMD out of proportion; if AMD can do it at TSMC, then Intel, Nvidia, Qualcomm, anyone can do it at TSMC. But it is undeniable that they are ahead by at least a few years, and the scalability of it will not be matched by any size of monolithic die.
I think NV can probably go to chiplets pretty quickly if they hit a brick wall with monolithic, but they'll definitely be lagging behind in the same way that AMD is lagging behind them in, say, ray tracing.

The only thing I find pretty absurd with this claim is that people act as if "Nvidia cannot into chipletzzzz lawl". Friends, it's Nvidia. They can. They're not led by a donkey. And they have way more money than they need to hire away the right people. If Nvidia hasn't started extensively investing into chipletisation yet, it's because they think it's not worth the investment, not because they can't.
Which leads me to believe that the whole "MOAR CHPIPLETS MOAAAAR" idea is probably not necessary for NV's plans, or is seen as a wild goose chase that they can afford to wait on. Please remember that several techs are just rolling out in the space. I've heard more than a bit about how UCIe for example is just not worth the money Intel has poured into it, while AMD was content with sticking to CXL. However, UCIe will evolve into something better and possibly kick out CXL. That could very very well be the thinking at Nvidia, I'd even bet some money on it. I think deep in there, between the Grand Priests of DLSS yapping like mad and the chants to the God Emperor Jensen, there is a 5 year long timetable with the start of chiplets around RDNA 5 to counter whatever AMD's doing.

(oh hey I've finally found again THAT Forrest Norrod interview that I can never find when I need it)

I don't know.
Nvidia took over the top spot when it became clear their software/drivers had far more investment behind them than ATI's.
Not that much has changed. Yes the parts can be killer. But I wouldn't be shocked at all if there is a general worry about the software side. Likely not so strongly for enterprise, I suppose, Uzzi already mentioned that they're quite solid there, but I have some healthy skepticism. They are years behind CUDA in all the fine little details, and they can't catch up by RDNA 5. It'll be the hammer vs the chisel. And the chisel may yet prove more interesting for a lot of tasks. I honestly don't expect AMD to be at full software parity or near parity until 2028 or beyond.

Yes a lot of people hate Nvidia because it is the mirror of Jensen and Jensen loves to be the Great Dictator.

As someone that understands marketing and mass psychology quite well, the way Nvidia creates FOMO and weaponizes their shills into attacking anyone that's threatening Nvidia's narrative is sickening.
I'm also in that camp.
I just don't rationalise that AMD is somehow doing it right because of it. Jensen is good. Lisa is good too, but I doubt that she'll be as good as him when it comes to biting down hard and taking the fight head-on.
Well yes, the argument we have at the end of the day is whether AMD's approach is the right one.
Opinions get found out eventually, what really matters is the knowledge people have in the debate.
NV is frankly a very predictable, boring company to analyse, yet so many analysts try to dig deeper into them than there really is. A lot like Apple, hehe.
Sure they have a huge amount of software to look at and I'm sure some of that will end up being more important than I credit, but the overall picture is quite simple.
The funny part to me is that there are few NV fanboys who have a proper grasp of what AMD is doing; frustrating, to say the least.
I wish I could say something about Intel but... I dunno, they are kinda all over the place, outside of their volumes and long term fab plans (go pure-play already) they are kinda just there for now.

I think we can get back on topic now.
 
  • Like
Reactions: Tlh97 and Mahboi