> I am curious as to how loud/quiet the default fan curve will be. My 3090 sounds like a vacuum without me changing the fan curve.

That will be up to each individual board maker.
> Not a single question that goes beyond what is already known. Pretty useless one.

Sorry it bored you. For me it was very cool seeing the engineer who basically convinced AMD to do chiplets in the first place talk about it.
> Sorry it bored you. For me it was very cool seeing the engineer who basically convinced AMD to do chiplets in the first place talk about it.

When Zen originally arrived, there were extensive discussions on these fora about the cost savings and the significant binning benefits of chiplets. We found that the cost benefit increased with increased core counts, the opposite of the norm for the industry.
Have we seen those slides before? I haven't seen them. Link if we have?
This is the first time I have seen any solid indication of how much chiplets save AMD. Basically, a monolithic 16-core Ryzen would cost them 2x as much to build as the chiplet one. These savings are MUCH higher than I would have expected, and he said it was based on their internal yield models, which he said were very accurate. So this isn't some vague marketing slide. The same slide also showed a more recent cost increase per area for smaller processes.
Plus the slide on the different scaling of memory, analog, and logic was interesting, and explains why the memory-controller chiplets work so well.
Also very interesting, is what a massive work effort it is to port memory controllers/cache to a new node. I think a lot of the time, people assume porting the same stuff to a new node is trivial. This makes it clear that it's the total opposite of that, and just putting this stuff in a chiplet on the previous node, saves them a massive amount of work, and the kind of work no one really likes. Engineers want to work on new architecture logic, not porting memory controllers.
I also wondered if they were going to use an expensive silicon interposer for the chiplets, but he indicated they are using some much less expensive plastic tech.
In short I got a lot out of it. I hope some others did as well, even if you got nothing.
> When Zen originally arrived, there were extensive discussions on these fora about the cost savings and the significant binning benefits of chiplets. We found that the cost benefit increased with increased core counts, the opposite of the norm for the industry.

Sure, but did anybody expect that the cost was less than half for a 16-core part using chiplets vs. monolithic, or that an 8-core monolithic part would cost about the same as a 16-core chiplet part?
Maybe you can do a search.
> Sure, but did anybody expect that the cost was less than half for a 16-core part using chiplets vs. monolithic, or that an 8-core monolithic part would cost about the same as a 16-core chiplet part?

Exactly. Everyone talked about and theory-crafted the savings. This is the first time I've seen them essentially quantified by AMD, and IMO they are much larger than expected.
> Now that they have these MCDs, it sounds like they can greatly reduce the time and cost of MCDs across generations, and only do a major update on those when it's really needed.

Yup. The MCDs don't gain much from node shrinks, so they can stay on the cheaper node until there is a real reason to move them forward. That really saves a lot of time for things like respins, or porting the GCD to a new process.
@@ -2220,6 +2220,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(10, 3, 6):
case IP_VERSION(10, 3, 7):
case IP_VERSION(11, 0, 1):
+ case IP_VERSION(11, 0, 4):
adev->flags |= AMD_IS_APU;
break;
Phoenix 2

[07/19] drm/amdgpu/discovery: set the APU flag for GC 11.0.4 - Patchwork
patchwork.freedesktop.org
> New GPU added upstream

GC 11.0.4 --> Is this Strix Point? Or a Van Gogh successor?
Strange thing: GC 11.0.1 is indicated as an APU (PHX), but I thought it was N32.
> In short I got a lot out of it. I hope some others did as well, even if you got nothing.

Sorry if it was a discovery for you, but I honestly thought that for most of the educated people here it's not, at least for those building on knowledge not only from tech-press slides.
> New GPU added upstream

Don't know if this is old, but anyway... just curious if an 8-SE ASIC was ever planned.
/**
* GFX11 could support more than 4 SEs, while the bitmap
* in cu_info struct is 4x4 and ioctl interface struct
* drm_amdgpu_info_device should keep stable.
* So we use last two columns of bitmap to store cu mask for
* SEs 4 to 7, the layout of the bitmap is as below:
* SE0: {SH0,SH1} --> {bitmap[0][0], bitmap[0][1]}
* SE1: {SH0,SH1} --> {bitmap[1][0], bitmap[1][1]}
* SE2: {SH0,SH1} --> {bitmap[2][0], bitmap[2][1]}
* SE3: {SH0,SH1} --> {bitmap[3][0], bitmap[3][1]}
* SE4: {SH0,SH1} --> {bitmap[0][2], bitmap[0][3]}
* SE5: {SH0,SH1} --> {bitmap[1][2], bitmap[1][3]}
* SE6: {SH0,SH1} --> {bitmap[2][2], bitmap[2][3]}
* SE7: {SH0,SH1} --> {bitmap[3][2], bitmap[3][3]}
*/
> Yup. The MCDs don't gain much from node shrinks, so they can stay on the cheaper node until there is a real reason to move them forward. That really saves a lot of time for things like respins, or porting the GCD to a new process.

To me it sounded like they might reuse the MCD design for RDNA 4.
> To me it sounded like they might reuse the MCD design for RDNA 4.

What it means is they won't have to port the design to a new node. Porting a design is WAY more work than updating an existing design on the same node. They can very easily make tweaks to the current design to improve it while keeping it on the same node.
> Pricing entirely depends on clocking. With the 7900 XT being $899 and a high-clocking N32 coming rather close to it in performance, I think even $699 is wishful thinking for a 7800 XT. That would be a $200 gap, or an almost 30% jump. The 7900 XT is already unattractive right now, and a 7800 XT with, say, 15% less performance for 30% less money would make it useless. Therefore I expect it either to be priced higher or to not really hit those 3 GHz clocks. I think the performance gap will need to be at least 20% for a $699 price.

But they will want to make the 7800 XT quite unattractive, because they will have great yields with such a small GCD. So it makes the most sense to give the cut dies a poor perf/$, so only people who really have a specific budget buy them. That way, they don't have to sell fully functioning chips as a lower tier. So it's better for AMD if the 7900 XTX (full N31), 7800 XT (full N32), and 7600 XT (full N33) are the big sellers.
> GN talk with an AMD engineer about chiplets:

One part that's news to me is that Naffziger was pushing for chiplets back in 2016. That seemed late to me, but I guess the Zeppelin MCM is not considered chiplets yet. Still, this claim means AMD made the decision to disaggregate compute and I/O only sometime in 2016 and launched it with Zen 2 in 2019 (and has kept the formula essentially unchanged since). That's still quite a speedy TTM for a previously unproven tech.
> What it means is they won't have to port the design to a new node. Porting a design is WAY more work than updating an existing design on the same node. They can very easily make tweaks to the current design to improve it while keeping it on the same node.

I'm not sure if that's still the plan, but the original roadmap for RDNA4 was 3nm GCD + 5nm MCD.
So not only are the MCDs being manufactured on a really cheap, reliable process, but there is way less overhead when it comes to any required changes.
> I'm not sure if that's still the plan, but the original roadmap for RDNA4 was 3nm GCD + 5nm MCD.

That'd be a surprise. I'd honestly expect something like N4 GCD + N6 MCD for midrange and N3E GCD + N6 MCD for the high end of RDNA 4. Not sure if we'll see dual GCDs, but it would be nice to see that in the next generation.
> I'm not sure if that's still the plan, but the original roadmap for RDNA4 was 3nm GCD + 5nm MCD.

Wonder if this will allow for an RDNA3+ gen where we get a tweaked GCD and unchanged MCD (maybe more of them?)
> That will be up to each individual board maker.

I was referring to the reference card. When it comes to third parties, they nearly always throw efficiency out the window.
> Wonder if this will allow for an RDNA3+ gen where we get a tweaked GCD and unchanged MCD (maybe more of them?)

My current thought is this: no, they will not touch the MCDs; yes, they will tweak the GCD. For the *50 versions we will see higher clocks, similar to last gen. We won't see significant changes unless AMD moves N32 chips to an N31 die, which I don't think will happen. However, if there is an actual clock-frequency "bug", we will see some rather large uplifts in frequency without a similarly large uplift in power consumption, or maybe WITH a large uplift in power consumption. Early rumors for Navi 31 were claiming a 400-450 W TGP; maybe THAT part is the *50 part. I'm not going to bother speculating, TBH.
> That'd be a surprise. I'd honestly expect something like N4 GCD + N6 MCD for midrange and N3E GCD + N6 MCD for the high end of RDNA 4. Not sure if we'll see dual GCDs, but it would be nice to see that in the next generation.

Same here. They may switch to a new node when they switch to GDDR7, but even then it is questionable. Why switch to N5 unless those wafers really drop in price?