Info 64MB V-Cache on 5XXX Zen3 Average +15% in Games


Kedas

Senior member
Dec 6, 2018
Well, we now know how they will bridge the long wait to Zen 4 on AM5 in Q4 2022.
V-Cache production starts at the end of this year, which is too early for Zen 4, so this is certainly coming to AM4.
Lisa said the +15% is "like an entire architectural generation".
 
Last edited:
  • Like
Reactions: Tlh97 and Gideon

nicalandia

Diamond Member
Jan 10, 2019
Pretty disappointed by AMD's 2022 CES keynote. Would have bought a 5950X3D for $1000 in a heartbeat. (Was planning to buy a few so I could bin them myself.)
Seeing how a single Milan-X 8-core chiplet of that quality sells for at least $900, and they are selling all of them, I doubt they would put two of them into a $1000 5950X3D CPU.
 

Det0x

Golden Member
Sep 11, 2014
Seeing how a single Milan-X 8-core chiplet of that quality sells for at least $900, and they are selling all of them, I doubt they would put two of them into a $1000 5950X3D CPU.
Do you expect the 5800X3D to cost over $600?
I can live with a limited-quantity halo product just fine. And it would have been good PR for AMD in benchmark charts on different forums as well.
(I like to compete and benchmark.)
 
Last edited:

Makaveli

Diamond Member
Feb 8, 2002
Do you expect the 5800X3D to cost over $600?
I can live with a limited-quantity halo product just fine.

Right now that is the question. AMD hasn't really done an MSRP price reduction in response to ADL-S, which we all know is due.

Will that happen before they launch this? If so, I think $499 USD; if not, maybe $599?
 

Insert_Nickname

Diamond Member
May 6, 2012
And before anyone says anything, my overclocking and computer-tweaking hobby is very cheap compared to my other hobbies (HiFi and cars).

I'm not. I feel the same way, except I have a thing about maximising value and performance per watt and dollar (except it's Krone here). Of course, that's a challenge right now.
 

Det0x

Golden Member
Sep 11, 2014
A stock Intel 12900K with the same MB/RAM configuration AMD used as a test bed gets about 180 FPS in Far Cry 6 (1920x1080); the 5800X3D gets a 10% boost on top of that.
Since you chose to quote me on exactly that and link my game benchmarks:
Can you show me where a stock ADL 12900K gets ~180 fps in Far Cry 6 @ 1080p Ultra?

(screenshot: benchmark results)
(Just a hint: that 353 fps CPU average in SotTR is the fastest result for Zen 3 "in the world".)

I'm trying to bench and compete against SP 103 12900Ks running close to 5.8 GHz with DDR5 @ CL36 ~7000 MT/s over at overclockers.net.

Let's just say a 5950X3D would make my life (hobby) both much easier and more fun. :p
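Side note for anyone skimming the numbers: here is roughly what the disputed percentages work out to, as a quick illustrative Python sketch. Only the ~180 FPS baseline and the +10% / +15% claims come from this thread; everything else is just for clarity.

Code:
# Rough arithmetic for the quoted uplifts (illustrative only).

def projected_fps(baseline_fps: float, uplift_pct: float) -> float:
    """Scale a baseline frame rate by a claimed percentage uplift."""
    return baseline_fps * (1 + uplift_pct / 100)

fc6_12900k = 180.0                            # ~180 FPS claimed for a stock 12900K in Far Cry 6 @ 1080p
print(round(projected_fps(fc6_12900k, 10)))   # -> 198, what a +10% lead would imply for the 5800X3D here
print(round(projected_fps(fc6_12900k, 15)))   # -> 207, if the headline +15% average applied to this title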
 
Last edited:
  • Like
Reactions: Tlh97 and Joe NYC

Mopetar

Diamond Member
Jan 31, 2011
5800X3D it is: https://videocardz.com/newz/amd-con...-launches-this-spring-zen4-raphael-in-2h-2022

Other models MIA, I guess they are betting on Zen 4 instead?

If production capacity is limited then you cut it in half by releasing anything that needs two CCDs. Maybe we'll get those later or not at all depending on production capacity, but if you're going to sell what's branded as the best gaming CPU then it's probably better to have more of them for sale.

This single Zen3D is just a formality.
Too little, too late; better to save the money for AM5 and Zen 4 at the end of the year.

Unless you were buying into the nonsense of Zen3D being a generational replacement because Zen 4 was delayed, this shouldn't seem surprising. It's not much different from the 3000 XT CPUs that came out a few months before the Zen 3 launch. A nice boost to what was previously available, but nothing beyond that. Frankly, this steps things up a lot more than what we'd normally get, so it's hard to complain.

If you're on AM4 and using a Zen 2 CPU then this might be a reasonable upgrade because AM5 is probably going to be DDR5 only judging from the other announcements. Zen 3D offers a good upgrade path for customers who don't want to have to make the jump to an entirely new platform. Zen 3 already offered a big leap over Zen 2 and Zen 3D builds on that even more. It's not often you can get such a giant bump in performance on the same platform.
 

nicalandia

Diamond Member
Jan 10, 2019
Can you show me where a stock ADL 12900K gets ~180 fps in Far Cry 6 @ 1080p Ultra?
Not Ultra

(screenshot: Far Cry 6 benchmark result)


This is what AMD used in their testing:

"Intel Core i9-12900K and ROG Maximus Z690 Hero motherboard with BIOS 0702 and 2x16GB DDR5-5200. GeForce RTX 3080 on driver 472.12, Samsung 980 Pro 1TB, NZXT Kraken X62, Windows 11 28000.282. R5K-107"

How does that compare to what you would normally find in most enthusiasts' rigs?


(screenshot attachment)
 
  • Like
Reactions: lightmanek

Det0x

Golden Member
Sep 11, 2014

moinmoin

Diamond Member
Jun 1, 2017
With the V-Cache launch being in Q2, it's ever closer to Zen 4, making it even more of a filler. Its existence is a competitive necessity to beat Alder Lake, for which the 8c 5800X3D is sufficient. Also, whereas the 5800X had kind of a bad image price/performance-wise, doing a good job of upselling people to the 5900X or letting them drop to the 5600X, the 5800X3D being the sole V-Cache chip lets it stand out more. Regarding quantity, AMD likely planned a specific, limited amount of V-Cache dies for the consumer/DIY market, and limiting the launch to the single-CCD 5800X3D lets them make the most of it.

Nevertheless AMD's focus on the server market is kind of painful to watch from the DIY POV.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
Yup, that is the hypothesis I had since Zen3d was put on hold on desktop and all of the H2 2021 production was diverted to Milan-X.

(I seem to recall I was harshly criticized by one of the moderators for saying desktop Zen3d was put on hold.)

OTOH, good thing they are primarily targeting gaming with the 5800X3D. Hopefully it will get some highly binned parts, rather than the worst, as was the case with the 5800X.

It will be interesting to see the 5800X3D beat the 5950X in gaming, proving that those extra cores were almost useless for gaming.
So, TODAY Zen3D was announced, and not even a definitive date was set, just hinted at. So why, again, do you say it was put on hold? And per your previous post, a "non-launch" of a product not announced yet? It was never announced before. Below is your original post:

 

nicalandia

Diamond Member
Jan 10, 2019
Would a full Zen 3+ (6nm + DDR5) at 5.2 GHz have been enough to match Alder Lake in gaming? Extrapolating from the information we have about Rembrandt, it would have been very close in gaming, but I think they don't have enough 6nm capacity.
 

jamescox

Senior member
Nov 11, 2009
Server CPUs typically take lower-leakage (and thus lower max turbo) bins. If a die gets rejected from Milan-X, it is usually because it is too leaky. Those too-leaky CPUs shouldn't need a reduced single-core turbo, though, only a reduced turbo when lots of cores are loaded.

Maybe Milan-X bins are even tighter than Milan to try and get the most efficient dies to somewhat offset the increased power brought by the larger L3? Even if so, the CCD fabrication is separate from the V-cache, so if they have CCDs that don't meet the stricter Milan-X bins but can still be used for Milan, then why send them for 3D packaging in the first place? To this effect, there shouldn't be any Milan-X cast-offs, as the CCD will fail binning for Milan-X before being thinned and stacked. Maybe they aren't binning at all prior to stacking, but that would seem very foolish and wasteful.

I think AMD's explanation makes the most sense. With the increased cost to produce, they are using the 5800x3d as a test case to see how the market reacts. This will have minimal disruption to their other lines and give AMD market feedback on 3d cache consumer products.
AFAIK, that is not how it works. The CPU die is not thinned after the wafer is diced; the whole wafer is polished down, so there is no binning before thinning. In fact, the stacking process is likely at wafer level also: they thin the CPU wafer and then make a "reconstituted" or "carrier" wafer that contains the (diced) cache chips and filler silicon. The entire wafer is stacked, bonded, and then diced. I assume this is also why it is limited to wafers made at the same facility.

This should work fine for binning, since Milan-X should be able to absorb parts with defective cores. They will probably go down to 2 or maybe even 1 core active for special large-cache-per-core versions like the current 72F3, which is 8 cores and 256 MB of L3. That one is the full 8 CCDs with just one core active per chip. Those can be a little more leaky since the core count is so low, but the rated TDP is also not that high, so there are some limits. They don't have a use for leaky 8-core dies in Milan-X, but the lower-core-count dies are mostly still usable for Milan or Milan-X parts of some kind. The fully enabled 8-core dies are mostly used in 32- or 64-core devices, where power consumption would be limiting, so they should have a lot of 8-core dies that are not usable in Milan-X.
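To put a rough number on the "absorb parts with defective cores" point, here is a minimal Poisson yield sketch. The ~80.7 mm² CCD size is the published Zen 3 figure; the defect density is an assumed placeholder, not an AMD or TSMC number, so treat the output as illustrative of the mechanism only.

Code:
# Minimal Poisson yield sketch for the core-salvage argument.
# Assumption: D0 below is a placeholder defect density, not a real TSMC N7 figure.
import math

CCD_AREA_CM2 = 0.807     # Zen 3 CCD is roughly 80.7 mm^2
D0 = 0.10                # assumed defects per cm^2 (placeholder)

lam = CCD_AREA_CM2 * D0  # expected random defects per CCD

def p_defects(k: int) -> float:
    """Poisson probability of exactly k random defects on one CCD."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

full_die = p_defects(0)  # defect-free: candidate for fully enabled 8-core bins
salvage = 1 - full_die   # >= 1 defect: candidate for lower-core-count bins (or scrap)
print(f"zero-defect dies: {full_die:.1%}, salvage/scrap candidates: {salvage:.1%}")

The mechanism, not the exact split, is the point: low-core-count SKUs like the 72F3 give dies with dead or leaky cores somewhere to land after the wafer-level stacking described above.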

Only selling the 8 core makes sense in a lot of ways. The number of people who have a 12 or 16 core and who would be willing to upgrade to an X3D version is very small. They would probably rather have those people spend the big money later in the year for a completely new Zen 4 system. Going to dual CCD devices cuts volume in half for something that is likely already low volume. It also might cannibalize other sales if they released such a device. I would definitely consider such a 16 core for a compile machine instead of Epyc or Threadripper / Threadripper pro if it was available. If the device had come out a few months ago, then a more full product range might have made sense. A lot of things have been delayed due to covid shutdowns and such. Even regular Milan availability seemed to be a problem for a while. We are not going to get new threadrippers until the Milan backlog can be resolved.

For most people, it will be a case of waiting for Zen 4 unless you need a new system right now or have an older chip that can be upgraded. It is likely not going to be worth it if you already have a 12 or 16 core. I would still consider a 5800X3D because I have dealt with board revision 1.0 before and I have an ancient desktop system at the moment with no upgrade path. Zen 4 will be new everything and expensive. The 5800X3D will probably be the last and highest-performance AM4 part, except in some heavily threaded applications where 12 or 16 cores may still win. Waiting for Zen 4 and DDR5 might be a longer wait than expected due to the inevitable shortages.
 

inf64

Diamond Member
Mar 11, 2011
Would a full Zen 3+ (6nm + DDR5) at 5.2 GHz have been enough to match Alder Lake in gaming? Extrapolating from the information we have about Rembrandt, it would have been very close in gaming, but I think they don't have enough 6nm capacity.
Maybe, but they don't need such a part. The 5800X3D will beat all of the ADL parts, except maybe tying that new furnace Intel is advertising on Twitter. AMD will do great in both mobile and desktop; they will basically have top or near-top performance in each segment. The only missing piece, the ultimate enthusiast chip, would be a 5900X3D. Come Q3 and Zen 4, it will be a reprise of Zen 3 vs. Comet Lake thanks to the massive IPC, V-Cache and clock uplift Zen 4 will bring.
 

Hitman928

Diamond Member
Apr 15, 2012
AFAIK, that is not how it works. The CPU die is not thinned after the wafer is diced; the whole wafer is polished down, so there is no binning before thinning. In fact, the stacking process is likely at wafer level also: they thin the CPU wafer and then make a "reconstituted" or "carrier" wafer that contains the (diced) cache chips and filler silicon. The entire wafer is stacked, bonded, and then diced. I assume this is also why it is limited to wafers made at the same facility.

When using thinned dies in the past, they were always diced first, as getting a flat thinning across the entire wafer is difficult; that's what the foundry told us, anyway, but it probably comes down to the tools used (there are different methods of thinning) and how thin you want the substrate. I imagine the thinner the substrate you need, the more likely it is that it will have to be diced first, as you also lose structural stability after thinning. You are probably right, though, that for HVM like AMD is going for, and with only a single stack, the whole wafer is thinned first.

This should work fine for binning, since Milan-X should be able to absorb parts with defective cores. They will probably go down to 2 or maybe even 1 core active for special large-cache-per-core versions like the current 72F3, which is 8 cores and 256 MB of L3. That one is the full 8 CCDs with just one core active per chip. Those can be a little more leaky since the core count is so low, but the rated TDP is also not that high, so there are some limits. They don't have a use for leaky 8-core dies in Milan-X, but the lower-core-count dies are mostly still usable for Milan or Milan-X parts of some kind. The fully enabled 8-core dies are mostly used in 32- or 64-core devices, where power consumption would be limiting, so they should have a lot of 8-core dies that are not usable in Milan-X.

I don't know about there being a lot of leaky dies (relative to the total number). Modern processes are getting pretty good at constraining process variation, though I don't know the specifics of 7 nm. With the node being this mature, I imagine they have it pretty well under control. Obviously I'm not privy to the actual data, though.

Only selling the 8 core makes sense in a lot of ways. The number of people who have a 12 or 16 core and who would be willing to upgrade to an X3D version is very small. They would probably rather have those people spend the big money later in the year for a completely new Zen 4 system. Going to dual CCD devices cuts volume in half for something that is likely already low volume. It also might cannibalize other sales if they released such a device. I would definitely consider such a 16 core for a compile machine instead of Epyc or Threadripper / Threadripper pro if it was available. If the device had come out a few months ago, then a more full product range might have made sense. A lot of things have been delayed due to covid shutdowns and such. Even regular Milan availability seemed to be a problem for a while. We are not going to get new threadrippers until the Milan backlog can be resolved.

Yes, it makes sense. I just disagreed that the reason we went from AMD implying at least a 5900X would get V-Cache to only a 5800X is that AMD was surprised by the Milan-X demand or by how many leaky dies they were getting through fabrication. Neither of those explanations makes sense to me. I'm still not sure about the market position, though. If AMD is accurate in the +15% performance estimate, then it's not much added value for someone who would shop at the 5800X level, especially since they are likely to be GPU limited anyway. I think people who just want the best, period, are more likely to spend the extra money and be shopping at the 5900X-5950X level to begin with. We'll have to wait and see what the final cost is, though.
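On the "likely to be GPU limited anyway" point, a toy min() model shows why a CPU-side uplift can vanish at higher settings. All frame rates below are made up for illustration; only the +15% figure echoes AMD's claim.

Code:
# Toy model: delivered FPS is capped by whichever of the CPU or GPU is slower.

def delivered_fps(cpu_limit_fps: float, gpu_limit_fps: float) -> float:
    """Frame rate is bounded by the slower of the two limits."""
    return min(cpu_limit_fps, gpu_limit_fps)

cpu_base = 200.0              # hypothetical CPU-limited frame rate for a 5800X
cpu_x3d = cpu_base * 1.15     # +15% from the extra L3, per the headline claim

for gpu_limit in (120.0, 300.0):   # e.g. GPU-bound at 1440p/4K vs. CPU-bound at 1080p low
    print(f"GPU limit {gpu_limit:.0f}: {delivered_fps(cpu_base, gpu_limit):.0f} "
          f"-> {delivered_fps(cpu_x3d, gpu_limit):.0f} FPS")

With a 120 FPS GPU ceiling the +15% never shows up; with a 300 FPS ceiling the full uplift comes through.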

For most people, it will be a case of waiting for Zen 4 unless you need a new system right now or have an older chip that can be upgraded. It is likely not going to be worth it if you already have a 12 or 16 core. I would still consider a 5800X3D because I have dealt with board revision 1.0 before and I have an ancient desktop system at the moment with no upgrade path. Zen 4 will be new everything and expensive. The 5800X3D will probably be the last and highest-performance AM4 part, except in some heavily threaded applications where 12 or 16 cores may still win. Waiting for Zen 4 and DDR5 might be a longer wait than expected due to the inevitable shortages.

Yes, it still offers a very good upgrade path for a lot of people. It will just come down to how much AMD charges for it. If it creeps too close to a 5900X, I can see even people who only game going for the 5900X instead, to have more cores and be more comfortable about future performance, or because they do (or at least think they do) enough in parallel with gaming to justify the additional cores.
 
  • Like
Reactions: lightmanek

Hans Gruber

Platinum Member
Dec 23, 2006
This looks to be turning into a complete debacle for AMD. I know a lot of you will deny this. The 5800X hit $299 just after the Alder Lake release. Nobody is going to pay $600 for 15% more gaming performance, except the people here, I guess. That is not smart investing. It's sort of like buying B-die for 2-3% more performance at a 100%+ premium. It makes the 5900X the CPU of choice if you start throwing around $600 price tags.
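For what it's worth, the value complaint can be put in dollars-per-performance terms using only the figures in the post above ($299 street price, a feared $600 price tag, the +15% claim). A rough sketch:

Code:
# Dollars per unit of relative gaming performance (illustrative; the prices and the
# +15% uplift are the figures from the post above, not confirmed SKU pricing).

def price_per_perf(price_usd: float, relative_perf: float) -> float:
    """Dollars paid per unit of relative gaming performance."""
    return price_usd / relative_perf

base = price_per_perf(299, 1.00)   # 5800X at its post-Alder-Lake street price
x3d = price_per_perf(600, 1.15)    # hypothetical $600 5800X3D with the +15% claim
print(f"5800X: ${base:.0f}/perf, 5800X3D: ${x3d:.0f}/perf")   # ~$299 vs ~$522 per perf unit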
 

DrMrLordX

Lifer
Apr 27, 2000
^ Not ragging on your hobby since I do the same, but knowing Zen 4 is coming up in less than a year, swapping a 5950X for a slightly boosted 5950* sounds like a chore. A 5800X or 5900X, sure, if there are noticeable multithreaded use cases.

Could be some edge cases where the massive L3 will make a major difference. The focus is on gaming, but that is not the only area where cache matters. If you happen to have a workload that benefits then it'll be an amazing CPU.

The fact that the single-core frequency is lower means that diffusion of the heat is too slow.

If you will recall, the 5800X was the hardest-to-cool of all Vermeer CPUs. Stacking L3 on top is only going to make things worse for that SKU, especially since they have not changed the process. 12c and 16c Vermeer-X should be easier to cool.

A pity, but the French guy was right: AMD is at the mercy of the foundries.

One gets the impression that AMD just doesn't care. They are serving their most-lucrative market. They're more than willing to make sacrifices on desktop. Besides, their competition will feel the bite of restricted wafer access very soon.

Would a full Zen 3+ (6nm + DDR5) at 5.2 GHz have been enough to match Alder Lake in gaming? Extrapolating from the information we have about Rembrandt, it would have been very close in gaming, but I think they don't have enough 6nm capacity.

If they had wanted the 6nm, TSMC would probably have made it happen. Probably. That's the upgrade path TSMC wants partners to take anyway. That product would probably have been Warhol. It was cancelled for some reason. If it ever really existed!

This looks to be turning into a complete debacle for AMD.

How? Everything they aren't selling to you and me, they are selling for a significant markup to Microsoft et al. They are crying their way to the bank. It is funny watching people make predictions about how Alder Lake, Raptor Lake, Arrow Lake, etc. will pwn AMD when AMD is getting prepared for a run on server market share. AMD just doesn't care. It's not a debacle for AMD. It's a debacle for us.
 
Jul 27, 2020
I suspect they had a lot of unsold Ryzen 7 stock that they decided to repurpose by adding V-Cache. It seems people with money went for Ryzen 9s, those on a budget went for Ryzen 5s, and the Ryzen 7 ended up being the least popular in terms of sales. That also seems to be why Zen 3+ is going into laptops and not desktops: mobile CPUs are bought by OEMs, not end users, so AMD is in a better position to sell available inventory due to their design wins. On the desktop side, they are at our mercy, people who want to pay not that much for the best performance (hey, we are not millionaires!). Same reason they would rather divert their silicon to the server sector.
 

Doug S

Platinum Member
Feb 8, 2020
When using thinned dies in the past, they were always diced first, as getting a flat thinning across the entire wafer is difficult; that's what the foundry told us, anyway, but it probably comes down to the tools used (there are different methods of thinning) and how thin you want the substrate.


Thinning, then dicing, then stacking may have the problem you describe. But thinning, stacking (including the substrate) and THEN dicing would not. I have no idea if they are thinning wafers or dies, just pointing out that thinning the wafer would probably not involve dicing as the next step.
 

deasd

Senior member
Dec 31, 2013
If production capacity is limited then you cut it in half by releasing anything that needs two CCDs. Maybe we'll get those later or not at all depending on production capacity, but if you're going to sell what's branded as the best gaming CPU then it's probably better to have more of them for sale.

Unless you were buying into the nonsense of Zen3D being a generational replacement because Zen 4 was delayed, this shouldn't seem surprising.
I would rather guess that AMD has a plan B of a 5950X3D/5900X3D in case Zen 4 gets held back and delayed by other factors, such as DDR5 pricing. Everything could be variable.

We have a zero tolerance policy for profanity in the tech sub-forums.
Hiding profanity with symbols doesn't make it OK.
Consider this your only free pass and don't do it again.

Iron Woode

Super Moderator
 
Last edited by a moderator: