Info 64MB V-Cache on 5XXX Zen3 Average +15% in Games


Kedas

Senior member
Dec 6, 2018
355
339
136
Well, we now know how they will bridge the long wait until Zen 4 on AM5 in Q4 2022.
Production of V-Cache starts at the end of this year, which is too early for Zen 4, so this is certainly coming to AM4.
The +15%, Lisa said, is "like an entire architectural generation"
 
Last edited:
  • Like
Reactions: Tlh97 and Gideon

Joe NYC

Golden Member
Jun 26, 2021
1,928
2,269
106
I'm sure there is at least some sort of trade-off with respect to total package power and peak clocks due to thermals. While the L3 dies aren't big power hogs, they are certainly not free either.

Like there was a trade-off with going to chiplets.

If you go from 1 chiplet to 2, it may seem like you are barely making up the lost ground. But then you can go to 8 chiplets and blow the doors off everything else on the market.

It seems like people are having a hard time believing that 3D stacking opens a new dimension of performance scaling, just as the chiplet approach did.

AMD did not hold back with chiplets; it went all in and took the performance crown in the most important (server) market. Some people may have said that going with 8 chiplets and 64 cores was overkill, would use too much power, would have too much overhead, would hit diminishing returns, and all that. AMD went there anyway.

Why do people think AMD will, or should, hold back with L3 stacking? Unlike adding chiplets, the incremental cost of adding extra layers of L3 is ridiculously low, and the potential performance gains come with the fewest trade-offs and drawbacks.

If 4, 8, or 12 layers of L3 deliver bigger performance gains in some applications than Zen 3 -> Zen 4 or even Zen 5, there is no reason not to go there...

Apple is certainly not going to hold back. Apple does not owe anything to the rest of the PC market.

While Intel and AMD held back the performance gains from adding DRAM to the CPU package, for the benefit of OEMs, Apple trampled over that silly arrangement and went ahead with DRAM in the CPU package...

If AMD holds back from going all the way on SRAM stacking, there are too many competitors out there; someone else will use it fully, and the market opportunity will be lost.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Oh, I'm not suggesting that AMD hold back anything. I was just pointing out that, while there is a gain in memory-constrained performance with the SRAM stack, there is a real trade-off to be aware of. The stack will make it more difficult to dissipate heat from the individual cores. It may be only slight, but it's there. In practice, it may only limit peak boost speeds to slightly shorter durations, or shave 100 MHz off the top clocks while fully loaded, but there will be an effect. It can be mitigated, but those measures will also not be free.

In the APU world, that tradeoff could be more expensive to overcome than is worth it. You are in an even more thermally constrained environment. However, the performance gain possibilities are more significant. The GPU section lives starved for bandwidth, and a stack of L3 just for an infinity cache would be a big uplift there. The problem in an APU is that there won't be a convenient place to stack that cache because the iGPU doesn't have a big section of just cache that can be covered and interfaced with. That's not to say that an APU couldn't be floorplanned to have a big stripe down the center that covers the CCX L3 and lines up with a section of uncore that isn't a hot spot. That would allow a long cache die to cover both adding L3 to the CCX and an infinity cache to the GPU without causing major thermal issues.

There are a lot of positives to an APU with a cache stack that go beyond the performance uplift that more cache brings, not the least of which is less use of the memory bus. However, it also means that the GPU and the cores will spend less time waiting on memory and more time working, which means more generated heat that now has to pass through a less-than-ideal die arrangement. It also means more power used by the processor, though that will be a function of "rush to completion" and will allow other power management strategies.
 

Joe NYC

Golden Member
Jun 26, 2021
1,928
2,269
106
Oh, I'm not suggesting that AMD hold back anything. I was just pointing out that, while there is a gain in memory-constrained performance with the SRAM stack, there is a real trade-off to be aware of. The stack will make it more difficult to dissipate heat from the individual cores. It may be only slight, but it's there. In practice, it may only limit peak boost speeds to slightly shorter durations, or shave 100 MHz off the top clocks while fully loaded, but there will be an effect. It can be mitigated, but those measures will also not be free.

2 things:
- while at the individual CCD level there may be some degradation in heat dissipation, on the identical CCD die the TSVs also present an opportunity to conduct heat up toward the lid of the package. Say the "structural silicon" has TSVs that align with those in the CCD and also gets bonded. There would be a cost to it, so it may not be worth it.
- the total package power under load may actually go down, not up, when the L3 is getting a good hit rate, so the share of the package power envelope available for useful work may go up. By useful work, I mean running code, rather than sending requests to the I/O die, the I/O die sending the request off-package to memory, retrieving the data from memory, and sending the result back to the CCD. (Rough sketch below.)
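To put a rough number on that second point, here is a back-of-envelope sketch. The per-access energy figures are assumptions for illustration (not AMD data); the shape of the result is what matters: the better the L3 hit rate, the less energy is burned shuttling requests through the I/O die and out to DRAM.

# Back-of-envelope sketch: how a higher L3 hit rate can cut the energy spent
# on off-package memory traffic. All numbers are assumed round figures.

E_L3_HIT_NJ = 1.0   # assumed energy per L3 hit (on-CCD SRAM access), in nJ
E_DRAM_NJ = 20.0    # assumed energy per DRAM access incl. IOD and IFOP hops, in nJ

def avg_energy_per_request(hit_rate: float) -> float:
    """Average energy per last-level-cache request, in nJ.
    A miss still pays the L3 lookup before going out to DRAM."""
    return hit_rate * E_L3_HIT_NJ + (1.0 - hit_rate) * (E_L3_HIT_NJ + E_DRAM_NJ)

for hr in (0.50, 0.70, 0.90):
    print(f"hit rate {hr:.0%}: {avg_energy_per_request(hr):.1f} nJ per request")

# With these assumptions, going from a 50% to a 90% hit rate drops the average
# from ~11 nJ to ~3 nJ per request, which is the basis for the claim that more
# of the package power budget becomes available for useful work.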

In the APU world, that tradeoff could be more expensive to overcome than is worth it. You are in an even more thermally constrained environment. However, the performance gain possibilities are more significant. The GPU section lives starved for bandwidth, and a stack of L3 just for an infinity cache would be a big uplift there. The problem in an APU is that there won't be a convenient place to stack that cache because the iGPU doesn't have a big section of just cache that can be covered and interfaced with. That's not to say that an APU couldn't be floorplanned to have a big stripe down the center that covers the CCX L3 and lines up with a section of uncore that isn't a hot spot. That would allow a long cache die to cover both adding L3 to the CCX and an infinity cache to the GPU without causing major thermal issues.

There are a lot of positives to an APU with a cache stack that go beyond the performance uplift that more cache brings, not the least of which is less use of the memory bus. However, it also means that the GPU and the cores will spend less time waiting on memory and more time working, which means more generated heat that now has to pass through a less-than-ideal die arrangement. It also means more power used by the processor, though that will be a function of "rush to completion" and will allow other power management strategies.

Good points on APU.

The APUs have not benefited at all from chiplet partitioning, probably because the power overhead of the I/O-die hop is simply unaffordable in the laptop space.

The benefits of stacked L3 cache in an APU would be huge, but getting there is difficult. There is an L3 area in Cezanne, but who knows if it is designed for stacking... It is probably not happening with Zen 3.

I think in an APU the stacked L3 would probably be a win-win for both power and performance, with no trade-off. AMD may just not have enough design teams to bring every SoC permutation to market.
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
2 things:
- while at the individual CCD level there may be some degradation in heat dissipation, on the identical CCD die the TSVs also present an opportunity to conduct heat up toward the lid of the package. Say the "structural silicon" has TSVs that align with those in the CCD and also gets bonded. There would be a cost to it, so it may not be worth it.
- the total package power under load may actually go down, not up, when the L3 is getting a good hit rate, so the share of the package power envelope available for useful work may go up. By useful work, I mean running code, rather than sending requests to the I/O die, the I/O die sending the request off-package to memory, retrieving the data from memory, and sending the result back to the CCD.



Good points on APU.

The APUs have not benefited at all from chiplet partitioning, probably because the power overhead of the I/O-die hop is simply unaffordable in the laptop space.

The benefits of stacked L3 cache in an APU would be huge, but getting there is difficult. There is an L3 area in Cezanne, but who knows if it is designed for stacking... It is probably not happening with Zen 3.

I think in an APU the stacked L3 would probably be a win-win for both power and performance, with no trade-off. AMD may just not have enough design teams to bring every SoC permutation to market.
Having TSVs purely to conduct heat is not cost-effective in my view.

Si: 148 W/m·K
Cu: 401 W/m·K

With 1% of your die area (a big number) being TSVs, the effective conductivity would only go from 148 to about 150.5 W/m·K.
Even if 10% of your die area (a huge number) consisted of TSVs, you would only increase the conductivity by about 17%, from 148 to roughly 173 W/m·K.
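These numbers follow directly from an area-weighted (rule-of-mixtures) estimate, which the short sketch below reproduces; the bulk conductivities are the standard handbook values quoted above.

# Quick check of the area-weighted (rule-of-mixtures) estimate above.
K_SI = 148.0  # bulk thermal conductivity of silicon, W/(m*K)
K_CU = 401.0  # bulk thermal conductivity of copper, W/(m*K)

def effective_k(cu_area_fraction: float) -> float:
    """Effective vertical conductivity when a fraction of the die area is copper TSVs,
    treating the two materials as parallel heat paths."""
    return cu_area_fraction * K_CU + (1.0 - cu_area_fraction) * K_SI

for frac in (0.01, 0.10):
    k = effective_k(frac)
    print(f"{frac:.0%} TSV area: {k:.1f} W/(m*K)  (+{(k / K_SI - 1) * 100:.0f}% vs bulk Si)")
# 1%  -> ~150.5 W/(m*K), about +2%
# 10% -> ~173.3 W/(m*K), about +17%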
 

tomatosummit

Member
Mar 21, 2019
184
177
116
Having TSVs purely to conduct heat is not cost-effective in my view.

Si: 148 W/m·K
Cu: 401 W/m·K

With 1% of your die area (a big number) being TSVs, the effective conductivity would only go from 148 to about 150.5 W/m·K.
Even if 10% of your die area (a huge number) consisted of TSVs, you would only increase the conductivity by about 17%, from 148 to roughly 173 W/m·K.
The reason they're using dummy silicon is its thermal expansion properties.
The more copper you put in, the bigger the expansion mismatch becomes.

Anyway, the Zen 3 3D cache approach involves thinning the die. Assuming a near-perfect bond between the die and the dummy silicon, there should be no extra silicon height, and the thermal properties will be as close to unchanged as they can make them.
I do expect a drop in maximum operating frequency though; Zen is very hotspot-sensitive and the bonds will never be perfect.
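A minimal 1-D conduction sketch of that argument, with assumed numbers (the hotspot heat flux, total silicon height, and placeholder bond-interface resistance are illustrative, not Zen 3 figures): if the total silicon height of the stack matches a plain die, the extra temperature rise comes almost entirely from the bond interface rather than from the silicon itself.

# Minimal 1-D conduction sketch. All inputs are illustrative assumptions.
K_SI = 148.0       # W/(m*K), bulk silicon
Q_FLUX = 1.0e6     # W/m^2, assumed core hotspot heat flux (100 W/cm^2)
T_TOTAL = 750e-6   # m, assumed total silicon height above the hotspot
R_BOND = 5.0e-6    # m^2*K/W, assumed (placeholder) bond-interface resistance

dT_monolithic = Q_FLUX * T_TOTAL / K_SI             # one solid die
dT_stacked = Q_FLUX * (T_TOTAL / K_SI + R_BOND)     # thinned die + cache/dummy die, same total height

print(f"monolithic die:  dT = {dT_monolithic:.1f} K")
print(f"stacked silicon: dT = {dT_stacked:.1f} K (extra {dT_stacked - dT_monolithic:.1f} K from the bond)")
# With these assumptions the silicon itself contributes ~5 K at the hotspot and the
# imperfect bond adds a further ~5 K, which is why bond quality matters so much.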
 

Joe NYC

Golden Member
Jun 26, 2021
1,928
2,269
106
Having TSVs purely to conduct heat is not cost-effective in my view.

Si: 148 W/m·K
Cu: 401 W/m·K

With 1% of your die area (a big number) being TSVs, the effective conductivity would only go from 148 to about 150.5 W/m·K.
Even if 10% of your die area (a huge number) consisted of TSVs, you would only increase the conductivity by about 17%, from 148 to roughly 173 W/m·K.

In theory, if the core area of the CCD had TSVs to conduct heat, and the structural silicon above had aligned TSVs, they would create a near-perfect bond, unlike plain sheets of silicon.

The goal is to quickly lower the temperature difference between a hotspot in the silicon and the top of the package. It is more about conductivity at the hotspots than about average conductivity across the die.

So in theory, the stacked-die package has the potential to improve thermal properties (at some cost).

So the tradeoff may shift.

Say adding stacked silicon:
- improves performance by a lot
- lowers heat conductivity
- which lowers top clock speed a little
- which lowers performance a little

To a new situation where the structural silicon has TSVs:
- improves performance by a lot
- improves heat conductivity
- which improves clock speed by a little
- which improves performance by a little more
- at some increased $$$ cost (to add more TSVs and bond structural silicon)
 
Last edited:

Joe NYC

Golden Member
Jun 26, 2021
1,928
2,269
106
The reason they're using dummy silicon is its thermal expansion properties.
The more copper you put in, the bigger the expansion mismatch becomes.

But its thermal expansion should be the same as the silicon next to it (with the L3 SRAM) and the silicon below it.

Anyway, the Zen 3 3D cache approach involves thinning the die. Assuming a near-perfect bond between the die and the dummy silicon, there should be no extra silicon height, and the thermal properties will be as close to unchanged as they can make them.
I do expect a drop in maximum operating frequency though; Zen is very hotspot-sensitive and the bonds will never be perfect.

Good points about the height of the silicon staying the same; the question is just the added thermal resistance between the layers.

But if it really is a problem that needs a remedy, then conducting the heat through TSVs is a possible solution.
 
Last edited:

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
This is all great, but I'm getting really weary of there not being any decent value options for Zen3. I guess they know they can sell all of their production at very high prices so they have no interest in offering affordable options with their current gen.

4C/8T, like a modern variant of the nearly impossible to find 3100/3300X in Zen3 would be a fantastic thing to see at $129-$149.

6C/12T is kind of a sweet spot for 95% of users in the real world, but it's $300 to get a 5600X, a full 50% price hike on cost of entry for a 6C/12T part from Zen2. And 8C and above are $$$.

This 3D Cache is impressive, and sure, move the halo parts along, whatever. I'm just a bit disillusioned by the continual consumer squeeze. It doesn't give me a lot of confidence in Zen4 pricing, which if it follows suit will be more like $399 or higher for Ryzen 5 6600, or maybe they drop 6C entirely and the stack begins with $599 Ryzen 7 6700.
 

jpiniero

Lifer
Oct 1, 2010
14,573
5,203
136
4C/8T, like a modern variant of the nearly impossible to find 3100/3300X in Zen3 would be a fantastic thing to see at $129-$149.

Yields are good enough that Epyc demand alone is more than enough to absorb any defective dies. There literally wouldn't be any product left to sell.

6C/12T is kind of a sweet spot for 95% of users in the real world, but it's $300 to get a 5600X, a full 50% price hike on cost of entry for a 6C/12T part from Zen2. And 8C and above are $$$.

Intel obviously played a part in the 3600's pricing, given that Skylake and friends are meaningfully faster in games than Zen 2. I think AMD would have liked higher prices even then, and they got them with Zen 3.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Yields are good enough that Epyc demand alone is more than enough to absorb any defective dies. There literally wouldn't be any product left to sell.



Intel obviously played a part in the 3600's pricing, given that Skylake and friends are meaningfully faster in games than Zen 2. I think AMD would have liked higher prices even then, and they got them with Zen 3.

Haha, I'm sure THEY like it 😂 I'm just disappointed that we're seeing a combination of higher prices and a crippled range of available models in the current gen. It's the same with GPUs, of course. If we go back to the Nvidia 700/900/1000 series, we saw much more complete and diverse product stacks.

750/750ti
760
770
780
780ti

Along with a handful of 'just a card' type 710/730 kind of stuff.

Same largely with 950/960/970/980/980ti, and 1030/1050/1050ti/1060-3GB/1060-6GB etc.

Now it's all 3060+ for $600+, if you can even find something. A 4GB non-Ti 3050 for $179 with ~1660 Ti performance is the kind of thing we could have gotten had prices and availability not been so thoroughly derailed. TSMC's capacity limitations, COVID supply-chain woes, crypto hoarders, trade hostility: it's all starting to really hurt PC building.

Corporations don't give half a crap about consumers, and it's important to remember that of course, but it's still disappointing to see. It makes me almost wistful to remember the 4770K replacing the 3770K for the same price, plus or minus a few percent, and Zen/Zen+ pricing lol.
 

Joe NYC

Golden Member
Jun 26, 2021
1,928
2,269
106
4C/8T, like a modern variant of the nearly impossible to find 3100/3300X in Zen3 would be a fantastic thing to see at $129-$149.

6C/12T is kind of a sweet spot for 95% of users in the real world, but it's $300 to get a 5600X, a full 50% price hike on cost of entry for a 6C/12T part from Zen2. And 8C and above are $$$.

This 3D Cache is impressive, and sure, move the halo parts along, whatever. I'm just a bit disillusioned by the continual consumer squeeze. It doesn't give me a lot of confidence in Zen4 pricing, which if it follows suit will be more like $399 or higher for Ryzen 5 6600, or maybe they drop 6C entirely and the stack begins with $599 Ryzen 7 6700.

Everything is affected by the shortage, not just graphics cards. Consumer CPUs and consumer graphics cards use the same wafer capacity as server CPUs and compute accelerators for AI deployments.

So there is just no capacity to offer a chip below the 5600X. The same chiplet that goes into a 5600X can also go into EPYC server chips selling at higher prices. AMD even has a market for chiplets where a number of cores don't work: the server "F" chips, where only 1, 2, or 4 cores per chiplet are enabled.

I don't think AMD is going to come out with a 4-core chiplet this late in the game for Zen 3; possibly only some lower-end APU.
 

Thibsie

Senior member
Apr 25, 2017
743
795
136
It doesn't give me a lot of confidence in Zen4 pricing, which if it follows suit will be more like $399 or higher for Ryzen 5 6600, or maybe they drop 6C entirely and the stack begins with $599 Ryzen 7 6700.

Well, I understand that, but at the same time Intel played the same game for so long (and played it far better); can we really be that annoyed by AMD's attitude?

If it's Intel, it's normal 'cos it's Intel.
If it's AMD, they look bad. Really?
 
  • Love
Reactions: spursindonesia

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Well, I understand that, but at the same time Intel played the same game for so long (and played it far better); can we really be that annoyed by AMD's attitude?

If it's Intel, it's normal 'cos it's Intel.
If it's AMD, they look bad. Really?

Intel largely avoided model-replacement price hikes. Of course, their gen-to-gen leaps were usually small as well, aside from a few standouts. Intel's big sin was never really pricing (unless you bought the weird Extreme Editions or the HEDT platform); it was sandbagging their core counts for years.

What this is like is Intel doing a 50% price hike on a tick/tock and deciding not to release anything below an i5. It's not even unprecedented: when AMD commanded the field with the awesome Athlon 64 and X2, they were only too happy to make $1000 dual-cores, and their cheapest X2 3800+ took a while to come out and was ~$300.

It's understandable to some extent due to the shortages and the fiduciary duty to maximize shareholder profit, but it's also more or less abandoning anyone in the budget space. Their upcoming 'cheap' 5700G is $260; as of now, all Zen 3 parts are essentially $300+.

Ditto Nvidia, AMD GPUs, and a ton of other stuff right now. Profiteering while the supply and demand are messed up.

I just hope this is the peak of the trend. AMD might decide it can justify offering only $599-and-up CPUs for Zen 4. If they estimate they can sell their entire inventory that way, there's really no reason for them not to, other than bad PR.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
I'd add that my other main complaint about Intel is their often pointless socket changes and the needless crippling of so many SKUs. All consumer i3 and above should be unlocked, with K being a better bin only. And capping RAM speeds at 2666 and 2933 for so long was completely asinine. These deliberate ways of crippling their own products make them look even weaker than they need to be lol. It's like a second-place runner punching himself in the nuts before every race 😆
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Keep in mind AMD has a reasonable SKU limit. Even while selling most of their products, they can only offer so many different SKUs before the stock held per SKU makes the variant cost too much and creates a possible overstock issue when production ends. This is worse when you are selling basically everything, like right now; all a cheaper CPU would do is shift product over to it and take a significant hit on volume across all the other products.

It's a good example of being too successful for your own good. But honestly, AMD doesn't generally kill production after a new release and has always used previous products to fill in price points and spots in their lineup. To them, the sub-$300 CPU spot is filled by the 3600, 3300X, and 3100.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Keep in mind AMD has a reasonable SKU limit. Even while selling most of their products, they can only offer so many different SKUs before the stock held per SKU makes the variant cost too much and creates a possible overstock issue when production ends. This is worse when you are selling basically everything, like right now; all a cheaper CPU would do is shift product over to it and take a significant hit on volume across all the other products.

It's a good example of being too successful for your own good. But honestly, AMD doesn't generally kill production after a new release and has always used previous products to fill in price points and spots in their lineup. To them, the sub-$300 CPU spot is filled by the 3600, 3300X, and 3100.

Agreed. I don't blame them honestly, it all makes business sense, it's just unfortunate for the consumer.

The 3100 and especially 3300X are essentially vaporware. I did see an Asus prebuilt at Best Buy with a 3100 though lol.

The 3600 is flat out the best deal for a commonly available SKU IMHO, though the 10100, 10400, and 10500 and their 11th gen counterparts are respectable as well. The K series are overpriced, and the 10/12 core Intel parts are frankly stupid buys under any condition.


They're perpetually 'in stock soon', but at this point I doubt I'll ever see one. The Micro Center manager said he hasn't seen any since the release trickle.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
The 3300x was briefly posted on a sales deal on reddit Monday. It was the first time that I've seen one available anywhere since last year.

Good catch! I feel like it is a fantastic part that could have been a big hit. The market has really missed out on nice parts in that price range. The only things Intel typically offers in that range are crippled/locked parts.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
The sad part is that AMD could have produced a "half Renoir/Cezanne" with 4 cores and 8 CUs of Vega iGPU that would have been roughly two-thirds the size of the current 5400G die. They could have produced the heck out of them and covered a lot of volume. It's too bad that they are so capacity-constrained.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
The sad part is that AMD could have produced a "half Renoir/Cezanne" with 4 cores and 8 CUs of Vega iGPU that would have been roughly two-thirds the size of the current 5400G die. They could have produced the heck out of them and covered a lot of volume. It's too bad that they are so capacity-constrained.
It's not just capacity. It's design cost and waste management. It would cost them tons of money to design, sample, and release another die, and as we saw with the 3000 series, when they are selling everything they weren't going to sacrifice better dies to get the 3100 and 3300X numbers up. They probably don't really care that the 5400G uses a larger die if all the chips being sold were unusable as a higher-end chip. It's all reclaimed waste at this point.

All a smaller die does is lower the count of higher-end dies available and force them into even smaller SKUs to limit wafer waste.
 
  • Like
Reactions: Tlh97 and moinmoin

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
It's not just capacity. It's design cost and waste management. It would cost them tons of money to design, sample, and release another die, and as we saw with the 3000 series, when they are selling everything they weren't going to sacrifice better dies to get the 3100 and 3300X numbers up. They probably don't really care that the 5400G uses a larger die if all the chips being sold were unusable as a higher-end chip. It's all reclaimed waste at this point.

All a smaller die does is lower the count of higher-end dies available and force them into even smaller SKUs to limit wafer waste.

This is all true, but it raises the question of whether they even want to be in the regular consumer CPU space if the trend continues.

Say they go to 5nm or 6nm with the new TSMC contract and project that they can sell their entire allotment on the HPC/server side, with perhaps a handful of $1,500 R9 CPUs. Should they simply stop making everything else because it's less profitable? Purely in terms of stock value and profit, the answer would be yes.

It would, I believe, come at a cost in consumer perception of the brand, though. They would virtually cease to exist outside of back-office/B2B.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
It's not just capacity. It's design cost and waste management. It would cost them tons of money to design, sample, and release another die, and as we saw with the 3000 series, when they are selling everything they weren't going to sacrifice better dies to get the 3100 and 3300X numbers up. They probably don't really care that the 5400G uses a larger die if all the chips being sold were unusable as a higher-end chip. It's all reclaimed waste at this point.

All a smaller die does is lower the count of higher-end dies available and force them into even smaller SKUs to limit wafer waste.

It has been stated over and over that their yields on N7 are excellent. At this point, they are already binning perfectly functional dies down to lower levels for no reason other than contract obligations.

The point of a smaller die is more usable dies per wafer. Assuming that yields are constant and that they are not packaging-constrained, they would get over 50% more usable dies from the same wafer, while also no longer wasting functional six/eight-core dies to meet their product-split obligations. This gives them a higher ASP on those wafers and more total volume per month.
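For illustration, a standard dies-per-wafer estimate with assumed die areas (a ~180 mm² Cezanne-class APU versus a hypothetical ~120 mm² "half APU") and an assumed defect density bears out the "over 50% more" figure. None of these numbers are AMD's.

# Rough dies-per-wafer sketch. Die areas and defect density are assumptions.
import math

WAFER_D_MM = 300.0   # wafer diameter, mm
D0_PER_CM2 = 0.09    # assumed N7 defect density, defects per cm^2

def gross_dies(area_mm2: float) -> float:
    """Common dies-per-wafer approximation: area term minus an edge-loss term."""
    return (math.pi * (WAFER_D_MM / 2) ** 2) / area_mm2 \
           - (math.pi * WAFER_D_MM) / math.sqrt(2 * area_mm2)

def good_dies(area_mm2: float) -> float:
    """Gross dies scaled by a simple Poisson yield model."""
    yield_fraction = math.exp(-(area_mm2 / 100.0) * D0_PER_CM2)  # mm^2 -> cm^2
    return gross_dies(area_mm2) * yield_fraction

full_apu = 180.0   # assumed full Cezanne-class die, mm^2
half_apu = 120.0   # assumed "half APU" at roughly 2/3 the area, mm^2

gain = good_dies(half_apu) / good_dies(full_apu) - 1
print(f"full die : {good_dies(full_apu):.0f} good dies per wafer")
print(f"small die: {good_dies(half_apu):.0f} good dies per wafer ({gain:+.0%})")
# With these assumptions the smaller die yields roughly 60% more good dies per wafer.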
 

Mopetar

Diamond Member
Jan 31, 2011
7,826
5,971
136
Having TSVs purely to conduct heat is not cost-effective in my view.

Si: 148 W/m·K
Cu: 401 W/m·K

With 1% of your die area (a big number) being TSVs, the effective conductivity would only go from 148 to about 150.5 W/m·K.
Even if 10% of your die area (a huge number) consisted of TSVs, you would only increase the conductivity by about 17%, from 148 to roughly 173 W/m·K.

The overall amount doesn't change, but if you used it strategically to help dissipate heat from the hottest spots it would be a bigger help. Whether that's possible or even worth doing for any benefits conferred is another matter entirely though.
 

tomatosummit

Member
Mar 21, 2019
184
177
116
It has been stated over and over that their yields on N7 are excellent. At this point, they are already binning perfectly functional dies down to lower levels for no reason other than contract obligations.

The point of a smaller die is more usable dies per wafer. Assuming that yields are constant and that they are not packaging-constrained, they would get over 50% more usable dies from the same wafer, while also no longer wasting functional six/eight-core dies to meet their product-split obligations. This gives them a higher ASP on those wafers and more total volume per month.
Cutting down fully functional dies is far more likely than not with dies this size. I think it's only when you get to gargantuan 500 mm²+ dies that it really kicks in; see the big Nvidia dies, where I don't think a full-fat A100 is even available yet, and Titans are rare halo parts for consumers that are themselves cut-down parts half the time.

It's about what the market will sustain, and right now it's a seller's market.
Zeppelin was reported to have incredible yields, with sources saying 88%+ fully working dies, yet the 1600/2600 were by far the most popular parts in the consumer market. The same goes for Zen 2 with the 3600 and, in part, the 3900X, where AMD even underestimated demand.
Now they're selling the 5600X at $300 and the 5900X at $500, which I find lines up pretty well with the 60% and 100% cost ratios of single- and dual-chiplet packages. So AMD can and will sell their cut-down performance part at $300, which has long been the upper limit for mainstream buyers; the i7 sat around that price point for almost a decade, and someone brought up that even the old A64 X2 3800+ was at that price. The 8-core parts cost about the same to manufacture and are just free margin for them; despite being terrible value, they still sold because of the supply situation with everything else.

This raises another point about the silicon cost of 3D cache. I fear it might end up being a big cost increase for the consumer; it would be wishful thinking to expect the 3D V-Cache parts to slot in at the top and bring the non-stacked SKUs down in price.
AMD's paper from a while back basically breaks down the cost of a Ryzen 3000 CPU (assuming similar ratios for Zen 3 5000) as 100% for dual-chiplet and 60% for single-chiplet. At a stretch, that puts the package and I/O die at 20% and each chiplet at ~40%. What will the extra cache die add? It's not exactly the same N7 process, but it's still roughly 50% more silicon per chiplet; very rough math puts the stacked parts at 80% and 140% of the current dual-chiplet cost.
I don't see that hitting the consumer market south of $500 with today's AMD.
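The rough math above can be written out explicitly. The 20% / 40% breakdown and the assumption that a stacked SRAM die costs about half of a CCD come from the post itself; everything is relative to the current dual-chiplet package cost (= 100%).

# Reproducing the rough relative-cost math above (all figures are the post's assumptions).
PACKAGE_AND_IOD = 20               # % of the dual-chiplet cost
CHIPLET = 40                       # % per CCD
CACHE_DIE = 0.5 * CHIPLET          # assumed: stacked SRAM die costs ~50% of a CCD

def package_cost(ccds: int, stacked: bool) -> float:
    """Relative cost of a package with `ccds` chiplets, with or without V-Cache."""
    cost = PACKAGE_AND_IOD + ccds * CHIPLET
    if stacked:
        cost += ccds * CACHE_DIE
    return cost

for ccds in (1, 2):
    print(f"{ccds} CCD: plain {package_cost(ccds, False):.0f}%, "
          f"with V-Cache {package_cost(ccds, True):.0f}%")
# 1 CCD: 60% -> 80%;  2 CCDs: 100% -> 140%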

Apologies for going off on one.
 
  • Like
Reactions: Tlh97 and Arkaign

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
I suspect that, on a real cost basis, the V-Cache adds more than just the cost of the silicon. Remember that there are extra packaging costs associated with combining the two dies and bonding them, and I suspect it's not a faultless process, further increasing the cost per package to cover the losses. A hypothetical large-cache 5900X would end up with an ASP very close to the 5950X's, and the 5950X would have an ASP several hundred dollars above what it currently is. The knock-on effect would carry over to Threadripper, lifting its prices as well.
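As a sketch of that point, with made-up relative numbers (a plain CCD normalized to 100, plus assumed adders for the cache die and the bonding step): whenever a bond fails, the whole stack is scrapped, so the cost of the losses is spread across the good units.

# Tiny sketch: how an imperfect bonding step inflates the cost per good stacked part.
# All inputs are assumed relative numbers, not real costs.
def cost_per_good_unit(die_cost: float, cache_die_cost: float,
                       bond_cost: float, bond_yield: float) -> float:
    """Effective cost of one good package when a fraction of bonding attempts fail."""
    return (die_cost + cache_die_cost + bond_cost) / bond_yield

plain = cost_per_good_unit(100.0, 0.0, 0.0, 1.0)           # plain CCD, normalized
for y in (0.99, 0.95, 0.90):                               # assumed bonding yields
    stacked = cost_per_good_unit(100.0, 50.0, 10.0, y)     # assumed adders for cache die + bonding
    print(f"bond yield {y:.0%}: stacked part costs {stacked / plain:.2f}x the plain part")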

On a related note, I'm looking forward to seeing Sapphire Rapids benchmarked against V-Cache-equipped EPYC processors.
 
  • Like
Reactions: Arkaign