Are there significant benefits to be had with more V-cache? Isn't 64 MB good enough for most games?
No one knows. If it doesn't happen then you may have the answer.
My uneducated guess is that it would help *more* games but wouldn't help much more for the games that already benefit from it.
> IIRC, in this GN video they were shown a functioning 5950X3D x 2, if you will. I'm not about to rewatch it to try to find it, though. And no, there certainly weren't benchmarks. However, it does show that AMD knows how much games would benefit vs. the loss of frequency.
> And then there was the first announcement, where they demonstrated a 5900X3D x 2. They've made them. For some reason or another they decided they weren't worth it. Back then the frequency hit was significant, so that likely played a role.

It seems I misunderstood the question. I assumed FlameTail was asking about a single CCD with 2× V-cache, or more cache in general (since the L3 cache on Zen 5 is denser than before, it stands to reason they could include more - possibly up to 96 MB).
I don't see any point to both CCDs having V-cache unless they solve the frequency problem.
The biggest bottleneck right now is DIMMs. We need LPCAMM2 for Zen 6.
> I'm sure that AMD knew about how mediocre Zen 5 was when they were deciding TDPs. Why did they choose 65W for them? […]

Power efficiency shouldn't matter for desktop chips.
> Can easily bypass the power limit via BIOS, don't see what the fuss is.

I don't really see it either. The performance increase, especially in games, is basically nothing. Only heavy multi-core workloads benefit, and efficiency tanks. I think AMD made the right choice here.
> Power efficiency shouldn't matter for desktop chips.

I strongly disagree, especially since GPUs have become power pigs the last few years; I'll take as much power savings as I can get.
> That's not how it works. A sufficiently large sample size of "bizarre" performers is more robust than a very small sample size of "looks about right" performers.

Why? Have you looked at their benchmark choices?
> x86-64's baseline FP instruction set is SSE2; x87 can be used from x64, but it isn't recommended and isn't normally used at all. AVX/AVX2 has some support, but since it's not supported on all CPUs sold even today, support is quite minimal. AVX-512 is supported by pretty much nothing. AMD probably didn't know the SIMD workload distribution when they started the Zen 5 design - Intel was backing AVX-512 pretty strongly back then. But even with AVX-512, the main desktop performance priority is 128-bit SIMD - giving up 128-bit performance for wider vectors is just the wrong bet from AMD. Intel is going in the opposite direction - their E-core outright doubled its 128-bit FP resources, and Lion Cove increased its 256-bit FP units. Zen 5 seems to face quite tough competition from Intel.

While this sounds plausible, I suspect they can adjust their roadmap accordingly.
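To illustrate the baseline point in the quote with a sketch (my example, not the poster's; the function names are hypothetical): on x86-64, compilers can emit SSE2 unconditionally, while AVX2/AVX-512 paths need a runtime check, which is part of why 128-bit performance matters so much on desktop.

```c
/* Minimal runtime-dispatch sketch. SSE2 is part of the x86-64 base ISA,
 * so the 128-bit path below always works; wider paths must be detected. */
#include <immintrin.h>
#include <stdio.h>

/* Baseline: 128-bit SSE2, guaranteed on every x86-64 CPU. */
static void add2_sse2(double *dst, const double *a, const double *b) {
    _mm_storeu_pd(dst, _mm_add_pd(_mm_loadu_pd(a), _mm_loadu_pd(b)));
}

int main(void) {
    /* GCC/Clang builtin; MSVC would use __cpuid instead. */
    if (__builtin_cpu_supports("avx512f"))
        puts("dispatch: 512-bit AVX-512 path");
    else if (__builtin_cpu_supports("avx2"))
        puts("dispatch: 256-bit AVX2 path");
    else
        puts("dispatch: 128-bit SSE2 baseline");

    double a[2] = {1.0, 2.0}, b[2] = {3.0, 4.0}, c[2];
    add2_sse2(c, a, b);                /* always safe, no feature check */
    printf("%.1f %.1f\n", c[0], c[1]); /* prints: 4.0 6.0 */
    return 0;
}
```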
> I'm sure that AMD knew about how mediocre Zen 5 was when they were deciding TDPs. Why did they choose 65W for them? […]

These CPUs seem to be targeted at OEM and work-related tasks, where efficiency holds more value than a little extra performance.
> And that fabric and IOD - seems they stopped working on it because that one single guy went on paternity leave.

Probably still being validated. AMD development is SLOW, both hardware and software. I thought the Zen 5 AGESA would be in good shape since they were releasing Zen 5 almost a year after it was ready, yet it seems they are not done with their AGESA updates and will keep refining them. Any bets that we will see one released on 14th or 15th Aug for the 9950X?
> We will know their design rationale, at least some idea, in the upcoming Hot Chips.

TBH we already know the design rationale. They wanted a new forward-looking base design: >4-wide decode, 6 ALUs, 8-wide dispatch, and 512b-capable bandwidth.
> AMD would have been better off doing the core-wars thing they promised generations ago. 8-core, 16-core, 24-core and 32-core Zen 5 would have solved things this generation. Ultra low power for efficiency. I think AMD will fix a lot of the problems with Zen 5 via BIOS updates. Zen 5 will definitely see a Zen 5+ silicon upgrade to N3P or some variant in 2025. The reviews out there are very, very bad for Zen 5. Add to it that Arrow Lake is all-new silicon that will dramatically reduce power consumption with 20A.

How many times does this idea come up when it clearly doesn't work? To go to higher core counts you need quad-channel RAM. That's too expensive for a mainstream platform.
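To put rough numbers on the quad-channel argument - a back-of-the-envelope sketch assuming DDR5-6000 and an 8-byte bus per channel, caches ignored:

$$BW_{\text{dual}} = 2 \times 8\,\mathrm{B} \times 6000\,\mathrm{MT/s} = 96\,\mathrm{GB/s}, \qquad \frac{96}{16} = 6\,\mathrm{GB/s\ per\ core}, \quad \frac{96}{32} = 3\,\mathrm{GB/s\ per\ core}$$

At 32 cores, per-core bandwidth halves unless you double the channels, which is exactly the platform cost problem being described.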
> I'm sure that AMD knew about how mediocre Zen 5 was when they were deciding TDPs. Why did they choose 65W for them? […]

I think it might be to incentivise the higher-core-count parts this gen: a lower price per core, but a better ASP.
> How many times does this idea come up when it clearly doesn't work? […]

The least they can do is give dissimilar CCDs to the 9900X - as in, one 8-core CCD + one 4-core CCD - so at least it wouldn't get ignored so much by gamers. There is only one local retailer in the UAE who got Ryzen 9000 CPUs, and they are already out of the 9700X. The 9600X and 9900X are still in stock.
> How many times does this idea come up when it clearly doesn't work? […]

Besides, I bet that many who keep bringing this up simply don't realize how little parallelism there is in the vast majority of client workloads.
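Amdahl's law puts a number on that. With parallel fraction $p$ (the $p = 0.5$ below is an illustrative assumption, not a measurement), the speedup on $n$ cores is:

$$S(n) = \frac{1}{(1-p) + p/n}, \qquad p = 0.5: \; S(16) \approx 1.88, \quad S(32) \approx 1.94$$

Doubling from 16 to 32 cores buys about 3% for such a workload.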
> Besides, I bet that many who keep bringing this up simply don't realize how little parallelism there is in the vast majority of client workloads.

Hey, you underestimate the number of people who run Cinebench R23 continuously on cold winter days and nights especially to stay warm!
> The least they can do is give dissimilar CCDs to the 9900X - as in, one 8-core CCD + one 4-core CCD […]

I don't think that works. How does that fit the binning strategy? How many chips need to be binned down to 4 cores? It also doesn't help with cross-CCD talk; you still have that same issue.
I'm sure that AMD knew about how mediocre Zen 5 was when they were deciding TDPs. Why did they choose 65W for them? To cripple them even more? It lost in a lot of benchmarks to the 7600X/7700X because of that low TDP. I have a couple of ideas why:
1. They wanted at least something good out of this gen, so they leaned on the "power efficiency" story (forgetting that 65W Zen 4 parts exist).
2. They wanted X3D to appear a lot more powerful than the regular chips.
3. OEMs and SIs asked for 65W chips to be there right at launch, so they can put them in crappy B840 motherboards.
4. They drank their own Kool-Aid and decided TDPs before knowing performance?
Most likely the first one, but it was interesting to think about.
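Worth noting when reading those TDP figures: on AM5, as on AM4, the advertised TDP is not the actual socket power limit; the PPT limit is set at 1.35× TDP:

$$\mathrm{PPT} = 1.35 \times \mathrm{TDP} \;\Rightarrow\; 65\,\mathrm{W} \to \approx 88\,\mathrm{W}, \qquad 105\,\mathrm{W} \to \approx 142\,\mathrm{W}$$

So a 65W part can still draw up to roughly 88W at the socket before the power limit engages.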
> TBH we already know the design rationale. They wanted a new forward-looking base design: >4-wide decode, 6 ALUs, 8-wide dispatch, and 512b-capable bandwidth.

You are being too generous about the kind of questions being asked by the "tech" journalists who got access to Mike Clark, Mahesh Subramony, et al. What a waste.
That's about it. A new base with a lot of cut corners and odd features.
> Depending on the workflow, for some use cases, even if a piece of software is not very well threaded, you can just run multiple instances of it. Or you can have a rendering/encoding task going while working on some other task, and having more cores lets you dedicate more resources to those background tasks while your primary app might be single-threaded.

Networked PCs support this type of application inexpensively.
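A minimal POSIX sketch of the multiple-instances approach quoted above (my illustration; `some_encoder` and the input names are placeholders, not a real tool from the thread):

```c
/* Run several instances of a poorly-threaded program in parallel,
 * one per input file, to occupy more cores. POSIX fork/exec sketch. */
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    const char *inputs[] = {"a.mkv", "b.mkv", "c.mkv", "d.mkv"};
    const int jobs = 4;                     /* one instance per input */

    for (int i = 0; i < jobs; i++) {
        pid_t pid = fork();
        if (pid == 0) {                     /* child: become the encoder */
            execlp("some_encoder", "some_encoder", inputs[i], (char *)NULL);
            _exit(127);                     /* reached only if exec fails */
        }
    }
    while (wait(NULL) > 0)                  /* parent: reap all children */
        ;
    puts("all encoder instances finished");
    return 0;
}
```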
