Discussion Speculation: Zen 4 (EPYC 4 "Genoa", Ryzen 7000, etc.)


Vattila

Senior member
Oct 22, 2004
809
1,412
136
Apart from the details of the microarchitectural improvements, we now know pretty well what to expect from Zen 3.

The leaked presentation by AMD Senior Manager Martin Hilgeman shows that EPYC 3 "Milan" will, as promised and expected, reuse the current platform (SP3), and the system architecture and packaging look to be the same, with the same 9-die chiplet design and the same maximum core and thread count (no SMT-4, contrary to rumour). The biggest change revealed so far is the enlargement of the compute complex from 4 cores to 8 cores, all sharing a larger L3 cache ("32+ MB", likely to double to 64 MB, I think).

Hilgeman's slides also showed that EPYC 4 "Genoa" is in the definition phase (or was at the time of the presentation in September, at least) and will come with a new platform (SP5), with new memory support (likely DDR5).



What else do you think we will see with Zen 4? PCI-Express 5 support? Increased core-count? 4-way SMT? New packaging (interposer, 2.5D, 3D)? Integrated memory on package (HBM)?

Vote in the poll and share your thoughts! :)
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,102
136
The 7600X has been shown to be faster than the 5800X3D (both using a 4090 GPU)
First of all, your link doesn't even include the 13900k. Beyond that, it's been noted on many occasions that Techspot/Hardware Unboxed's results are highly anomalous. You've already been linked a meta analysis, so I'll avoid the redundancy of posting it again.
So who is delusional now? The guy who says that stacked 3D V-Cache will not matter because Intel enjoys a hefty 10% gaming lead, or me, who is saying that Zen 4 (7600X) and Raptor Lake (13900K) are basically a match and that the 7800X3D will more than likely beat it soundly?
No one is claiming that 3D V-Cache "will not matter", nor that it will fail to catch up and eclipse Raptor Lake's gaming performance. The claim in question is that the Zen 4 implementation of 3D V-Cache will provide a substantially greater relative benefit compared to the Zen 3 one, and there's every reason to question the numbers presented.

+15-20% would easily be enough for Zen 4 w/ V-Cache to take the crown, but the claims are about 30%+, and that requires more justification.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,171
15,324
136
The link in question was literally about gaming. But what I said applied identically to any workload. Cross-CCX cache snooping makes no sense with the latencies involved, so there are going to be no gains there, and you're presumably loading each core anyway. Why would you expect a dual-die config to provide more advantage from V-Cache than a single-die one?
Yes, with Process Lasso or similar software to keep the 8 threads of a task on one CCX. And for EPYC, we use 8 threads and enable a BIOS option that makes each CCX a NUMA node, which works essentially the same way. If curious, see the DC forum.

And again, speaking of regular software, why would Milan-X do so well on quite a few different pieces of software? Yes, it's not always true that it makes a big difference, but it is certainly more than just games that benefit.
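For anyone curious how the pinning trick works in practice, here is a minimal Linux sketch. It assumes a simplified layout where CCX n owns logical cores n*8 through n*8+7; a real tool should read the topology from /sys or hwloc, and pin_to_ccx is just an illustrative name:

```python
# Sketch: restrict a process to one 8-core CCX, similar in effect to a
# Process Lasso affinity rule or a NUMA-per-CCX BIOS setting.
# Assumes CCX n owns logical cores n*8 .. n*8+7 (simplified layout).
import os

CORES_PER_CCX = 8

def ccx_cores(ccx_index, cores_per_ccx=CORES_PER_CCX):
    """Return the set of logical core IDs belonging to one CCX."""
    start = ccx_index * cores_per_ccx
    return set(range(start, start + cores_per_ccx))

def pin_to_ccx(ccx_index):
    """Pin the current process (and its future threads) to a single CCX."""
    os.sched_setaffinity(0, ccx_cores(ccx_index))  # Linux-only call

if __name__ == "__main__":
    print(sorted(ccx_cores(1)))  # second CCX: cores 8..15
```

Keeping all 8 worker threads inside one CCX this way avoids cross-CCX L3 traffic, which is the whole point of the Lasso/NUMA trick.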
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
No one is claiming that 3D V-Cache "will not matter", nor that it will fail to catch up and eclipse Raptor Lake's gaming performance.
It doesn't need to, as Zen 4 is already at virtual gaming parity with Raptor Lake. Whoever thinks otherwise is just clearly biased.

The 7800X3D will be as much of a game changer as the 5800X3D has been (within 1.3% of the 13900K), and if you think otherwise you are beyond hope.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,102
136
Yes, with Process Lasso or similar software to keep the 8 threads of a task on one CCX. And for EPYC, we use 8 threads and enable a BIOS option that makes each CCX a NUMA node, which works essentially the same way.
That explains how you'd see the same relative improvement from a 2x version as a 1x version. It does not explain how you'd see greater improvement.
And again, speaking of regular software, why would Milan-X do so well on quite a few different pieces of software? Yes, it's not always true that it makes a big difference, but it is certainly more than just games that benefit.
I never said that gaming is the only use case that benefits, but that's irrelevant to the question.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,102
136
It doesn't need to, as Zen 4 is already at virtual gaming parity with Raptor Lake. Whoever thinks otherwise is just clearly biased.

The 7800X3D will be as much of a game changer as the 5800X3D has been (within 1.3% of the 13900K), and if you think otherwise you are beyond hope.
You were the one claiming a 7600X beats the 13900k in gaming. Now you're back to "Zen4 is virtual parity"? Pick one. I responded to what you wrote verbatim, and the data is very clear on the results.

And in the very comment you responded to, I explicitly highlighted that Zen 4 X3D will be the leading gaming chip, and I even gave a range slightly above that which we see for Zen 3. If you cannot be bothered to read my comments, don't waste my time responding to them.
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
What's up with Raptor Lake on Battlefield V? That's such a massive outlier that it practically accounts for half of the result at 4K.
Not really. From the review:

"In our day-one review data, which is based on a 12 game sample, we had the 7600X leading the 13600K by a 3% margin. With that testing expanded to 54 games, the 7600X is 5% faster. If we remove the potentially bugged Battlefield V data (issue with the E-cores?), the 7600X was just 4% faster. Either way, as we noted before, we always deem margins of 5% or less to be insignificant, or in other words a tie."
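As an aside, it's easy to see how one anomalous game moves an average. A toy calculation with made-up per-game ratios (not HUB's actual data), where the last entry plays the role of the BF V result:

```python
# Illustration only: how a single outlier inflates an average margin.
# Ratios are invented, not taken from any review.
from statistics import geometric_mean

# Per-game fps ratios of CPU A / CPU B (1.00 = tie)
ratios = [1.02, 0.99, 1.03, 1.01, 1.04, 1.50]  # last one: a BF V-style outlier

with_outlier = geometric_mean(ratios)
without_outlier = geometric_mean(ratios[:-1])

print(f"margin with outlier:    {with_outlier - 1:+.1%}")
print(f"margin without outlier: {without_outlier - 1:+.1%}")
```

With the outlier in, the "average" lead comes out several times larger than without it, which is exactly why a 5%-or-less margin gets called a tie.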
 

biostud

Lifer
Feb 27, 2003
18,700
5,434
136
There could be a trend of modern AAA titles with RT enabled also taxing the CPU quite heavily, like Spider-Man: Miles Morales, in which case Intel would probably gain an advantage with their E-cores in the lower segment against AMD's 6-8 core CPUs.

But whether you decide to buy Intel or AMD at the same price point, you will probably not notice the difference in games when you're not benchmarking.
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
There could be a trend of modern AAA titles with RT enabled also taxing the CPU quite heavily, like Spider-Man: Miles Morales, in which case Intel would probably gain an advantage with their E-cores in the lower segment against AMD's 6-8 core CPUs.
Currently those E-cores are pretty useless at gaming.

 

ondma

Diamond Member
Mar 18, 2018
3,005
1,528
136
There could be a trend of modern AAA titles with RT enabled also taxing the CPU quite heavily, like Spider-Man: Miles Morales, in which case Intel would probably gain an advantage with their E-cores in the lower segment against AMD's 6-8 core CPUs.

But whether you decide to buy Intel or AMD at the same price point, you will probably not notice the difference in games when you're not benchmarking.
True, but the problem is whether games can effectively use the E-cores. I don't know why someone hasn't done streaming or other heavy tasks while gaming to see if the E-cores can help offload the CPU when other tasks are active.
 

biostud

Lifer
Feb 27, 2003
18,700
5,434
136
Currently those E-cores are pretty useless at gaming.


I know (and so are most cores above the first six); that is why I mentioned a single game, and only with RT on. It would be interesting to see how it behaved with E-cores disabled.

There are some outlier games that can tax the CPU quite heavily, and a small number of these can utilize more than 6-8 cores and see a performance increase. But by the time this is the norm, Zen 6 will probably have been released.
 

Kaluan

Senior member
Jan 4, 2022
504
1,074
106
The frequency difference between the 5800X and 5800X3D is what, like 5% if that? There's still a large gap unaccounted for. And then consider that closing that frequency gap will probably be the main improvement from Zen 4 X3D, so where are the additional gains coming from?

If that? The data is out there if one wants to check it:
~5% higher clock in mixed/light workloads/typical gaming
~6% higher in heavy multithreaded (e.g. Blender)
~9% higher in single-threaded
Not bad considering it's barely any slower (2%?) than vanilla in productivity, something they never marketed it for (this may change with Zen 4 X3D).

As to where other potential gains could be coming from? Who knows; they may lower the L3 latency penalty from 3-4ns to 3ns or less somehow. We also have no idea how changes (cache or otherwise) in Zen 4 react to increased L3 pools. Seeing how forward-thinking the Zen 3 design was, at least 1 year post launch, with revolutionary stacking tech obviously not being an afterthought, I doubt Zen 4 would be just a repeat. Pretty smug to think we know everything and won't be surprised, TBH.

Reminder to people that this exists:
 

ondma

Diamond Member
Mar 18, 2018
3,005
1,528
136
These results seem a bit low for Intel, compared to other tests I have seen, especially the 7700k on average being more than 10% faster than the 13700k, and if I read the charts correctly that is with BF V removed.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,102
136
If that? The data is out there if one wants to check it:
~5% higher clock in mixed/light workloads/typical gaming
~6% higher in heavy multithreaded (e.g. Blender)
~9% higher in single-threaded
Not bad considering it's barely any slower (2%?) than vanilla in productivity, something they never marketed it for (this may change with Zen 4 X3D).

As to where other potential gains could be coming from? Who knows; they may lower the L3 latency penalty from 3-4ns to 3ns or less somehow. We also have no idea how changes (cache or otherwise) in Zen 4 react to increased L3 pools. Seeing how forward-thinking the Zen 3 design was, at least 1 year post launch, with revolutionary stacking tech obviously not being an afterthought, I doubt Zen 4 would be just a repeat. Pretty smug to think we know everything and won't be surprised, TBH.

Reminder to people that this exists:
Where are you seeing those percentages? For instance, the 1T frequency should be 4700MHz vs 4500MHz, or a 4.4% increase.
 

DrMrLordX

Lifer
Apr 27, 2000
22,065
11,693
136
Another grain of salt, but there is a graph (scroll down to Quasarzone) that compares the 7950X to the 7950X3D: "supposedly" 37% faster. Completely murderlizes the 13900K, as well.

13900KS is dead before it launches.

Well ye, but the 7950x is still ~10% slower than the 13900k

In applications? Definitely not. I'll leave the gaming discussion to others but in apps the 7950X humiliated the 13900k. Despite doubling e-core count vs 12900k, it still couldn't unseat the 7950X in MT workloads. Everyone who was expecting the 13900k to square the circle and become Intel's next god tier chip is either disappointed or in denial. Raphael showed up well and is very competitive despite being a repurposed server/workstation design.
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
In applications? Definitely not. I'll leave the gaming discussion to others but in apps the 7950X humiliated the 13900k.
They are actually pretty even at MT in general tasks (GB, CBR23, Linux); on apps that take full advantage of AVX/AVX2/AVX-512, yes, the 7950X is much faster. In gaming, whoever says the 7950X is 10% slower is not being serious or is cherry-picking games. Overall (50+ games tested) it's within 5% of the 13900K.
 

Kaluan

Senior member
Jan 4, 2022
504
1,074
106
These results seem a bit low for Intel, compared to other tests I have seen, especially the 7700k on average being more than 10% faster than the 13700k, and if I read the charts correctly that is with BF V removed.
Hm... wat?
I think you're confusing HUB with Jarrod's. Jarrod's didn't test BF V (or HZD *wink-wink*).
And yes, I think Jarrod's got the text for the 13700K 1080p average wrong. His roundup percentage avg says the 7700X is ~2.6% faster @1080p, not ~10%.
 

Kaluan

Senior member
Jan 4, 2022
504
1,074
106
Where are you seeing those percentages? For instance, the 1T frequency should be 4700MHz vs 4500MHz, or a 4.4% increase.
I thought it was a well-known fact that the 5800X3D tops out at a hard 4450MHz and that the 5800X boosts to 4850MHz on its best core (+8.99%)...

Not sure how Ryzen 5000 max 1C boost = fmax (and not advertised boost) is new information after 2 years in a tech forum, but oh well.

5800X3D is a strange case tho, half surprised no class action lawsuits have been started over that missing 50MHz 😂

But in both cases, advertised boost =/= real world boost
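For what it's worth, the deltas are just simple ratios; a quick check of the figures above (4850MHz best-core boost vs the 4450MHz hard cap):

```python
# Quick arithmetic check of the boost-clock gap quoted above.
def pct_faster(a_mhz, b_mhz):
    """Percent by which clock a exceeds clock b."""
    return (a_mhz / b_mhz - 1) * 100

print(f"{pct_faster(4850, 4450):.2f}%")  # prints 8.99%
```

The same formula on the advertised 4700MHz vs 4500MHz figures gives the ~4.4% quoted earlier in the thread.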
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,171
15,324
136
I don't understand why some here are all about proving the 13900K is the fastest CPU for gaming, when we were in the middle of talking about the upcoming 78xx3D CPUs and how much the 3D cache will benefit them. This is a Zen 4 thread, but I guess there are those who want to derail it with talk of Raptor Lake gaming.

I will try to get us back on track by saying that I think all 3 planned models will beat everything at gaming, and the 2 with two CCXs and double the cache will go one further: in a few applications they will totally dominate everything due to the cache.
 

In2Photos

Platinum Member
Mar 21, 2007
2,026
2,054
136
Talk about cherry picking.
How is choosing a 54-game review "cherry picking" exactly? Why don't you ever throw out the games where Intel is significantly ahead?

Also, another factor to consider is that turning on RT can change the results in those games that support RT and make it more CPU intensive, which favors Raptor Lake.

And that's clearly what you're after, right?
 

nicalandia

Diamond Member
Jan 10, 2019
3,331
5,282
136
According to the launch review meta-analysis conducted by 3DCenter, the 13900K is 15% faster than the 7600X, which is substantial.
This type of baseless statement is what makes some people believe that the 7800X3D will be barely faster than the 13900K, when in fact the 13900K is only 1.3% faster than the OG 5800X3D.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
How is choosing a 54 game review "cherry picking" exactly? Why don't you ever throw out the games where Intel is significantly ahead?

By cherry picking I mean using obviously erroneous results. I mean come on, 50% faster in BF5 at 4K? I don't know what HWU did to obtain such a result, but anyone who knows hardware should raise the B.S. flag on that one.

Also, it's not as though we don't have the available data. The majority of reviews show the 13600K faster than the 7600x, and the 13700K faster than the 7700x.

Doing proper benchmarking is harder than most would think. There are a lot of mistakes that can be made by accident, due to the sheer amount of data involved. So whenever we see outliers that don't have a good explanation, we should question them.

It's nonsensical for a 7600x to be 50% faster than its competition at a GPU-limited resolution, so it requires a good explanation. If the 13600K were 50% faster than the 7600x in BF5, I would say the same thing, because it's abnormal for a CPU to produce such a result at that resolution.
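A crude way to formalize "question the outliers" is to flag any game whose margin sits more than two standard deviations from the pack. A sketch with invented numbers (not any review's actual data):

```python
# Flag per-game margins that sit far outside the rest of the sample.
# The numbers are made up purely to illustrate the check.
from statistics import mean, stdev

margins = {  # % lead of CPU A over CPU B, per game
    "Game 1": 3.0, "Game 2": -1.0, "Game 3": 4.0,
    "Game 4": 2.0, "Game 5": 1.0, "BF V": 50.0,
}

m = mean(margins.values())
s = stdev(margins.values())
outliers = [game for game, v in margins.items() if abs(v - m) > 2 * s]
print(outliers)  # the BF V-style result gets flagged
```

A 50-point result among low-single-digit margins trips the check immediately, which is the statistical version of "raise the B.S. flag."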

And that's clearly what you're after, right?

I like truth. Don't B.S. me by using two reviews that conveniently support your narrative when the bulk of the data says otherwise. As I said before, benchmarking can be problematic due to the great propensity for errors and inaccuracies, usually due to human error, but sometimes also because of game bugs and OS issues.

I'm willing to accept that Zen 4 3D may very well end up being faster than Raptor Lake in gaming, but base Zen 4? Nope...