Discussion Speculation: Zen 4 (EPYC 4 "Genoa", Ryzen 7000, etc.)


Vattila

Senior member
Oct 22, 2004
809
1,412
136
Except for the details about the improvements in the microarchitecture, we now know pretty well what to expect with Zen 3.

The leaked presentation by AMD Senior Manager Martin Hilgeman shows that EPYC 3 "Milan" will, as promised and expected, reuse the current platform (SP3), and the system architecture and packaging look to be the same, with the same 9-die chiplet design and the same maximum core and thread count (no SMT-4, contrary to rumour). The biggest change revealed so far is the enlargement of the compute complex from 4 cores to 8 cores, all sharing a larger L3 cache ("32+ MB", likely to double to 64 MB, I think).

Hilgeman's slides also showed that EPYC 4 "Genoa" is in the definition phase (or was at the time of the presentation in September, at least) and will come with a new platform (SP5) and new memory support (likely DDR5).



What else do you think we will see with Zen 4? PCI-Express 5 support? Increased core-count? 4-way SMT? New packaging (interposer, 2.5D, 3D)? Integrated memory on package (HBM)?

Vote in the poll and share your thoughts! :)
 
Last edited:
  • Like
Reactions: richardllewis_01

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
This type of baseless statement is what makes some people believe that the 7800X3D will be barely faster than the 13900K, when in fact the 13900K is 1.3% faster than the original 5800X3D.

Repeating something over and over doesn't make it true. We all have eyes and can read the reviews and interpret the data as we see fit.
 
  • Like
Reactions: TESKATLIPOKA

Joe NYC

Platinum Member
Jun 26, 2021
2,539
3,469
106
Another grain of salt, but there is a graph (scroll down to the Quasarzone results) comparing the 7950X to the 7950X3D: "supposedly" 37% faster. It completely demolishes the 13900K as well.


Seems like a lot of conflicting info on availability of the V-Cache models. The last line of this article says January 23rd availability, while some others say March availability...
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,523
3,037
136
Currently those E-cores are pretty useless at gaming.

Is it because the E-cores are useless, or because those games are limited to 8 cores?
If TPU had also compared the 7700X vs the 7950X, we would have an answer to this question.
 
Last edited:

moinmoin

Diamond Member
Jun 1, 2017
5,064
8,032
136
Seems like a lot of conflicting info on availability of the V-Cache models. The last line of this article says January 23rd availability, while some others say March availability...
The former date is for a launch around CES; the latter was based on the supposed timeline precedent set by the 5800X3D's launch, as suggested by Greymon, who has since deleted their account. We will see which one applies, if any.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,523
3,037
136
This type of baseless statement is what makes some people believe that the 7800X3D will be barely faster than the 13900K, when in fact the 13900K is 1.3% faster than the original 5800X3D.
That's only at 4K! Since when do we compare which is the better gaming CPU at that resolution?
If you want to compare them at that resolution, then yes, it's highly likely that the 7800X3D will be only barely faster.

TPU tested 53 games, 13900K vs 5800X3D, with an RTX 4090 (link).
The 13900K is faster by:
1080p: 6.2% (in truth this should be higher for Intel, because DMC5 had a massive +40.6% outlier in favor of AMD at this resolution alone; at 1440p it was only +5.9%)
1440p: 4.7%
2160p: 1.3%

How much better the 7800X3D ends up on average is still unknown; we will see at launch.
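
To illustrate how a single outlier can move a suite-wide average, here is a rough sketch in Python. The per-game numbers are hypothetical stand-ins, not TPU's data; it only shows the arithmetic (reviews typically report the geometric mean of per-game ratios):

```python
from math import prod

def geomean_ratio(ratios):
    """Geometric mean of per-game performance ratios (13900K / 5800X3D)."""
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical 1080p ratios for a 54-game suite: most games ~7% in favour
# of the 13900K, plus one DMC5-like outlier where AMD leads by 40.6%.
typical = [1.07] * 53
outlier = [1.0 / 1.406]

with_outlier = geomean_ratio(typical + outlier)
without_outlier = geomean_ratio(typical)

print(f"suite average with outlier:    +{(with_outlier - 1) * 100:.1f}%")
print(f"suite average without outlier: +{(without_outlier - 1) * 100:.1f}%")
```

Under these made-up numbers, one strongly AMD-leaning title out of 54 is enough to shave nearly a full percentage point off the suite average.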
 
  • Like
Reactions: krawcmac

Joe NYC

Platinum Member
Jun 26, 2021
2,539
3,469
106
The former date is for a launch around CES; the latter was based on the supposed timeline precedent set by the 5800X3D's launch, as suggested by Greymon, who has since deleted their account. We will see which one applies, if any.

The upcoming CES runs from January 5th to January 8th, 2023. As for the January 23rd retail availability, I don't know what the source for that is.

Recent AMD launches had roughly a month or more from introduction to retail availability, but in those cases (Zen 4 and RDNA 3), AMD was free to set the introduction dates, unlike CES, which is outside of AMD's control.

The 5800X3D was mentioned at CES 2022, and retail availability was said to be "Spring", which then turned out to be April 20.

Hopefully there will be less of a lag for Zen 4. But it would be good to know what the bottleneck is, and whether it is, by any chance, TSMC. AMD is likely the first company to introduce SoIC stacking on the N5 node...
 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
Where are you seeing those percentages? For instance, the 1T frequency should be 4700MHz vs 4500MHz, or a 4.4% increase.
Actual ST boost is somewhat variable from Zen 2 onward. Zen 2 used to clock a little below its rated max frequency; Zen 3 is usually at the rated max frequency or even beyond it (up to 150 MHz above is technically allowed by firmware), but it varies based on the actual silicon you have. So some variance between samples is possible.
 

Abwx

Lifer
Apr 2, 2011
11,557
4,349
136
Also, another factor to consider is that turning on RT can change the results in those games that support RT and make it more CPU intensive, which favors Raptor Lake.

In Spider-Man, once you activate RT, the 7700X and 7600X are faster than the 7950X, so RT apparently has nothing to do with CPU throughput capability when it comes to FPS.
 
  • Like
Reactions: scineram

MarkPost

Senior member
Mar 1, 2017
322
616
136
Talk about cherry picking. The BF5 result is an anomaly and clearly either a bug or a configuration problem, as there is no way on Earth that the 7600X is 50% faster than the 13600K... at 4K. That's absurd and practically impossible given how GPU-bound 4K is.

Case in point: another review shows that Raptor Lake doesn't have some magical vulnerability in BF5. Apologies for the overly bright screenshot, but this monitor has FALD and so windows are brightened automatically.


I said it already: they never should have uploaded that result, as something is clearly off. In BF 2042 (on the same or a more advanced version of the engine) the 13600K is ahead, which makes you wonder what the hell is happening. HWU has a penchant for anomalous results lately, it seems. That said, we don't have to look for cherry-picked reviewers to make a point, as 3DCenter already did a review meta-analysis and found the 13600K faster than the 7600X and the 13700K faster than the 7700X.

Also, another factor to consider is that turning on RT can change the results in those games that support RT and make it more CPU intensive, which favors Raptor Lake.

The issue with your argument is that without that game, basically nothing changes. From the review (it seems you missed it):

"In our day-one review data, which is based on a 12 game sample, we had the 7600X leading the 13600K by a 3% margin. With that testing expanded to 54 games, the 7600X is 5% faster. If we remove the potentially bugged Battlefield V data (issue with the E-cores?), the 7600X was just 4% faster."

On the other hand, a review meta-analysis like 3DCenter's is useless. The reason is that many of the games tested are common among reviewers, so if some of those 6-10 games typically used by reviewers favour one architecture or the other for any reason, you are counting those games over and over again, and the final conclusion is misleading.

But if you take a massive games review like the ones I posted (TechSpot or Jarred), you get the real picture.

So give me a meta-analysis of a large set of apps/games. That's the real deal.

A simple meta-analysis of a bunch of reviews that share and repeatedly count many of the same apps/games is just pointless.
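
A quick sketch of that double-counting effect (all numbers are made up, purely to show the mechanism): if several reviews reuse the same handful of games, a naive average over every result weights those games more than an average taken per unique game.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical results: (game, % lead of CPU A over CPU B).
# Three reviews reuse the same two A-favouring games; a fourth review
# tests two different games that favour CPU B instead.
reviews = [
    [("Game1", +8.0), ("Game2", +6.0)],
    [("Game1", +8.0), ("Game2", +6.0)],
    [("Game1", +8.0), ("Game2", +6.0)],
    [("Game3", -5.0), ("Game4", -4.0)],
]

# Naive meta-average: every result counted, shared games counted three times.
naive = mean(lead for review in reviews for _, lead in review)

# Per-game average: collapse duplicates first, then average across games.
per_game = defaultdict(list)
for review in reviews:
    for game, lead in review:
        per_game[game].append(lead)
deduped = mean(mean(leads) for leads in per_game.values())

print(f"naive meta-average:      {naive:+.1f}%")   # +4.1%
print(f"per-game (deduplicated): {deduped:+.1f}%") # +1.2%
```

Whether that makes 3DCenter's aggregate misleading in practice depends on how much the shared games actually diverge between architectures, but this is the mechanism being described.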
 

In2Photos

Platinum Member
Mar 21, 2007
2,026
2,054
136
By cherry picking I mean using obviously erroneous results. I mean, come on, 50% faster in BF5 at 4K? I don't know what HWU did to obtain such a result, but anyone who knows hardware should raise the B.S. flag on that one.

Also, it's not as though we don't have the available data. The majority of reviews show the 13600K faster than the 7600x, and the 13700K faster than the 7700x.

Fine, throw out that result. The 7600X is still faster in 34 of the 54 games and equal in 5 more titles.
Doing proper benchmarking is harder than most would think. There are a lot of mistakes that can be made by accident, due to the sheer amount of data involved. So whenever we see outliers that don't have a good explanation, we should question them.
The problem is that you throw out the entire review and not just that one result, unless the review fits your narrative. If benchmarking is so hard, why has it never occurred to you that the reviews that favor Intel might have done something wrong?

It's nonsensical for a 7600X to be 50% faster than its competition at a GPU-limited resolution, so it requires a good explanation. If the 13600K were 50% faster than the 7600X in BF5, I would say the same thing, because it's abnormal for a CPU to produce such a result at that resolution.
Would you though, honestly? Or would you try and tell us it has something to do with some process in the game that Intel does better than AMD? Oh wait, you've already done that.

I like truth. Don't B.S me by using two reviews that conveniently support your narrative, when the bulk of the data says otherwise. As I said before, benchmarking can be problematic due to the great propensity for errors and inaccuracies; usually due to human error, but sometimes also because of game bugs and OS issues.
You like "truth" when it supports your narrative! You're willing to use results when they back up your agenda, but also want to throw them out when they go against it. There's a clear bias to your "truth".
I'm willing to accept that Zen 4 3D may very well end up being faster than Raptor Lake in gaming, but base Zen 4? Nope...
We'll see what the reviews say when they become available.
 

In2Photos

Platinum Member
Mar 21, 2007
2,026
2,054
136
The issue with your argument is that without that game, basically nothing changes. From the review (it seems you missed it):

"In our day-one review data, which is based on a 12 game sample, we had the 7600X leading the 13600K by a 3% margin. With that testing expanded to 54 games, the 7600X is 5% faster. If we remove the potentially bugged Battlefield V data (issue with the E-cores?), the 7600X was just 4% faster."

On the other hand, a review meta-analysis like 3DCenter's is useless. The reason is that many of the games tested are common among reviewers, so if some of those 6-10 games typically used by reviewers favour one architecture or the other for any reason, you are counting those games over and over again, and the final conclusion is misleading.

But if you take a massive games review like the ones I posted (TechSpot or Jarred), you get the real picture.

So give me a meta-analysis of a large set of apps/games. That's the real deal.

A simple meta-analysis of a bunch of reviews that share and repeatedly count many of the same apps/games is just pointless.
Oh, he didn't miss it, it's been pointed out to him several times. He just chooses to ignore it.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I don't want to derail this thread any more than it already has been, but HWU's 7600X vs 13600K comparison has garnered a lot of criticism from lots of people, not just me. In fact, there's a large Reddit thread on r/hardware which discusses the results, and Steve from HWU actually chimes in.


A poster in the Reddit thread brought up something interesting that I never noticed about the review. In the HWU roundup, A Plague Tale Requiem has the 13600K rig using nearly 120w more than the 7600x rig, yet it loses in that game by 10% at 1440p. How is that even possible?

I have that game, and it really only becomes moderately CPU intensive when the rats are on screen or if it's in a populated area. But if the 13600K rig is pulling that much extra power draw over the 7600x, it's not going to be from the CPU as a gaming workload won't stress the CPU that much. So assuming the power consumption figures were correct, the GPU must be drawing extra power with the 13600K......but where is the performance if that is the case? I guess all of that extra power is going into a black hole LOL! :D

In the Gamers Nexus 13600K review, Tech Jesus found that the 13600K drew about 44w more than the 7600x in Blender, a workload that maxes out all cores.

A Plague Tale: Requiem benchmarks are hard to find, but Computerbase.de did benchmark it at 720p with an RTX 4090, 13900K vs 7950X, and found the 13900K to be 18% faster:

[Computerbase.de benchmark screenshot]


Other posters also took him to task for not including ray tracing in the benchmarks, when everyone knows that it increases CPU load significantly. Especially as he tested with an RTX 4090. These are legitimate criticisms whether you want to deny them or not.

In fact, the more I think about it, the more I vastly prefer Computerbase.de's approach to testing in that they are very specific in the details about their testing methodology and actually provide the settings used for each individual game.

HWU is still a relatively good source for bulk data points, but their testing methodology leaves much to be desired and has plenty of unexplained anomalies.
 

Hail The Brain Slug

Diamond Member
Oct 10, 2005
3,513
2,464
136
In my testing, my 7950X has considerably higher L3 bandwidth and lower L3 access latency than my 5950X. These two factors alone may explain better performance scaling for the 3D cache models. However, we still have no idea what the cache size of the stacked die for Zen 4 3D will be; I have seen rumors of 64 MB, 96 MB, and 128 MB. If the cache size is increased, that, in addition to the better-performing L3, would absolutely explain better scaling over Zen 3 3D.
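
For anyone curious, a very rough sketch of the kind of working-set sweep such numbers come from. This is only a NumPy approximation of effective bandwidth (a real cache microbenchmark would be written in C or use a dedicated tool, and latency needs a pointer-chase test, which this does not do); the sizes and names here are my own choices:

```python
import time
import numpy as np

def effective_bandwidth_gbs(size_bytes, repeats=20):
    """Sum an array of the given size repeatedly and report apparent GB/s.
    Working sets that fit in L2/L3 should show much higher throughput than
    ones that spill out to DRAM."""
    a = np.ones(size_bytes // 8, dtype=np.float64)
    a.sum()  # warm-up pass so the buffer is allocated and cached if it fits
    start = time.perf_counter()
    for _ in range(repeats):
        a.sum()
    elapsed = time.perf_counter() - start
    return (size_bytes * repeats) / elapsed / 1e9

# Sweep from comfortably cache-resident sizes out past any plausible L3.
for mib in (1, 4, 16, 32, 64, 128, 256, 512):
    print(f"{mib:4d} MiB: ~{effective_bandwidth_gbs(mib * 2**20):6.1f} GB/s")
```

If the measurements described above hold, the 7950X curve should sit noticeably higher than the 5950X's in the cache-resident region before both fall off to DRAM bandwidth.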
 
  • Like
Reactions: Tlh97 and Joe NYC

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
I don't want to derail this thread any more than it already has been, but HWU's 7600X vs 13600K comparison has garnered a lot of criticism from lots of people, not just me. In fact, there's a large Reddit thread on r/hardware which discusses the results, and Steve from HWU actually chimes in.


A poster in the Reddit thread brought up something interesting that I never noticed about the review. In the HWU roundup, A Plague Tale Requiem has the 13600K rig using nearly 120w more than the 7600x rig, yet it loses in that game by 10% at 1440p. How is that even possible?

I have that game, and it really only becomes moderately CPU intensive when the rats are on screen or if it's in a populated area. But if the 13600K rig is pulling that much extra power draw over the 7600x, it's not going to be from the CPU as a gaming workload won't stress the CPU that much. So assuming the power consumption figures were correct, the GPU must be drawing extra power with the 13600K......but where is the performance if that is the case? I guess all of that extra power is going into a black hole LOL! :D

In the Gamers Nexus 13600K review, Tech Jesus found that the 13600K drew about 44w more than the 7600x in Blender, a workload that maxes out all cores.

A Plague Tale: Requiem benchmarks are hard to find, but Computerbase.de did benchmark it at 720p with an RTX 4090, 13900K vs 7950X, and found the 13900K to be 18% faster:

[Computerbase.de benchmark screenshot]


Other posters also took him to task for not including ray tracing in the benchmarks, when everyone knows that it increases CPU load significantly. Especially as he tested with an RTX 4090. These are legitimate criticisms whether you want to deny them or not.

In fact, the more I think about it, the more I vastly prefer Computerbase.de's approach to testing in that they are very specific in the details about their testing methodology and actually provide the settings used for each individual game.

HWU is still a relatively good source for bulk data points, but their testing methodology leaves much to be desired and has plenty of unexplained anomalies.
> r/hardware

And on that point alone everything in this message is instantly not even worth discussing.
 

Hitman928

Diamond Member
Apr 15, 2012
6,187
10,694
136
I don't want to derail this thread any more than it already has been, but HWU's 7600X vs 13600K comparison has garnered a lot of criticism from lots of people, not just me. In fact, there's a large Reddit thread on r/hardware which discusses the results, and Steve from HWU actually chimes in.


A poster in the Reddit thread brought up something interesting that I never noticed about the review. In the HWU roundup, A Plague Tale Requiem has the 13600K rig using nearly 120w more than the 7600x rig, yet it loses in that game by 10% at 1440p. How is that even possible?

I have that game, and it really only becomes moderately CPU intensive when the rats are on screen or if it's in a populated area. But if the 13600K rig is pulling that much extra power draw over the 7600x, it's not going to be from the CPU as a gaming workload won't stress the CPU that much. So assuming the power consumption figures were correct, the GPU must be drawing extra power with the 13600K......but where is the performance if that is the case? I guess all of that extra power is going into a black hole LOL! :D

In the Gamers Nexus 13600K review, Tech Jesus found that the 13600K drew about 44w more than the 7600x in Blender, a workload that maxes out all cores.

A Plague Tale: Requiem benchmarks are hard to find, but Computerbase.de did benchmark it at 720p with an RTX 4090, 13900K vs 7950X, and found the 13900K to be 18% faster:

[Computerbase.de benchmark screenshot]


Other posters also took him to task for not including ray tracing in the benchmarks, when everyone knows that it increases CPU load significantly. Especially as he tested with an RTX 4090. These are legitimate criticisms whether you want to deny them or not.

In fact, the more I think about it, the more I vastly prefer Computerbase.de's approach to testing in that they are very specific in the details about their testing methodology and actually provide the settings used for each individual game.

HWU is still a relatively good source for bulk data points, but their testing methodology leaves much to be desired and has plenty of unexplained anomalies.

Computerbase restricts their memory to officially supported speeds only, which means their 13th-gen results will naturally look better relative to Zen 4 than HWU's, which uses at least decently tuned speeds and timings. This makes Computerbase's results valid but ultimately nearly pointless for gamers.

HWUB's results are far more relevant for gamers, even if you toss out the seemingly anomalous results, which HWUB themselves have identified and for which they have provided averages with those results excluded.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Computerbase restricts their memory to officially supported speeds only, which means their 13th-gen results will naturally look better relative to Zen 4 than HWU's, which uses at least decently tuned speeds and timings. This makes Computerbase's results valid but ultimately nearly pointless for gamers.

This is fair, but Computerbase.de has very accurate results, and we can go back and forth about memory speed all day, because Raptor Lake can run much higher memory frequencies than Zen 4, yet most reviewers tend to run the Raptor Lake CPUs at the same memory frequency as Zen 4, i.e. DDR5-6000.

HWUB's results are far more relevant for gamers, even if you toss out the seemingly anomalous results, which HWUB themselves have identified and for which they have provided averages with those results excluded.

Yes, but HWUB has too many anomalies for my taste that just flat out don't make any sense, the latest being the power consumption for A Plague Tale: Requiem, which I just noticed. How can the 13600K system pull that much extra power through the GPU, yet underperform compared to the 7600X system? Presumably the GPU drawing more power would mean higher performance, but in this case the 13600K falls behind at 1080p, 1440p and even 4K!?

Makes you wonder...... You can extend that line of thinking to some of the other games in the lineup as well.
 
  • Like
Reactions: Exist50

Hitman928

Diamond Member
Apr 15, 2012
6,187
10,694
136
This is fair, but Computerbase.de has very accurate results, and we can go back and forth about memory speed all day, because Raptor Lake can run much higher memory frequencies than Zen 4, yet most reviewers tend to run the Raptor Lake CPUs at the same memory frequency as Zen 4, i.e. DDR5-6000.



Yes, but HWUB has too many anomalies for my taste that just flat out don't make any sense, the latest being the power consumption for A Plague Tale: Requiem, which I just noticed. How can the 13600K system pull that much extra power through the GPU, yet underperform compared to the 7600X system? Presumably the GPU drawing more power would mean higher performance, but in this case the 13600K falls behind at 1080p, 1440p and even 4K!?

Makes you wonder...... You can extend that line of thinking to some of the other games in the lineup as well.

HWUB used higher memory speeds for Intel.

You’re assuming the higher power consumption was used by the GPU. I agree, it’s something they should look into, but it’s not crazy enough to just throw the results out on face value. Being a system level measurement, there are a lot of variables that come into play.
 
  • Like
Reactions: Tlh97 and Kaluan

Kaluan

Senior member
Jan 4, 2022
504
1,074
106
HWUB used higher memory speeds for Intel.

You’re assuming the higher power consumption was used by the GPU. I agree, it’s something they should look into, but it’s not crazy enough to just throw the results out on face value. Being a system level measurement, there are a lot of variables that come into play.

There are plenty of inconsistencies in CB's data as well (beyond the strange choice of RAM)... but we don't see people like him ever nitpick those for 30 forum pages in a row, now, do we? 😅

Half of the post-launch comments in this thread are about him and his bickering over HUB.

Anyway, the moment they started gibbering about reddit was the last confirmation I needed. They're not a troll, they're mental.
 
  • Haha
Reactions: DAPUNISHER

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You’re assuming the higher power consumption was used by the GPU. I agree, it’s something they should look into, but it’s not crazy enough to just throw the results out on face value. Being a system level measurement, there are a lot of variables that come into play.

What else could be causing the additional power draw? Raptor Lake is very power efficient in gaming workloads, and when Computerbase.de tested the 13600K, its average power draw was 88 W. The 7700X's average power draw was 72 W, so I would guess the 7600X's average is in the 60 W range, although they never tested the 7600X's power draw.

In any case, the only other part that could account for an extra 100 W is the GPU, as it is the only component other than the CPU that draws a significant amount of power. And yet the performance you would expect from an additional 100 W of power draw isn't there.

And A Plague Tale: Requiem doesn't support RT at this time, so CPU usage ranges from small to moderate depending on what's on screen.
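
As a back-of-the-envelope version of that process of elimination (every number below is an estimate pulled from the figures quoted in this discussion, plus an assumed PSU efficiency, so treat it as illustrative only):

```python
# Rough wall-power budget for the ~120 W gap HWU reported in this game.
wall_delta_w = 120        # reported system (wall) difference, 13600K vs 7600X rig
cpu_13600k_w = 88         # Computerbase.de average gaming draw for the 13600K
cpu_7600x_w = 60          # guess for the 7600X, based on the 7700X's 72 W figure
psu_efficiency = 0.90     # assumed ~90% efficient PSU at this load

dc_delta_w = wall_delta_w * psu_efficiency   # extra power actually delivered to parts
cpu_delta_w = cpu_13600k_w - cpu_7600x_w     # portion explained by the CPUs alone
unexplained_w = dc_delta_w - cpu_delta_w     # GPU, board, memory, measurement noise

print(f"CPU difference explains roughly {cpu_delta_w} W")
print(f"~{unexplained_w:.0f} W is left for the GPU, motherboard, memory and error")
```

Whether that residual really lands on the GPU, as argued here, or is spread across the board, memory and measurement error, as argued in the replies, is exactly what the available data cannot distinguish.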
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
There are plenty of inconsistencies in CB's data as well (beyond the strange choice of RAM)... but we don't see people like him ever nitpick those for 30 forum pages in a row, now, do we? 😅

Officially supported memory standard = strange choice of RAM, got it!

Anything beyond the official memory standard is technically overclocked and not stock. While I myself never ever run purely stock settings, I respect the reviewer's decision to stick to standardized settings.

Anyway, the moment they started gibbering about reddit was the last confirmation I needed. They're not a troll, they're mental.

You like to snipe from the shadows, but not one of you has attempted to explain any of these anomalies. Your take is, "just accept it." Well, maybe you're that naive and want to assume that HWUB is infallible, but I'm sure as hell not.
 

Hitman928

Diamond Member
Apr 15, 2012
6,187
10,694
136
What else could be causing the additional power draw? Raptor Lake is very power efficient in gaming workloads, and when Computerbase.de tested the 13600K, its average power draw was 88 W. The 7700X's average power draw was 72 W, so I would guess the 7600X's average is in the 60 W range, although they never tested the 7600X's power draw.

In any case, the only other part that could account for an extra 100 W is the GPU, as it is the only component other than the CPU that draws a significant amount of power. And yet the performance you would expect from an additional 100 W of power draw isn't there.

Faster memory with tighter timings draws more power, both from the memory itself and from the CPU. Different motherboards can have significantly different power usage. You also have different efficiency curves for the CPUs, motherboards, and PSUs, which all come into play when taking a wall measurement. All of these things individually may seem small, but added up, they can amount to a lot of power. How does Resizable BAR affect system power usage? How about different amounts of CPU cache? You can't just assume it's all GPU power making up the difference, as there are so many variables and no data to tell how the power is being used.

It's also possible that there's some kind of driver bug with the Intel platform and Nvidia cards in some cases where it increases GPU power without increasing performance. I tend to doubt it but it is possible and would still make HWUB's results valid. Hopefully HWUB can look into it further but without any kind of obvious flaw in their setup or how they are taking their data, I don't see any reason to ignore the results on its face.
 
  • Like
Reactions: Elfear

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Faster memory with tighter timings draws more power, both from the memory itself and from the CPU. Different motherboards can have significantly different power usage. You also have different efficiency curves for the CPUs, motherboards, and PSUs, which all come into play when taking a wall measurement. All of these things individually may seem small, but added up, they can amount to a lot of power. How does Resizable BAR affect system power usage? How about different amounts of CPU cache?

You can't just assume it's all GPU power making up the difference, as there are so many variables and no data to tell how the power is being used.

Although you're absolutely correct that we have limited data to truly pinpoint what's happening, logic and the process of elimination point to the GPU, because none of the things you mentioned can account for that much power draw. Both memory kits use the same voltage (1.10 V) and probably draw less than 10 W, as DDR5 has very low power draw. Everything else in the system was the same except for the CPU, motherboard and memory. Also, since I have that particular game, I know for a fact that it is heavily GPU-bound, and an RTX 4090 can consume up to 450 W by itself at 4K maxed settings. I can post screenshots on demand if you want.

Even though HWUB's system power draw test is done at 2K, the Zen 4 system's power consumption is absurdly low, and the Raptor Lake system is closer to what I would expect as an owner of the game and an RTX 4090.

In their 13600K review, HWUB measured system power draw at 315 W for the 13600K and 226 W for the 7600X in Blender: a ~90 W difference, which is still less than the difference in the much lower-power gaming workload, LOL!



It's also possible that there's some kind of driver bug with the Intel platform and Nvidia cards in some cases where it increases GPU power without increasing performance. I tend to doubt it but it is possible and would still make HWUB's results valid. Hopefully HWUB can look into it further but without any kind of obvious flaw in their setup or how they are taking their data, I don't see any reason to ignore the results on its face.

I have the same platform and I've never noticed anything of the sort personally. Since I built my system, it has performed beyond my expectations, even when underclocked, and it behaves as expected.

That said, I'm not saying we should discount HWUB's results completely. I'm only saying there are anomalous results in their round up which may point to problems with their methodology.
 
  • Like
Reactions: Exist50

MarkPost

Senior member
Mar 1, 2017
322
616
136
In fact, the more I think about it, the more I vastly prefer Computerbase.de's approach to testing

Errr, no. A site testing ST performance with 4 benchmarks, three of them from the same tool (Cinebench), and the other (PoV-Ray) hurting Ryzen performance by deactivating AVX2 when a Ryzen CPU is detected (*). Yeah, great approach to testing.


(*) Fortunately, PoV-Ray is open source, so it's easy to enable AVX2 for Ryzen by simply recompiling the binary with that limitation removed. And this is what happens (my 7950X, stock):

[Screenshots: PoV-Ray ST and MT results on the 7950X, comparing the stock AVX-only binary with a rebuilt binary that has AVX2 enabled for Ryzen]

So when you see PoV-Ray results (like the ComputerBase one), be aware that Ryzen performance is being hurt by the forced avoidance of the AVX2 instruction set (a 15-16% difference). It wouldn't be slower than Raptor Lake, but actually faster:

[Screenshots: PoV-Ray ST and MT comparisons with AVX2 enabled, showing Ryzen ahead of Raptor Lake]