Discussion Speculation: Zen 4 (EPYC 4 "Genoa", Ryzen 7000, etc.)


Vattila

Senior member
Oct 22, 2004
799
1,351
136
Apart from the details of the microarchitectural improvements, we now know pretty well what to expect from Zen 3.

The leaked presentation by AMD Senior Manager Martin Hilgeman shows that EPYC 3 "Milan" will, as promised and expected, reuse the current platform (SP3), and the system architecture and packaging look to be the same, with the same 9-die chiplet design and the same maximum core and thread count (no SMT-4, contrary to rumour). The biggest change revealed so far is the enlargement of the core complex from 4 cores to 8 cores, all sharing a larger L3 cache ("32+ MB", likely to double to 64 MB, I think).

Hilgeman's slides also showed that EPYC 4 "Genoa" is in the definition phase (or was at the time of the presentation in September, at least) and will come with a new platform (SP5) and new memory support (likely DDR5).

Untitled2.png


What else do you think we will see with Zen 4? PCI-Express 5 support? Increased core-count? 4-way SMT? New packaging (interposer, 2.5D, 3D)? Integrated memory on package (HBM)?

Vote in the poll and share your thoughts! :)
 
Last edited:
  • Like
Reactions: richardllewis_01

maddogmcgee

Senior member
Apr 20, 2015
384
303
136
Bought myself a 5800X3D as an upgrade for my current mobo, based on how cheap it was compared to the alternatives. Tested a couple of games and saw a huge improvement over what I had before (it literally turned one game from a sub-30 fps chop fest into a 60 fps beast). Less than a week later, I'm looking at this thread and wondering about the 7000-series X3D release date......:weary: It's hard to be content as a nerd.
 
Jul 27, 2020
16,165
10,240
106
The POV-Ray controversy goes back to the Zen days:


POV-Ray 3.7.1 beta 3 favors Intel.

What gives? I spoke with POV-Ray coordinator Chris Cason who told me the current beta adds AVX2 support, and while 3.7 was compiled with Microsoft’s Visual Studio 2010, the beta version is compiled with Visual Studio 2015.

POV-Ray was fingered by Extremetech.com’s Joel Hruska a few years ago for appearing to favor Intel. POV-Ray officials, though, have long denied it. Cason told PCWorld: “We don’t care what hardware people use, we just want our code to run fast. We don’t have a stake in either camp—in fact, for the past few years, I’ve been running exclusively AMD in my office. I’m answering this email from my dev system, which has an AMD FX-8320.”

So conspiracy theorists, go at it.
 

MarkPost

Senior member
Mar 1, 2017
233
326
136
If anyone with a Ryzen CPU is interested in testing it, I've uploaded here https://drive.google.com/file/d/13mfiQup_disHpgmb7TekN2Ge0FWq3I3Q/view?usp=sharing the POV-Ray binary I compiled with AVX2 ON for all CPUs with AVX2 support (Ryzen included). And the program itself can be downloaded here to compare performance: Release POV-Ray Beta Release v3.8.0-beta.2 · POV-Ray/povray · GitHub

Test it with the "official" version, and then just replace the binary to test with the recompiled one.

EDIT: forgot to mention. Password to extract the binary: AnandTechForums
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I thought I was pretty clear: when the binary detects a Ryzen CPU, it only uses AVX, avoiding AVX2 even though Ryzen, as we all know, supports AVX2. When the binary detects an Intel CPU with AVX2 support, it uses AVX2.

So for Ryzen to use AVX2, it's necessary to enable it in the source code (as I said, it's open source) and compile a new binary (I did that, and tested it, as can be seen above).

Interesting. Of course, I have no way of verifying whether what you say is accurate. POV-Ray has been around for decades. I remember POV-Ray from the Pentium 4 era, or possibly even before then.

AVX2 has been supported by AMD for a long time now as well, so I have no idea why it would be deactivated on Ryzen CPUs, assuming what you say is true.

A quick Google search seems to indicate that AVX2 acceleration IS SUPPORTED by POV-Ray for AMD CPUs. There are even posts on PovRay.org that speak of AMD CPUs using AVX2.

At any rate, I'm not a programmer, so I can't investigate this problem properly. But I do know that some compilers are faster than others due to better vectorization. So maybe the compiler you're using produces better vectorized code output.

Just a guess.
 

jamescox

Senior member
Nov 11, 2009
637
1,103
136
Interesting. Of course, I have no way of verifying whether what you say is accurate. POV-Ray has been around for decades. I remember POV-Ray from the Pentium 4 era, or possibly even before then.

AVX2 has been supported by AMD for a long time now as well, so I have no idea why it would be deactivated on Ryzen CPUs, assuming what you say is true.

A quick Google search seems to indicate that AVX2 acceleration IS SUPPORTED by POV-Ray for AMD CPUs. There are even posts on PovRay.org that speak of AMD CPUs using AVX2.

At any rate, I'm not a programmer, so I can't investigate this problem properly. But I do know that some compilers are faster than others due to better vectorization. So maybe the compiler you're using produces better vectorized code output.

Just a guess.

I remember POV-Ray being around a very long time ago. Wikipedia says the initial release was in 1991; I probably didn't see anything about it until the late 1990s. The first Pentium was 1993, so we are talking 386 or 486 days. The Wikipedia article lists the last stable release as 2018, so it wouldn't be surprising that that release doesn't have proper support for Zen processors. They should check the feature flag so that things like this do not happen. There are probably non-official versions with it fixed for Ryzen processors. I don't know if POV-Ray is actually used much these days if it hasn't had a release in 4 years.
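As a side note, here is a minimal sketch of what "checking the feature flag" rather than the vendor would look like, using the GCC/Clang x86 builtins; the noise_* kernels are hypothetical stubs, not POV-Ray symbols:

```cpp
// Minimal dispatch sketch (GCC/Clang on x86, hypothetical stub kernels, not
// POV-Ray's actual code): select the code path from the CPUID feature flags
// alone, so any AVX2-capable CPU gets the AVX2 path, AMD included.
#include <cstdio>

static void noise_avx2()   { std::puts("AVX2 kernel selected"); }
static void noise_avx()    { std::puts("AVX kernel selected"); }
static void noise_scalar() { std::puts("scalar kernel selected"); }

int main()
{
    __builtin_cpu_init();                   // initialise CPU detection data
    if (__builtin_cpu_supports("avx2"))     // feature flag, not vendor string
        noise_avx2();
    else if (__builtin_cpu_supports("avx"))
        noise_avx();
    else
        noise_scalar();
    return 0;
}
```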
 
  • Like
Reactions: Carfax83

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
I remember POV-Ray being around a very long time ago. Wikipedia says the initial release was in 1991; I probably didn't see anything about it until the late 1990s. The first Pentium was 1993, so we are talking 386 or 486 days. The Wikipedia article lists the last stable release as 2018, so it wouldn't be surprising that that release doesn't have proper support for Zen processors. They should check the feature flag so that things like this do not happen. There are probably non-official versions with it fixed for Ryzen processors. I don't know if POV-Ray is actually used much these days if it hasn't had a release in 4 years.
If you want to do ray tracing today for any practical use case, you'd use a GPU renderer. Pretty much the same argument can be made against Cinebench.
 

uzzi38

Platinum Member
Oct 16, 2019
2,625
5,894
146
Officially supported memory standard = strange choice of RAM, got it!

Anything beyond the official memory standard is technically overclocked and not stock. While I myself never ever run purely stock settings, I respect the reviewer's decision to stick to standardized settings.



You like to snipe from the shadows, but not one of you has attempted to explain any of these anomalies. Your take is, "just accept it." Well, maybe you're that naive and want to assume that HWUB is infallible but I'm sure as hell not.
Enabling XMP is also technically overclocking. Would you like reviewers to use JEDEC spec only from here on out?
 
  • Haha
Reactions: Kaluan

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Enabling XMP is also technically overclocking. Would you like reviewers to use JEDEC spec only from here on out?

I already said that while I personally would never use stock memory speeds (I prefer faster memory, as I have in my own rig), I respect the reviewer's decision to do so. The officially supported memory speed was the status quo for reviews for a long time before AMD "persuaded" reviewers to use overclocked settings, and not just for the memory but for the fabric interconnect as well, which was a big departure from previous review standards.

But I have to ask, how far are you willing to go? I said on a previous page that while Zen 4 may benefit from DDR5-6200, Intel can go much higher. I bet many AMD supporters here and elsewhere would cry foul if a reviewer tested Zen 4 with DDR5-6200 and Raptor Lake with DDR5-8000 or some such.

For reasons of fairness, I suppose most reviewers still try to use similar memory speeds for Zen 4 and Raptor Lake. The only two reviewers I can recall using significantly faster RAM with Raptor Lake than with Zen 4 are Linus Tech Tips and Tech Yes City; both used DDR5-6800 CL34. One thing I found interesting is that with the faster DDR5 RAM, the 13900K beat the 7950X in compression performance quite handily, and the 12900K edged out the 5950X.

Anyway, there are much faster DDR5 speeds available now than when those reviews were made.


 

MarkPost

Senior member
Mar 1, 2017
233
326
136
Interesting. Of course, I have no way of verifying whether what you say is accurate. POV-Ray has been around for decades. I remember POV-Ray from the Pentium 4 era, or possibly even before then.

AVX2 has been supported by AMD for a long time now as well, so I have no idea why it would be deactivated on Ryzen CPUs, assuming what you say is true.

A quick Google search seems to indicate that AVX2 acceleration IS SUPPORTED by POV-Ray for AMD CPUs. There are even posts on PovRay.org that speak of AMD CPUs using AVX2.

At any rate, I'm not a programmer, so I can't investigate this problem properly. But I do know that some compilers are faster than others due to better vectorization. So maybe the compiler you're using produces better vectorized code output.

Just a guess.

No.

You can check it for yourself: go to the POV-Ray GitHub repository, specifically here: povray/optimizednoise.cpp at master · POV-Ray/povray · GitHub

That file defines which instructions to use. Here:

optimizednoise.PNG
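For readers who don't want to open the file, here is a simplified sketch of the vendor-gated selection being described; the function names are made up for illustration, and this is not the actual POV-Ray source:

```cpp
// Sketch of the vendor-gated dispatch pattern described above (illustrative
// only, not the actual optimizednoise.cpp source): the AVX2+FMA kernel is
// chosen only when the CPU also identifies as Intel, so an AVX2-capable
// Ryzen falls back to the plain AVX path. GCC/Clang builtins, stub kernels.
#include <cstdio>

static void noise_avx2_fma() { std::puts("AVX2+FMA kernel selected"); }
static void noise_avx()      { std::puts("AVX kernel selected"); }
static void noise_portable() { std::puts("portable kernel selected"); }

int main()
{
    __builtin_cpu_init();
    const bool is_intel = __builtin_cpu_is("intel");   // the vendor gate in question
    if (__builtin_cpu_supports("avx2") && __builtin_cpu_supports("fma") && is_intel)
        noise_avx2_fma();   // Intel only, even though the feature flags already passed
    else if (__builtin_cpu_supports("avx"))
        noise_avx();        // what an AVX2-capable Ryzen ends up running
    else
        noise_portable();
    return 0;
}
```

Dropping the vendor check (or forcing that condition true and recompiling, as with the binary linked earlier in the thread) is essentially what the AVX2-on-Ryzen test builds amount to.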
 

MarkPost

Senior member
Mar 1, 2017
233
326
136
I remember POV-Ray being around a very long time ago. Wikipedia says the initial release was in 1991; I probably didn't see anything about it until the late 1990s. The first Pentium was 1993, so we are talking 386 or 486 days. The Wikipedia article lists the last stable release as 2018, so it wouldn't be surprising that that release doesn't have proper support for Zen processors. They should check the feature flag so that things like this do not happen. There are probably non-official versions with it fixed for Ryzen processors. I don't know if POV-Ray is actually used much these days if it hasn't had a release in 4 years.

AVX2 isn't implemented in the stable version (3.7.0). It's implemented in the beta versions (3.7.1 and 3.8.0, the latest dated Aug 9, 2021).
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
According to clipka at news.povray.org, this is a deliberate decision that POV-Ray made AT THE REQUEST OF AMD in 2017. AMD did code-path testing on a bunch of machines and determined that the AVX code path was faster on their hardware than AVX2. This is not a surprise, as Zen 1 handles 256-bit AVX2 operations as a pair of 128-bit ones and has roughly half the throughput. The link, for the record:

CLIPKA discusses AVX code path code in POVRAY

Since there have been no major updates to POV-Ray since then, it makes sense that it hasn't been addressed.
 

MarkPost

Senior member
Mar 1, 2017
233
326
136
According to clipka at news.povray.org, this is a deliberate decision that POV-Ray made AT THE REQUEST OF AMD in 2017. AMD did code-path testing on a bunch of machines and determined that the AVX code path was faster on their hardware than AVX2. This is not a surprise, as Zen 1 handles 256-bit AVX2 operations as a pair of 128-bit ones and has roughly half the throughput. The link, for the record:

CLIPKA discusses AVX code path code in POVRAY

Since there have been no major updates to POV-Ray since then, it makes sense that it hasn't been addressed.


Yeah, that explanation would make sense if it were true. But it isn't. Anyone with a Zen 1 CPU can check that Zen 1 is actually faster with binaries that include AVX2 support for all CPUs. Proof:

Latest Beta version (3.8.0 beta 2, dated Aug 9, 2021) AVX

1700povray38b2avx.PNG

Latest Beta version (3.8.0 beta 2, dated Aug 9, 2021) AVX2

1700povray38b2avx2.PNG


And more importantly, the FIRST beta build WITH AVX2 implemented (3.7.1-beta.3, dated Feb 19, 2017). Ironically, AVX2 in this build isn't limited to Intel CPUs only; AVX2 is activated for any AVX2-capable CPU, no need to recompile it.

1700povray371b3avx2.PNG

... compared with the immediately previous beta build WITHOUT AVX2 implemented, just AVX here (3.7.1-beta.2, dated Jan 11, 2017):

1700povray371b2avx.PNG


In fact, AVX2 was removed for non-Intel CPUs with v3.7.1 beta 6 (dated May 7, 2017). This is the result with that build (capped to generic AVX for non-Intel CPUs):

1700povray371b6avx.PNG


... compared with the "official" v3.7.1 beta 5, the latest including AVX2 for all capable CPUs (dated Mar 25, 2017):

1700povray371b5avx2.PNG


As can be seen, Zen 1 is always faster with AVX2 (remember, higher is better). Oh, and since 2017 there have been a bunch of beta releases, so no, it doesn't make sense that it hasn't been addressed.
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
5,245
7,793
136
The comparison was obviously to point out a "worst case" scenario.

That's only a worst-case scenario for CPU power differences though, not necessarily system power. They can be two different things.

There is nothing reasonable or valid about a result that is 100W different than anyone else, and likewise with many of their other outliers. This is tech, not magic. You don't get 100W difference by random fluctuations, slight memory differences, etc.

They're not the only ones reporting large system power differences though. JT reported a 135W difference in system power when running Cinebench. That's way more power than can be explained by CPU power alone. I already agreed 100W seems high and it's something HWUB should look into, but again, assuming that all of that power is being consumed by the GPU is a bad assumption and just throwing out their data because it looks high is an overreaction.

There is very obviously something wrong with several of their results. Good outlets would look at that data, make the obvious conclusion, and then retest after fixing the underlying issue. HWUB decided to publish with none of that due diligence. That is plenty reason to throw out their results.

HWUB did retest. They said they retested the BF5 benchmark 5 times because of the odd results, but it kept coming out the same. If there is a bug in the driver, game, or OS, it's not the reviewer's job to fix it. They should double- and triple-check their setup, retest a few times, and then publish what they have. They can put a caveat on it explaining it's probably a bug, which is exactly what they did, but withholding the data is not the prudent thing to do. If it is an e-core issue, they're not the only ones to still face random e-core issues from time to time. Gamers Nexus mentioned in one of their videos that they seem to get workloads bounced to the e-cores randomly from time to time.

If HWUB was known for reasonably representative methodology, that might make sense. But all you're doing here is amplifying the impact of obviously flawed testing.

If it is obviously flawed testing, please point out the obvious flaw.

You're missing the point though. While you're correct that Jarrod's Tech found a 135W difference between the 13700K and the 7700x, the former is also 56% faster than the latter. So 56% faster, for 67% more power. This is different from the Plague Requiem discrepancy which shows the 13600K rig using 25% more power but performing 10% slower than the 7600x.

How is that even possible?

As I said above, it's not just the fact that the 13600K rig is using much more power than the 7600x, it's that it's supposedly using much more power while underperforming. Bringing up stuff like PCIe links and what not is just silly. We're talking at least 100w of power difference between the two here, and that cannot be explained away without bringing up the GPU as that is the only component other than the CPU which could account for the increased power draw.

And I'm certain it's not the CPU, because as I said, the game is primarily GPU heavy and only moderately taxes the CPU in crowded areas or when the rats are on screen. In normal gaming circumstances, the game is very light on the CPU.

I'm not missing the point at all. It's not about faster, it's about power use. Of course something can be slower and use more power. My point with JT is that their system shows way more power use in a CPU-only benchmark than can be explained by the difference in CPU power alone. So if there is a large difference when only the CPU (and memory) are loaded, due to platform differences, then there could be an even larger difference in platform power once you start having active high-speed links for the GPU and storage, and even more memory accesses from the GPU as well as the CPU. Again, I'm not coming to a conclusion or saying the results shouldn't be looked at. What I am saying is that assuming all of that extra power must be going to the GPU despite the lower fps for the Intel system, and concluding that this proves it is bad data, is wrong. There are too many variables with no controls and no data to draw that kind of conclusion, and there are reasonable explanations, and data from another reviewer, showing that there is, or at least can be, a significant difference in platform power between the two which is not accounted for by the CPU alone.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
That's only a worst-case scenario for CPU power differences though, not necessarily system power. They can be two different things.
Sure, but you can't explain a 100W difference as run-to-run variation or any of the typical sources of statistical noise. At 100W, either the CPU or the GPU or both are responsible.
That's way more power than can be explained by CPU power alone. I already agreed 100W seems high and it's something HWUB should look into, but again, assuming that all of that power is being consumed by the GPU is a bad assumption and just throwing out their data because it looks high is an overreaction.
So why didn't HWUB actually investigate? There're plenty of applications that can tell you approximate numbers for the CPU, GPU, etc., and that should be plenty sufficient to diagnose the cause here. Instead they take the lazy way out and publish clearly flawed data without any effort to explain themselves.
They can put a caveat on it explaining it's probably a bug, which is exactly what they did, but withholding the data is not the prudent thing to do. If it is an e-core issue, they're not the only ones to still face random e-core issues from time to time.
They're the only ones that seem to have this particular issue. The scheduler isn't random, it's deterministic. If it's actually a bug with the scheduler it should be reproducible by others.

But the exact cause of the issue is beside the point. You have a hypothesis right here - that it's a scheduling issue. It would be absolutely trivial for HWUB to disable E-cores, rerun the test, and make a conclusion on that hypothesis. Yet they couldn't be bothered to put in that minimal amount of effort?

I don't understand why people insist they're such a reputable outlet while defending their unwillingness to put even a minimal amount of effort into validating their results.
 

Abwx

Lifer
Apr 2, 2011
10,940
3,445
136
Sure, but you can't explain 100W difference as run to run variation or any of the typical sources of statistical noise. 100W, either the CPU or GPU or both are responsible.

So why didn't HWUB actually investigate?

Because there's no need to investigate.

The 13900K takes something like 35-40W more than the 7950X in games. Assuming the 13900K has 5% better performance, this implies that the GPU, which is at say 390W with the 7950X, will demand about 15% more power for those 5% better GPU performance. That's about 100W more GPU + CPU power, and something like 130W more at the mains.
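Spelling that estimate out with the post's own numbers (the ~80% combined PSU/VRM conversion efficiency in the last step is an assumption, not a figure from the post):

$$390\,\mathrm{W} \times 0.15 \approx 59\,\mathrm{W}\ \text{extra GPU power}$$
$$59\,\mathrm{W} + {\sim}38\,\mathrm{W}\ \text{extra CPU power} \approx 97\,\mathrm{W}\ \text{at the components}$$
$$97\,\mathrm{W} \div 0.8 \approx 120\,\mathrm{W}\ \text{extra at the wall}$$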
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Because there's no need to investigate.

The 13900K takes something like 35-40W more than the 7950X in games. Assuming the 13900K has 5% better performance, this implies that the GPU, which is at say 390W with the 7950X, will demand about 15% more power for those 5% better GPU performance. That's about 100W more GPU + CPU power, and something like 130W more at the mains.
The comparison system was a 13600K vs a 7600X, where not only was the 13600K system consuming 100W extra, but it also was performing substantially worse than the 7600X. So if anything, the trend you describe would go in the opposite direction.
 
  • Like
Reactions: Geddagod

dr1337

Senior member
May 25, 2020
331
559
106
A Plague Tale Requiem has the 13600K rig using nearly 120w more than the 7600x rig, yet it loses in that game by 10% at 1440p. How is that even possible?
If the game has more threads, then it's possible that it's loading up the e-cores harder; it could be as simple as the video driver assigning more threads. Games aren't universal in their workload; one game consuming more power on one brand isn't completely unexpected either.
The comparison system was a 13600K vs a 7600X, where not only was the 13600K system consuming 100W extra, but it also was performing substantially worse than the 7600X. So if anything, the trend you describe would go in the opposite direction.
Are you still talking about the HUB review where they found the 13600K to be only 5% slower? Such a small difference between the two is identical performance as far as I'm concerned, and it would make sense that the chip with 8 more cores on it consumes more power. Even TechPowerUp found the 13600K to consume nearly 2x more power than the 7600X in gaming with, funnily enough, a 5% increase in performance. However, TechPowerUp also consistently gets worse FPS from their overclocking of Intel chips, where their max OC example is only 3% faster...

Speaking of techpowerup
Last CPU tested by TPU/Wizzard before Ryzen 7000 came out:
View attachment 69456
Ryzen 7000 review day results:
View attachment 69457

If in only a month the 5800X3D loses 8% of performance against the 12900K, it's entirely likely that as time goes on we're seeing similar changes in Intel 13th-gen performance too. New BIOSes, new drivers, new Windows builds all make huge differences. It's definitely a lot easier for HUB to fudge and cherry-pick numbers in a 1v1 comparison, but I would assume their difference from other, older reviews is simply new software. I guess we could start talking about the idea that maybe all reviewers are cheating and that maybe the TechPowerUp graphs are even faker than HUB's?
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
If the game has more threads, then it's possible that it's loading up the e-cores harder; it could be as simple as the video driver assigning more threads. Games aren't universal in their workload; one game consuming more power on one brand isn't completely unexpected either.
So your claim is that the game is somehow sabotaging itself by spawning useless threads? Because anything that took advantage of the E-cores for more performance would show a large gap vs the 7600X.
 

dr1337

Senior member
May 25, 2020
331
559
106
So your claim is that the game is somehow sabotaging itself by spawning useless threads? Because anything that took advantage of the E-cores for more performance would show a large gap vs the 7600X.
I mean, I usually play games like ArmA, where the engine indeed does spend a lot of time actively sabotaging itself via the CPU. The spawned threads could be thought of as redundant more so than useless. Or it could even be a quirk of the drivers, as this test was using a 4090 after all. It could be that the E-cores feed the GPU better and the Nvidia driver asks for extra threads even though the game is bottlenecked somewhere else. HUB saw the 13600K system consume more power in every game, the closest gap being 40W. Keeping in mind it's a total system power measurement, the difference could be entirely due to something like the Intel motherboard having different power delivery, or default BIOS settings being overly aggressive from the factory.

Given that TechPowerUp always gets worse performance from an OC on Intel CPUs, bad default settings on a motherboard could easily cause more power consumption for no performance gain. And this could be something that's cherry-picked to happen, or one can say it's representative of the average consumer experience, or maybe even both.
 
  • Like
Reactions: Rigg and moinmoin

Hitman928

Diamond Member
Apr 15, 2012
5,245
7,793
136
Sure, but you can't explain a 100W difference as run-to-run variation or any of the typical sources of statistical noise. At 100W, either the CPU or the GPU or both are responsible.

It's not unheard of for the same CPU used in two different motherboards to use 40 - 70 W more system power in one motherboard than the other with only the CPU loaded. So no, the CPU and GPU don't have to be the responsible parties.

So why didn't HWUB actually investigate? There're plenty of applications that can tell you approximate numbers for the CPU, GPU, etc., and that should be plenty sufficient to diagnose the cause here. Instead they take the lazy way out and publish clearly flawed data without any effort to explain themselves.

I find it funny you calling them lazy when they consistently publish pieces with tests of far more games than the vast majority of other sites/channels and do not just rely on built in benchmarks. This takes an enormous amount of time, far more than almost anyone else is willing to put in. Even if they had only done a limited number of games, it's not lazy; it's sticking to their testing methodology between tests, which is exactly what they should do. Reviewers don't have infinite time, and again, it's not their responsibility to completely debug any results that seem off. It is their responsibility to ensure a consistent testing methodology and to triple-check all of their settings and methods to make sure nothing is wrong on their end. If they've done that (which they say they have), then they should publish. The community can give feedback and is free to try to replicate their results. I still haven't heard anyone point out what they've done that is obviously flawed. They just point to a number and think it's too big. If anything, my complaint would be that they are using system power consumption to compare CPU efficiency in the first place, because of the large number of variables you can't really control when deciding which CPU is more efficient. I typically ignore system consumption numbers for CPU tests for this reason.

They're the only ones that seem to have this particular issue. The scheduler isn't random, it's deterministic. If it's actually a bug with the scheduler it should be reproducible by others.

I'm sure it's not actually random, I wasn't using that word in a technical or strict sense.

But the exact cause of the issue is beside the point. You have a hypothesis right here - that it's a scheduling issue. It would be absolutely trivial for HWUB to disable E-cores, rerun the test, and make a conclusion on that hypothesis. Yet they couldn't be bothered to put in that minimal amount of effort?

I don't understand why people insist they're such a reputable outlet while defending their unwillingness to put even a minimal amount of effort into validating their results.

Maybe they will investigate further; I hope they do. But let's just say that it is a bug with the hybrid setup, does that make their results invalid just because they didn't figure out what was causing it? The only way the result is invalid is if they made a mistake with their setup or how they tested. If they've retested 5 times and had the same results, that's an honest effort to make sure the issue isn't on their end. No one is perfect and everyone can make mistakes, even after retesting, but that doesn't mean we automatically assume it's their mistake and label their results invalid. JT tested a larger number of games and had similar overall results, but no one seems to be calling out that channel. TPU's gaming summary rankings had some significant shifts over short periods of time, and significant shifts again when they tested a larger set of games, but no one seems to find these things interesting. It's weird to me that people get so up in arms over which tests are valid, because it shows one side or the other winning by a meaningless amount.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
It's not unheard of for the same CPU used in two different motherboards to use 40 - 70 W more system power in one motherboard than the other with only the CPU loaded.
It absolutely is unheard of for a 13600K to show a 100W difference based on the motherboard. Nor would that explain why the gap doesn't seem consistent across tests.
I find it funny you calling them lazy when they consistently publish pieces with tests of far more games than the vast majority of other sites/channels and do not just rely on built in benchmarks.
Quantity, not quality.
it's not their responsibility to completely debug any results that seem off
Yes, it absolutely is their responsibility to ensure that their results adequately reflect the target platform. If no one else can reproduce their data, then it clearly doesn't reflect what a buyer can expect out of the system, regardless of how they arrived at those numbers.
I still haven't heard anyone point out what they've done that is obviously flawed.
There're a thousand ways to get bad data. We could blindly speculate, but only they have the tools necessary to actually conclude which one it is.
but let's just say that it is a bug with the hybrid setup, does that make their results invalid just because they didn't figure out what was causing it?
Yes, if only they are seeing this supposed bug, then it doesn't represent the actual hardware, and thus makes for an invalid result.
 

Abwx

Lifer
Apr 2, 2011
10,940
3,445
136
The comparison system was a 13600K vs a 7600X, where not only was the 13600K system consuming 100W extra, but it also was performing substantially worse than the 7600X. So if anything, the trend you describe would go in the opposite direction.

In the few games tested at stock settings by ComputerBase, the 13600K uses 50W more than the 7600X in the most demanding games, so you have at least 50% of the difference explained. Dunno what games were tested by the decried site, whether they used overclocked RAM/IMC, or what exactly the tested scenes were...

FTR, the 7600X hovers at 60-66W depending on the game...
 
Last edited:
  • Like
Reactions: Kaluan