As far as I know the CPU-Z score is quite unstable; the result can vary across different versions of CPU-Z. A Cinebench result would tell us more.
However, RKL performance seems pretty much in line with what we expected. Gaming is the biggest question right now.
I think you're too kind. It's the same people in every Intel thread, spreading FUD over and over and over. The trick is to treat their posts for what they are and move on.
I think 10-20% gains were obvious from that very first Geekbench leak. Some guys chose to ignore the fact that the memory bandwidth/latency-sensitive subtests had regressed and ran away with the conclusion that "IPC regressed".
As a 6-8C CPU, RKL will be just fine: state-of-the-art performance on desktop. Except one can buy very similar performance in a 5600X-5800X today.
For gaming I expect them to run very near each other. Both will have PCIe 4.0, and Intel will still have the edge with the highest-end hand-tuned memory, as AMD sat on their asses at DDR4-3800 while Intel can hit well north of 4000. The swan song of DDR4 will clearly belong to Intel.
Also from the i9 moniker: performance for the RKL-S 8c/16t needs to be somewhat close to CML 10c/20t in MT workloads, hence it needs to lose by single digits in MT perf while winning ST by double digits.
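The moniker arithmetic above can be sketched with toy numbers. This is a back-of-the-envelope sketch, not a measurement; the 19% uplift is an assumed value, and it pretends clocks are equal and MT scaling with core count is perfect, which real workloads won't give you:

```python
# Illustrative numbers only, not measurements.
cml_cores = 10          # Comet Lake i9 (10900K class)
rkl_cores = 8           # Rocket Lake top die
ipc_uplift = 0.19       # assumed Skylake -> Cypress Cove ST gain (hypothetical)

# Relative MT throughput of 8 faster cores vs 10 slower ones,
# assuming equal clocks and perfect multi-threaded scaling.
rkl_vs_cml_mt = rkl_cores * (1 + ipc_uplift) / cml_cores
print(f"RKL 8c vs CML 10c in MT: {rkl_vs_cml_mt:.1%}")  # 95.2%, a single-digit loss
```

Under those assumptions, 8 cores with a ~19% per-core uplift land within ~5% of 10 slower cores, which is exactly the "lose by single digits in MT" scenario.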
I'll just leave you with these since it didn't sink in last time. There's this thing called an SPD, and a variation called XMP, and another variation called A-XMP.
This RAM is tuned for AMD. That hasn't seemed to sink into your thick one yet. The chips don't care but if that's the extent of your knowledge you probably shouldn't be talking. There's a lot more to it.
Picking out the right DDR4 dual-channel memory kit for an AMD Ryzen processor can be a daunting task as there are hundreds of kits on the market to pick from and not all work well with AMD's latest AM4 platform. The good news is G.Skill has come out with the Flare X and FORTIS DDR4 memory series...
We show you the best DDR4 memory kits and memory speeds for Ryzen 7 2700X builds using the X470/B450 chipset, maximising performance of your Zen 2 build.
I think you're too kind. It's the same people in every Intel thread, spreading FUD over and over and over. The trick is to treat their posts for what they are and move on.
This post, coming from one of the most biased posters I've seen in the past couple of years here has been truly fascinating to read 🤣 thank you and have a good day.
Also from the i9 moniker: performance for the RKL-S 8c/16t needs to be somewhat close to CML 10c/20t in MT workloads, hence it needs to lose by single digits in MT perf while winning ST by double digits.
ROTFL. I just genuinely realized that my 10900K is an i9. Everyone everywhere just calls it the 10900K. I don't care about Intel or AMD marketing; I just buy what is best for the task at hand. The people who buy monikers without researching will not get "burned" either.
You know, I don't even care about MT performance, as I run said 10900K with HT disabled for that sweet, consistent locked 5.1 GHz. Would 8C/8T be enough in the long(er) run? That is a valid question. But I probably won't be buying RKL, and even if I did, AMD/Intel will come out with faster stuff before the MT deficit starts to matter.
ROTFL. I just genuinely realized that my 10900K is an i9. Everyone everywhere just calls it the 10900K. I don't care about Intel or AMD marketing; I just buy what is best for the task at hand.
I don't care about marketing either, but it's still a valid gauge for future SKU performance when used to estimate the lowest acceptable gain for... uhm... JoeAverage.
This post, coming from one of the most biased posters I've seen in the past couple of years here has been truly fascinating to read 🤣 thank you and have a good day.
You're the only one laughing. I dare you to find a post from me that says AMD is bad, or any of the other special adjectives your ilk have reserved for Intel products. You don't understand what "bias" means, but that's entirely your fault, not mine.
It certainly doesn't explicitly mean you're saying that a company is bad. I've watched you defending Intel's shenanigans and failures with ridiculous spin doctoring for quite some time. It may not have happened yesterday, but not everyone has the memory span of a goldfish. Also this is an Intel topic, so I'm not sure where you got the idea from that I'd suggest you were saying something bad about AMD of all companies.
I think 10-20% gains were obvious from that very first Geekbench leak. Some guys chose to ignore the fact that the memory bandwidth/latency-sensitive subtests had regressed and ran away with the conclusion that "IPC regressed".
As a 6-8C CPU, RKL will be just fine: state-of-the-art performance on desktop. Except one can buy very similar performance in a 5600X-5800X today.
For gaming I expect them to run very near each other. Both will have PCIe 4.0, and Intel will still have the edge with the highest-end hand-tuned memory, as AMD sat on their asses at DDR4-3800 while Intel can hit well north of 4000. The swan song of DDR4 will clearly belong to Intel.
To be faster than Zen 3 by 15-20%, one needs a corresponding IPC gain of greater than 18% going from Skylake to Cypress Cove. There is no indication thus far that we'll get such a gain. 5-10% faster than Zen 3 at the same core count is what we're going to end up with.
You seem to expect ridiculously much out of Rocket Lake. On what are you basing that assumption?
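The >18% figure is just compounding: the per-clock gain Rocket Lake needs over Skylake multiplies any lead Zen 3 already has by the desired lead over Zen 3. A rough sketch with hypothetical inputs (the Zen 3 lead values are assumptions for illustration, not measurements):

```python
def required_uplift(zen3_lead_over_skylake: float, target_lead: float) -> float:
    """Per-clock gain needed over Skylake to beat Zen 3 by target_lead."""
    return (1 + zen3_lead_over_skylake) * (1 + target_lead) - 1

# If Zen 3 is at per-clock parity with Skylake-class cores, an 18% lead needs +18%.
print(f"{required_uplift(0.00, 0.18):.1%}")  # 18.0%
# If Zen 3 already leads by a few percent, the required uplift grows further.
print(f"{required_uplift(0.03, 0.15):.1%}")
```

The point of the sketch is that the required uplift only gets larger if Zen 3 holds any per-clock lead at all.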
AMD was doing poorly with the 3xxx series and earlier because it had a similar amount of usable L3 cache for games (2x16 MB split vs 16 MB) and higher memory latency. While memory latency is still higher, the 5xxx series improved it by 5-10 ns and doubled the usable cache (32 MB unified vs 16 MB), which with many workloads cuts the effective latency roughly in half (it depends highly on the workload and cache hit rate, but on average it's around 50% less for double the cache).
The result is that:
AMD does particularly well in games whose fast path fits into the L3 cache. These are usually lighter eSports titles (look at Rainbow Six, CS:GO, Valorant, and Overwatch, which is clearly bottlenecked on all CPUs).
But it still beats Comet Lake on average even when the latter is heavily overclocked and uses optimized memory, as Gamers Nexus and igorslab show.
The problem with Rocket Lake is that while the L2 cache size is increased, the L3 size remains the same at 16 MB. Cache bandwidth seems to be considerably increased (particularly L1), but not latency. The memory controller was also already very aggressive on Skylake; there really isn't that much room to go below 40 ns of memory latency. So where do you expect to see the gains? If it were "general IPC", Zen 3 would be much stronger than it is.
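The cache-size argument boils down to average memory access time: a bigger L3 converts slow DRAM accesses into fast cache hits. A toy AMAT calculation illustrates the shape of the effect (the latencies and hit rates below are made-up illustrative values, not measurements of any real CPU):

```python
def amat(l3_hit_rate: float, l3_latency_ns: float, dram_latency_ns: float) -> float:
    """Average access time for loads that reach L3 (simple two-level model)."""
    return l3_hit_rate * l3_latency_ns + (1 - l3_hit_rate) * dram_latency_ns

# Hypothetical game working set: a 16 MB L3 catches 60% of these accesses,
# doubling to 32 MB catches 85% (made-up numbers for illustration).
small = amat(0.60, 10, 70)   # 34.0 ns
large = amat(0.85, 10, 70)   # 19.0 ns
print(f"effective latency drops {1 - large / small:.0%}")  # ~44%
```

How close the real drop gets to the "~50% for double the cache" claim depends entirely on how much of the game's hot data the extra capacity actually captures.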
Going back to the "GPU limited" point:
1. While Comet Lake pulls some wins on igorslab benches, some look like this (I would hope you agree that an RTX 3090 @ 1280x720 is NOT GPU limited):
Please do check out the World War Z results above; this is an excellent example of a game fitting well into the L3 cache. Do you really expect Rocket Lake to make up the difference AND add 20% on top of that?
2. Take Techpowerup's pure draw-call tests (where it's very easy to see when it shifts from 100% CPU to 100% GPU limited):
The problem I see with the "Rocket Lake gaming king!" results is that it almost certainly won't retake the crown in benchmarks that play well with L3 cache, as AMD simply has 2x more.
Games that have loads of cache misses are already very competitive on the 10900K, and yes, it will improve there, but nowhere near 20%.
I mean, just look at how the 6700K performed vs the 2700K or 3770K, for instance, although the IPC uplift is in the same ballpark as Comet Lake -> Rocket Lake.
It certainly doesn't explicitly mean you're saying that a company is bad. I've watched you defending Intel's shenanigans and failures with ridiculous spin doctoring for quite some time. It may not have happened yesterday, but not everyone has the memory span of a goldfish. Also this is an Intel topic, so I'm not sure where you got the idea from that I'd suggest you were saying something bad about AMD of all companies.
I've defended Intel's product performance from ridiculous claims like the one Ondma was referring to. That's been it from day one. You know, AMD good, Intel bad. Your post history should tell everyone which side of that divide you're on. Hehe
On topic: Expect Intel to take back the single-thread performance crown. Assuming no GPU bottlenecks, gaming performance should swing back to about a 5-8% lead on average over Zen 3, with a few outlier cases favouring Zen 3 due to its L3 cache size advantage.
AMD was doing poorly with the 3xxx series and earlier because it had a similar amount of usable L3 cache for games (2x16 MB split vs 16 MB) and higher memory latency. While memory latency is still higher, the 5xxx series improved it by 5-10 ns and doubled the usable cache (32 MB unified vs 16 MB), which with many workloads cuts the effective latency roughly in half (it depends highly on the workload and cache hit rate, but on average it's around 50% less for double the cache).
Let's not neglect the impact of the 4C CCX. There were plenty of situations where AMD was heavily penalized for threads running on different CCXes and incurring a nasty communication penalty. Even if a game is mostly single-threaded, like those old DX9-era games such as CS, it still has to communicate with the DirectX runtime, and GPU drivers run on different threads as well, with all those calls requiring locks and synchronization. That is why raising Infinity Fabric clocks was so important: those communications were basically limited by IF link speed.
Now, with an 8C CCX, it's much better, as a typical workload is more likely to fit. Compound that with the huge L3 cache, which makes fast sharing even easier and more likely, and the result is much higher FPS maximums.
Where RKL will stand depends on testing: in a 3200C20 "JEDEC stock" config they will be the same. Click top left for that type of review, which is completely useless for people who read deep-dive reviews, but relevant for people who don't.
In a proper 3600C16-and-above setting I expect Intel to retake the crown by some 5%, but no more. And in an uber-high-end hand-tuned 3800CL14 vs 4000C15 setting they will probably be the same on average.
AMD does particularly well in games whose fast path fits into the L3 cache. These are usually lighter eSports titles (look at Rainbow Six, CS:GO, Valorant, and Overwatch, which is clearly bottlenecked on all CPUs).
But it still beats Comet Lake on average even when the latter is heavily overclocked and uses optimized memory, as Gamers Nexus and igorslab show.
Everyone's big assumption here is that Tech Jesus really had the optimum settings. He didn't. His settings were 5.1 GHz all-core with a 4 GHz ring bus using DDR4-3200 CL14. Someone using AMD might think that's good. It isn't for Intel.
Using GN's memory settings, this guy shows what more is possible. He's using open-loop cooling and a 10900K, but it's still relevant. Staying with DDR4-3200 and just going to a 5 GHz ring bus gave him +5% FPS.
That +5% alone would vault the 10600K on Tech Jesus' chart to even with a stock 5950X and above their overclocked 5800X.
Based on his results, sufficiently fast memory (4400 MHz) plus increasing the ring to 5 GHz should get you around +21% above GN's benchmarks.
That would have put the 10600K about 8% FPS above any of the Zen 3 systems on GN's chart. A 10900K with settings like that absolutely obliterates them.
These benchmarks are done at highest detail, 10900K / 3090, 5.4 GHz all-core.
Here's another one: GN's original benchmarks of the 10600K. Notice where the 10900K OC is; he actually got a lower score than the stock 10900K.
His high for the 10900K at 1080p medium in SOTR is 178.1.
So no, GN is not doing extreme tuning.
Tuned by GN:
This guy has a 10900K / 2080 Ti and got 188 at highest. He is running a 5.3 GHz ring and DDR4-4600. Even at higher detail vs GN's medium settings, he is obliterating GN's top score for the 10900K using the same tier of card (2080 Ti).
You are dead wrong on this. Even if they used the same RAM kit, they loaded the default RAM speed for the CPUs. Check big sites like Computerbase; you should know this! A few have used OC RAM speeds for Intel, but the majority didn't; they tested according to spec. Because of the same RAM speed with RKL-S, Intel should look better out of the box in games regardless of the IPC improvements.
This is absolutely dead wrong.
From the big ones, only Anandtech and Computerbase are guilty. Anandtech does stock only. Tom's runs stock without XMP, but it also has OC results (which go up to 3800-4000 MHz for both Intel and AMD). Computerbase has a funky memory setup (as mentioned above), but it's also not quite stock; all configs have CL14 timings, though granted it will favor AMD.
I went through the trouble of going over every review from this roundup, and the reality is just the opposite of what you claim. Most of the sites do use the same RAM with XMP, usually at 3600 MHz or 3200 MHz (if also testing Zen+ and older).
I marked "+" for those who used the same overclocked memory and "-" for those that didn't, and left unmarked those with no comparisons at all (e.g. only testing the 5900X):
+ 4Gamer - used 3600 MHz on both
+ Adrenaline - used 4000 MHz on both
- Anandtech - didn't (non-XMP all around)
- Benchmark.pl - didn't (3200 MHz CL16 vs 2933 CL16)
+ Bitwit - used 3600 CL16 on both
CoolPC - more of a forum post, unclear
+ Comptoir Hardware - DDR4-3200 CL14 on both
- Computerbase - CL14 timings but slower MHz on Intel
+ Coreteks - 3600 CL16 on both
+ Cowcotland - 3200 MHz for both
der8auer - I'll skip this one; uses 4000 MHz XMP but ONLY features the 5950X and what it can achieve (no comparison)
+ eTeknix - 3000 MHz on all
+ Eurogamer (Digital Foundry) - 3600 MHz XMP on all
GamerMeld - just plays around with the 5xxx series, no comparisons
+ Phoronix (though very Linux-specific) - DDR4-3600 on all
+ PRO Hi-Tech (seem to really know their stuff) - both heavily overclocked (mem and CPU), up to 4000 MHz CL16 for Intel, 3800 MHz CL16 for AMD
+ Sweclockers - 3200 MHz, 14-14-14-34 for all
+ Tech Critter - 3200 MHz for both
+ Techpowerup - 3200 14-14-14-34 for all (later 3800 MHz + RTX 3090 added)
+ Techtesters (again smaller, but I recommend) - 3600 MHz XMP enabled
+ Tech YES City - 3600 CL16 for all
+ ThinkComputers - same DIMMs, 3600 MHz for all
Tweak.dk - just references AMD's slides (which ironically should be a "+", as AMD used the same DIMMs @ 3600 MHz for both Intel and itself)
+* Tom's Hardware - while "stock" is without XMP, it makes sense, as "OC" uses 3800 (Zen 2) and 4000 MHz (Zen 3) vs Intel (4000 MHz). They could have gone higher I guess, but it certainly isn't stock
+ UNIKO's Hardware - 3600 CL14 for all
+ XanxoGaming - same 3200 CL14 kit for all
XFastest Taiwan - don't know, unreachable for me
+ XFastest Hong Kong - DDR4-3200 for all
Results:
Most clearly have XMP enabled. Of the sites that actually did comparisons, 19 used the same speed and timings for all, XMP or not (20 if you count Tom's), while only 3 didn't (4 if you count Tom's).
Of all the reputable old-time review sites, only Computerbase and Anandtech ran uneven settings (and even then it's worth mentioning that Computerbase often does separate memory-tuning articles).
Most importantly, all of the most prominent YouTubers (Linus, Gamers Nexus, Hardware Unboxed, but also smaller quality ones like Optimum Tech and Techtesters) run equal settings. We might not like them, but their traffic dwarfs the written review sites.
Note: I marked most prominent HW youtubers and old-time HW sites I've seen around for 10+ years doing reviews in bold for clarity (sorry if i missed some regional ones), not that it changes anything.
Everyone's big assumption here is that Tech Jesus really had the optimum settings. He didn't. His settings were 5.1 GHz all-core with a 4 GHz ring bus using DDR4-3200 CL14. Someone using AMD might think that's good. It isn't for Intel.
Using GN's memory settings, this guy shows what more is possible. He's using open-loop cooling and a 10900K, but it's still relevant. Staying with DDR4-3200 and just going to a 5 GHz ring bus gave him +5% FPS.
The vast majority of users don't run open-loop cooling, and the vast majority of reviewers aren't going to use open-loop coolers in their reviews either. They might have a separate article/video addressing that, but they won't put it in the main review. Even when not running strictly at spec, reviewers aren't going to post reviews of CPUs with hand-tuned, overclocked-to-the-brink results.

First, it takes a lot of time to get to that point, and they don't usually have nearly enough time between receiving hardware and the embargo ending. Second, hand tuning and overclocking is not a widely developed skill, and very few enthusiasts even bother with it anymore. Third, RAM that can clock really high at really tight timings is typically very expensive, and most people would benefit far more from spending that money going up a tier in CPU or GPU. Lastly, tuning everything to the max is also highly dependent on sample quality, of both the RAM and the CPU, so even if someone posts amazing results and you spend the money buying exactly what they did, there's absolutely no guarantee you'll be able to match their overclocks/tuning/performance.
That +5% alone would vault the 10600K on Tech Jesus' chart to even with a stock 5950X and above their overclocked 5800X.
In the video I posted earlier, GN uses faster memory than any of their AMD CPUs could support, with hand-tuned timings and the cache ratio overclocked to 4.9 GHz, and the 10600K still lost to a stock 5800X with 3200 MHz RAM in 4 of 5 tests. So no, this is not true at all.
Based on his results, sufficiently fast memory (4400 MHz) plus increasing the ring to 5 GHz should get you around +21% above GN's benchmarks.
That would have put the 10600K about 8% FPS above any of the Zen 3 systems on GN's chart. A 10900K with settings like that absolutely obliterates them.
These benchmarks are done at highest detail, 10900K / 3090, 5.4 GHz all-core.
You're right, GN doesn't do extreme tuning in their base reviews. They do tune the memory, but not to the extreme. They do separate videos for extreme tuning; I already posted a link to one, which you seem to have completely ignored.
This guy has a 10900K / 2080 Ti and got 188 at highest. He is running a 5.3 GHz ring and DDR4-4600. Even at higher detail vs GN's medium settings, he is obliterating GN's top score for the 10900K using the same tier of card (2080 Ti).
Rendering at high detail is obviously going to require more CPU as well, so the two results are not comparable. All you have to do is look at the CPU render numbers for the same CPU at different settings to know that. Even changing the resolution will change those numbers.