Discussion Intel current and future Lakes & Rapids thread


Ajay

Lifer
Jan 8, 2001
15,468
7,871
136
There will be a difference in gaming between every tier of ADL-S. As @epsilon84 already mentioned, Intel segments based on L3 cache as well, and this has a direct impact on gaming performance. In fact, unlike previous gens, even the 12600K will have more L3 than the rest of the i5 line. We've already seen just how important L3 cache size is in gaming.

Therefore, for some users a case can be made to spend more on the Intel CPU and get more cache, which will arguably extend the lifetime of their purchase by 1-2 years. This was certainly the best option during the Sandy Bridge -> early Skylake era, when buying i7 over i5 was arguably the better value over time choice, allowing you to skip 1 or even 2 generations. Nowadays though, the market is far more competitive and the value gamer is arguably better off purchasing from the $150-200 value zone and upgrading more often.

If Alder Lake delivers on the gaming front then their i5 lineup will be quite the wake-up call for some of the forumites who thought buying a cheap 10700K/10900K in the past 6 months was the best idea for "future-proof" gaming setups. With more cache and stronger P cores than the 10700K or even the 11900K, the 12600K should be able to deliver quite a valuable lesson here. It's going to be fun watching people who stated they wouldn't buy a 6-core for gaming in 2020-2021 awkwardly start recommending a 6-core in 2022. :smilingimp:
Sadly, 'future proofing' is somewhat pointless atm, wrt gaming. Those who are not ATF ballers need to sell a kidney to buy a decent GFX card. Apparently, this problem will persist throughout this year and next. Who knows what will be affordable in 2023; supply is expected to increase due to increased CAPEX spending by the top fabs. If one has a 5-6 year timeline between completely new builds (as I do on average), then buying up a level or two on a CPU makes more sense. Those who are on a two year plan need not worry so much.
 
  • Like
Reactions: Tlh97 and Makaveli

Mopetar

Diamond Member
Jan 31, 2011
7,848
6,003
136
I get the above, especially with Microsoft, but if it stays in the company, that's a really bad analogy.

I've heard the phrase equally bent into an "x86" tax.

Enforcement of patents is another misuse of "tax" that, while more fitting, is equally loaded language.

I stand by my "I don't think you understand taxes and what they are used for".

It's an expression, not meant to be taken literally because it isn't a real tax. It's just meant to convey annoyance at the added cost that no one wants to pay, but has little choice in the matter.

It's like complaining that when a fired C-level executive gets a big payout that it shouldn't be called a golden parachute because making it out of gold is an obvious misunderstanding of how parachutes work and what they're used for.
 

Schmide

Diamond Member
Mar 7, 2002
5,587
719
126
It's an expression, not meant to be taken literally because it isn't a real tax. It's just meant to convey annoyance at the added cost that no one wants to pay, but has little choice in the matter.

It was a biased expression and my post was meant to express that. Your pedantic over ANALyzing of it is taxing.

It's like complaining that when a fired C-level executive gets a big payout that it shouldn't be called a golden parachute because making it out of gold is an obvious misunderstanding of how parachutes work and what they're used for.

Literal vs. figurative: sometimes the analogy fits, sometimes it doesn't.

I stand by my "I don't think you understand taxes and what they are used for".

You literally went back to the original post to reexplain your position to me.
 

Mopetar

Diamond Member
Jan 31, 2011
7,848
6,003
136
It was a biased expression and my post was meant to express that. Your pedantic over ANALyzing of it is taxing.

I'd have to go back and reread the original post that kicked all of this off, and perhaps I'd even agree with you that the use of X-tax is a stretch, but your original post was itself overly pedantic. Just say "I disagree" instead of trying to make some argument about not understanding taxes. If that was your original intent then I don't think you expressed it very effectively.

It's not hard to see that AMD was fine bumping prices as soon as they had Intel squarely beat in gaming performance. Maybe you could argue that their own costs went up and they were just passing that on to the customer, but I bet their gross margin is up on Ryzen CPUs. Maybe it's better to call it a dominance-tax but that's hardly unusual in any business.
 
  • Like
Reactions: dundundundun

Schmide

Diamond Member
Mar 7, 2002
5,587
719
126
I'd have to go back to reread the original post that kicked all of this off and perhaps I'd probably even agree with you that the use of X-tax is stretching, but your original post was itself overly pedantic. Just say, "I disagree" instead of trying to make some argument about not understanding taxes. If that was your original intent then I don't think you expressed it very effectively.

I don't disagree with what was said, it was logically correct, but with the loaded language used in the post, it deserved the hyperbolic answer I gave.

It's not hard to see that AMD was fine bumping prices as soon as they had Intel squarely beat in gaming performance. Maybe you could argue that their own costs went up and they were just passing that on to the customer, but I bet their gross margin is up on Ryzen CPUs. Maybe it's better to call it a dominance-tax but that's hardly unusual in any business.

Premium products get premium pricing. In this case there is no artificial segmentation, products are priced on a tier, and no one is paying a tax. The use of the term in the offending post was only to disparage the value of the product.

A proper example is the Pro graphics segment. There, "tax" would be a quasi-correct term, as part of the pricing goes to fund the special drivers that make the cards so valuable.
 

Mopetar

Diamond Member
Jan 31, 2011
7,848
6,003
136
You're still not getting the whole X-tax thing. It's an intentional use of an improper word for the negative connotation and the sense that it's inescapable.

If you weren't familiar with somewhat obscure internet lingo, no big deal. Let's just move on, but don't play it off as though it was something else.
 
  • Like
Reactions: Elfear and podspi

Hulk

Diamond Member
Oct 9, 1999
4,228
2,016
136
I'm really interested in seeing how ADL with the big cores shut down will perform compared to equal clocked Skylake cores.

My ancient 4770K Haswell system is still quite useful for video editing, music production, photo editing and other tasks you might think beyond its useful life. Of course that may well be the case (meaning it's not useful) and I'm just used to working with it. I'll be upgrading to either ADL or Zen 3 when I've read reviews on both. I considered going with Rocket but decided I've gone this long, I can wait a bit more and just go all in with a new system from the ground up. Well, I'm hoping I can reuse my case and power supply...

Anyway, my 4770K is only running stock 3.9GHz. If 4 "E" cores can really equal its performance at 1/4 or less die area and 1/2 the power, I'll be super impressed.
 

Schmide

Diamond Member
Mar 7, 2002
5,587
719
126
You're still not getting the whole X-tax thing. It's an intentional use of an improper word for the negative connotation and the sense that it's inescapable.

If you weren't familiar with somewhat obscure internet lingo, no big deal. Let's just move on, but don't play it off as though it was something else.

I get the term. As with every term, nuance exists. I'm not playing anything off. The original post posed 2 scenarios: a governing body, and the use of loaded language. The original poster admitted it was loaded language. Moreover, the point was that it was a poor use of the term, and since then more proper examples were discussed.

You're the one drawing this out. It should have died yesterday.
 

Hulk

Diamond Member
Oct 9, 1999
4,228
2,016
136
@Mopetar, @Schmide Kindly stop. None of us really care. K?

Yes. Both of you guys are obviously very intelligent. There was a miscommunication due to the nature of writing. Slang, innuendo, and other figures of speech are difficult to write. Sometimes they require spoken inflection, facial expression, body language, immediate clarification, etc... Just one of those things that happens when writing. No harm, no foul.

About those Coves and Monts...
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Of course not, but the narrative in the last 6 months was very core-count oriented. Core count was king, and this perception was only exacerbated by the lackluster RKL gaming performance. With an underwhelming 11900K and ADL-S being announced as an 8+8 config, meaning potentially just 8 strong cores for gaming, some seriously underestimated the importance of core performance and cache size in favor of core count (on an already gaming-proven design, which is Skylake). An adjustment is in order.


Yes, but the rest of the i5 lineup will be 6+0. These will be the "cheap" gaming SKUs for 2022.

Not taking into account the cache differences between Intels i5/i7/i9 lineup, I would say 6C/12T is the minimum core/thread count for high end gaming these days, and an argument can be made that in the coming years the 8C/16T or higher chips will start to pull away from a 6C/12T CPU as games become more multi-threaded. We're just starting to see signs of this in the most demanding titles today, and that difference will only grow, not shrink, from this point on.

For that reason, I wouldn't begrudge anyone getting an 8-core (or better) CPU thinking it would be more 'future proof', because the fact is, it is when compared to a current gen 6-core CPU. Is it a couple of hundred dollars better? Probably not in current games, but they could conceivably keep the platform for a bit longer, so there is that. It's like the 4C/4T i5 vs 4C/8T i7 debate from yesteryear: while those that bought the i5 were able to save some money initially, those who bought the i7 were able to enjoy a couple more years of playable gaming once games started to choke on 4C/4T CPUs.

Could the 'core count is king' crowd have waited for ADL-S or Zen3D since these are supposed to bring much higher gaming IPC? Of course, but we can always play this waiting game.

Btw, I just checked the leaked price of the 6+0 12600 and it's supposedly only $30 cheaper than the 12600K? Don't see how that is good value considering you lose 4 cores plus the ability to overclock. I'm looking at this from an enthusiast perspective ofc. The 12400 will apparently be at or below $200, which is a decent enough price difference to consider it over a 12600K, especially if the E cores make little impact on gaming performance.
 
  • Like
Reactions: Tlh97

Hulk

Diamond Member
Oct 9, 1999
4,228
2,016
136

Don't know if this is legit of course but the score seems reasonable. CPUz 825 for ADL vs 648 for 5950X.

What is more interesting to me is what appears to be an equal MT score to 5950X of 11906.
Taking that apart a bit with some quick napkin math...

Assume single core score is at 5.3GHz, back it down to 5.0, multiply by 8 and then 1.25 for SMT and you get 7783 for the Coves.

Now take a representative Skylake score of 587, back down the clock a bit by multiplying by 4/5, then multiply by 8 for 3757. This is my Gracemont estimate.

Add the two and you come up with 11540, very close to the 5950X score shown.

Those Gracemonts may very well be performing like (or perhaps even a bit better than) Skylake cores.
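The napkin math above can be sketched in a few lines. This is only a rough estimate using the figures from the post; the 1.25x SMT uplift and the 4/5 clock scaling are the post's own assumptions, not measured values:

```python
# Napkin-math sketch of the ADL multithreaded estimate above.
# All scores and clocks are taken from the post; scaling factors are assumptions.

def scale_by_clock(score, from_ghz, to_ghz):
    """Linearly rescale a CPU-Z-style score to a different clock speed."""
    return score * to_ghz / from_ghz

# Golden Cove (P-core) estimate: 1T score of 825 at 5.3 GHz, backed down to
# 5.0 GHz, times 8 cores, times an assumed 1.25x SMT uplift.
p_cores = scale_by_clock(825, 5.3, 5.0) * 8 * 1.25

# Gracemont (E-core) estimate: a representative Skylake 1T score of 587,
# scaled by 4/5 for lower clocks, times 8 cores (no SMT on E-cores).
e_cores = 587 * (4 / 5) * 8

total = p_cores + e_cores
print(round(p_cores))  # 7783
print(round(e_cores))  # 3757
print(round(total))    # 11540 -- close to the 5950X's 11906
```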
 

Ed1

Senior member
Jan 8, 2001
453
18
81
Don't know if this is legit of course but the score seems reasonable. CPUz 825 for ADL vs 648 for 5950X.
What is more interesting to me is what appears to be an equal MT score to 5950X of 11906.
Taking that apart a bit with some quick napkin math...
Assume single core score is at 5.3GHz, back it down to 5.0, multiply by 8 and then 1.25 for SMT and you get 7783 for the Coves.
Now take a representative Skylake score of 587, back down the clock a bit by multiplying by 4/5, then multiply by 8 for 3757. This is my Gracemont estimate.
Add the two and you come up with 11540, very close to the 5950X score shown.
Those Gracemonts may very well be performing like (or perhaps even a bit better than) Skylake cores.
I read somewhere that the scheduling priority goes like this: P cores, then E cores, then HT on the P cores. So the HT siblings are the last to be used.
We will have to see how this pans out in various workloads.
 

Asterox

Golden Member
May 15, 2012
1,026
1,775
136
I don't know if any of this was expected by anyone. The Alder Lake launch seems like a complete cluster F.

Microsoft is making wholesale changes to Windows 11, seemingly with no forethought, just weeks prior to general release. Software that is supposed to be for 100s of millions of users is hijacked to satisfy the launch timing of an oddball CPU that has zero users.
It seems that Microsoft is rushing Windows 11 as if it were a vaccine to fight a pandemic, as opposed to deliberate planning, gathering feedback, refining etc... Alder Lake is not Covid, lives are not at risk, Microsoft does not have to cut corners to make an arbitrary deadline.

Sandra seems to be "customizing" its testing to new hardware, as opposed to testing how new hardware runs existing software and how adaptable the new hardware is to all the kinds of software users will throw at it...


Windows 11 circus is not worthy of any additional comment or discussion.

The SiSoft Sandra preview experienced an NDA air strike, deleted or vanished into fog. But ok, the Web Archive saved the situation.

https://www.sisoftware.co.uk/2021/0...ew-benchmarks-big-little-performance-preview/

 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
These are the changes that SiSoft has implemented:

Changes in Sandra to support Hybrid
Like Windows (and other operating systems), we have had to make extensive changes to both detection, thread scheduling and benchmarks to support hybrid/big-LITTLE. Thankfully, this means we are not dependent on Windows support – you can confidently test AlderLake on older operating systems (e.g. Windows 10 or earlier – or Server 2022/2019/2016 or earlier) – although it is probably best to run the very latest operating systems for best overall (outside benchmarking) computing experience.
  • Detection Changes
    • Detect big/P and LITTLE/E cores
    • Detect correct number of cores (and type), modules and threads per core -> topology
    • Detect correct cache sizes (L1D, L1I, L2) depending on core
  • Scheduling Changes
    • “All Threads” (thus all cores + all threads – e.g. 24T)
    • “All Cores (big+LITTLE) Only” (both core types but not threads – thus 16T)
    • “big/P Cores Only” (only “Core” cores) – thus 8T/P
    • “LITTLE/E Cores Only” (only “Atom” cores) – thus 8T/E
    • “Single Thread big/P Core Only” (thus single “Core” core) – thus 1T/P
    • “Single Thread LITTLE/E Core Only” (thus single “Atom” core) – thus 1T/E
  • Benchmarking Changes
    • Dynamic/Asymmetric workload allocator – based on each thread’s compute power
      • Note some tests/algorithms are not well-suited for this (here P threads will finish and wait for E threads – thus effectively having only E threads)
    • Best performance core/thread default selection – based on test type
      • Some tests/algorithms run best just using cores only (SMT threads would just add overhead)
      • Some tests/algorithms (streaming) run best just using big/P cores only (E cores just too slow and waste memory bandwidth)
      • Some tests/algorithms sharing data run best on same type of cores only (either big/P or LITTLE/E) (sharing between different types of cores incurs higher latencies and lower bandwidth)
For this reason we recommend using the very latest version of Sandra and keep up with updated versions that likely fix bugs, improve performance and stability.
Supporting latest hardware tech properly is now a bad thing. God forbid we see the strength of both platforms in a fair benchmarking contest. I suppose this is yet another negative talking point crossed off the list. As long as your favorite platform is also well supported, what's your 'beef'? :)
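As a rough illustration of the dynamic/asymmetric allocator idea Sandra describes, work can be split in proportion to each thread's estimated compute power so P and E threads finish at about the same time. This is only a sketch under assumed weights (the 1.0 vs 0.5 P:E ratio is invented for illustration), not SiSoft's actual implementation:

```python
# Hypothetical sketch of an asymmetric workload split: divide work items
# across threads in proportion to each thread's relative throughput, so that
# fast (P) and slow (E) threads finish at roughly the same time.

def split_work(total_items, thread_weights):
    """Return per-thread item counts proportional to thread_weights."""
    total_weight = sum(thread_weights)
    shares = [int(total_items * w / total_weight) for w in thread_weights]
    # Hand leftover items (lost to rounding down) to the fastest threads first.
    leftover = total_items - sum(shares)
    for i in sorted(range(len(shares)), key=lambda i: -thread_weights[i]):
        if leftover == 0:
            break
        shares[i] += 1
        leftover -= 1
    return shares

# 8 P-threads (weight 1.0) + 8 E-threads (assumed weight 0.5, illustrative only)
weights = [1.0] * 8 + [0.5] * 8
print(split_work(1200, weights))  # eight 100s followed by eight 50s
```

Without a split like this, a naive even division leaves the P threads idle while they wait for the E threads to finish, which is exactly the failure mode the quoted note warns about.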
 
  • Like
Reactions: dundundundun

inf64

Diamond Member
Mar 11, 2011
3,703
4,032
136
Those Sisoft results are not really encouraging, I think there might be something wrong with the ADL test system. If that is the final performance then it is not really shining in Sisoft suite, and hopefully real world workloads will do much better.
 

mikk

Diamond Member
May 15, 2012
4,141
2,154
136
Sisoft relies heavily on AVX2 or AVX512 by the looks of it, the AVX512 advantage from RKL is huge in their test. Maybe the Gracemont cores have a worse AVX2 performance than Skylake. In real consumer workloads AVX2 is not that important. In Cinebench or Geekbench v5 AVX is negligible.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Glad to see that DDR5 wins in latency against DDR4. That's a pretty good improvement.

The score is the "inverse" of latency. The disaster-level latency can even be seen in the multicore read/write results -> DDR5 can't sustain so many concurrent reads/writes due to each taking so long to serve. Ironic for a supposedly "virtual quad-channel" setup with 1.5x the theoretical memory bandwidth.

So the latency drama of DDR5 continues. When OEM DDR4-3200 memory beats the living hell out of a supposedly premium setup, you know you are in deep trouble.
 
Last edited:

Abwx

Lifer
Apr 2, 2011
10,959
3,474
136
Sisoft relies heavily on AVX2 or AVX512 by the looks of it, the AVX512 advantage from RKL is huge in their test. Maybe the Gracemont cores have a worse AVX2 performance than Skylake. In real consumer workloads AVX2 is not that important. In Cinebench or Geekbench v5 AVX is negligible.

Only a few tests rely on AVX512, and this is clearly visible in the comparisons. Besides, Sandra has been quite accurate for comparisons within the same brand, assuming you take account of the instructions used in those tests.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
I think that is the actual latency in ns, not a score.


It could be, but since the graph has a latency point of 99ns @ 16MB size for DDR4, I think that is an indicator of completely useless test results, as normally that would be cached by L3.

And there is usually a memory "score" also:
1632686029519.png