Discussion: Intel current and future Lakes & Rapids thread


arandomguy

Senior member
Sep 3, 2013
Surprisingly good average results, though yeah, the 1% lows are beyond terrible. If this is an omen of what's to come, V-cache vs Alder Lake will become very interesting (I'm pretty sure V-cache will help the 0.1% and 1% lows more than the averages)

I'd be very skeptical about the 1% and 0.1% low numbers from those videos at the moment.

Comparing known examples for the 11900K and 10900K, the numbers are horrendous next to what we've seen elsewhere. Even the channel's own video with the 10700K a few weeks ago (same RTX 3070) has 1% lows in the 70s for HZD and in the 50s for RDR2, yet in this recent comparison the 10900K numbers are below 20 for HZD and in the 30s for RDR2.

The numbers in general suggest that both the 11900K and the 10900K would be extremely noticeable stutterfests in some of the games tested, yet we have no widespread reports or other tests showing that to be the case.

Also in some tests the numbers are 0 for both 1% and 0.1% lows.

I'd wonder if there is some testing methodology issue going on here. 1% and, even more so, 0.1% lows are much more sensitive to testing issues than average fps numbers, which makes them harder to work with. Just glancing at the channel, it seems to be going for a quantity-first approach to content.
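For reference, here's a rough sketch (Python, with made-up frametimes) of how 1% / 0.1% lows are commonly derived from a frametime log. Note how few frames the 0.1% figure rests on; a handful of capture hiccups is enough to wreck it:

```python
# Sketch: deriving 1% / 0.1% lows from a frametime capture (times in ms).
# Exact methods vary by tool; this uses the common "average the slowest
# N% of frames, convert back to fps" approach.

def percentile_low(frametimes_ms, pct):
    """Average fps across the slowest `pct` percent of frames."""
    worst = sorted(frametimes_ms, reverse=True)        # slowest first
    n = max(1, int(len(frametimes_ms) * pct / 100))    # frames in the tail
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# ~60 s at ~100 fps -> ~6000 frames, so the 0.1% low is the average of
# just 6 frames; ten 30 ms capture hiccups are enough to tank it.
frametimes = [10.0] * 5990 + [30.0] * 10
print(f"1%   low: {percentile_low(frametimes, 1.0):.0f} fps")   # ~75
print(f"0.1% low: {percentile_low(frametimes, 0.1):.0f} fps")   # ~33
```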
 

insertcarehere

Senior member
Jan 17, 2013
Good. It outperforms the M1 Pro and Max. Actually, Apple chose to compare it to the 11800H, since that makes it look much more impressive.

I do think Geekbench 5 will end up underestimating the impact of the hybrid configuration, as the test is so short that it doesn't stress the Golden Cove cores much and pretty much runs close to peak, whereas in real-world applications that won't happen.

Apple chose to compare the M1 Pro/Max to the 11800H because laptops with Alder Lake haven't been released. Changing from an i7 TGL-H to an i9 TGL-H SKU realistically should have minimal impact on the perf/watt metrics being compared anyway.

The 12900HK score looks very, very impressive. Even accounting for the liberties Intel is taking with power consumption, it would most likely take 12 Zen 3 cores to match the results from 6GC + 8GM here in a similar power envelope.

With Alder Lake it will take more cores in aggregate to match AMD's performance in an efficient manner, which is fine for Intel since some of those cores are tiny. The 12900K looks bad efficiency-wise because they pushed clocks up hard to try to match 16 Zen 3 cores with 8GC + 8GM, when in reality 8P + 8E takes the area of 10P and should be a better match for the 5900X. It'd be an interesting exercise to see how ADL's performance scales with power consumption.
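If anyone wants to try that exercise on Linux, a rough sketch using the standard intel-rapl powercap sysfs interface (constraint_0 is usually the "long_term" PL1 constraint, but check constraint_0_name; "./my_benchmark" is a placeholder workload, and this needs root):

```python
# Sketch: sweep the long-term power limit (PL1) through Linux's intel-rapl
# powercap interface and time a fixed workload at each step.
import subprocess, time

PL1 = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

def set_pl1(watts):
    with open(PL1, "w") as f:
        f.write(str(watts * 1_000_000))    # interface takes microwatts

for watts in (65, 95, 125, 190, 241):
    set_pl1(watts)
    t0 = time.time()
    subprocess.run(["./my_benchmark"], check=True)   # placeholder workload
    print(f"PL1 = {watts:3d} W -> {time.time() - t0:.1f} s")
```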
 

Abwx

Lifer
Apr 2, 2011
It's strange though. Look at AssCreed. I know the guy made multiple videos to maximize views, but this is what he got:

12900K - ~ 86 fps
10900K - ~ 80 fps
12700K - ~ 74 fps
5800X - ~ 71 fps
5600X - ~ 70 fps

You would think the 12700K shouldn't be much slower than the 12900K. Also Comet Lake is beating Zen 3.

Dunno if that's only in this game, but how did he manage to get the 10900K 15% faster than a 5800X when ComputerBase puts them at a 1% difference on average and 3% for minimum fps?

 

arandomguy

Senior member
Sep 3, 2013
Dunno if that's only in this game, but how did he manage to get the 10900K 15% faster than a 5800X when ComputerBase puts them at a 1% difference on average and 3% for minimum fps?

You're using an aggregate result from a test suite that doesn't even include the same game. As an example, the 5800X is 50%+ faster than the 10900K in the Valorant test in that particular suite.

Different workloads will give different results. Not just different games, either, but different settings or even different test scenes.

Assassin's Creed, from what I can recall, tends to favor CML/RKL.
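To make the aggregate point concrete, a toy example with invented numbers: a geometric-mean index of the kind review sites publish can land two CPUs within a few percent of each other while individual titles swing hard in both directions.

```python
# Toy example, numbers invented: per-game results vs a geometric-mean
# aggregate of the kind review indices use.
from math import prod

games = {                       # fps: (cpu_a, cpu_b)
    "Valorant":         (280, 430),   # cpu_b >50% faster
    "Assassin's Creed": ( 86,  71),   # cpu_a ~20% faster
    "RDR2":             (105, 110),
    "HZD":              (120, 112),
}

def geomean(xs):
    return prod(xs) ** (1 / len(xs))

a = geomean([fps[0] for fps in games.values()])
b = geomean([fps[1] for fps in games.values()])
# Aggregate gap is ~6% despite +54%/-21% swings in individual titles.
print(f"index: cpu_a = {a:.0f}, cpu_b = {b:.0f} ({b / a - 1:+.1%})")
```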
 

IntelUser2000

Elite Member
Oct 14, 2003
Apple chose to compare the M1 Pro/Max to the 11800H because laptops with Alder Lake haven't been released. Changing from an i7 TGL-H to an i9 TGL-H SKU realistically should have minimal impact on the perf/watt metrics being compared anyway.

Of course they didn't compare it to Alder Lake; it doesn't exist!

They should have compared it to the Core i9, if only for the performance. These are marketing tactics: "Oh, it's 50% faster and uses less power! Intel will NEVER catch up now!" Meanwhile, more than half of that gap could be closed by an existing chip.

@Hulk Perhaps the performance loss is minimal for the power level difference. Perhaps it fixes some bugs.

You can't take leaks at face value. The original Athlon performed horribly in pre-release leaks; when it was released, it was a monster. The hardware couldn't have changed, but drivers and firmware (BIOS) can.
 

andermans

Member
Sep 11, 2020
Dunno if that's only in this game, but how did he manage to get the 10900K 15% faster than a 5800X when ComputerBase puts them at a 1% difference on average and 3% for minimum fps?


Maybe the performance issues with Win11 and AMD CPUs everyone is talking about? The video description doesn't say one way or the other, which means it's a possibility.
 

zir_blazer

Golden Member
Jun 6, 2013
You can't take leaks at face value. The original Athlon performed horribly in pre-release leaks; when it was released, it was a monster. The hardware couldn't have changed, but drivers and firmware (BIOS) can.
Totally off-topic, but I'm interested in that claim. You mean there were leaks about early K7 Athlon performance suggesting it was mediocre, and then it surprised everyone on launch day? As far as I know from doing some digital archaeology, a lot of technical details were known and speculated on before launch (like the stupidly powerful x87 FPU), but I don't recall any performance leaks.
 

IntelUser2000

Elite Member
Oct 14, 2003
I really don't care anymore. Wait for it to release, and it'll do how it'll do, simple as that. Since when have leaks based on prototype chips been an accurate representation of the product? Availability and perf/$ are more important for most people anyway.

I bought this 10th Gen Celeron platform chip since it was the cheapest, relatively modern option, and it was available. Simple as that. The cheapest Zen platform at the time was $200, and this cost me a third of that.
 

DrMrLordX

Lifer
Apr 27, 2000
Availability and perf/$ are more important for most people anyway.

That is the million-dollar question, for sure. How many Alder Lake-S chips will you be able to buy at launch? Will it be a situation where Intel floods the market and sells out anyway due to people being starved for upgrades (see: Vermeer)? Will it be poor availability for a month or two followed by gradual increases? Will it be poor availability for months due to yield problems on 10ESF? We do not know.

I hope Intel has gotten all the yield problems out of their advanced 10nm nodes by now, and Tiger Lake-H seems to indicate that they can reliably fab a fairly large die on 10SF, so why not 10ESF? We will see.
 


Krteq

Senior member
May 22, 2015
Heh, so according to Igor, Intel will finally completely scrap the "TDP spec" on their CPUs: PL1 won't exist at all.

[attached screenshot: 2021-10-25_14-45x7j8e.png]
 

eek2121

Platinum Member
Aug 2, 2005
Didn't Intel update the behavior of Alder Lake-S to adhere to a PL2 of 210W?

PL1 should equal PL2. Both should be 125W! Of course they probably won't be able to do that this generation, but IMO if they do this, it is a step in the right direction. We will see.
 

jpiniero

Lifer
Oct 1, 2010
Heh, so according to Igor, Intel will finally completely scrap the "TDP spec" on their CPUs: PL1 won't exist at all.

Boards can set PL1 to whatever they want. Intel is just saying the recommended default for the K SKUs is whatever PL2 ends up being.
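On Linux you can check what a board actually programmed; here's a sketch that walks the standard intel-rapl powercap tree and prints each limit by name (constraint order varies by platform, hence reading the *_name files rather than assuming):

```python
# Sketch: print the power limits a board actually programmed, via the
# standard intel-rapl powercap sysfs tree on Linux.
import glob

for zone in sorted(glob.glob("/sys/class/powercap/intel-rapl:*")):
    with open(f"{zone}/name") as f:
        print(f"{f.read().strip()}  ({zone})")
    for path in sorted(glob.glob(f"{zone}/constraint_*_power_limit_uw")):
        idx = path.split("constraint_")[1].split("_")[0]
        with open(f"{zone}/constraint_{idx}_name") as f:
            name = f.read().strip()          # e.g. long_term / short_term
        with open(path) as f:
            watts = int(f.read()) / 1_000_000
        print(f"  {name}: {watts:.0f} W")
```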
 

dullard

Elite Member
May 21, 2001
PL1 should equal PL2.
Why should the target average power equal the target peak power? I cannot think of a single reason, in any field, for peak and average to be the same. Think about it: should your car's average speed be its peak flat-course, no-wind speed? Your coffee's average temperature is probably pretty tepid, so should slightly warm be the peak coffee temperature? Jackets range from thin ultra-light to thick winter down, so the average is a medium thickness; are you saying medium-warmth coats should be the peak thickness?

What needs to change is that people need to realize that PL1 is average power over time and NOT peak power.
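A toy model of that distinction (my numbers; real turbo budgeting uses an exponentially weighted window with a Tau time constant, and the EWMA below just sketches the idea): the chip bursts at PL2 until its running average power catches up with PL1, then settles there.

```python
# Toy model: draw PL2 while the running average of package power is still
# under PL1, then settle at PL1 so the long-term average stays at PL1.
PL1, PL2, TAU = 125.0, 241.0, 56.0    # watts, watts, seconds

avg = 0.0                              # running average power
for t in range(301):                   # one sample per second
    draw = PL2 if avg < PL1 else PL1   # burst while budget remains
    avg += (draw - avg) / TAU          # EWMA update over the Tau window
    if t % 30 == 0:
        print(f"t = {t:3d} s  draw = {draw:5.1f} W  avg = {avg:5.1f} W")
```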
 

Arkaign

Lifer
Oct 27, 2006
Ha, yeah. Plus I imagine that for the vast majority of normal tasks it won't pull anywhere near that. It's not like many of us are forcing totally static, continuous power levels through our CPUs. Even with a slight voltage bump to hit 5.2 GHz, my old 9900KS normally sat way, way down on actual power outside of very specific benchmarks or multicore encoding.
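If you're curious what a chip is actually pulling, on Linux you can difference the RAPL energy counter; a minimal sketch (standard intel-rapl sysfs path; the counter wraps eventually, which is ignored here):

```python
# Sketch: sample actual package power by differencing the RAPL energy
# counter (microjoules) once per second.
import time

ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_uj():
    with open(ENERGY) as f:
        return int(f.read())

prev = read_uj()
for _ in range(10):
    time.sleep(1.0)
    cur = read_uj()
    print(f"package: {(cur - prev) / 1e6:.1f} W")   # uJ over 1 s -> W
    prev = cur
```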
 

blckgrffn

Diamond Member
May 1, 2003
*sniff sniff* What's that smell? Is that desperation, Intel?

Was coming here to bring up the article :)

Yikes. PL1 (and, to a lesser extent, PL2) settings have been my go-to in building Intel-based systems with "normal" components, to try to ensure they live long, healthy, please-don't-ever-call-me-about-them lives. I guess I'll still be able to dig in there and turn them down.

I know they're almost never relevant, but if they didn't matter, why do they get set so darn high out of the box?