Discussion Intel current and future Lakes & Rapids thread


DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Because they have to rely on much higher clock speeds and have a huge big-core count disadvantage, and still barely match 16C Zen 3 at 4 GHz. Do you understand this?

And whose fault is that? Hmmmmmmm?

(yes, I do understand that fact, which is why I commented in the first place)

Correction: They appear to be inefficient in the range where the 12900k can start to beat the 5950x in some MT workloads.

How a 12700k compares vs a 5900x, or a 12600k compares vs a 5800x is still very much TBD. And as somebody with a limited budget, I am far more interested in the latter two comparisons.

The 12900k is their flagship. It's not really gonna look good for them when it struggles against a CPU from a year ago, fabbed on N7.
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,629
136
I don't think the LN2 data point should be used, because the low temperature itself has a major impact on power consumption: it flattens the voltage/power vs. frequency curve and greatly reduces the impact of leakage on power usage. I don't think der8auer did a similar plot for the 5000 series, but for a 3900X he created a chart showing power and frequency scaling with temperature.

[Chart: der8auer's 3900X frequency and power scaling vs. temperature]



As I mentioned, it was just meant to be a rough first-order estimate. But even if I add 150 W to the 5.8 GHz data point, that only puts the 5 GHz interpolation at 325 W, and 425 W at 5.3 GHz. I don't think 150 W is very realistic, but even then it shines a positive light on Zen 3 comparatively. It will be interesting to see how sustainable the Twitter frequencies are as well. The reason you need sub-ambient cooling to reach 5 GHz+ on Zen 3 is hotspotting, not overall power consumption. Golden Cove is probably more spread out and even has some dark silicon from not enabling AVX-512 to help there, but we'll see if that's enough to sustain 5.3 GHz all-P-core without sub-ambient cooling.
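For the curious, here's roughly what that back-of-the-envelope interpolation looks like in code. The anchor points are hypothetical placeholders (picked so the output lands near the 325 W / 425 W figures above), not der8auer's actual measurements:

```python
# Rough first-order power-vs-frequency interpolation, as described above.
# Anchor points are HYPOTHETICAL placeholders, not real measurements.

def interp_power(freq_ghz, lo, hi):
    """Linearly interpolate package power (W) between two
    (frequency_GHz, power_W) anchor points."""
    (f0, p0), (f1, p1) = lo, hi
    slope = (p1 - p0) / (f1 - f0)  # W per GHz, first order only
    return p0 + slope * (freq_ghz - f0)

# Lower anchor: a hypothetical stock-ish operating point.
# Upper anchor: the 5.8 GHz LN2 point with 150 W added to account for
# the flattened V/f curve and reduced leakage at low temperature.
lo = (4.5, 158.0)
hi = (5.8, 440.0 + 150.0)

for f in (5.0, 5.3):
    print(f"{f:.1f} GHz -> ~{interp_power(f, lo, hi):.0f} W")
```

Swap in real measured anchors and the same two lines give you the estimate; it's only as good as the assumption that power scales linearly over the range, which is exactly why it's a first-order sketch.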
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,629
136
Correction: They appear to be inefficient in the range where the 12900k can start to beat the 5950x in some MT workloads.

How a 12700k compares vs a 5900x, or a 12600k compares vs a 5800x is still very much TBD. And as somebody with a limited budget, I am far more interested in the latter two comparisons.

The 12700k and 12600k do look like the sweet spot for ADL to me, as they should be able to compete/win in most tasks without having to draw crazy high power to do so. Intel still doesn't really have an answer to AMD's chiplet/Zen 3 approach when it comes to high core counts (relative to each segment), but ADL at least gets them back in the game for consumers.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I wouldn't be surprised if Zen 3 is more efficient, but I also wouldn't be surprised if it uses more power at 5.3 GHz, since that's likely beyond its design-target frequency. Gracemont might end up less efficient in the 4 GHz range. Does that mean Golden Cove is a more efficient chip? NO!

We can see from mobile Tiger Lake that it becomes relatively more efficient at higher frequencies compared to Cezanne. That still means little, as it ends up less efficient overall anyway. It just fares better.

Zen 3 performs like Alder Lake without needing the hybrid configuration and without needing to launch in 2021. I'm pretty sure at some point in the future Intel chips will become competitive with current-generation AMD parts in all areas, but that's not what Alder Lake is.

Golden Cove is still merely an expansion of Sandy Bridge and thus continues to follow the sublinear scaling of performance with die area. Nothing noticeably new.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
People already have CPUs in their hands and are testing in various games:


The 1% and 0.1% lows would make first-gen Ryzen look like a champ. FPS jank is visible on the graph, even if the averages look awesome :(

GOOD: Run on 3600 DR DDR4; while not "peak", I doubt memory latency is causing the jank.
BAD: Scheduler?

So overall, at least on DDR4, things are fine: disable the E-cores in the BIOS, lock the clocks, and have the best gaming machine?
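For anyone wondering how those numbers are derived: 1% / 0.1% lows come from the frame-time log, which is why they can crater while the average still looks great. A minimal sketch with made-up frame times (conventions vary between tools; this averages the slowest frames, while some tools use the 99th/99.9th-percentile frame time instead):

```python
import numpy as np

# Hypothetical frame-time log in milliseconds (a real log would come
# from CapFrameX, PresentMon, or similar): mostly smooth ~7 ms frames
# with a handful of long stutters mixed in.
ft_ms = np.array([7.0] * 980 + [25.0] * 15 + [45.0] * 5)

avg_fps = len(ft_ms) / (ft_ms.sum() / 1000.0)

def low_fps(ft_ms, fraction):
    """FPS equivalent of the slowest `fraction` of frames."""
    worst = np.sort(ft_ms)[::-1]              # slowest frames first
    n = max(1, int(round(len(worst) * fraction)))
    return 1000.0 / worst[:n].mean()

print(f"average : {avg_fps:6.1f} fps")
print(f"1%  low : {low_fps(ft_ms, 0.01):6.1f} fps")
print(f"0.1% low: {low_fps(ft_ms, 0.001):6.1f} fps")
```

With these made-up numbers the average is ~134 fps while the 1% low is under 30 fps, which is exactly the "great average, visible jank" pattern in the video.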
 

Gideon

Golden Member
Nov 27, 2007
1,608
3,573
136
Surprisingly good average results, though yeah, the 1% lows are beyond terrible. If this is an omen of what's to come, V-Cache vs. Alder Lake will become very interesting (I'm pretty sure V-Cache will help the 0.1% and 1% lows more than the averages).
 

Asterox

Golden Member
May 15, 2012
1,026
1,775
136
People already have CPUs in their hands and are testing in various games:


The 1% and 0.1% lows would make first-gen Ryzen look like a champ. FPS jank is visible on the graph, even if the averages look awesome :(

GOOD: Run on 3600 DR DDR4; while not "peak", I doubt memory latency is causing the jank.
BAD: Scheduler?

So overall, at least on DDR4, things are fine: disable the E-cores in the BIOS, lock the clocks, and have the best gaming machine?

As the saying goes, let them see. :mask:

 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
Surprisingly good average results, though yeah, the 1% lows are beyond terrible. If this is an omen of what's to come, V-Cache vs. Alder Lake will become very interesting (I'm pretty sure V-Cache will help the 0.1% and 1% lows more than the averages).
I've always been curious about this. At what point would consumers value minimums more than average fps? Would people willingly take a 20% hit on average fps if it meant their minimums were 20% better? What if they were 50% better? Of course, this assumes the average is already well above a playable frame rate to begin with. If not, then yes, I can see the argument for chasing higher averages.
 

eek2121

Platinum Member
Aug 2, 2005
2,904
3,906
136
People already have CPUs in their hands and are testing in various games:


The 1% and 0.1% lows would make first-gen Ryzen look like a champ. FPS jank is visible on the graph, even if the averages look awesome :(

GOOD: Run on 3600 DR DDR4; while not "peak", I doubt memory latency is causing the jank.
BAD: Scheduler?

So overall, at least on DDR4, things are fine: disable the E-cores in the BIOS, lock the clocks, and have the best gaming machine?

Take those numbers with a grain of salt: pre-release drivers, BIOS, etc.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
My own comments: Seems like the 12900K isn't all that much faster than the 10900K (?)

It's strange, though. Look at AssCreed. I know the guy made multiple videos to maximize views, but this is what he got:

12900K - ~86 fps
10900K - ~80 fps
12700K - ~74 fps
5800X - ~71 fps
5600X - ~70 fps

You would think the 12700K shouldn't be much slower than the 12900K. Also, Comet Lake is beating Zen 3.
 

Accord99

Platinum Member
Jul 2, 2001
2,259
172
106
My own comments: Seems like the 12900K isn't all that much faster than the 10900K (?)
That was with a 3070; perhaps the increase would be greater with a 3090.

Could easily be way more than that.
It's possible that it could be done at 65 W; virtually all of the multi-core Geekbench tests use less power than Cinebench. On an 11800H, it can run 400-600 MHz higher in Geekbench than in CB R23 for the same power use.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
It's possible that it could be done at 65 W; virtually all of the multi-core Geekbench tests use less power than Cinebench. On an 11800H, it can run 400-600 MHz higher in Geekbench than in CB R23 for the same power use.

What I mean is there are Tiger Lake-H laptops out there with a PL1 way higher than 65 W, at least per notebookcheck's reviews. It could be 80; it could easily be triple digits.
 

diediealldie

Member
May 9, 2020
77
68
61
It's strange, though. Look at AssCreed. I know the guy made multiple videos to maximize views, but this is what he got:

12900K - ~86 fps
10900K - ~80 fps
12700K - ~74 fps
5800X - ~71 fps
5600X - ~70 fps

You would think the 12700K shouldn't be much slower than the 12900K. Also, Comet Lake is beating Zen 3.

I think that's related to the benchmark setup and early compatibility issues. All tests are using almost 100% of the GPU, so any extra average framerate gain comes from CPU performance. Maybe the Golden Cove cores are going idle for power saving much more than Ryzen does, which makes frame times suffer sometimes, resulting in worse 1% and 0.1% FPS.
 

clemsyn

Senior member
Aug 21, 2005
531
197
116


12900K vs 10900K

12900K vs 5600X

12700K vs 5600X



My own comments: Seems like the 12900K isn't all that much faster than the 10900K (?)
I think AMD should have no problem handling Alder Lake with a +15% increase in games using V-Cache. 3DNow!

These benchmarks are looking good for Intel. It's good that AMD is giving Intel great competition; fierce competition is good for us end users. Imagine in 2 years with this type of competition: faster CPUs and lower prices. We win!

Can't wait for AnandTech's review of Alder Lake.
 

eek2121

Platinum Member
Aug 2, 2005
2,904
3,906
136
What I mean is there are Tiger Lake-H laptops out there with a PL1 way higher than 65 W, at least per notebookcheck's reviews. It could be 80; it could easily be triple digits.

What laptop out there has an 80 W PL1 TGL-H???

I think you are confusing it with PL2, which is not a sustained boost.
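To illustrate the distinction: PL2 is a temporary boost budget, gated (in the commonly documented scheme) by an exponentially weighted moving average of package power that has to stay at or below PL1. Here's a toy model with hypothetical TGL-H-ish limits; real firmware and OEM configurations vary a lot, and this is not Intel's exact algorithm:

```python
# Toy model of Intel's PL1/PL2 turbo behavior: the package may draw up
# to PL2 while the exponentially weighted moving average (EWMA) of
# power stays at or below PL1. Limits are HYPOTHETICAL TGL-H-ish
# values, not any specific laptop's configuration.
PL1, PL2, TAU = 45.0, 107.0, 28.0   # watts, watts, seconds
DT = 1.0                            # simulation timestep, seconds

ewma = 0.0                          # boost budget starts "full"
for t in range(120):
    power = PL2 if ewma <= PL1 else PL1   # boost until budget exhausted
    ewma += (DT / TAU) * (power - ewma)
    if t % 15 == 0:
        print(f"t={t:3d}s  draw={power:5.1f} W  ewma={ewma:5.1f} W")
```

In this toy model the chip holds PL2 for roughly 15 seconds and then settles at PL1. Short benchmarks live in that boost window; sustained workloads live at PL1, which is why PL1 is the number that matters for "sustained boost".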
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
Doesn't work that way, especially with the inflation going on. It appears that the official MSRP of Alder Lake is going to be quite a bit higher per brand (although probably not as high as speculated).

Comparing i9-11900K/i7-11700K pricing to i9-12900K/i7-12700K pricing makes little sense given that the latter are at clearly different performance levels from the former. If the recent MC leaks hold water, $670 for the 12900K and $470 for the 12700K look pretty competitive as far as pricing is concerned.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Intel Core i9-12900HK performance leaks out, the fastest mobile CPU thus far - VideoCardz.com

Absolutely amazing performance from the 12900HK. Probably @ 65W though.

Good. It outperforms the M1 Pro and Max. Apple actually chose to compare against the 11800H, since that made their chips look much more impressive.

I do think Geekbench 5 will end up underestimating the impact of the hybrid configuration: the test is so short that it doesn't stress the Golden Cove cores much and pretty much runs close to peak, whereas in real-world applications that won't happen.
 

eek2121

Platinum Member
Aug 2, 2005
2,904
3,906
136
Good. It outperforms the M1 Pro and Max. Apple actually chose to compare against the 11800H, since that made their chips look much more impressive.

I do think Geekbench 5 will end up underestimating the impact of the hybrid configuration: the test is so short that it doesn't stress the Golden Cove cores much and pretty much runs close to peak, whereas in real-world applications that won't happen.

I saw a number of posts today that I didn't have time to reply to, but I will reply to yours, because, why not?!? :D

A good Geekbench 5 run is as good as a SPEC benchmark. Prove me wrong. Even certain folks at AT claim this. The reason GB5 gets called out as "inaccurate" is that 100% of the world's population has access to it (and can run it in garbage scenarios), vs. the 5% that happens to be able to pay for a SPEC license (and who run SPEC in a controlled environment like their very livelihood depends on it... I would too).

I'm not defending Intel, of course (though I'm sure the usual suspects will claim otherwise), but one thing we do know is that, based on these benchmarks, and absent data regarding power, Intel beats Apple and AMD hands down when it comes to CPU performance. Period.

SPEC has quite a few benchmarks surrounding power. I expect that the general power benchmarks will favor AMD/Apple, and the performance benchmarks will see Intel leading, though I refrain from making assumptions, regardless. We will see.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
A good Geekbench 5 run is as good as a SPEC benchmark.

Each of the subtests takes less than a minute to run, so while it might be a decent test of bursty scenarios such as web browsing, in the hybrid case, where the efficiency cores are supposed to boost MT throughput, it won't show its full potential.

Case in point: how well Amberlake-Y does in Geekbench ST while it does poorly everywhere else. You know how people say the phone chips are close to 28 W chips based on the GB ST test?

Well, according to Geekbench, Intel's own 4.5 W Amberlake-Y is only about 10% slower than the top-of-the-line U chip. So you can go get yourself an Amberlake-Y laptop and it'll be almost as responsive as the 28 W U laptop, right?

NO.

In reality, the U chips are more than 50% faster than Amberlake-Y in single thread. That's because even in single-threaded scenarios the core goes well past the 4.5 W mark; in fact, we can see it needs well over 10 W.

That should tell you how "good" a benchmark Geekbench is. It can't even load a single core properly! You can't justify comparing that to SPEC.
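To put a toy number on the short-subtest problem: under the same kind of simplified PL1/PL2 budget model discussed earlier (hypothetical Amberlake-Y-ish limits of 4.5 W sustained / 15 W burst, not measured values), a sub-minute subtest runs mostly inside the boost window and sees roughly double the average power of a sustained run:

```python
# Illustrative only: average package power available to a workload of
# a given length under a simplified PL1/PL2 + EWMA turbo model, with
# HYPOTHETICAL Amberlake-Y-ish limits (4.5 W sustained, 15 W burst).

def avg_power(duration_s, pl1=4.5, pl2=15.0, tau=28.0, dt=0.1):
    ewma, energy = 0.0, 0.0
    for _ in range(int(duration_s / dt)):
        p = pl2 if ewma <= pl1 else pl1   # boost until EWMA hits PL1
        ewma += (dt / tau) * (p - ewma)
        energy += p * dt
    return energy / duration_s

for t in (20, 60, 300):
    print(f"{t:4d} s workload: ~{avg_power(t):.1f} W average")
```

With these numbers a 20-second subtest averages about 9.8 W while a five-minute run averages about 4.9 W, which is exactly the gap between a flattering Geekbench score and sustained reality.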