Alder Lake - Official Thread


dullard

Elite Member
May 21, 2001
25,119
3,492
126
Kill-a-Watt. It changes constantly, so real time. Now, after changing to the equivalent of onboard video, it's doing 78 watts! But a 6-hour ETA. I disabled some turbo things. I am going back in to enable E-cores and turbo. This thing is not fully supported by Linux, and it's driving me crazy.
I think you'll do yourself a favor if you switch to tracking energy used over time rather than instantaneous watts. Alder Lake's instantaneous power will be all over the place, depending on the chip's recent history. You can then calculate the average watts from the energy used.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,627
14,618
136
OK, 3rd install today. Current BIOS, restored to defaults, and just set memory to 3200 at 1.4 V and CL17. Installed BOINC. 266 watts with what is essentially an iGPU-class GPU (no external power connectors, just a heatsink, not even a fan). And since E-cores are enabled, no AVX-512 possible. The 266 is pretty constant; it varies from 264 to 270.

Edit: and how would I switch to energy-used mode?
 
  • Like
Reactions: Drazick

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Disable cores; if anything, the 12900K is far safer to OC than any cheap bin ever will be. The 12900K simply sips voltage compared to a 12400 with a BCLK OC.

There's something inherently cool about overclocking a CPU that isn't meant to be overclocked though!

Alas, it's a rather moot point unless there are actually affordable DDR4 external clock gen mobos available, which I doubt.

Btw, did you mean disabling the E cores? That might be something I would try out of curiosity, just to see how the P cores cope without the extra heat / wattage of the E cores.

In practice, I'll most likely leave the E cores on and just undervolt the 12900K as much as possible to optimise the 'factory overclock' ;)
 
  • Like
Reactions: Shmee
Jul 27, 2020
16,712
10,707
106
Btw, did you mean disabling the E cores? That might be something I would try out of curiosity, just to see how the P cores cope without the extra heat / wattage of the E cores.
The power measurements I've seen with E-cores disabled show the 12900K sucking up power like a virus and running very hot.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
One question, guys: is it possible to deactivate the P-cores in order to see how strong the E-cores are?
I mean, I want to see how strong Alder Lake-N with 8 E-cores enabled might be.
 
Jul 27, 2020
16,712
10,707
106
One question, guys: is it possible to deactivate the P-cores in order to see how strong the E-cores are?
I mean, I want to see how strong Alder Lake-N with 8 E-cores enabled might be.
You cannot have only E-cores. At least one P-core has to be active in BIOS for the PC to boot. Then in Windows, use process affinity to benchmark E-cores.
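If you'd rather script it than fiddle with Task Manager, here's a minimal Python sketch of the affinity trick (it assumes the third-party psutil package; the E-core logical CPU indices and the benchmark path are placeholders, so check your own chip's topology first):

```python
# Pin a benchmark to the E-cores only (Windows).
# Requires: pip install psutil
import subprocess
import psutil

# Assumption: on a 12900K, logical CPUs 0-15 are the 8 P-cores
# (2 threads each) and 16-23 are the 8 E-cores. Verify in Task
# Manager or HWiNFO before trusting these indices.
E_CORES = list(range(16, 24))

proc = subprocess.Popen([r"C:\bench\benchmark.exe"])  # placeholder path
psutil.Process(proc.pid).cpu_affinity(E_CORES)        # restrict to E-cores
proc.wait()
```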
 
  • Wow
Reactions: psolord

dullard

Elite Member
May 21, 2001
25,119
3,492
126
Edit: and how would I switch to energy-used mode?
It probably varies with the specific Kill-a-Watt model, but press the KWH button as described here:

This just gives you a much more usable number for long-term energy use, rather than a value that jumps all over the place. Of course, to be usable for your purposes, record the data during the 100% load periods and not when idle. Take the energy used and divide by the elapsed time to get the average power.
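In code form the arithmetic is trivial (illustrative numbers only):

```python
# Average power from a Kill-A-Watt energy (KWH) reading.
energy_kwh = 1.56  # energy accumulated during the 100% load run
hours = 6.0        # elapsed time on the meter

avg_watts = energy_kwh / hours * 1000  # kWh / h = kW, then x1000 -> W
print(f"Average draw: {avg_watts:.0f} W")  # -> Average draw: 260 W
```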
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,627
14,618
136
It probably varies with the specific Kill-a-Watt model, but press the KWH button as described here:

This just gives you a much more usable number for long-term energy use, rather than a value that jumps all over the place. Of course, to be usable for your purposes, record the data during the 100% load periods and not when idle. Take the energy used and divide by the elapsed time to get the average power.
It's always @ 100% load.
 
  • Like
Reactions: Drazick

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,627
14,618
136
OK, update. It was an ETA of 6-7 hours, using 250-260 watts. So I booted Windows 11 and installed BOINC. It's at 3.5 hours and using 195 watts. Now, yesterday, before I messed with the video card, it was doing better: 2:20 hours and 162 watts. So: slower and more power in Windows, and for the life of me I cannot get back the Linux config that was working really well.
 
  • Like
Reactions: Drazick
Jul 27, 2020
16,712
10,707
106
Your 12700F: Ima gonna eat up your power coz you put in a 590! What? I wasn't enough for you???

Seriously though, either you are forgetting to do something that you did before, or the DC workloads differ in their intensity. What you had before maybe wasn't that stressful (not using the AVX2 units, maybe?), and the workloads or work units you are getting now probably need more CPU power to crunch through.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,627
14,618
136
Your 12700F: Ima gonna eat up your power coz you put in a 590! What? I wasn't enough for you???

Seriously though, either you are forgetting to do something that you did before, or the DC workloads differ in their intensity. What you had before maybe wasn't that stressful (not using the AVX2 units, maybe?), and the workloads or work units you are getting now probably need more CPU power to crunch through.
I took out the 590 when I had all the power issues. Now it's a very small video card with no power connectors and only a heatsink, no fan.

Edit: same exact workload, the PrimeGrid 321 app.

For the rest who think I was doing things wrong: I now have all cores running, and I'm on Windows 11 to have a fully supported OS. But it's taking 267 watts! Is there a power plan or something to fix this? The BIOS is all default except memory at 3200 CL17 1.4 V. But now it's doing units in 2:15, almost as good as a 5950X in Linux, but using more power.
 
Last edited:
  • Like
Reactions: Drazick

coercitiv

Diamond Member
Jan 24, 2014
6,254
12,175
136
There's something inherently cool about overclocking a CPU that isn't meant to be overclocked though!
As cool as overclocking a locked part is, we simply lack the proper value ingredients to make it work outside YouTube videos made for views. Any price premium commanded by the (supposedly) mid-range DDR4 boards with external clock gens could just as well go towards a 12600, getting higher clocks and better silicon quality from the start. And that's for a 10% OC; when looking for a ~5 GHz OC on a 12400, I remain adamant that more serious cooling is required, pushing the price delta dangerously close to a 12600K + cheap cooler. At that point the BCLK OC is no longer a value experiment, only a technical one.

In practice, I'll most likely leave the E cores on and just undervolt the 12900K as much as possible to optimise the 'factory overclock' ;)
On the 12900K OC topic, I also meant being able to disable 2 P-cores for "playing around". For a daily config you would obviously keep all P-cores active, but depending on what you want to achieve, the E-cores are not mandatory for a satisfactory experience. I suspect you'll go back and forth repeatedly between enabling and disabling them while evaluating the difference in how the system behaves. Personally, I did exactly that: disabled the E-cores for a while, and now I've been running with them enabled for more than a week. I knew I wanted a minimum of 6 cores for work, but I also knew the main driver in perceived performance would be ST perf, as long as 6+ cores are present for multitasking.

Undervolting is definitely something to be explored for more performance, especially considering the tools Intel provided us with, as you can set up variable offsets that potentially affect only the upper 4 GHz+ clock range. There are plenty of things one can do on ADL-S to optimize performance (given quality components that allow for lower tolerances); I only wish I had the time. For now I'm just happy I made the upgrade, as the extra performance helps with my workflow.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,389
10,072
126
I apparently had insufficient faith in ADL in DC workloads, so I picked up a couple of 5900X CPUs, because I believed at the time that Ryzen still had better overall power efficiency in many-core loads, and better performance besides. Of course, it cost a little more (not so much any more; the 5900X was $428.99 @ Newegg recently).

I did pick up a pair of 12th-gen Pentium Gold G7400 CPUs, for the pricey sum of $99 ea. plus tax, just to play around with some hyper-threaded P-cores (2C/4T).

I am disappointed that Intel "reset" the performance (clock speed) of their Pentium Gold CPUs backwards a bit (12th-gen is only 3.7 GHz), as compared to the 4.0 GHz G6400 and 4.1 GHz G6405 (10th-gen Pentium Gold CPUs, also 2C/4T, which make great all-around browsing / low-end gaming boxes). Dropping the clock speed really destroyed the value proposition of the Pentium Gold lineup.

(Normally, throughout the generations, clock speeds always went "up". But Intel saw the great performance increase possible with ADL and decided that was "too much value" for their customers, hence they cut the clock speed down again, in a fairly unprecedented move. It's not like the process for this CPU lineup doesn't have the headroom, with the factory-spec i9-12900KS being released with a max factory clock speed of 5.5 GHz, the highest ever for an Intel CPU AFAIK.)

The G6400 CPUs that I picked up were around $64-65 ea. So now with ADL, Intel is charging MORE for LESS clock speed. Sure, there's an IPC boost in CERTAIN workloads (some opcodes have an instruction latency of 1 clock regardless, so for workloads dependent on those instructions, performance depends primarily on clock speed; with the clock-speed regression, presumably, in those workloads ADL Pentium Gold would be SLOWER than 10th-gen Pentium Gold).
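To put a bound on that worst case (a back-of-envelope sketch, assuming a purely latency-bound workload where both chips sustain identical IPC):

```python
# If IPC is pinned by a dependency chain of 1-cycle ops,
# performance scales with clock speed alone.
g6400_clock = 4.0  # GHz (10th-gen)
g7400_clock = 3.7  # GHz (12th-gen)

regression = 1 - g7400_clock / g6400_clock
print(f"Worst-case slowdown: {regression:.1%}")  # -> 7.5%
```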

Edit: And with AMD's recent announcement of the imminent release of their lower-end Zen 3 and Zen 2-based CPUs at budget prices, and taking LGA 1700 mobo prices versus AM4 mobo prices into account, I NO LONGER have any reason to recommend ADL at the low end, where it (temporarily, until AMD responded) had a small niche in the market.

There's no problem performance-wise with my Pentium Gold G7400 CPUs for browsing and overall desktop usage (I have yet to try gaming with them, but there are some YT vids around demonstrating their capability in gaming). Suffice to say that, based on the performance of the 12th-gen P-cores, it actually punches a bit above its weight class as far as gaming "with a Pentium 2C/4T" goes, but like most "less-than-true-quad-core CPU" gaming setups, it suffers from stutters, often at the worst times (like during a firefight), so it's really not as desirable for gaming as a 12400F.
 
Last edited:

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
The G6400 CPUs that I picked up were around $64-65 ea. So now with ADL, Intel is charging MORE for LESS clock speed. Sure, there's an IPC boost in CERTAIN workloads (some opcodes have an instruction latency of 1 clock regardless, so for workloads dependent on those instructions, performance depends primarily on clock speed; with the clock-speed regression, presumably, in those workloads ADL Pentium Gold would be SLOWER than 10th-gen Pentium Gold).

That's how things worked in 1994, before the Pentium Pro. You'd be HARD pressed to find, or even artificially craft, a workload that regresses in performance due to losing 300-400 MHz going from Skylake to Alder Lake.
Nowadays these are massive OoO machines: most instructions take 1 cycle to execute, and each cycle they can execute multiple instructions as long as the OoO engine has their operands ready. And ADL has massive advantages versus SKL here, both in the execution engine and in the OoO engine.
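To put rough numbers on it (a sketch; the IPC uplift figure here is an assumption for illustration, not a measurement):

```python
# perf ~ IPC x clock. Grant the full clock deficit and assume a
# ~35% IPC advantage for Golden Cove over Skylake-class cores.
skl_ipc, skl_clock = 1.00, 4.0   # G6400 (Skylake-class core)
adl_ipc, adl_clock = 1.35, 3.7   # G7400 (Golden Cove), IPC assumed

net = (adl_ipc * adl_clock) / (skl_ipc * skl_clock) - 1
print(f"Net ADL advantage: {net:.0%}")  # -> ~25% faster despite lower clock
```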

On the topic of value setups, 2C/4T is not for 2022. Everything is incredibly multithreaded; even browsers are using multiple threads, and frankly 2C will run into the simple problem of overtaxing the execution resources of just two cores.
I think the 12100F is the first stop for 2022, and the 12400F is a powerhouse already, even without OC. (Both have the advantage of having no troubles with efficiency-core scheduling; they will just work, without hidden pitfalls.)
 

coercitiv

Diamond Member
Jan 24, 2014
6,254
12,175
136
The G6400 CPUs that I picked up were around $64-65 ea. So now with ADL, Intel is charging MORE for LESS clock speed. Sure, there's an IPC boost in CERTAIN workloads (some opcodes have an instruction latency of 1 clock regardless, so for workloads dependent on those instructions, performance depends primarily on clock speed; with the clock-speed regression, presumably, in those workloads ADL Pentium Gold would be SLOWER than 10th-gen Pentium Gold).
Can you provide any example where the G7400 is slower than the G6400?

So far, when comparing the 2 SKUs, I can see that the G7400 has:
  • lower clocks at 3.7Ghz vs 4Ghz
  • higher DDR4 memory speed support at 3200MT/s vs 2666Mt/s
  • more L3 cache at 6MB vs 4MB
  • lower TDP at 46W vs 58W
Personally I expect the ADL Pentium to beat the older model in just about every workload while using significantly less power (which matters for sound profile under load). I would also be very interested to know which AMD product will be competing with the G7400 in sufficient quantities. I can maybe see AMD competing against the 12100 though.
 
  • Like
Reactions: Mopetar

dullard

Elite Member
May 21, 2001
25,119
3,492
126
  • lower TDP at 46W vs 58W
The 21% lower rated power of the G7400 vs the G6400 (46 W vs 58 W) will eliminate a lot of the process gains from going to Intel 7. For CPU-intensive workloads, it will be close to a wash. But the iGPU should be noticeably better in the G7400: it has 25% more execution units, each running up to 29% faster clock rates, and a few features that may or may not be useful for any particular user.
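Combining those two iGPU figures (a quick sketch; it assumes throughput scales linearly with EU count and clock, which is a best-case simplification):

```python
# Using only the ratios quoted above for the G7400's iGPU.
eu_ratio = 1.25     # 25% more execution units
clock_ratio = 1.29  # up to 29% faster clocks

uplift = eu_ratio * clock_ratio - 1
print(f"Theoretical iGPU uplift: {uplift:.0%}")  # -> ~61%
```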
 

coercitiv

Diamond Member
Jan 24, 2014
6,254
12,175
136
The 21% lower power usage in the G7400 vs the G6400 will eliminate a lot of the process gains from going to Intel 7. For CPU intensive workloads, it will be close to a wash.
That's not how it works at all. Here's the 12700K with just 2 cores active running CB23 @ 3.7Ghz. Notice the 25W average power consumption. Even a lower quality bin won't get anywhere near 46W TDP, especially considering the massive difference in L3 cache.
[Attached screenshot: CB23-2c2t-37x.png]
 

dullard

Elite Member
May 21, 2001
25,119
3,492
126
That's not how it works at all. Here's the 12700K with just 2 cores active running CB23 @ 3.7Ghz. Notice the 25W average power consumption. Even a lower quality bin won't get anywhere near 46W TDP, especially considering the massive difference in L3 cache.
Pssst: there aren't many benchmarks of the G7400 vs the G6400, but those that are out there are often within the margin of error of the test.
https://cpu.userbenchmark.com/Compa...s-Intel-Pentium-Gold-G6400/m1755065vsm1221579
So, yes, they are close to a wash.

Bumping the G7400 up to 4.0 GHz in order to be significantly faster than the G6400 would use more power and would then require a higher TDP in order to allow ALL G7400s to pass validation testing. Just pointing to ONE underclocked 12700K running with fewer cores (and your spoiler of 2C2T is incorrect, by the way) has very little to do with how ALL G7400s will operate. Remember, the TDP specs have to be high enough to allow all chips to pass; otherwise they get downgraded even further, to Celeron or the dustbin.
 
Last edited:
Jul 27, 2020
16,712
10,707
106
Remember, the TDP specs have to be high enough to allow all chips to pass; otherwise they get downgraded even further, to Celeron or the dustbin.
There's a dustbin for functional dies that don't even pass validation to be worthy of being Celerons??? Oh no!!! Why do they waste them? Why not donate them to Africa or other third-world countries? An Alder Lake reject CPU would still be tons better than some Raspberry Pi or Atom CPU.
 

jpiniero

Lifer
Oct 1, 2010
14,679
5,305
136
There's a dustbin for functional dies that don't even pass validation to be worthy of being Celerons??? Oh no!!! Why do they waste them? Why not donate them to Africa or other third-world countries? An Alder Lake reject CPU would still be tons better than some Raspberry Pi or Atom CPU.

Celeron is assumed to be the 99%-quality chips. The lowest Alder Lake desktop one is 2.4 GHz with no turbo and only 2 GC cores. If there's anything worse available, perhaps those are sold as off-list products to embedded customers.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,389
10,072
126
There's a dustbin for functional dies that don't even pass validation to be worthy of being Celerons??? Oh no!!! Why do they waste them? Why not donate them to Africa or other third-world countries? An Alder Lake reject CPU would still be tons better than some Raspberry Pi or Atom CPU.
/me wonders what they do with ADL dies that have zero functional P-cores but do have some working E-cores. Are those salvageable at all, given that the current crop of ADL BIOSes (and I do believe that it is strictly a BIOS limitation) require a P-core to be active to boot? (ADL with P-cores disabled, running only on E-cores: wouldn't that be useful for a file-server / NAS role, where the loads aren't "bursty" like PC software?)
 

coercitiv

Diamond Member
Jan 24, 2014
6,254
12,175
136
Pssst: there aren't many benchmarks of the G7400 vs the G6400, but those that are out there are often within the margin of error of the test.
https://cpu.userbenchmark.com/Compa...s-Intel-Pentium-Gold-G6400/m1755065vsm1221579
So, yes, they are close to a wash.
If you call a 24% higher MT score a wash... then sure, lol! Did you even check the scores, or did you just go by the "Effective Speed", which is a totally made-up metric invented by UserBenchmark?

So let's recap, according to your quoted benchmark:
G7400 2-core score - 250Pts
G6400 2-core score - 202Pts

and at 3.6 GHz by the way not 3.7 GHz
By the way, that's 3691Mhz in the screenshot, so definitely 3.7Ghz.
 

Hulk

Diamond Member
Oct 9, 1999
4,264
2,078
136
Intel is back to form with the 12900KS "Emergency Edition" in preparation for the 5800X3D. Just kidding of course. If they have some silicon that is doing these clocks then why not sell it at a premium?

Base clocks are the same as the 12900K, but TDP at base is 25 W higher. I assume this part has more leakage?

Turbo watts are the same as on the 12900K. I wonder if, at say 5000 MHz all-core, it will use less power than a 12900K?

If there is headroom and it runs at lower volts at relatively high frequency it could be a nice part... if it wasn't so expensive.
 

dullard

Elite Member
May 21, 2001
25,119
3,492
126
If you call a 24% higher MT score a wash... then sure, lol! Did you even check the scores, or did you just go by the "Effective Speed", which is a totally made-up metric invented by UserBenchmark?
The effective speed is the average of all the benchmarks they run, not just one. Average speed is, yes, totally made up, as all benchmark averages are. But averages are still quite relevant. The average of all the benchmarks they use has the G7400 at 73.1% vs the G6400 at 70.0%. That is an unnoticeable difference for most people.

And you are still avoiding the real meat of my point. Bumping the G7400 up to 4.0 GHz like the G6400 would not allow acceptable yields at 46 W. If it were left at 58 W, with the higher clocks that would have allowed, then the G7400 would be quite an improvement over the G6400. But as it is, for CPU tasks it is more of a side-grade (especially considering the significantly higher price). If your tasks require a more powerful GPU, then yes, the G7400 is much better than the G6400.
By the way, that's 3691Mhz in the screenshot, so definitely 3.7Ghz.
Sorry, my mistake: your screenshot shows 3.61 GHz, and that is what I went with before scrolling to the right. I edited it out of my post above.

[Attached screenshot]

One final nitpick: there are supposed to be spaces between numbers and units. The correct English is "46 W", not "46W". Correct English is "3691 MHz", not "3691Mhz".
 
Last edited:
Jul 27, 2020
16,712
10,707
106
(and I do believe that it is strictly a BIOS limitation)
Until Intel shows a working E-core-only CPU, I have my doubts. Maybe the E-core cluster is a relatively quick hack job done on the advice of Jim Keller, when it became apparent to them that they would have issues with MT throughput in the future, and so the E-core cluster cannot communicate with the outside world unless the communication is initiated by a P-core?