With the release of Alder Lake less than a week away, and the "Lakes" thread having turned into a nightmare to navigate, I thought it might be a good time to start a discussion thread solely for Alder Lake.
Quote: "Kill-a-Watt. It changes constantly, so real time. Now after changing to the equivalent of onboard video it's doing 78 watts! But 6 hour ETA. I disabled some turbo things. I am going back in to enable E-cores and turbo. This thing is not fully supported by Linux, and it's driving me crazy."

I think you'll do yourself a favor if you switch to energy-used mode over time rather than instantaneous watts. Alder Lake instantaneous power will be all over the place, depending on the past history of the chip. You can then calculate the average watts from the energy used.
Disable cores; if anything, the 12900K is far safer to OC than any cheap bin ever will be. The 12900K simply sips voltage compared to a 12400 with a BCLK OC.
Quote: "Btw, did you mean disabling the E cores? That might be something I would try out of curiosity, just to see how the P cores cope without the extra heat / wattage of the E cores."

The power measurements I've seen with E-cores disabled show the 12900K sucking up power like a virus and running very hot.
Quote: "One question guys, is it possible to deactivate the P-cores in order to see how strong the E-cores are? I mean, I want to see how strong Alder Lake-N with 8 E-cores activated might be."

You cannot have only E-cores. At least one P-core has to be active in BIOS for the PC to boot. Then in Windows, use process affinity to benchmark the E-cores.
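The process-affinity approach mentioned above can be sketched with the third-party psutil package (pip install psutil). The core layout here is an assumption: on a 12900K, the 8 hyper-threaded P-cores usually expose logical CPUs 0-15, with the 8 E-cores following as CPUs 16-23; verify the layout on your own system before relying on it.

```python
# Pin a process to the (assumed) E-core logical CPUs so a benchmark
# run measures only E-core performance.

def e_core_ids(p_cores=8, e_cores=8):
    """Logical CPU indices of the E-cores, assuming P-cores come first."""
    first_e = p_cores * 2          # each HT P-core exposes 2 logical CPUs
    return list(range(first_e, first_e + e_cores))

try:
    import psutil
    ids = e_core_ids()             # [16, 17, ..., 23] on a 12900K
    if psutil.cpu_count() and max(ids) < psutil.cpu_count():
        # Restrict the current process (launch your benchmark from here)
        # so the scheduler only uses the E-cores.
        psutil.Process().cpu_affinity(ids)
except (ImportError, AttributeError):
    pass  # psutil missing or affinity unsupported on this OS
```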
Quote: "Edit: and how would I switch to energy used mode?"

It probably varies with the specific Kill-a-Watt model, but press the KWH button as described here:
Quote: "It probably varies with the specific Kill-a-Watt model, but press the KWH button as described here:"

It's always @ 100% load.
This just gives you a much more usable number for long-term energy use, rather than a value that jumps all over the place. Of course, to be usable for your purposes, record the data during the 100% load cases, not when idle. Take the energy used and divide by the elapsed time to get the average power.
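The calculation described above (energy used divided by elapsed time) can be sketched as a tiny helper; the 0.468 kWh over 6 hours figures below are hypothetical, chosen to echo the 78 W reading mentioned earlier:

```python
# Average power from a cumulative Kill-a-Watt energy reading.

def average_watts(kwh_used, hours):
    """Average draw in watts: energy (kWh) / time (h), scaled to W."""
    return kwh_used / hours * 1000

# e.g. 0.468 kWh consumed over a 6-hour, 100%-load crunching session:
print(average_watts(0.468, 6.0))  # ~78.0 W average
```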
Quote: "Your 12700F: Ima gonna eat up your power coz you put in a 590! What? I wasn't enough for you???"

I took out the 590 when I had all the power issues. Now it's a very small video card with no power cords and only a heatsink, no fan.
Seriously though: either you are forgetting to do something that you did before, or the DC workloads differ in their intensity. What you had before maybe wasn't that stressful (not using the AVX2 units, maybe?), and the workloads or work units you are getting now probably need more CPU power to crunch through.
Quote: "There's something inherently cool about overclocking a CPU that isn't meant to be overclocked though!"

As cool as overclocking a locked part is, we simply lack the proper value ingredients to make it work outside YouTube videos made for views. Any price premium commanded by the (supposedly) mid-range DDR4 boards with external clock gens could just as well go towards a 12600, getting higher clocks and better silicon quality from the start. And that's for a 10% OC; when looking for a ~5 GHz OC on the 12400, I remain adamant that more serious cooling is required, pushing the price delta dangerously close to a 12600K + cheap cooler. At that point the BCLK OC is no longer a value experiment, only a technical one.
Quote: "In practice, I'll most likely leave the E cores on and just undervolt the 12900K as much as possible to optimise the 'factory overclock'."

On the 12900K OC topic, I also meant being able to disable 2 P-cores for "playing around". For a daily config you would obviously keep all P-cores active, but depending on what you want to achieve, the E-cores are not mandatory for a satisfactory experience; I suspect you'll go back and forth repeatedly with enabling/disabling them while evaluating the difference in how the system behaves. Personally, I did exactly that: disabled the E-cores for a while, and now I've been running with them enabled for more than a week. I knew I wanted a minimum of 6 cores for work, but I also knew the main driver in perceived performance would be ST perf as long as 6+ cores are present for multitasking.
The G6400 CPUs that I picked up were around $64-65 ea. So now with ADL, Intel is charging MORE, for LESS clock-speed. Sure, there's an IPC boost, in CERTAIN work-loads (some opcodes have an instruction latency of 1 clock regardless, so for workloads dependent on those instructions, performance depends primarily on clock-speed; with the clock-speed regression, ADL Pentium Gold would presumably be SLOWER than 10th-Gen Pentium Gold in those workloads).
Quote: "So now with ADL, Intel is charging MORE, for LESS clock-speed. [...] presumably, in those workloads, ADL Pentium Gold would be SLOWER than 10th-Gen Pentium Gold."

Can you provide any example where the G7400 is slower than the G6400?
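The clock-vs-IPC argument above can be made concrete with some back-of-envelope arithmetic. The base clocks assumed here (G6400 at 4.0 GHz, G7400 at 3.7 GHz) are the commonly listed spec-sheet figures; the IPC ratios are illustrative assumptions, not measurements:

```python
# Throughput scales roughly as IPC x clock; compare a new chip to an
# old one under different IPC assumptions.

def relative_perf(ipc_ratio, clock_new, clock_old):
    """New-vs-old performance ratio: (new IPC / old IPC) * (new clk / old clk)."""
    return ipc_ratio * clock_new / clock_old

# Latency-bound code (no IPC gain): the clock regression shows directly.
print(relative_perf(1.00, 3.7, 4.0))  # ~0.925, i.e. ~7.5% slower
# Code that gets a hypothetical +19% IPC uplift:
print(relative_perf(1.19, 3.7, 4.0))  # ~1.10, i.e. ~10% faster
```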
The 21% lower power usage in the G7400 vs the G6400 will eliminate a lot of the process gains from going to Intel 7. For CPU-intensive workloads, it will be close to a wash. But the iGPU should be noticeably better in the G7400: it has 25% more execution units, each running up to 29% faster clock rates, and a few features that may or may not be useful for any particular user.
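Taken at face value, the EU-count and clock figures above give a theoretical ceiling for the iGPU uplift; real-world gains will be lower, since memory bandwidth and drivers intervene:

```python
# Theoretical peak iGPU uplift from the figures quoted above,
# assuming perfect scaling with EU count and clock.
eu_gain = 1.25      # 25% more execution units
clock_gain = 1.29   # up to 29% higher clocks
uplift_pct = (eu_gain * clock_gain - 1) * 100
print(round(uplift_pct, 2))  # ~61% peak, best case only
```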
- lower TDP at 46W vs 58W
Quote: "The 21% lower power usage in the G7400 vs the G6400 will eliminate a lot of the process gains from going to Intel 7. For CPU intensive workloads, it will be close to a wash."

That's not how it works at all. Here's the 12700K with just 2 cores active running CB23 @ 3.7 GHz. Notice the 25 W average power consumption. Even a lower-quality bin won't get anywhere near the 46 W TDP, especially considering the massive difference in L3 cache.
Quote: "That's not how it works at all. Here's the 12700K with just 2 cores active running CB23 @ 3.7 GHz. Notice the 25 W average power consumption. [...]"

Pssst: there aren't many benchmarks of the G7400 vs G6400, but those that are out there are often within the margin of error of the test.
Quote: "Remember, the TDP specs have to be high enough to allow all chips to pass, otherwise they get downgraded even further to Celeron or the dustbin."

There's a dustbin for functional dies that don't even pass validation as worthy of being Celerons??? Oh no!!! Why do they waste them? Why not donate them to Africa or other third-world countries? An Alder Lake reject CPU would still be tons better than some Raspberry Pi or Atom CPU.
Quote: "There's a dustbin for functional dies that don't even pass validation [...] An Alder Lake reject CPU would still be tons better than some Raspberry Pi or Atom CPU."

/me wonders what they do with ADL dies that have zero functional P-cores but some working E-cores. Are those salvageable at all, given that the current crop of ADL BIOSes (and I do believe that it is strictly a BIOS limitation) requires a P-core to be active to boot? (ADL with P-cores disabled, running only on E-cores: wouldn't that be useful for a file-server / NAS role, where the loads aren't "bursty" like PC software?)
Quote: "Pssst: there aren't many benchmarks of the G7400 vs G6400, but those that are out there are often within margins of error of the test."

If you call a 24% higher MT score a wash... then sure, lol! Did you even check the scores, or did you just go by the "Effective Speed", a totally made-up metric invented by UserBenchmark?
https://cpu.userbenchmark.com/Compa...s-Intel-Pentium-Gold-G6400/m1755065vsm1221579
So, yes they are close to a wash.
Quote: "and at 3.6 GHz, by the way, not 3.7 GHz"

By the way, that's 3691 MHz in the screenshot, so definitely 3.7 GHz.
Quote: "If you call 24% higher MT score a wash... then sure lol! Did you even check the scores or did you just go by the 'Effective Speed' that is a totally made-up metric invented by UserBenchmark?"

The effective speed is the average of all benchmarks they run, not just one. Yes, an averaged speed is "made up", as all benchmark averages are, but averages are still quite relevant. The average of all benchmarks they use has the G7400 at 73.1% vs the G6400 at 70.0%. That is an unnoticeable difference for most people.
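For what it's worth, the gap between those two composite scores is small in relative terms as well; a quick check:

```python
# Relative difference between the quoted composite scores.
g7400, g6400 = 73.1, 70.0
rel_pct = (g7400 - g6400) / g6400 * 100
print(round(rel_pct, 1))  # ~4.4% relative gap
```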
Quote: "By the way, that's 3691 MHz in the screenshot, so definitely 3.7 GHz."

Sorry, my mistake: your screenshot shows 3.61 GHz, and that is what I went with before scrolling to the right. I edited it out of my post above.
Quote: "(and I do believe that it is strictly a BIOS limitation)"

Until Intel shows a working E-core-only CPU, I have my doubts. Maybe the E-core cluster is a relatively quick hack job done on the advice of Jim Keller, when it became apparent to them that they would have issues with MT throughput in the future, and so the E-core cluster cannot communicate with the outside world unless the communication is initiated by a P-core?