This post is born out of my own testing on the laptop in my signature. The purpose of this analysis is to evaluate the "real-world" power consumption of an Intel CPU. By "real-world" I mean working under two assumptions:
- A typical user (a non-enthusiast who just wants to play games) will be pushing their GPU far harder than their CPU most of the time, regardless of their specs and the settings at which a game is run.
- The most common cause for performance being capped at a certain level, provided everything else is fine with the PC, is the GPU thermal limit.
#1 should be quite evident. Most people don't buy a $300+ CPU or a $600+ GPU to game at the most commonly used resolution of 1080p. The most popular modern NVIDIA GPU is the RTX 3060, while the most popular ones from AMD are the RX 6600/XT. A cursory glance at any review site, like TPU, would show that these are, for the moment, more than enough for gaming at 1080p with at least high settings.
#2 is based on the observation that while people are more likely to give greater attention to cooling the CPU, the GPU will most likely be left as it comes from the factory. Coupled with the trend of "gamer" aesthetics prioritizing looks over functionality, this means the GPU will be operating in a way that is more sensitive to temperature than the CPU. Since the boost frequency, and therefore performance, is inversely related to temperature (unlike CPUs, which by design nowadays run at 90-95 °C), it is reasonable to assume that heat is more of a problem for GPUs than CPUs.
With that out of the way, let's get to the testing methodology. I test two games on opposite ends of the CPU/GPU intensiveness spectrum: Counter-Strike 2 and Cyberpunk 2077 (without RT, of course). I chose these two because they are easy to benchmark with repeatable passes. For CS2 I run a bot match on Dust II; capture starts after spawning, once the countdown timer ends. I follow the same route every time and engage the enemy bots towards the end of the run. For CP2077 I simply use the built-in benchmark.
For settings, I use 1600x900 in CS2 with medium-derived settings, using CMAA instead of MSAA and forcing FXAA on through the NVCP. This lets me run the game at a reasonable 100+ average FPS. For CP2077 I also use medium-derived settings, at 720p with Intel XeSS set to Ultra Quality. When actually playing the game I cap the FPS at 30, but the benchmark obviously runs uncapped.
As for the CPU, I cap PL2 at 30 W with ThrottleStop. I chose this number because it is, empirically, the PL2 that gives the best heat distribution between the CPU and GPU, since the two share cooling in my laptop.
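Side note: ThrottleStop is Windows-only, so if anyone wants to replicate this on Linux, the same PL1/PL2 limits can be written through the intel-rapl powercap sysfs interface. Below is a minimal sketch, assuming the usual package-0 path and the common mapping of constraint_0 = long-term limit (PL1) and constraint_1 = short-term limit (PL2); this is not what I used, just an equivalent way to set the limits.

```python
#!/usr/bin/env python3
"""Sketch: set PL1/PL2 via the Linux intel-rapl powercap sysfs interface.
Assumes the package-0 domain sits at the usual path and that constraint_0
is the long-term limit (PL1) and constraint_1 the short-term limit (PL2),
which is the common mapping. Needs root."""

from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain

def set_limit_w(constraint: int, watts: float) -> None:
    """Write a power limit in watts to the given constraint (0 = PL1, 1 = PL2)."""
    uw = int(watts * 1_000_000)                  # sysfs expects microwatts
    (RAPL / f"constraint_{constraint}_power_limit_uw").write_text(str(uw))

def get_limit_w(constraint: int) -> float:
    return int((RAPL / f"constraint_{constraint}_power_limit_uw").read_text()) / 1_000_000

if __name__ == "__main__":
    set_limit_w(1, 30)  # PL2 = 30 W, the cap used in this post
    set_limit_w(0, 20)  # PL1 = 20 W, one of the values swept below
    print(f"PL1 = {get_limit_w(0):.0f} W, PL2 = {get_limit_w(1):.0f} W")
```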
So let's get to the results -
[Charts: percentage of benchmark time spent above/below the target FPS and in stutters, for CS2 and CP2077 at each PL1]
I test PL1 values ranging from 10 to 30 W, in 5 W increments. 30 W is the maximum PL1 set by the Dell BIOS in the highest performance mode. The green bars represent the percentage of time the game spends above the target FPS threshold, which is 60 FPS for CS2 and 30 FPS for CP2077. The yellow bars show time spent below the threshold, while red represents stutters. Each benchmark run lasts 60 s.
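For anyone curious how such a split can be derived, here is a rough sketch of the kind of script that could turn a frame-time log (e.g. a PresentMon/CapFrameX CSV) into those three percentages. The column name, the file name, and the stutter rule (a frame taking more than 2x the run's average frame time) are my assumptions for illustration, not necessarily what produced the charts above.

```python
"""Sketch: split a frame-time log into time above target FPS, below target,
and in stutters. Column name, file name, and the 2x-average stutter rule
are assumptions for illustration."""

import csv

def classify(frametimes_ms, target_fps, stutter_factor=2.0):
    """Return (% of time above target FPS, % below target, % in stutters)."""
    threshold_ms = 1000.0 / target_fps           # e.g. 16.7 ms for 60 FPS
    avg = sum(frametimes_ms) / len(frametimes_ms)
    total = sum(frametimes_ms)
    above = below = stutter = 0.0
    for ft in frametimes_ms:
        if ft > stutter_factor * avg:            # large spike -> counted as stutter
            stutter += ft
        elif ft <= threshold_ms:                 # delivered at or above target FPS
            above += ft
        else:                                    # slower than target, but not a spike
            below += ft
    return tuple(100.0 * x / total for x in (above, below, stutter))

# Hypothetical log from one 60 s CS2 run at PL1 = 20 W
with open("cs2_pl1_20w.csv", newline="") as f:
    frametimes = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

print(classify(frametimes, target_fps=60))
```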
As can be clearly observed, setting PL1 at or close to PL2 is not optimal. In CS2 the optimal value is somewhere around 20 W (0.67x PL2), while in CP2077 it is 25 W (0.83x PL2).
So clearly, if you are thermally constrained, Intel's recommendation of PL1 = PL2 for the -K SKUs (beginning with Alder Lake) may actually hurt performance in games.
So we can conclude that Intel's guidance of setting PL1 equal to PL2 makes sense for benchmarking, i.e. getting the highest numbers and topping the charts, but for general gaming use it is not optimal.
Lastly, how is this testing different from the power-scaling benchmarks that reviewers have published? Most of them lower the power limit but still keep PL1 = PL2, and the few that do test the old way of PL1 < PL2 offer only limited data points.
TL;DR
If you're mostly gaming, set PL2 lower than the Intel spec, and always set PL1 lower than PL2, with the exact values depending on your target performance.
Thanks for reading.