blckgrffn
Diamond Member
"What are your preferred values for those settings, and how much is the performance loss for ensuring systems stay stable and reliable over a long period of time?"

Great question. I typically set PL1 to the old-school advertised TDP (like 95W or 125W) and then PL2 based on the cooler and board, and I always disable the board's all-core enhancements. I don't use cheap parts, but I'll use B365 or similar chipsets when the budget dictates it. Just doing an H370 build right now.
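(For anyone who wants to poke at these values outside the BIOS: on a Linux box the same PL1/PL2 limits are exposed through the intel_rapl powercap sysfs interface. I did my own tuning in the BIOS and with Intel's Windows utility, so treat this as a rough sketch - it assumes a single package at intel-rapl:0 and root access to write.)

# Rough sketch (Python, Linux): read and optionally set PL1/PL2 through the
# intel_rapl powercap driver. Paths assume one CPU package at intel-rapl:0;
# writing requires root. Units are microwatts and microseconds.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")

def read_limits():
    # constraint_0 is the long-term limit (PL1), constraint_1 the short-term (PL2)
    for idx, name in ((0, "PL1"), (1, "PL2")):
        watts = int((RAPL / f"constraint_{idx}_power_limit_uw").read_text()) / 1e6
        window = int((RAPL / f"constraint_{idx}_time_window_us").read_text()) / 1e6
        print(f"{name}: {watts:.0f} W, {window:.3f} s window")

def set_limits(pl1_watts, pl2_watts):
    # e.g. set_limits(88, 120) to mirror the values on the 9700K build mentioned below
    (RAPL / "constraint_0_power_limit_uw").write_text(str(int(pl1_watts * 1e6)))
    (RAPL / "constraint_1_power_limit_uw").write_text(str(int(pl2_watts * 1e6)))

if __name__ == "__main__":
    read_limits()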
Real-world performance sacrifices? That's a harder question to answer. I build a lot of gaming rigs (or did? GPUs being what they are), and based on what I've read here and elsewhere, I didn't expect the limits to impact them much at all. I haven't built any HFR (200+ FPS) rigs that I know of; most of these folks are just happy with decent 1080p or 1440p performance when they set the slider to "high". That should almost always be a GPU-limited scenario (given the mid-range GPUs in question).
That said, I have used the Intel tools to verify that the limits actually work, though extensively in only one case: my daughter's 9700K build. It's a B365 board without overbuilt power delivery and it has "only" a Hyper 212 Evo cooler, so I wanted the limits to hold. I set PL1 to 88W (I believe - roughly the equivalent of my AMD 3600 at the time) and PL2 to 120W. The Intel utility indicated that in all-core workloads I was wattage limited and throttled down to about 3.4 GHz once the PL2 window was exhausted. That's something like a 25% clock reduction, I suppose, in a fairly contrived corner case. On single-threaded boosts I still consistently see (if the reporting can be trusted) ~4.6 GHz plus, which is right where a 9700K should be.
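(If anyone wants to reproduce that check on Linux instead of with Intel's utility, a minimal sketch is to poll the RAPL energy counter and a core clock while something like Prime95 runs. Again this assumes the intel_rapl driver and a single package, and it ignores the energy counter wrapping for brevity.)

# Rough sketch: watch package power and a core clock under an all-core load to
# see whether the chip is riding the PL1 limit. Assumes Linux with intel_rapl.
import time
from pathlib import Path

ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")
FREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")

last = int(ENERGY.read_text())
while True:
    time.sleep(1.0)
    now = int(ENERGY.read_text())
    watts = (now - last) / 1e6           # microjoules over one second -> watts
    mhz = int(FREQ.read_text()) / 1000   # kHz -> MHz, core 0 only
    print(f"package: {watts:6.1f} W    cpu0: {mhz:5.0f} MHz")
    last = now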
I want to circle back, though - the whole "why" of this is to prevent some mining malware or power-virus type application (think Amazon's New World and 3090s, but maybe AVX or similar) from using that default board-configured headroom to outrun either the board's ability to deliver power or the CPU cooler's ability to keep the chip from hard throttling (after a couple of years and some dust on everything). It is my understanding that in real-world scenarios these limits should not noticeably impact the user.
Which gets me back to the power limits here (and the crazy Ampere power usage), where Intel seems bent on burning huge amounts of power to get the last couple of percent of performance, and where that costs board partners and everyone else real resources to build out power delivery to meet those demands - for reasons that don't benefit users.
It's like there is a benchmark beast mode in the firmware, enabled by default. I find it distasteful, the same way I find the 5800X's TDP distasteful. It's not just an Intel thing.
Sorry for the long answer, I pondered it over lunch.