Question Why does TDP and PPT differ, on consumer CPUs? And what role does Core Performance Boost and Turbo Clocks have on TDP and wattage?


VirtualLarry

No Lifer
Aug 25, 2001
56,349
10,049
126
Serious question. I've got a 65W-TDP-rated Ryzen R5 1600 in a rig, on a 65W-rated AMD stock heatsink. It's blue-screening, crashing, and the CPU temps just keep going up and up.

I updated HWMonitor, and it's showing a "Package Power" for the entire chip of 82W or so. No wonder it's constantly overheating and crashing: 82W of package power > 65W TDP heatsink.

The worst part is, this is AFTER limiting the number of PrimeGrid threads from 12 down to 9. That's right, I'm not even running the CPU at a full thread load.

Edit: Yes, I know the obvious answer is to "get a better heatsink", and that the "stock heatsink" for the 1600 was the 95W TDP model. At the time, that was explained as AMD wanting to give users the ability to OC on the stock heatsink. Now I know that was a lie; it's because AMD CPUs (at least the 1600) are NOT able to stay within their stated rated specs.

Edit: A slight update; very important, actually. My original premise for this thread was that I *thought* I was using a 65W TDP-rated AMD stock Wraith Stealth cooler with my Ryzen R5 1600 CPU, and that it was crashing at "stock BIOS" settings, which include "Core Performance Boost" on "Auto" (defaulting to enabled) to allow "Turbo Clocks" (the 1600 has an all-core turbo of 3.4 GHz). I was initially placing the blame on AMD for the fact that HWMonitor reported the "Package Power" as something like 82W, which I thought was overwhelming the 65W-rated heatsink.

As it turned out, I was actually using a 95W Wraith Spire (copper-cored) in the first place. Yet it was still crashing due to overheating of the CPU. Part of this was due to the heat load of dual GPUs mining, and part was due to using a case that had NO vents on top (no fan mounts, no rad mounts, nothing but a solid steel top) and only a single 120mm exhaust out the rear, combined with the fact that my PCs sit in desk cubbies. The cubbies are open at the front, and the cases have dual 120mm intakes and vented fronts, but that still wasn't enough to keep the CPUs from slowly creeping up in temp, passing 95°C, and crashing/restarting.

Thus far, I have split the two GPUs up, one per PC (same case, same type of cubby, same EVGA 650W G1+ 80Plus Gold PSUs), disabled CPB on both of them (one has a 3600 CPU, one has a 1600 CPU), and then in Wattman set the Power Limit for the RX 5600 XTs (both of which were refurbs) to -20%. So far, overnight, they seem to have stabilized at under 90°C on the CPU and haven't crashed.
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,637
10,855
136
For anyone that dares... download BOINC, sign up for PrimeGrid, select the "challenge project" (PPS-DIV), and see how many of your "stable" rigs go crying home.

My PrimeGrid account seems to be throwing errors, and I can't get any projects. Not that I ever run BOINC for anything, but I would run it just to see what happens to my rig on default settings. Should be fine, though.
 
  • Like
Reactions: Captante

maddie

Diamond Member
Jul 18, 2010
4,746
4,686
136
My poor tortured rigs. Either this new LLR2 PrimeGrid workload (on the PPS-DIV project) is SO heavy (apparently, someone heading up the project had to disable CPB and PBO on his Zen 2 rig to keep it from crashing on these loads too), or something in one or both of my secondary rigs is toasted. Probably the PSUs, maybe these GPUs.

Even with CPB disabled and Package Power under 80W, on a DeepCool GAMMAXX 400 (CPU temp 67°C, without mining on the GPU(s)), the battery backup is clicking every few minutes.

Either that, or my wiring is shot inside the walls, and that outlet is just NO BUENO anymore for anything approaching 400W of usage.

Even when I was mining the last few days, the RX 5600 XT was set to 1350 MHz and 850 mV (GPU temps not a problem), with wattage according to Wattman maxing out at 85W, and the GTX 1650 4GB D5 card was running at 75W on PCI-E power (though temps on that card, while mining on both cards, were reported as 80°C, which seems a bit high for the "bottom card" in the stack).

The APC UPS reported 390W of load WITH my 40" TV included; the TV is rated at up to 100W, but realistically is probably drawing closer to 50W. So maybe 340W at the wall for the PC, or roughly 300W DC, more or less.

How is that too much for a 650W 80Plus Gold PSU, or an 810W-rated UPS (even if I only get a few minutes of runtime at that load)?
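For a quick sanity check of that arithmetic, here is a minimal sketch (the 90% figure is an assumed 80Plus Gold conversion efficiency at roughly half load, not a measurement):

```python
# Sanity check of the wall-power numbers above. The 90% conversion
# efficiency is an assumption for an 80Plus Gold PSU at ~50% load.

ups_load_w = 390        # total AC load reported by the APC UPS
tv_draw_w = 50          # realistic TV draw (rated at up to 100W)
gold_efficiency = 0.90  # assumed AC-to-DC conversion efficiency

pc_ac_w = ups_load_w - tv_draw_w     # PC draw at the wall
pc_dc_w = pc_ac_w * gold_efficiency  # power delivered to the components

print(f"PC at the wall: ~{pc_ac_w}W AC")      # ~340W AC
print(f"PC DC load:     ~{pc_dc_w:.0f}W DC")  # ~306W DC
```

A ~340W draw loads the 650W PSU at barely half capacity, so on paper neither the PSU nor the 810W UPS should be anywhere near their limits.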

Something is definitely SHOT here, I guess I'll look into RMA'ing the UPS or replacing the battery, and replacing the PSU next month.
To be frank, you seem to have posts that start with a theory and then look for supporting evidence.
 
  • Like
Reactions: Thunder 57

Hitman928

Diamond Member
Apr 15, 2012
5,316
7,994
136
Did anybody read my earlier post about people thinking a consumer-grade CPU is fit for running server-grade workloads?

It's not the CPU I would be worried about but rather the surrounding components, assuming you give them adequate cooling and a decent ambient temperature at least.
 

TheGiant

Senior member
Jun 12, 2017
748
353
106
Did anybody read my earlier post about people thinking a consumer-grade CPU is fit for running server-grade workloads?
Consumer-grade CPUs became server-ready three years ago for most local loads. It's just the internet's productivity pretenders who need a 20k Cinebench R20 score while rendering with an ultrabook closed inside a shiny new leather bag.
 
Apr 30, 2020
68
170
76
AMD's TDP vs PPT is really simple. PPT is the maximum amount of power AMD will allow the CPU to consume. TDP is now the power level at which AMD guarantees the CPU will be able to maintain its advertised performance, or "base clocks". So if your HSF solution is only capable of removing 65W of heat, AMD guarantees your CPU will at least be able to achieve and sustain all-core base clocks.

Since AMD ships the CPU with an HSF capable of removing 65W, it should be able to maintain base clocks perpetually. But to do that, the HSF will need to be allowed to run its fan up to full speed (100%), and there will be a max ambient limit as well, probably 95°F at the HSF inlet.
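To put rough numbers on that (a minimal sketch; the 1.35x PPT-to-TDP ratio is the widely documented socket AM4 default, though motherboard vendors can override it):

```python
# Default socket AM4 package power limits, assuming the commonly
# documented PPT = 1.35 x TDP relationship (board vendors can override).

def am4_default_ppt(tdp_watts: int) -> int:
    """Return the approximate default package power limit (PPT) for a TDP."""
    return round(tdp_watts * 1.35)

for tdp in (65, 105):
    print(f"{tdp}W TDP -> ~{am4_default_ppt(tdp)}W PPT")

# 65W TDP  -> ~88W PPT  (so an 82W "Package Power" reading is within spec)
# 105W TDP -> ~142W PPT
```

By that math, a 65W-TDP Ryzen reporting 82W of package power under boost is operating inside its 88W PPT limit, not out of spec.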

I've done testing with my 3900X, playing around with fan curves, and basically as it starts going over 85-90°C, it will start stepping back down towards base clocks. At 95°C, it will start throttling. I believe much over 95°C, it will just shut the system down to protect the CPU.
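As a rough illustration (a toy model of the behavior I observed, not AMD's actual boost algorithm; the exact breakpoints vary by board, BIOS, and cooling):

```python
# Toy model of the Ryzen 3000 thermal behavior described above.
# Breakpoints are observational, not from AMD documentation.

def boost_state(tctl_c: float) -> str:
    if tctl_c < 85:
        return "full boost available"
    elif tctl_c < 95:
        return "clocks stepping back toward base"
    elif tctl_c <= 100:
        return "hard throttling at or below base clocks"
    else:
        return "thermal shutdown to protect the CPU"

for temp in (70, 88, 96, 105):
    print(f"{temp}C: {boost_state(temp)}")
```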

Did anybody read my earlier post about people thinking a consumer-grade CPU is fit for running server-grade workloads?
Any CPU should be capable of running any workload without a problem. If a CPU fails running PrimeGrid, it will fail doing an overnight encode job, it will fail rendering something in Blender, and it will fail at tons of other tasks. There is no such thing as a "server-grade workload" in the context you're trying to use it.
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
5,316
7,994
136
For anyone that dares... download BOINC, sign up for PrimeGrid, select the "challenge project" (PPS-DIV), and see how many of your "stable" rigs go crying home.

Finally got around to trying it and TBH, it doesn't seem that crazy of a load. It's a very heavy load, for sure, but I put my CPUs through worse when testing stability. My 2700 overclocked to 4 GHz is handling it just fine. After about an hour I topped out at 68 degrees CPU temp. Running Prime95 or even some FAH projects gets me to at least 73 or so degrees. Granted I have an AIO water cooler on the CPU in a case with decent airflow on a desk, so maxing out the CPU (even when overclocked) isn't an issue for me.

(screenshot attachment)
 
  • Like
Reactions: Captante

Captante

Lifer
Oct 20, 2003
30,277
10,783
136
Finally got around to trying it and TBH, it doesn't seem that crazy of a load. It's a very heavy load, for sure, but I put my CPUs through worse when testing stability. My 2700 overclocked to 4 GHz is handling it just fine. After about an hour I topped out at 68 degrees CPU temp. Running Prime95 or even some FAH projects gets me to at least 73 or so degrees. Granted I have an AIO water cooler on the CPU in a case with decent airflow on a desk, so maxing out the CPU (even when overclocked) isn't an issue for me.

View attachment 32575


My Ryzen 3600 @ 4.3 GHz (all cores) ran this without issue for 12+ hours, and Prime95 for 24+ ... never topped 70°C (also on AIO water).
 

VirtualLarry

No Lifer
Aug 25, 2001
56,349
10,049
126
My Ryzen 3600 @ 4.3 GHz (all cores) ran this without issue for 12+ hours, and Prime95 for 24+ ... never topped 70°C (also on AIO water).
It must be the heat from my GPUs then.

HWMonitor on my rig with the 3600, RX 5700, and GTX 1660 Ti shows a CPU temp high of 78°C, and currently 70°C, on a 240mm AIO WC. Current ambient temps are fairly low right now; the room is chilly.

A few days ago, when I was finishing off the PrimeGrid challenge project (PPS-DIV), I did run my secondary rig with the 3600, RX 5600 XT (refurb), and GTX 1650 4GB D5 card, but I didn't mine on the GPUs. CPU temp then was around 77°C, in the cubby, with just the CPU doing things.

I've had both of my secondary PCs mining on both GPUs (both have an RX 5600 XT refurb as primary and a GTX 1650 4GB D5 as secondary). On the rig with the 1600 I haven't been mining on the CPU, but I have on the 3600 rig.

So far, after turning off CPB, turning all board fan speeds to "full speed", setting my RX 5600 XT "fine-tune" settings to 1350 MHz / 850 mV (resulting in 80-85W mining ETH according to Wattman), and keeping the room fairly cool, they haven't rebooted.

I did get a call from a neighbor about the lights flickering late at night. I told her that I was fairly disturbed by that too, but it has been doing that since I moved in (before her).
 

blckgrffn

Diamond Member
May 1, 2003
9,127
3,069
136
www.teamjuchems.com
Hahahaha, that suspect power though.

If your UPS is clicking, that's a terrible, terrible sign imo. I've had more clients put me through hell because of dirty power than for any other reason. For a couple of them I did so much "warranty" work that I had them buy power conditioners (different from a UPS) to keep getting service, and lo and behold, those units would light up showing the power was out of spec and that conditioning was active.

I love/hate this thread so much :tearsofjoy:
 

scannall

Golden Member
Jan 1, 2012
1,946
1,638
136
Hahahaha, that suspect power though.

If your UPS is clicking, that's a terrible, terrible sign imo. I've had more clients put me through hell because of dirty power than for any other reason. For a couple of them I did so much "warranty" work that I had them buy power conditioners (different from a UPS) to keep getting service, and lo and behold, those units would light up showing the power was out of spec and that conditioning was active.

I love/hate this thread so much :tearsofjoy:
Been there and done that as well. Dirty power caused me so many headaches with clients. Well, that and simply them not paying attention.
 

Captante

Lifer
Oct 20, 2003
30,277
10,783
136
It must be the heat from my GPUs then.

HWMonitor on my rig with the 3600, RX 5700, and GTX 1660 Ti shows a CPU temp high of 78°C, and currently 70°C, on a 240mm AIO WC. Current ambient temps are fairly low right now; the room is chilly.

A few days ago, when I was finishing off the PrimeGrid challenge project (PPS-DIV), I did run my secondary rig with the 3600, RX 5600 XT (refurb), and GTX 1650 4GB D5 card, but I didn't mine on the GPUs. CPU temp then was around 77°C, in the cubby, with just the CPU doing things.

I've had both of my secondary PCs mining on both GPUs (both have an RX 5600 XT refurb as primary and a GTX 1650 4GB D5 as secondary). On the rig with the 1600 I haven't been mining on the CPU, but I have on the 3600 rig.

So far, after turning off CPB, turning all board fan speeds to "full speed", setting my RX 5600 XT "fine-tune" settings to 1350 MHz / 850 mV (resulting in 80-85W mining ETH according to Wattman), and keeping the room fairly cool, they haven't rebooted.

I did get a call from a neighbor about the lights flickering late at night. I told her that I was fairly disturbed by that too, but it has been doing that since I moved in (before her).


Sorry, I can't recall if you already said, but I'm too lazy to re-read the thread ... what case do you have?

I went with a Corsair 220T RGB, which has 3 Mag-Lev fans front-mounted and (perhaps more importantly for cooling) a metal grate instead of a pane of glass in the front for max airflow. The Corsair 240mm AIO is top-mounted, with 2 of the same fans exhausting.

Note my GTX 980 still has the stock EVGA heatsink, but I've zip-tied two Antec temp-sensing 120mm fans onto it, plus a third blowing onto the side of the card & VRMs, plus a fourth (non-temp-sensing) fan exhausting from the rear.

*(fans on quiet, pump on balanced in iCUE)


Been there and done that as well. Dirty power caused me so many headaches with clients. Well, that and simply them not paying attention.


Interesting ... I have fairly sketchy power in my apartment, and (at least so far) all that's happened as a result is that I had to replace the batteries in my APC XS-1200 UPS a bit sooner than expected.

Lived here nearly 5 years with the same UPS in service btw and the unit switches on at least once or twice a day.

Maybe my power isn't really all that bad?
 
Last edited:

Captante

Lifer
Oct 20, 2003
30,277
10,783
136

Reviews are scary for my UPS units. Maybe they damaged my hardware? Someone else @ Amazon reported hardware damage, and someone reported smoke, even!

I used to think APC was a good brand. Used to.


I've had very few problems with APC over the years, and I've used many of them ... never one of the "Sinewave" models, though.

One problem with most of the new APC UPSes is uncontrolled heat buildup. The older APC XS-1200 I use currently has big open vents on the front and top, along with an 80mm temp-sensing fan, while newer models have only a few tiny slots on the top and near-zero airflow.


Eaton and Tripp Lite are decent alternatives ... personally I would avoid CyberPower, but ymmv.
 
  • Like
Reactions: Leeea