Question Why does TDP and PPT differ, on consumer CPUs? And what role does Core Performance Boost and Turbo Clocks have on TDP and wattage?

Page 4 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

VirtualLarry

No Lifer
Aug 25, 2001
56,226
9,990
126
Serious question. I've got a 65W-TDP-rated Ryzen R5 1600 in a rig, on a 65W-rated AMD stock heatsink. It's blue-screening, crashing, and the CPU temps just keep going up and up.

I updated HWMonitor, and it's showing a "Package Power" for the entire chip of 82W or so. No wonder it's constantly overheating and crashing: 82W package power > 65W TDP heatsink.
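For what it's worth, here's the thread-title math, if AMD's commonly cited stock socket power limit (PPT) of 1.35x TDP applies here (that multiplier is an assumption on my part; check what your own board and monitoring tool actually report):

```python
# Sketch of the TDP-vs-package-power gap. Assumption: the stock socket
# power limit (PPT) for 65 W AM4 parts is commonly cited as 1.35x TDP;
# verify against what your own board reports.
TDP_W = 65
PPT_MULTIPLIER = 1.35

ppt_w = TDP_W * PPT_MULTIPLIER
print(f"Stock PPT limit: {ppt_w:.2f} W")
# An ~82 W package power reading is under that limit, even though it is
# well above the 65 W TDP figure the cooler is rated against.
```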

The worst part is, this is AFTER limiting the number of PrimeGrid threads from 12 down to 9. That's right, I'm not even running the CPU at a full thread load.

Edit: Yes, I know the obvious answer is to "get a better heatsink", and that the "stock heatsink" for the 1600 was the 95W TDP model, which AMD said at the time was to give users the ability to OC on the stock heatsink. Now I know that was a lie; it's because AMD CPUs (at least the 1600) are NOT able to stay within their stated rated specs.

Edit: A slight update, very important, actually. My original premise for this thread was that I *thought* I was using a 65W TDP-rated AMD stock Wraith Stealth cooler with my Ryzen R5 1600 CPU, and it was crashing at "stock BIOS" settings, which includes "Core Performance Boost" on "Auto" (which defaults to enabled) to allow "Turbo Clocks" (the 1600 has an ACT of 3.4GHz). I was initially placing the blame on AMD for the fact that HWMonitor reported the "Package Power" as something like 82W, which I thought was overwhelming the 65W-rated heatsink.

As it turned out, I was actually using a 95W Wraith Spire (copper-cored) in the first place. Yet it was still crashing due to overheating of the CPU.

Part of this was due to the heat load of dual GPUs mining, and part of it was due to using a case that had NO vents on top (no fan mounts, no rad mounts, nothing but a solid steel top) and only a single 120mm exhaust out the rear, combined with the fact that my PCs are in desk cubbies. The cubbies are open to the front, and the cases have dual 120mm intakes and vented fronts, but that still wasn't enough to prevent the CPUs from slowly creeping up in temp, passing 95C, and crashing/restarting.

Thus far, I have split the two GPUs up, one per PC (same case, same type of cubby, same EVGA 650W G1+ 80Plus Gold PSUs), disabled CPB on both of them (one has a 3600 CPU, one has a 1600 CPU), and then also, in Wattman, set the Power Limit for the RX 5600XT (both of them were refurbs) to -20%. Thus far, overnight, they seem to have stabilized at under 90C on the CPU, and haven't crashed.
 
Last edited:

Thunder 57

Platinum Member
Aug 19, 2007
2,647
3,706
136
But if AMD's thermal-throttling worked as well as Intel's, then it shouldn't be crashing. Even with two GPUs in the case. (GPU chip vendor is same as CPU vendor, GPU card vendor is same as mobo vendor. Don't they test these things together?)

As others have said, I really doubt the CPU is at fault here. More likely it's a refurb part, or the PSU/VRMs. How old is the PSU? They degrade over time.

Also, one more data-point: when the power went out here after a storm, my battery backup (APC consumer Pure-Sine 1350VA / 810W) on the rig with the two RX 5600XT cards in it shut off immediately, and the UPS started to whine, not the normal beeping for when the power is cut off. It showed an "F01" error code, which is "battery overload". I then had further issues with it, and with the PC. What I don't get, though, is that the PC has 2x 180W TDP GPUs, 1x 65W TDP CPU, some RAM, chipset, an SSD, etc. I have an EVGA 650W 80Plus Gold G1+ modular PSU in both of the secondary rigs, but the crashing/shutoff happened to both of them, with both of these GPUs in them. (Are they cursed or something? Or is something shorted?) So it's not the PSU.

I switched the power plug from "Battery" to "Surge Only", and I still got crashes (on the Ryzen 3600 rig with the Gammax 400 cooler). So I pulled out the rig with the 1600 and stock Wraith Stealth cooler (yeah, I knew it was kind of underpowered, but it could CPU-mine OK) and put the 2x RX 5600XT in it. (Did I mention yet that they were factory refurbs? I had to flash the BIOS on them; they came with the 12Gbit/sec BIOS with 150W TDP, and I put the 14Gbit/sec BIOS with the 180W TDP onto them.)

Anyways, the CPU + 2x GPUs + system load, shouldn't be too much for my 650W Gold PSU, and in turn, that shouldn't be too much (at the wall) for my 810W-rated battery backup.

Same question: how old is the UPS? Batteries need replacing after a few years; at the very best you'll get reduced runtime, and eventually it will likely just drop the load. Also, running that much on a 650W PSU will make it less efficient. You want to try to keep the max load to maybe 60-70%. It's actually not as bad as I thought, but every little bit helps:

[Attachment: Gold.png (80Plus Gold efficiency chart)]
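Putting rough numbers on that 60-70% guideline (my arithmetic, not an EVGA spec):

```python
# Target sustained-load band for a 650 W PSU, using the 60-70% rule of
# thumb above (a guideline, not a manufacturer specification).
PSU_RATED_W = 650
low_w = 0.60 * PSU_RATED_W
high_w = 0.70 * PSU_RATED_W
print(f"Comfortable sustained DC load: {low_w:.0f}-{high_w:.0f} W")
```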


And can your UPS display the watts currently being used? I know mine can display some helpful info, including that. It's basically a convenient Kill-a-Watt.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,226
9,990
126
But you think it's the 1600 that's the problem?
Well, I feel that the default settings in the mobo BIOS that affect things like TDP and PPT and "Package Power" are not quite kosher, if they let a 65W TDP CPU, with a 65W TDP heatsink, overheat and crash.

I'm not really sure how or why a mobo being a refurb would interfere with the thermal-throttling or Core Performance Boost algorithm; I would think that would be more due to BIOS code, and the mobo would either have working VRMs or it wouldn't.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,092
1,065
136
Larry, what kind of RAM are you using, and did you OC it? Did you tighten the timings too much? RAM is like the ghost in the machine if it's not 100% stable.
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,029
136
www.teamjuchems.com
Well, I feel that the default settings in the mobo BIOS that affect things like TDP and PPT and "Package Power" are not quite kosher, if they let a 65W TDP CPU, with a 65W TDP heatsink, overheat and crash.

I'm not really sure how or why a mobo being a refurb would interfere with the thermal-throttling or Core Performance Boost algorithm; I would think that would be more due to BIOS code, and the mobo would either have working VRMs or it wouldn't.

I think the point that is being made is that the crash is likely coming from off-CPU components that AMD cannot control. Your voltage might start to droop when some component gets hot, and bam, crash.
 
  • Like
Reactions: spursindonesia

VirtualLarry

No Lifer
Aug 25, 2001
56,226
9,990
126
Larry, what kind of RAM are you using, and did you OC it? Did you tighten the timings too much? RAM is like the ghost in the machine if it's not 100% stable.
I mean, that's possible, including the increased heat from the GPU and the CPU heating the RAM. But at least thus far, it has been stable at XMP DDR4-3000 settings in that board, with that 1600 CPU, for months and probably years now, doing things like CPU mining and BOINC (DC) projects like PrimeGrid. It's been pretty stable, well, up until I installed the twin RX 5600XTs. Maybe I'll call them my "Evil GPU twins".

Maybe I should just split them up, and put one in each PC, maybe that would mitigate any thermal and electrical / PSU issues, more or less.

Edit: I think that I'm going to do that; then, if one or the other PC starts crashing and dying, I'll be able to narrow it down, possibly, to a particular GPU.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
This is exactly my point.

You're running a track day ("pure AVX2 torture for the CPU") with daily-driver equipment (a 1600 with Wraith Stealth "for more mundane, normal tasks").

If you're unhappy with how the CPU and cooler are performing in PrimeGrid then return them.
Except that the CPU itself has internal temp monitoring that should trigger throttling. He's blue-screening instead of throttling.
 

Velgen

Junior Member
Feb 14, 2013
16
9
81
Please don't let my "insane (at times) ranting" drive you away. This is a (mostly) good site. And I'm (usually) a bit more "on the ball".

Ya, I've been lurking and reading for a while, and you have had some well-thought-out posts I've liked over the years. (This problem maybe being less thought out than usual, lol.) So don't worry about it; I enjoy this place.

Could be the XMP now being unstable, as you mentioned, or it could be some weird underlying issue in the refurb board that couldn't get caught in testing. I had one board that was causing intermittent issues I couldn't replicate on demand; they just happened when they felt like it, and I had to return it. That board probably ended up getting sold as a refurb or open box to another customer. As someone said earlier, the one thing you know about a refurb is that someone else most likely had an issue with it. Now, whether it was user error or something the manufacturer caught and fixed, who knows.

Really you just need to test everything one by one to rule them out as being the issue.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
Can someone speak to my theory that there may be something amiss ("shorting", for lack of a better term) in one or both of the GPUs' power-delivery sections, and possibly they are drawing more +12V power (say, from the PCI-E slot), lowering the input voltage to the CPU's VRMs, causing them to draw more current for the CPU, and thus causing the CPU to heat up more? Could this even be a thing? Or am I grasping at straws here (I'm no EE), and the reality is more like AMD specifies PPT to the CPU socket as higher than TDP, and I'm trying to use a 65W TDP cooler on it, and thus the equation fails to keep temps under control? To say nothing of my potentially-overvolting Gigabyte mobo BIOS.
What psu are you using? With the 2 gpus and cpu at full blast at higher than ambient temps, that could be challenging to the psu as well.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
PrimeGrid is a workload which any CPU at default settings, with a cooler which is sized and operated as specified by the CPU vendor (which includes parameters such as intake air temperature), needs to run stably and return correct results.

From his post, it appears as though he's using an undersized cooler.
 

StefanR5R

Elite Member
Dec 10, 2016
5,459
7,718
136
But both were mining/crunching whatever, with dual GPUs before (GTX 1660 Ti), without issues / crashes / UPS beeping. It's when I installed this blasted pair of RX 5600XT refurbs that everything started to go severely downhill.
Uhm...

Why was this info not given in the 1st post?

The crashes could be in the GPU driver (due to a driver fault or due to bad GPU hardware). Or the crashes may perhaps be power supply related.

Do you have specifications of the rails arrangement of the PSU, and the allowable Amperage at each of the rails? The PSU may not have enough headroom for spikes in the power draw — or it actually might have the headroom but it might have an overly sensitive over-current protection.

--------

*Unrelated* to this crashing problem of yours, but related to your thoughts about changes to the cooling setup: Always consider which fan pulls air from where. E.g., active top exhaust may just make matters worse for the CPU/RAM/VRM area. A divider within the case to create separate cooling zones may be beneficial.
 
  • Like
Reactions: Thunder 57

VirtualLarry

No Lifer
Aug 25, 2001
56,226
9,990
126
From his post, it appears as though he's using an undersized cooler.
Funny thing about that...

I found a 95W Wraith Spire with the copper core, the full-height one, and then I took the PC apart and took off the cooler... and what is this? There was already a 95W Wraith Spire on the CPU. I re-pasted it, of course, and made sure it was secure. (I'm quite certain that it was secure before as well, as those stock coolers bottom out the screws once they're screwed all the way down.)

So, that just leaves... excessive heat load from the GPUs. So I've split up the (RX 5600XT "Evil") GPUs, one per rig, and I'm sticking a low-wattage GTX 1650 4GB D5 card into the secondary slot, just because I can, for extra mining.
 

Velgen

Junior Member
Feb 14, 2013
16
9
81
I would say hold off on that low-wattage GTX 1650, just to keep things simple and reduce the variables. You never want to change too many things at once; otherwise you don't know what caused the problem. The PSU model is solid and "SHOULD" be fine for this, provided it hasn't degraded from the heat or anything.
 
  • Like
Reactions: Leeea

StefanR5R

Elite Member
Dec 10, 2016
5,459
7,718
136
I have an EVGA 650W 80Plus Gold G1+ modular PSU in both of the secondary rigs. But the crashing/shutoff happened to both of them,
Do you have specifications of the rails arrangement of the PSU, and the allowable Amperage at each of the rails?
This seems to be the specification which I asked for:
Source: https://www.evga.com/articles/01185/evga-g-plus-power-supplies/
[Attachment: Screenshot_20201021_221915.png (EVGA G+ 650W rail specifications)]
So there is only a single +12V rail. It is IMO unlikely that the two GPUs manage to trip the PSU's over-current protection by simultaneous spikes. But I am not an expert with GPUs and PSUs.
 
Last edited:
  • Like
Reactions: Leeea

Velgen

Junior Member
Feb 14, 2013
16
9
81
Ya, I mean, worst case I can imagine is each GPU pulling 200W (well, maybe 210), the CPU at 80-90W draw, and that leaves 150W for the mobo/storage/fans. It "SHOULD" be fine; it's cutting it a little closer than I would like, but it "SHOULD" (really want to emphasize that "should", lol) be fine, unless it has degraded due either to heat or prolonged high power draw (you would have to assume it drew more than its rated wattage for that kind of degradation from a year of use).
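Sketching that worst case out (all of these figures are estimates from this thread, not measurements):

```python
# Back-of-envelope budget for the dual-RX 5600XT rig on a 650 W PSU,
# using the worst-case estimates above (not measurements).
PSU_W = 650
gpu_w = 200   # per RX 5600XT, worst case (well, maybe 210)
cpu_w = 90    # CPU package draw under load

left_w = PSU_W - (2 * gpu_w + cpu_w)
print(f"Left for mobo/storage/fans: {left_w} W")  # 160 W at 200 W per GPU
```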
 

StefanR5R

Elite Member
Dec 10, 2016
5,459
7,718
136
That's not the central problem
I agree that this is not VL's problem, now that the thread took a mere 3 pages to grow before actual info was given by the thread opener.

and it's not idiocy.
Sure it is. Why specify the cooling system for 65 W sustained but the electrical system for 88(?) W sustained?

AMD has a rather smart system set in place to squeeze performance out of these CPUs. This "idiotic" power management you're talking about helped someone like me run 1600X without active cooling on a Scythe Ninja heatsink, using only airflow from low rpm case fans. Imagine that, my 95W TDP CPU with stock settings managed to keep temps at a constant 75C while gradually throttling down from 95W+ under Prime95 to 70-75W where it achieved thermal stability.
Out of peripheral interest: Are the 75 °C you are talking about the same "95" °C AMD is talking about in their specifications?

(PS, big deal, you had a cooling system with limited performance, and the CPU throttled gradually. GPUs did the same before AMD introduced it in CPUs. Meanwhile, Intel CPUs continued to achieve the very same by fluctuating between a high and a low clock — not as elegant, but in the end you get a Prime95 throughput reduction out of that just the same.)
 

VirtualLarry

No Lifer
Aug 25, 2001
56,226
9,990
126
I'd like to thank everyone that responded; most of you (pretty much everybody) were positive. Thank you.

I guess, I was just too keen on "cutting corners".

(*And if you missed my update, it turned out that the CPU cooler on the 1600 was indeed the 95W Wraith Spire cooler after all. And still hitting 95C.)

I have kind of a love/hate relationship with these Rosewill Magnetar cases. They are well-built, solid, lots of options: dual 120mm LED intake, rear 120mm exhaust... but NO top cooling, no top rad mount. Bah. If they had that, they might be perfect. It's clear that even if my 650W PSU isn't over-taxed by the twin RX 5600XT GPUs and a "65W" Ryzen CPU, these cases are not meant for "serious" multi-GPU configs. Not without some top blow-hole modding, and those are fairly thick steel, as Rosewill cases go.

As for the provocative thread title, I still don't really care for the fact that a CPU with a "65W TDP" label rating will easily show a Package Power load of 80-82W once a "serious load" (scientific calculations) is put onto the CPU. That disappoints and dismays me. Some of that may be due to Gigabyte trying to "game the system" as far as reviews go, if their BIOS defaults to tweaking the CPU parameters for "max performance" without regard to factory-spec TDPs... well, as we all know, Intel has been doing that for a while too. So it seems, disappointingly, "industry-standard".

Right now, I have two secondary rigs, each with ONE of the RX 5600XT cards, along with a PCI-E slot-powered GTX 1650 4GB D5 ITX card (also a Gigabyte factory refurb, as it turns out). Each PC has an EVGA 80Plus Gold G1+ PSU, 650W, too. One of the PCs has a Ryzen R5 1600 with a Wraith Spire cooler, the other has a Ryzen R5 3600 with a Gammax 400 tower cooler. Neither is overclocked, although they are using XMP memory.

I'll see tonight or tomorrow, if one of the PCs has crashed or rebooted. Both have their sides back on, and both are back in the cubbies.
 
  • Like
Reactions: Leeea

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
You're running a track day ("pure AVX2 torture for the CPU") with daily-driver equipment (a 1600 with Wraith Stealth "for more mundane, normal tasks").

I'm looking forward to the eventual thread over in the Graphics Cards forum complaining about Radeon/GeForce TDPs, once he gets his CPU problems sorted out and starts running FurMark.

Anyone know of some good torture tests for RAM or SSDs? I'll wager dollars to donuts those manufacturers are lying about their typical power use numbers as well!
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,478
14,434
136
I'd like to thank everyone that responded; most of you (pretty much everybody) were positive. Thank you.

I guess, I was just too keen on "cutting corners".

(*And if you missed my update, it turned out that the CPU cooler on the 1600 was indeed the 95W Wraith Spire cooler after all. And still hitting 95C.)

I have kind of a love/hate relationship with these Rosewill Magnetar cases. They are well-built, solid, lots of options: dual 120mm LED intake, rear 120mm exhaust... but NO top cooling, no top rad mount. Bah. If they had that, they might be perfect. It's clear that even if my 650W PSU isn't over-taxed by the twin RX 5600XT GPUs and a "65W" Ryzen CPU, these cases are not meant for "serious" multi-GPU configs. Not without some top blow-hole modding, and those are fairly thick steel, as Rosewill cases go.

As for the provocative thread title, I still don't really care for the fact that a CPU with a "65W TDP" label rating will easily show a Package Power load of 80-82W once a "serious load" (scientific calculations) is put onto the CPU. That disappoints and dismays me. Some of that may be due to Gigabyte trying to "game the system" as far as reviews go, if their BIOS defaults to tweaking the CPU parameters for "max performance" without regard to factory-spec TDPs... well, as we all know, Intel has been doing that for a while too. So it seems, disappointingly, "industry-standard".

Right now, I have two secondary rigs, each with ONE of the RX 5600XT cards, along with a PCI-E slot-powered GTX 1650 4GB D5 ITX card (also a Gigabyte factory refurb, as it turns out). Each PC has an EVGA 80Plus Gold G1+ PSU, 650W, too. One of the PCs has a Ryzen R5 1600 with a Wraith Spire cooler, the other has a Ryzen R5 3600 with a Gammax 400 tower cooler. Neither is overclocked, although they are using XMP memory.

I'll see tonight or tomorrow, if one of the PCs has crashed or rebooted. Both have their sides back on, and both are back in the cubbies.
You should leave the sides off, and keep them out of the cubbies, until you verify that the rest is working well. Then you put them back in, and if THEN they fail, you know for certain it's a case-cooling problem.
 
  • Like
Reactions: Drazick

VirtualLarry

No Lifer
Aug 25, 2001
56,226
9,990
126
I'll wager dollars to donuts those manufacturers are lying about their typical power use numbers as well!
That's the only thing that makes sense to me: that those RX 5600XT cards (whose download page on Gigabyte's site lists the BIOS release notes, all of which mention the card going from 150W TDP to 180W TDP) actually take more than that. I'm not sure how much more they can draw and stay in spec, though, since they just have a single 8-pin power connector plus the PCI-E slot connector, which together should max out at 225W.
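That 225W ceiling is just the sum of the connector limits (I'm going by the commonly cited PCI-E figures of 75W from the slot and 150W from an 8-pin, not something I've measured):

```python
# In-spec power ceiling for a card with one 8-pin connector, per the
# commonly cited PCI-E limits (75 W slot, 150 W 8-pin).
SLOT_W = 75
EIGHT_PIN_W = 150
max_in_spec_w = SLOT_W + EIGHT_PIN_W
print(f"Max in-spec draw: {max_in_spec_w} W")  # 225 W
```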

But then there's the PC shutting down mysteriously several times, overloading my UPS, which supposedly can handle 810W at the wall (the 40" TV monitor takes up to 100W of that).

And the enormous heat load from two of those RX 5600XT cards in one PC, when I've previously had a pair of 120-130W GTX 1660 Ti cards in one of those machines, and it kept humming along, mining on CPU and both GPUs.
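Doing the at-the-wall math with rough numbers (the rest-of-system draw and the PSU efficiency are guesses on my part):

```python
# Rough at-the-wall estimate for the dual-GPU rig plus the monitor on the
# same UPS. Component figures and efficiency are guesses, not measurements.
gpu_w = 180        # per RX 5600XT, at the 180 W BIOS TDP
cpu_w = 65         # CPU TDP
rest_w = 75        # mobo/RAM/SSD/fans, rough guess
efficiency = 0.90  # approximately Gold-class at this load
tv_w = 100         # 40-inch TV monitor, worst case

wall_w = (2 * gpu_w + cpu_w + rest_w) / efficiency + tv_w
print(f"Estimated wall draw: {wall_w:.0f} W against the UPS's 810 W rating")
```

On paper that comes in well under 810W, which is why the shutoffs still don't add up to me, unless the cards are spiking well past their BIOS TDP.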
 
  • Wow
Reactions: Thunder 57

Velgen

Junior Member
Feb 14, 2013
16
9
81
Ya, I agree, best to make sure cooling is not a problem at all before throwing them back in there. He has cut down on the GPUs in there, so he could maybe get away with the sides on and the cubbies, since the front and back of the cubbies are open, I believe, but best to make sure they work in ideal conditions before putting them in non-ideal conditions.

Edit: @VirtualLarry I believe under FurMark with the 180W BIOS those cards will pull around 200W, so I assume that is most likely their power draw while mining; I can't imagine it going much further above spec than that. The worst I can imagine is 210W, and I highly doubt it would hit the max power draw possible from the PCI-E slot and connector.
 
  • Like
Reactions: Leeea