Question: Why do TDP and PPT differ on consumer CPUs? And what role do Core Performance Boost and turbo clocks play in TDP and wattage?


VirtualLarry

No Lifer
Serious question. I've got a 65W-rated TDP Ryzen R5 1600 in a rig, on a 65W-rated AMD stock heatsink. It's blue-screening, crashing, and the CPU temps just keep going up and up.

I updated HWMonitor, and it's showing a "Package Power" for the entire chip of 82W or so. No wonder it's constantly overheating and crashing: 82W TDP CPU > 65W TDP heatsink.

The worst part is, this is AFTER limiting the number of PrimeGrid threads down from 12 to 9. That's right, I'm not even running the CPU at a full thread load.

Edit: Yes, I know that the obvious answer is to "get a better heatsink", and that the "stock heatsink" for the 1600 was the 95W TDP model. Which, at the time, was said to be because AMD wanted to give users the ability to OC on the stock heatsink. Now I know that was a lie; it's because AMD CPUs (at least the 1600) are NOT able to stay within their stated rated specs.

Edit: A slight update; very important, actually. My original premise for this thread was that I *thought* I was using a 65W TDP-rated AMD stock Wraith Stealth cooler with my Ryzen R5 1600 CPU, and that it was crashing at "stock BIOS" settings, which includes "Core Performance Boost" on "Auto" (which defaults to enabled) to allow "Turbo Clocks" (the 1600 has an all-core turbo of 3.4 GHz). I was initially placing the blame on AMD for the fact that HWMonitor reported the "Package Power" as something like 82W, which I thought was overcoming the 65W-rated heatsink.

As it turned out, I actually was using a 95W Wraith Spire (copper-cored) in the first place. Yet it was still crashing due to overheating of the CPU. Part of this was due to the heat load of dual GPUs mining, and part of it was due to using a case that had NO vents on top (no fan mounts, no rad mounts, nothing but a solid steel top) and only a single 120mm exhaust out the rear, combined with the fact that my PCs are in desk cubbies. They are open to the front, and have dual 120mm intakes and vented fronts, but that still wasn't enough to prevent the CPUs from slowly creeping up in temp, passing 95C, and crashing/restarting.

Thus far, I have split the two GPUs up, one per PC (same case, same type of cubby, same EVGA 650W G1+ 80Plus Gold PSUs), and disabled CPB on both of them (one has a 3600 CPU, one has a 1600 CPU), and then also, in Wattman, set the Power Limit for the RX 5600XT (both of them refurbs) to -20%. Thus far, overnight, they seem to have stabilized at under 90C on the CPU, and haven't crashed.
 

Zstream

Diamond Member
I'd take the case sides off, let it run, and verify it's the heat causing the issues. If it is, just replace the darn fans in the case with something like this: https://www.amazon.com/gp/product/B008N323U6/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1

It may not be worth it in the long run, and you could likely find cheaper fans that are just as effective, but I've rarely used the stock fans in cases, especially if I know a customer will have the machine "inside" their desk or in a cube-sized enclosure.
 

Velgen

Junior Member
Don't think different fans would help much, since it's more an issue of needing more exhaust; with a single 120mm exhaust, you can't really get the air out quickly enough with two 5600XTs running full tilt, unless you don't care about noise, I guess. Would need something faster than those Cougars, though.
 

Thunder 57

Platinum Member
VirtualLarry said:
That's the only thing that makes sense to me: that those RX 5600XTs (the download page on Gigabyte's site lists the BIOS release notes, and all of them mention the card going from 150W TDP to 180W TDP) actually take more than that. I'm not sure how much more they can take and stay in spec, though, since they have just a single 8-pin power connector plus the PCI-E slot connector, which together should max out at 225W.

But the PC shutting down mysteriously several times, overloading my UPS, which supposedly can handle 810W at the wall (the 40" TV monitor takes up to 100W).

The enormous heat load, from two of those RX 5600XT cards in one PC, when I've had a pair of 120-130W GTX 1660 Ti cards in one of those machines, and it kept humming along, mining on CPU and both GPUs.

Get a second UPS for just your monitor. Totally worth it.
 

Markfw

Moderator Emeritus, Elite Member
Larry, you should edit your title. It's flamebait, and it's not true.

1) You have NO decent ventilation; you have admitted this.
2) After updating the BIOS on your video cards, they, along with the CPU and other parts, may be exceeding the power supply's capabilities.
3) You may have your CPU set in your BIOS to draw more than 65 watts.
4) Your heatsink may not be sufficient for a CPU mining 24/7 at 100% load. Most everyone says this based on your mining.

I could go on, but you really need to edit your title.
 

maddie

Diamond Member
Markfw said:
Larry, you should edit your title. It's flamebait, and it's not true. …
Don't forget the storm issue. We don't know for sure, but possibly a lightning surge. Even Larry says that things changed after that event.

But hey, "AMD lies about CPU power", and that's what caused the high temps. Got some entertainment out of it, though.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
One thing I noticed here: if configured properly, those 5600XTs should only be drawing about 100W each, assuming they are mining ETH. Even my 5700XT only draws around 115W when mining ETH, as the core is underclocked and undervolted. So I doubt it is a power problem there, unless the 5600XTs are running stock core settings. (This also means less heat in the case.)

Overall, I suspect that the BSODs may be caused by unstable GPUs, though your memory OC could also be suspect under PrimeGrid. As others mentioned, it's likely a defective component, possibly one of the refurb parts. It could be the CPU, as I have heard of a few Ryzen CPUs that are defective, but then it could be any part that is defective.
 

VirtualLarry

No Lifer
Markfw said:
Larry, you should edit your title. It's flamebait, and it's not true. …
With all due respect, Mark, Stephan has already weighed in about AMD's PPT tracking: up to 88W Package Power for a "65W TDP" CPU. The title is not incorrect.

And, at least by spec, the MAX that each GPU can take is 150W from the 8-pin PCI-E connector plus 75W from the PCI-E slot, a total of 225W per card. That's 450W. The CPU is "65W TDP" (82W measured by software). The mobo, RAM, and SATA SSD can't take more than 50W. That's 582W, well within the 650W 80Plus Gold PSU's rating.
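To put rough numbers on it, here's a back-of-the-envelope sketch in C. (The 1.35x PPT-to-TDP multiplier is the commonly cited figure for AM4 consumer parts, matching Stephan's 88W number; it's not something I've pulled from an official AMD datasheet, and the 50W for the rest of the system is my own estimate.)

/* Worst-case DC power budget -- a sketch, not a measurement. */
#include <stdio.h>

int main(void) {
    double tdp  = 65.0;                /* advertised CPU TDP, watts */
    double ppt  = 1.35 * tdp;          /* ~88 W package power allowed */
    double gpus = 2 * (150.0 + 75.0);  /* 8-pin plus PCI-E slot, per card */
    double rest = 50.0;                /* mobo, RAM, SATA SSD (estimate) */
    printf("PPT for a %gW-TDP part: ~%gW\n", tdp, ppt);
    printf("Worst-case DC load: %gW vs. a 650W PSU rating\n",
           ppt + gpus + rest);
    return 0;
}

Even using PPT instead of the 82W software reading, that comes out around 588W, still under the label rating.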

As for the heatsink being insufficient: well, it was sufficient, even for mining on the CPU alongside two GPUs, when those were GTX 1660 Ti cards. But not with my particular refurb RX 5600XT cards.

As far as point 1 goes, what does my case ventilation have to do with software Package Power tracking hitting 82W? Is that the fault of my case ventilation, or of AMD's lying TDP specs?

And lastly, you and I both know that Intel does this too. However, I personally own AMD rigs, so it affects me in that manner. Hence the title.

Edit: It should be noted that even disabling "Core Performance Boost" in the BIOS, and attempting to set "cTDP" to "45" and "CPU Temp Limit" to "80", still resulted in Package Power being shown as 67-77W. Edit: And temps above 80C.

Edit: I did not remove the word "lie", but I added to the title to clarify that I'm speaking of worst-case AVX software loads.
 

VirtualLarry

No Lifer
maddie said:
Don't forget the storm issue. We don't know for sure, but possibly a lightning surge. …
No, the issue of AMD's Ryzen R5 1600 CPUs taking more Package Power (as monitored by HWMonitor) than the CPU's 65W "rated TDP" has existed ever since the CPU came out, when processing a strenuous AVX2 workload like PrimeGrid. This isn't something new, and it has nothing to do with any power outage. I've been experiencing this (across multiple Ryzen R5 1600 PCs) for YEARS. I only just now decided to speak out about it, because of my (mistaken) belief that it was causing my high CPU temps. Certainly it contributed, but it was not the SOLE reason; rather, it seems it was the overall heat load of twin RX 5600XT cards mining, plus a 12-thread PrimeGrid workload, in a case with restricted airflow (no top ventilation).
 

bbhaag

Diamond Member
So the takeaway from this thread, from my perspective anyway, is not to believe the TDP numbers from any manufacturer, because they are a marketing tool used by AMD, Intel, Nvidia, et al. to mislead consumers and make their products appear better than they really are.
 

Markfw

Moderator Emeritus, Elite Member
VirtualLarry said:
And, at least by spec, the MAX that each GPU can take is 150W from the 8-pin PCI-E connector plus 75W from the PCI-E slot… That's 582W, well within the 650W 80Plus Gold PSU's rating. …
A 650-watt PSU can deliver 520 watts if it's 80% efficient (unless my math is off), so you are over. It would have to be over 89% efficient (Platinum or Titanium range?) to get 582 watts. Not to mention, as you said, the heat problem.
 

VirtualLarry

No Lifer
I would also like to investigate the Vsoc setting on this Gigabyte X370 Gaming ATX mobo; it seems pre-set to 1.200V, which is over the commonly recommended setting of 1.100V. But I can't seem to find any voltage settings other than VDIMM in this F25 BIOS. I think I'm forgetting the "secret Gigabyte hotkey" to get to the M.I.T. voltage pages. Does anyone remember?
 

VirtualLarry

No Lifer
Markfw said:
A 650-watt PSU can deliver 520 watts if it's 80% efficient (unless my math is off), so you are over.
No Mark, that's not how it works. The rated power limit, as stated on the label, is DC output power. The AC power (the draw "at the wall") is what exceeds that number. The "efficiency rating" does NOT subtract from the label-rated available DC power.

Edit: Maybe you're thinking of "de-rating curves"? IOW, if a PSU is rated to provide 500W of DC output (say, all on +12V) at 40C, and the temperature inside the PSU is higher than rated, say 50C due to limited cooling/airflow, then the PSU may be de-rated down to, say, 400W of DC output capacity. (Numbers for example only; if someone wants to post a measured de-rating curve for a real PSU, be my guest.)
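Here's the same thing in sketch form (C again; the 90% figure is an assumption, roughly what an 80Plus Gold unit does around 50% load, not a measurement of my EVGA):

/* Efficiency relates AC draw at the wall to DC output; it does NOT
 * shrink the label-rated DC capacity. 90% is an assumed Gold-ish figure. */
#include <stdio.h>

int main(void) {
    double dc_load    = 582.0;   /* watts delivered to the components   */
    double efficiency = 0.90;    /* assumption: ~Gold at half load      */
    printf("DC delivered: %gW (limit = the 650W label)\n", dc_load);
    printf("AC at the wall: ~%gW (what the outlet/UPS sees)\n",
           dc_load / efficiency);
    return 0;
}

So a full 582W DC load would pull roughly 647W from the wall; the 650W on the label constrains the first number, not the second.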
 

Markfw

Moderator Emeritus, Elite Member
VirtualLarry said:
No Mark, that's not how it works. The rated power limit, as stated on the label, is DC output power. …
I just checked, and you are right. However, I still think you are way too close to the PSU limit, and WAY over the cooling requirements. Not to mention the video card's modded BIOS, the motherboard settings that may be giving too much power/voltage, and the reconditioned products are in question too. There are so many variables here as to the real power draw and stability that I still question you blaming it on the CPU. I have 18 boxes (2 of which are Xeon, and 6 EPYC) and I have never had the kind of issues you are talking about.
 

VirtualLarry

No Lifer
It would probably be instructive to find out exactly WHAT software AMD runs on their CPUs to calculate "rated" TDP. Because, indeed, when I'm just mining, or NOT running PrimeGrid, the Package Power of both my 1600 and 3600 hovers roughly around 62-69W. So, say, a time-weighted average of 65W.

Likewise, Intel specs "base clocks" and an "AVX offset" for the multiplier when an application uses AVX/AVX2/AVX-512 opcodes. I *think* I remember reading about one particular Intel CPU on which the default "AVX offset" would, in fact, drop clocks below base clocks. Some server or HEDT many-core CPU.

Maybe AMD needs something similar.

Maybe they just use Cinebench runs to judge TDP ratings, and not a so-called (by unknowing people) "power virus" like PrimeGrid. (Which is NOT a "power virus", just a highly optimized piece of mathematical code, like Linpack; that's a math library, it wasn't invented as a stress test, believe it or not.)
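For the curious, the hot inner loop of that kind of code looks something like this; a minimal sketch of my own in C, NOT PrimeGrid's actual source, just to show why dense AVX2/FMA work pulls more package power than lighter loads:

/* Sketch of a dense AVX2/FMA inner loop -- an illustration, not PrimeGrid.
 * Build: gcc -O2 -mavx2 -mfma fma_loop.c   (run one copy per thread) */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256 a = _mm256_set1_ps(1.000001f);
    __m256 b = _mm256_set1_ps(0.999999f);
    /* Independent accumulators keep every FMA pipe busy instead of
     * stalling on a single dependency chain. */
    __m256 c0 = a, c1 = b, c2 = a, c3 = b;
    for (long i = 0; i < 100000000L; i++) {
        c0 = _mm256_fmadd_ps(a, b, c0);
        c1 = _mm256_fmadd_ps(a, b, c1);
        c2 = _mm256_fmadd_ps(a, b, c2);
        c3 = _mm256_fmadd_ps(a, b, c3);
    }
    /* Consume the results so the compiler can't delete the loop. */
    float out[8];
    _mm256_storeu_ps(out, _mm256_add_ps(_mm256_add_ps(c0, c1),
                                        _mm256_add_ps(c2, c3)));
    printf("%f\n", out[0]);
    return 0;
}

Run one copy per thread and watch the Package Power reading climb; it's wide, back-to-back floating-point work like this, not high clocks alone, that produces the worst case.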
 

VirtualLarry

No Lifer
Not to mention the video card modded bios, the motherboard setting that may be giving too much power/voltage and the reconditioned products are in question too.
Well, actually, it's NOT a "modded" BIOS on the video card; I linked to Gigabyte's site for the GPU, and it's a standard stock BIOS. You don't consider factory-issued mobo BIOS updates to be "modded BIOSes", do you?

As far as the mobo settings go, that may well be true, and I'm going to look up Gigabyte's secret menu to adjust voltages in a second here, to try to tone down Vsoc and then see what the reported Package Power is.

The reconditioned products being in question? NO DOUBT THERE. :)


Ok, give me a moment to read that second linked article. Gigabyte X370 mobos overvolting Ryzen CPUs and burning them up? DO TELL!

From the article:
Gigabyte hasn't issued a response yet but they did update their support page and have removed the affected BIOS from the downloads list.

Scary article, but BIOS F25 is still up on their site, so presumably that bug doesn't apply to me? I almost wish that it did; then I might have found a solution, LOL. As long as I can figure it out before my CPU burns up. (I've been on the same F25 BIOS for like a year now, mining, crunching, nothing burned out, so I'm probably OK.)

Still trying to figure out how to tone down the Vsoc, though.

Hey, a helpful AnandTech article to the rescue.

Edit: Maybe not. That article is for the Gigabyte X370 Gaming 7 ATX; I have the regular Gaming.

There are NO fixed Vcore or Vsoc voltage settings available in my BIOS... AT ALL. Not good, because the way to get around the overvolting mentioned in the article is to set a manual Vcore and Vsoc. All I have is "Dynamic VID Vcore" and "Dynamic VID Vsoc" (aka offset voltage mode). :(
 

Hitman928

Diamond Member
VirtualLarry said:
It would probably be instructive to find out exactly WHAT software AMD runs on their CPUs to calculate "rated" TDP. …

AMD doesn't suffer the same effects on power draw from AVX that Intel does. AMD guarantees their base clocks at TDP under AVX loads. That said, motherboards sometimes like to game the system a bit and can cause CPUs to draw more than they are supposed to. If your CPU isn't sticking to TDP with performance boost turned off, it's probably something your motherboard is doing. If I remember right, some motherboards essentially forced performance boost on if you enabled XMP on the memory. Stuff like that can happen; some motherboard vendors are shady that way.
 

Markfw

Moderator Emeritus, Elite Member
VirtualLarry said:
It would probably be instructive to find out exactly WHAT software AMD runs on their CPUs to calculate "rated" TDP. … Maybe they just use Cinebench runs to judge TDP ratings, and not a so-called "power virus" like PrimeGrid. …
If that's true, I could believe it. AMD does not expect a CPU to run AVX2 software at 100% for 24 hours straight, all day every day. Again, as suggested, if you are going to stress it the way you do, you need more than stock cooling, let alone the horrible cooling you have.
 

Markfw

Moderator Emeritus, Elite Member
VirtualLarry said:
Well, actually, it's NOT a "modded" BIOS on the video card; it's a standard stock BIOS from Gigabyte's site. … Hey, a helpful AnandTech article to the rescue.
Gigabyte altered my RMA'ed motherboard years ago and denied my RMA (it was not like the pictures they sent me when it left my house). I have not touched one since. The motherboard alone could be all of your problems.

Well, that and the horrible cooling solution, including PUTTING A COMPUTER WITH INSUFFICIENT COOLING IN A CUBBY??
 

VirtualLarry

No Lifer
Markfw said:
I have 18 boxes (2 of which are Xeon, and 6 EPYC) and I have never had the kind of issues you are talking about.
Well, as far as EPYC goes, Stefan (I'll get your name right eventually) mentioned that, unlike the consumer Ryzen CPUs, the EPYC CPUs have their PPT set equal to their rated TDP. Which might explain why you see what you're seeing, Mark, and why I'm seeing what I'm seeing (PPT higher than rated TDP).
 

VirtualLarry

No Lifer
Hitman928 said:
If I remember right, some motherboards essentially forced performance boost on if you enabled XMP on the memory. Stuff like that can happen; some motherboard vendors are shady that way.
This Gigabyte X370 Gaming ATX board isn't even giving me manual fixed voltage settings for Vcore and Vsoc. So much for X370 being a "high end" chipset, LOL. Leave it to Gigabyte. :(
 

Steltek

Diamond Member
VirtualLarry said:
I would also like to investigate the Vsoc setting on this Gigabyte X370 Gaming ATX mobo… I think I'm forgetting the "secret Gigabyte hotkey" to get to the M.I.T. voltage pages. Does anyone remember?

@VirtualLarry, I think the BIOS must be in "Classic Mode" for the M.I.T. menu to show up. It sounds like yours is in the simpler "Easy Mode".
To switch, try pressing the F2 key in the BIOS.
 

Markfw

Moderator Emeritus, Elite Member
VirtualLarry said:
Well, as far as EPYC goes, Stefan mentioned that, unlike the consumer Ryzen CPUs, the EPYC CPUs have their PPT set equal to their rated TDP. …
What about my 2970WX, my three 2990WXs, and my three 3900Xs? No problems with those.
 

VirtualLarry

No Lifer
Markfw said:
Well, that and the horrible cooling solution, including PUTTING A COMPUTER WITH INSUFFICIENT COOLING IN A CUBBY??
Well, I interrupted the mining, restarted, and went into the BIOS looking for a fixed Vsoc voltage option, but when I got into the BIOS, there was a message that my BIOS settings had been reset. So I set XMP and exited. Which, of course, removed my setting disabling Core Performance Boost, as well as my cTDP 45 and CPU Temp Limit 80. I rebooted into Windows.

And just now, the PC restarted.

Let's recap: Ryzen R5 1600 CPU, Wraith Spire 95W copper-cored heatsink, properly pasted and torqued down, with a refurb Gigabyte RX 5600XT with BIOS F3 (current newest; added support for 14Gbit/sec memory and boosted TDP to 180W), and a 75W PCI-E slot-powered Gigabyte GTX 1650 4GB D5 ITX card in the lower slot.

Mining on both GPUs, and PrimeGrid on the CPU, for less than 10 minutes.

BOOM! Hard restart.

Not looking good.

I don't even have BOTH RX 5600XT cards in there, just one of them. Sigh.

Edit: Ok, on the 1600 rig, I went back into the BIOS, disabled Core Performance Boost, set cTDP to 45, and set CPU Temp Limit to 80. Rebooted into Windows. Went into Wattman, set the Power Limit on the RX 5600XT to -20%. Exited.

Letting it mine and crunch PrimeGrid.

Edit: The more I think about this, the more it seems possible the mobo IS overvolting the CPU somehow. Maybe if I flash to a newer BIOS? They're up to like F50 or so, I think, maybe higher now. I thought the PinnaclePI 1.0.0.6 AGESA was best for Ryzen 1 CPUs, though.
 