Tricky Temperatures

Beraid

Member
Dec 2, 2010
29
0
66
I'm trying to find out what temp my CPU is running at, and this is proving to be a little confusing. The CPU is an AMD Athlon X4 750K and the motherboard is a Gigabyte GA-F2A88X-D3H. I'm using a Hyper 212+ and the room temp is around 60°F. Now, before we begin: the BIOS seems to give me the correct temperature, or at least that's what my gut tells me. That said, here is what various programs under Windows 7 report the temps as, idle and stressed:

HWMonitor: 42°C, 62°C (labeled as 'Package')
Speccy: 42°C, 62°C
CoreTemp: 0°C, 0°C
OpenHardwareMonitor: 0.0°C, 17.6°C (labeled as Cores #1-#4)
A note about OHM: if I double-click the Cores text I get a pop-up window titled 'Parameters', which shows Offset [°C] | Default (checkbox) | Value 0.

Not really sure what to make of that. So what do you think the temp of my CPU actually is? Honestly, I'm only worried about the coming warmer weather and whether I ever bother getting into overclocking.
 

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
I'd personally go with the 42°C from HWMonitor or Speccy. CoreTemp does not appear to be communicating with your motherboard correctly (might just be a settings issue).

The issue is that on most motherboards, the "CPU temp" is read by a sensor under the CPU (in the socket), which is generally the most reliable reading since it sits right next to the chip.

The package temp / second temps are probably from the temperature sensors inside the CPU, on the actual silicon itself. While those are closer to the heat-generating source, they can be quite inaccurate. I remember seeing a quad-core CPU report a 10+ degree spread across its four readings (one per core), with no way to tell which was accurate since all of them differed.

An overclocker might have a better sense of when to trust the internal temperatures; in general they can run higher than the motherboard's reading before causing issues.
 

zir_blazer

Golden Member
Jun 6, 2013
1,261
574
136
The package temp / second temps are probably from the temperature sensors inside the CPU, on the actual silicon itself. While those are closer to the heat-generating source, they can be quite inaccurate. I remember seeing a quad-core CPU report a 10+ degree spread across its four readings (one per core), with no way to tell which was accurate since all of them differed.
Actually, that spread may very well be real. At least on most modern Intel quad-core CPUs, each Core gets its own diode. Depending on the Core's own load and what sits next to it (another hot Core or dark silicon), each one will report a different value.
CoreTemp supposedly reads temperature values straight from a CPU register that exists explicitly for that purpose. I don't think the Motherboard is involved at all.
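
As a rough illustration of what I mean (Linux-only sketch; needs root and the msr kernel module, and while the register addresses are Intel's documented DTS/TjMax MSRs, this is just a sketch of the mechanism, not CoreTemp's actual code):

Code:
# Rough sketch of reading Intel's per-core Digital Thermal Sensor the way a
# tool like CoreTemp might: straight from the CPU's MSRs, no motherboard
# involved. Linux only; needs root and 'modprobe msr'. Illustrative only.
import struct

IA32_THERM_STATUS = 0x19C        # per-core DTS readout lives in bits 22:16
MSR_TEMPERATURE_TARGET = 0x1A2   # TjMax lives in bits 23:16

def read_msr(cpu: int, reg: int) -> int:
    # Each logical CPU exposes its MSRs at /dev/cpu/<n>/msr.
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(reg)
        return struct.unpack("<Q", f.read(8))[0]

def core_temp_c(cpu: int = 0) -> int:
    tjmax = (read_msr(cpu, MSR_TEMPERATURE_TARGET) >> 16) & 0xFF
    delta = (read_msr(cpu, IA32_THERM_STATUS) >> 16) & 0x7F
    return tjmax - delta  # the sensor reports degrees *below* TjMax

if __name__ == "__main__":
    for cpu in range(4):  # one diode per core on a quad-core part
        print(f"Core {cpu}: {core_temp_c(cpu)} °C")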
 

BonzaiDuck

Lifer
Jun 30, 2004
16,632
2,027
126
Actually, that spread may very well be real. At least on most modern Intel quad-core CPUs, each Core gets its own diode. Depending on the Core's own load and what sits next to it (another hot Core or dark silicon), each one will report a different value.
CoreTemp supposedly reads temperature values straight from a CPU register that exists explicitly for that purpose. I don't think the Motherboard is involved at all.

I don't want to derail Beraid's problem-solving and our efforts to help. But you have raised a different perspective, specifically with a Sandy Bridge example, on a point where I'd held some settled assumptions since the release of Conroe and later CPUs.

As I understood it, there was supposed to be a margin of error for the core sensors. In other words, whether one core runs hotter or cooler, each sensor was supposed to have an expected error margin of +/-5°C, and I think someone had even put it at +/-6°C.

That being said, the explanation suggested by the diagram you posted doesn't fit my own Sandy Bridge readings. My hot core is the one to the left -- "Core #1" or "Core 2" -- depending on whether you number them 0 to 3 or 1 to 4. So even if there is a "dark silicon" explanation for these differences, the situation has "multiple causation."

All along, based on my reading going back a few years, I had assumed there is statistical error in the sensor readings. For this reason, I also came to conclude that a four-core average of temperatures at a point in time is a more sensible snapshot of CPU temperature, just as one might average a single core (or those same four-core averages) sampled every few seconds over an hour or two of stress to get an overall picture of "load temperature." Something like the sketch below is what I have in mind.
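
Concretely, a rough sketch of that averaging (this assumes a Linux box with the psutil package installed; the "coretemp" key is the label used by Intel's sensor driver and will differ on other setups):

Code:
# Rough sketch of the averaging described above: sample every core sensor
# every few seconds under load, average across cores, then average over time.
# Assumes Linux with psutil; "coretemp" is Intel's driver label (assumption).
import time
import psutil

def sample_core_temps():
    readings = psutil.sensors_temperatures().get("coretemp", [])
    return [r.current for r in readings if r.label.startswith("Core")]

samples = []
for _ in range(24):                 # ~2 minutes of sampling at 5 s intervals
    temps = sample_core_temps()
    if temps:
        samples.append(sum(temps) / len(temps))  # the four-core average
    time.sleep(5)

if samples:
    avg = sum(samples) / len(samples)
    print(f"Load temperature (time-averaged across cores): {avg:.1f} °C")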

I think it's important to resolve this issue, because people assume that the hottest temperature reported by a certain core reflects that core's "true temperature" or thermal state. On the other hand, any measuring device has a range of acceptable error and limited accuracy. Further, one has to ask what the equilibrium temperatures would be after so much time under load, since heat is being exchanged all over the die and then spread in all directions by the IHS.

Also worth pointing out -- Intel itself has made official pronouncements about the sensors, noting that they weren't "meant to be accurate" at idle temperatures. Inconclusive as that may seem, it would also imply some range of error at load temperatures -- a range that would be "more accurate" than at idle.

I urge you and especially others who may have more and better information to chime in about this. [And Virge, you've "been around" for a while, so you could add more even though we may be complicating things for the OP.]

ADDENDUM: Looking again at the picture/diagram and the CoreTemp readings, I think the actual pattern of values -- not just the inconsistency with my own "hot core" -- supports my perspective here. My LOWEST temperatures come from cores the picture's CoreTemp screenie shows as "high." In other words, Cores 0, 2 and 3 on my system are all within 1 to 3°C of each other; I have only one "hot" core, which exceeds the rest by as much as 10°C and the four-core average by about 5°C. That is consistent with the explanation that there is simply measurement error in any given core's sensor on any given CPU. In the picture, by contrast, the "second highest" and "third highest" cores sit much closer to the hot core, whereas on my system the three cooler cores cluster together and the hot core shows the widest deviation from any of the rest or -- for that matter -- from the average.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Your CPU clearly isn't at 0°C, so any programs reporting that are obviously wrong.

As for the other programs, keep in mind that CPUs report a delta-T (how many degrees remain before the thermal limit), not an absolute temperature; the software then subtracts that delta from the limit, so, e.g., a delta of 58 against a 100°C limit is displayed as 42°C. If you doubt the absolute temperatures, look at the delta instead.
 

Beraid

Member
Dec 2, 2010
29
0
66
I'd personally go with the 42°C from HWMonitor or Speccy.

I have a hard time believing it, though, simply because, as I said, the room is ~60°F, and while the case certainly doesn't have the best airflow, it's not a sealed box either. 60°F is roughly 15.5°C, so 42°C would mean idling some 26°C above ambient under a Hyper 212+, while the 25°C the BIOS was telling me is only about 10°C above ambient. The 42°C figure just doesn't seem to make sense to me.
 

bononos

Diamond Member
Aug 21, 2011
3,938
190
106
The short answer is that AMD doesn't report actual temps, just its own reading on an arbitrary scale. The motherboard may have its own thermistor that does report temps, but it's inaccurate because it really measures the air around the CPU.

Intel has reported actual temps since the P4 or Conroe (I think). While that may not be very accurate either, it's miles better than AMD, which needs tweaking and guesswork and seems to show wide variations on the same build.
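
For the curious, that arbitrary-scale reading (Tctl) comes out of a northbridge config register. Here is a rough sketch of pulling it on Linux; the register offset and bit layout follow AMD's published BKDG for these families, but the PCI device path is the usual northbridge location and still an assumption, and it needs root:

Code:
# Sketch: read AMD's "Reported Temperature Control" value (Tctl) on family
# 10h-15h parts from the northbridge's PCI config space (Linux, root).
# D18F3xA4: CurTmp sits in bits 31:21, in steps of 0.125 -- on AMD's own
# control scale, not calibrated degrees C, which is why tools disagree.
NB_FUNC3 = "/sys/bus/pci/devices/0000:00:18.3/config"  # assumed NB path

with open(NB_FUNC3, "rb") as f:
    f.seek(0xA4)                            # Reported Temperature Control reg
    reg = int.from_bytes(f.read(4), "little")

cur_tmp = (reg >> 21) & 0x7FF               # 11-bit CurTmp field
print(f"Tctl: {cur_tmp * 0.125:.1f} (arbitrary scale, not real degrees C)")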
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
My understanding is that Trinity APUs have totally borked temp sensors (a design defect), so don't even bother. (Certainly limits OC potential too, when you can't properly monitor temps.)
 

BonzaiDuck

Lifer
Jun 30, 2004
16,632
2,027
126
My understanding is that Trinity APUs have totally borked temp sensors (a design defect), so don't even bother. (Certainly limits OC potential too, when you can't properly monitor temps.)

Nobody addressed my dissertation on the [Intel] thermal sensor error factor, but I was probably wrong to deluge the thread with it since the OP has an AMD system. Even so, observations about built-in error would seem generally applicable across the duopoly of CPU options.

I just think putting that question to rest is important, because measurement is integral to any overclocking exercise -- even if overclocking and desktops may be headed toward the fate of the steamboat (my favorite example).