Actually, that may very well be real. At least on most modern Intel quad-core CPUs, each core gets its own thermal diode. Depending on the core's own load and what sits next to it (another hot core or dark silicon), each one will report a different value.
CoreTemp supposedly reads temperature values straight from a CPU register that exists explicitly for that purpose. I don't think the motherboard is involved at all.
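For what it's worth, here is a minimal sketch of reading that per-core sensor yourself, assuming a Linux box with the msr kernel module loaded and root access (register numbers are from Intel's documentation; CoreTemp on Windows presumably does something equivalent, but I'm not claiming this is its exact method):

```python
# Sketch: read the Digital Thermal Sensor per logical CPU via /dev/cpu/N/msr.
# IA32_THERM_STATUS (0x19C) bits 22:16 hold a readout relative to TjMax;
# MSR_TEMPERATURE_TARGET (0x1A2) bits 23:16 hold TjMax itself.
# Assumes: Linux, 'modprobe msr' done, run as root. Each hyperthread of a
# core will report the same value, since the sensor is per physical core.
import os
import struct

IA32_THERM_STATUS = 0x19C
MSR_TEMPERATURE_TARGET = 0x1A2

def read_msr(cpu: int, reg: int) -> int:
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
    try:
        return struct.unpack("<Q", os.pread(fd, 8, reg))[0]
    finally:
        os.close(fd)

def core_temp_c(cpu: int) -> int:
    tjmax = (read_msr(cpu, MSR_TEMPERATURE_TARGET) >> 16) & 0xFF
    readout = (read_msr(cpu, IA32_THERM_STATUS) >> 16) & 0x7F  # degrees below TjMax
    return tjmax - readout

if __name__ == "__main__":
    for cpu in range(os.cpu_count() or 1):
        print(f"CPU {cpu}: {core_temp_c(cpu)} C")
```

Note that the sensor reports a distance below TjMax rather than an absolute temperature, which is part of why accuracy falls off at idle.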
I don't want to derail Beraid's problem-solving and our efforts to help. But you have raised a different perspective, specifically with a Sandy Bridge example, on something I'd held settled assumptions about since the release of Conroe and later CPUs.
As I understood it, there was supposed to be a margin of error for the core sensors. In other words, whether a core runs hotter or cooler, the sensor itself was supposed to have an expected error margin of +/- 5C, and I think someone even put it at +/- 6C.
That being said, the explanation suggested by the diagram you posted doesn't account for my own Sandy Bridge readings. My hot core is the one to the left -- "Core #1" or Core 2, depending on whether you number them 0 to 3 or 1 to 4. So even if "dark silicon" explains some of these differences, the situation has multiple causes.
All along, based on my reading going back a few years, I had assumed there is statistical error in the sensors' accuracy. For that reason, I also came to conclude that a four-core average at a point in time is a more sensible snapshot of CPU temperature, just as one might average either a single core or those same four-core averages, sampled every few seconds over an hour or two of stress, to get an overall picture of "load temperature" (a rough sketch of that approach is below).
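Something like this is what I have in mind, again as a sketch under assumptions rather than a finished tool (it uses the Linux coretemp hwmon files; the two-hour duration and five-second interval are just illustrative values):

```python
# Sketch of the averaging approach: sample all core sensors every few seconds
# during a stress run, take the cross-core mean of each sample, then average
# those means over the whole run to get a "load temperature".
# Assumes: Linux 'coretemp' hwmon driver exposing temp*_input (millidegrees C)
# and temp*_label files; labels beginning with "Core" are the per-core sensors.
import glob
import time

def read_core_temps_c() -> list[float]:
    """Return one reading per core sensor, in degrees C."""
    temps = []
    for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
        try:
            with open(f"{hwmon}/name") as f:
                if f.read().strip() != "coretemp":
                    continue
        except OSError:
            continue
        for label_path in sorted(glob.glob(f"{hwmon}/temp*_label")):
            with open(label_path) as f:
                if not f.read().startswith("Core"):
                    continue  # skip "Package id 0" and similar
            with open(label_path.replace("_label", "_input")) as f:
                temps.append(int(f.read().strip()) / 1000.0)
    return temps

def load_temperature(duration_s: int = 2 * 3600, interval_s: int = 5) -> float:
    """Average the per-sample, cross-core mean over the whole stress run."""
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        cores = read_core_temps_c()
        if cores:
            samples.append(sum(cores) / len(cores))  # four-core average right now
        time.sleep(interval_s)
    return sum(samples) / len(samples) if samples else float("nan")
```

The point of averaging across cores first is that random sensor error should partially cancel out, while a single "hot core" reading carries that sensor's full error with it.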
I think it's important to resolve this issue, because people assume that because their hottest temperature is reported by a certain core, it reflects that core's "true temperature" or thermal state. On the other hand, any measuring device has a range of acceptable error and limited accuracy. Further, one has to ask what the equilibrium temperatures would be after so much time under load, since heat is being exchanged all over the die and then spread in all directions by the IHS.
Also worth pointing out -- Intel itself has made official statements about the sensors, noting that they weren't "meant to be accurate" at idle temperatures. Inconclusive as that may seem, it would also imply some range of error at load temperatures -- a range of error that would nevertheless be "more accurate" than the idle readings.
I urge you and especially others who may have more and better information to chime in about this. [And Virge, you've "been around" for a while, so you could add more even though we may be complicating things for the OP.]
ADDENDUM: Looking again at the picture/diagram and CoreTemp readings, I think the actual pattern of values -- not just the inconsistency with my own "hot core" -- supports my perspective here. My LOWEST temperatures come from cores that the picture's CoreTemp screenie shows as "high." In other words, Core 0, 2 & 3 on my system are all within 1 to 3C of each other; I have only one "hot" core, which exceeds the rest by as much as 10C and exceeds the average of the four cores by about 5C. That is consistent with the explanation that there is simply measurement error in any given core sensor on any given CPU. But the cores the picture shows as "second highest" and "third highest" sit much closer to its hot core, while my three cooler cores are closest to each other, with my hot core showing the widest deviation from any of the rest or -- for that matter -- from the average.