It is marvelous to be able to share information this way.
In these and other forums, I have various questions or concerns about how people report their OC settings and over-volt settings. For example, when reporting "load" temperature, are they sampling temperatures over a reasonable time-period of running "load" programs? Or are they just recording peak temperatures? What load-testing program do they use, and how long are they running it?
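Just to illustrate the difference (with made-up numbers, not actual readings), a peak reading and an average over the whole run can tell very different stories about the same "load" test:

```python
# Toy illustration with hypothetical per-minute readings (degrees C):
# reporting the peak versus the average of the same run gives different "load temps".
samples = [58, 61, 63, 66, 64, 62, 65, 71, 63, 62]

peak = max(samples)
average = sum(samples) / len(samples)

print(f"peak:    {peak} C")      # one brief spike dominates this number
print(f"average: {average:.1f} C")  # the sustained picture over the run
```

With these made-up numbers, the peak is 7 or 8 degrees above the average -- which is why I'd like to know which one people are reporting.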
In another topic thread I posted recently, I explained that setting my VCORE above 1.475V on my Striker motherboard seems to be a threshold for higher temperatures. I may also have said that, depending on BIOS revision, earlier BIOSes failed to raise the real VCORE above roughly 1.42 to 1.44V no matter how far above that 1.475 value you set it, though this was corrected in later BIOS versions. I also noted the usual discrepancy between the "set" VCORE and monitored values -- on my board, a discrepancy of about -0.02V. Then there is "voltage droop," which I measure as the difference between reported idle and load values: about 0.03V.
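Putting my own numbers together -- the voltages are my measurements from above; the arithmetic is just a sketch of how the set value, the monitoring offset, and the droop stack up:

```python
# Sketch of how set vcore, the monitoring offset, and vdroop combine on my board.
# The three constants are the figures I measured; nothing else is implied.
SET_VCORE = 1.4625      # value selected in BIOS
MONITOR_OFFSET = 0.02   # "set" vs. monitored discrepancy (about -0.02V)
VDROOP = 0.03           # idle-to-load droop (about 0.03V)

idle_vcore = SET_VCORE - MONITOR_OFFSET   # what the monitor shows at idle
load_vcore = idle_vcore - VDROOP          # what it sags to under load

print(f"idle: {idle_vcore:.4f} V")
print(f"load: {load_vcore:.4f} V")
```

So a 1.4625V setting ends up delivering something like 1.41V under load, at least by one monitor's account.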
Today, I discovered some valuable information at Tom's Hardware forums, stating that the E6600 and other similar processors have three TEMPERATURE sensors: one "TCase" sensor located between the cores, and a "TJunction" sensor placed at the hottest location of each core:
Core-2-Duo Temperature Guide
This guide seems to confirm my own "seat-of-the-pants" conclusion that voltages above 1.475V make air-cooling noticeably more difficult. I had tried the voltage they mention, and the load temperature readings only began to fall back once I came back down to the 1.475 setting.
But I also discovered something else. I currently have two programs that report VCORE in "real-time" -- PC Probe and CPU-Z. nVidia Monitor only reports the "set" values selected in BIOS, and apparently, Everest reports information coded into the CPU.
I notice that CPU-Z reports 1.3925V (+ or -), while ASUS PC Probe reports 1.44V, for a "set-voltage" value of 1.4625V. So it remains an open question which program reports the "true" real-time voltage (in this case, under idle conditions).
Other information gleaned from the THG guides suggests that an idle-to-load temperature swing of up to 25 C-degrees is "normal" for these processors -- under ordinary cooling situations. I am pursuing my belief that lowering the temperatures of the SB, NB, DRAM and even the graphics card might reduce this somewhat. While my particular motherboard has features that reduce overall motherboard temperatures or keep them stable, the motherboard sensor is not located near the NB, SB or memory. One would suspect that more attention to cooling these items would reduce CPU temperature highs, since they are all connected through the NB, and those connections would logically conduct heat.
My understanding is this: Stress to the processor derives from three things -- excessive temperatures, operating voltage, and frequencies higher than what the processor was spec'd for. Both of the latter two factors increase temperature stress, but even at reduced temperatures those factors have impacts of their own.
This is where the discrepant reporting of voltages is of great interest to me. Even if I were to believe the PC Probe readings, my voltage settings result in an actual (real-time) load value that is approximately 0.07V above the printed Intel spec -- which doesn't seem like much. But going by CPU-Z's reported "real-time" voltage, it is only about 0.02 or 0.03V above the printed spec.
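A quick back-of-the-envelope check, using only the idle readings I quoted earlier, shows that the two "margins over spec" differ by exactly the disagreement between the two monitors:

```python
# The two monitors disagree about the same physical voltage at the same moment.
# That gap alone accounts for the difference in the "over spec" margins.
PC_PROBE = 1.44    # idle vcore per ASUS PC Probe
CPU_Z = 1.3925     # idle vcore per CPU-Z (give or take)

gap = PC_PROBE - CPU_Z
print(f"disagreement between monitors: {gap:.4f} V")

# If the PC Probe figure puts me 0.07V over spec, the CPU-Z figure
# puts me only (0.07 - gap) over the same spec.
cpuz_margin = 0.07 - gap
print(f"implied CPU-Z margin over spec: {cpuz_margin:.4f} V")
```

That lands right in the "about 0.02 or 0.03V" range -- so the two claims are consistent with each other; the only question is which monitor to trust.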
Without further information, I have to assume that certain steppings and certain "batches" of E6600 processors (or of any other model) will run at lower temperatures, higher frequencies and lower voltages, while other steppings and "batches" will run at temperatures, frequencies and voltages that result in less-than-"record" over-clock speeds for that processor.
There is one thing I've yet to try with my E6600/Striker Extreme setup. I started adjusting voltage upward from the PC Probe and nVidia Monitor "Auto" value -- a real-time "set-value" of 1.44V. Posts on other forums suggest that you can actually set the voltage much closer to the Intel printed spec and obtain higher over-clock settings with lower temperatures.
But you can see, from the disparate reports using different monitoring software programs, that questions remain.