6700K & XMP Voltage Query


wingman04

Senior member
May 12, 2016
393
12
51
I can't find points of disagreement with Wingman.

There's just no spec anymore for "maximum safe VCORE." If I'm not mistaken, the last published spec was for a 32nm Intel Nehalem processor -- around 1.38V. What Wingman says about Intel's own "calculation" should also be seen in the context of their cost accounting. You would think -- if there were still a published spec (and there no longer is) -- it would be set to guarantee a minimum of warranty returns over the 3-year period just from people running the processor at stock speeds. On the other hand, we all agree that with no spec, the motherboard makers ship boards with "auto" settings that can "turbo-idle" to 1.39V, as the OP's does, shown in his screenies. It's the same thing I discovered about my ASUS board the first time I booted the system.

Here's a graph similar to the work IDontCare had published here at the forums (IDC is an illustrious member, who -- I think -- worked for TI). At first, I thought I recognized IDC's work, but the comma-decimal is a European convention, I think:

voltage_scaling.jpg


Consider that I have a "binned" chip, and there was no such distinction in the testing behind the graphed results. This is the same thing I'd noticed about IDC's work on the Sandy Bridge and possibly Ivy (in his de-lidding thread). Typing in these voltage milestones -- either as a "Manual mode" fixed voltage, or as "Adaptive mode's" "Voltage for extra turbo" on top of a low or 0.000V Offset -- can likely give you panic-free stress-testing, because the voltage is close enough that the stressing program will catch the error and simply stop without a BSOD. Of course, what you type in isn't "what you get," so after BIOS "save and reboot" I go straight back into BIOS to double-check what the VCORE monitor shows under "Manual Mode," and I adjust until the reported VCORE matches the schedule.

Cheap trick? The only other option is to edge up the voltage from some seeming stability-minimum and suffer through BSODs. I've managed to tune in 4.6 and 4.7 GHz with only three or four BSODs during the first few hours after initial boot-up.

Wingman notes there's "no such thing" as "safe overclocking voltage," but . . . "we been aroun', you know?!" through some several generations of processors. I've personally never damaged a processor, because I make my own rules based on the incomplete information I have, and I'd rather build a "great computer" than beat an "LN2 competition." My oldest Sandy Bridge has been running 24/7/365 @ either 4.6 or 4.7 -- variously. That's 5 years, running OC'd at those speeds.

The die-shrinks mean less surface area to transfer heat to the processor-cap and cooling device. Voltage defaults have declined with wattage defaults, but this polymer TIM thing -- it's a setback if you OC.

But there's nothing worrisome about the OP's temperatures under stress at stock speeds. With the Noctua single-tower cooler, he's fine to run the memory at XMP spec and "sync all cores." Even that is a slight overclock. Let me tell you about the WWII sergeant who trained his men in short-wave radio. He put some 4"-wide paint-filters on the radio adjustment knobs and painted nipples on them.

"Be patient. Be very, very gentle. . . . " [Where are the icons and smilies with this new interface? I needed to make a big toothy grin here.]

CORRECTION: Just to avoid misleading anyone, you have to recheck the reported voltage IN WINDOWS once you try typing in a fixed setting in BIOS, and watch for indications at turbo speed. You can get by with lower LLC -- maybe level 3 works for 4.5 GHz. Ultimately, level 5 (on my chip anyway) seems to match the reported value under stress-load (little or no vDROOP), and is close to the VID needed to raise the voltage to that fixed setting.
Well, you have some good points and a nice chart. However, every CPU die produced has defects in different areas -- places in the transistors and traces that don't affect the die running within Intel specification -- so every CPU produced uses a different voltage+amperage within an acceptable range.

If we think of electricity as water flowing through a pipe, it can help us understand amps and volts. Amps would be the volume of water flowing through the pipe; the water pressure would be the voltage. This is an oversimplified picture of a CPU trace (wire), which acts like a light bulb filament (wire). If the filament is larger in diameter, it needs more amps and less voltage for the same amount of light; a filament smaller in diameter uses fewer amps and more voltage.

Knowing all that, Intel uses VID (voltage identification) to set the core voltage of each individual CPU in the package. To recap: if the CPU core has a good electrical flow of amps, it will have a lower VID voltage for the same GHz; a CPU core with a lower electrical flow of amps will have a higher VID voltage for the same GHz.
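Since power is voltage times current (P = V × I), you can see the trade-off with a quick back-of-the-envelope sketch. This is just a toy illustration of the idea -- the wattage and VID numbers below are invented for the example, not Intel data:

```python
# Toy sketch of the P = V * I trade-off behind per-chip VIDs.
# All numbers here are made up for illustration, not Intel specs.

PACKAGE_POWER_W = 91.0  # assume both dies draw the same power at the same GHz

dies = [
    ("good-flow die (low VID)", 1.20),
    ("weaker-flow die (high VID)", 1.30),
]

for label, vid_volts in dies:
    amps = PACKAGE_POWER_W / vid_volts  # I = P / V
    print(f"{label}: {vid_volts:.2f} V -> {amps:.1f} A")

# good-flow die (low VID): 1.20 V -> 75.8 A
# weaker-flow die (high VID): 1.30 V -> 70.0 A
```

Same clock, same power: the die running at the lower VID pushes more amps, which is the point about voltage+amperage varying from chip to chip.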

Let me explain the reason why Intel does not show the voltage range to the public anymore: it was misused for overclocking, so only the board partners have access to that information now. Intel's raising and lowering of the voltage is done with the many variables I have explained in this thread, to achieve the correct VID voltage to the VRM. Running out of specification voids the warranty; changing any voltage, and also the multiplier, voids the warranty. If you purchase the Performance Tuning Protection Plan (https://click.intel.com/tuningplan/), it will allow one CPU RMA.

So to sum it all up: not every CPU uses the same voltage+amps when overclocking to the same GHz, and they don't overclock the same, because of die defects.

You have been a great help, really -- you saved me a lot of time.

Well, about that power-saving thing: mine is averaging 85-90 watts at idle (with the monitor off, that is). I would love to lower it even more, but it's all right. I just found those options in BIOS -- Performance, Balanced and Power Saving mode, given by ASUS obviously. I don't know if they really work; it's on the default Balanced one anyway. My friend told me about something else, not these profiles -- whatever, not a big deal at all, and thanks once again.
What are you using to measure the wattage? I connected the PC directly to the Kill A Watt meter, then used an extension cord to connect to the outlet, and it shows about 50 watts on my Skylake at idle.

Are the power-saving options on Automatic? If they are, like I was saying, they are already working. Turn them off and see how much power you're saving.
 

ithehappy

Senior member
Oct 13, 2013
540
4
81
What are you using to measure the wattage? I connected the PC directly to the Kill A Watt meter, then used an extension cord to connect to the outlet, and it shows about 50 watts on my Skylake at idle.

Are the power-saving options on Automatic? If they are, like I was saying, they are already working. Turn them off and see how much power you're saving.
Same as you, a Kill A Watt meter or whatever those units are called. 50 watts is really low, incredibly low; I have never seen a Skylake system consuming only 50 watts. The lowest I've seen was my friend's 6600K at 82W or something, on stock settings, at idle.

Power-saving options where -- BIOS or Windows? I have not touched any power-saving options.

In any case, as I am crazy, I kept doing these tests. I lowered the Core voltage to 1.18V, just to see whether it makes any difference or not, and for max temps it did! Almost by 5C!

Auto Voltage with XMP stock Intel:

http://i.imgur.com/8GN4WWg.png

Manual 1.18V with XMP stock Intel:

http://i.imgur.com/WHcIP4T.png

Prime95 26.6, SmallFFT
 

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,206
126
50 watts is really low, incredibly low; I have never seen a Skylake system consuming only 50 watts

With or without a dGPU? If you add a dGPU, then the wattage at idle, and especially at load, will be higher, and quite a bit higher, respectively.

Without a dGPU, a Skylake system can idle even lower than that. Even overclocked to 4.34, I think my G3900 with a 250X video card idles at around 56-60W, and under
CPU load (not GPU load), hits around 80-90W.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,323
1,886
126
Well, you have some good points and a nice chart. However, every CPU die produced has defects in different areas -- places in the transistors and traces that don't affect the die running within Intel specification -- so every CPU produced uses a different voltage+amperage within an acceptable range.

Yup. But those coordinates on the graph likely have a probability distribution around them. Just starting, clueless, from the lowest stable voltage eventually suggests as much.


. . . . Only the board partners have access to that information now. Intel's raising and lowering of the voltage is done with the many variables I have explained in this thread, to achieve the correct VID voltage to the VRM. . . . .

So to sum it all up: not every CPU uses the same voltage+amps when overclocking to the same GHz, and they don't overclock the same, because of die defects.

And we can only infer what the board partners know from their "Auto" configurations -- or so it would seem logical.
How reliable that inference is, is still a guess. So once again, there are maybe two views of what's "safe" in the OC'ing community.

I think once you've found a stable voltage that ensures enough power that the computer isn't losing time to error correction in stress-tests, you could re-enable spread spectrum, EIST and the C-States/reporting.

You either choose to take the risks, and incline toward calculated risks, or you don't. Even the warranty-period can figure into those guesses.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,323
1,886
126
With or without a dGPU? If you add a dGPU, then the wattage at idle, and especially at load, will be higher, and quite a bit higher, respectively.

Without a dGPU, a Skylake system can idle even lower than that. Even overclocked to 4.34, I think my G3900 with a 250X video card idles at around 56-60W, and under
CPU load (not GPU load), hits around 80-90W.

Whatever happened to the "multi" feature with the bundled Lucid-Virtu software? I think it's still there. If I remember, making the iGPU the default at boot-time can use that feature with the resources of a dGPU, and there are considerable power-savings from doing it that way. But any of us would be more focused on performance than on power-savings.

But then, the iGPU is also contributing to temperature on the die.

As a rule, I make all the workstations in the house sleep and hibernate. Only my personal systems are overclocked. If one of them remains on 24/7, it's because I watch TV while I'm sleeping.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,323
1,886
126
Same as you, a Kill A Watt meter or whatever those units are called. 50 watts is really low, incredibly low; I have never seen a Skylake system consuming only 50 watts. The lowest I've seen was my friend's 6600K at 82W or something, on stock settings, at idle.

Power-saving options where -- BIOS or Windows? I have not touched any power-saving options.

In any case, as I am crazy, I kept doing these tests. I lowered the Core voltage to 1.18V, just to see whether it makes any difference or not, and for max temps it did! Almost by 5C!

Auto Voltage with XMP stock Intel:

http://i.imgur.com/8GN4WWg.png

Manual 1.18V with XMP stock Intel:

http://i.imgur.com/WHcIP4T.png

Prime95 26.6, SmallFFT

Wingman seemed a little unsettled about the graph, as though I were presenting it as a set of hard and fast rules.

You could either raise the voltage to 1.2, as would be suggested by the 4.3GHz speed, or run enough stress tests for long enough that you're satisfied with 1.18. But 1.168 at that speed seems to be a familiar number.

If I'm remembering this thread correctly, you wanted to run at base 4GHz, Turbo 4.2 and XMP, and your temperatures were in the low/mid 70s C for Prime95 or similar to start.

It's really your choice to take comfort in the lower temperatures, but there is definitely a range of stability there beginning at 1.168V.

I neither encourage nor discourage any explorations you pursue at higher speeds. On my part, I set up BIOS profiles beginning with a "stock 4200" default, a "stock4200XMP" which used a voltage similar to yours, and then profiles for 4.3, 4.4, 4.5 etc.

Just as a general "announcement," I estimated the LLC setting that my board was providing in "auto" for the default setting, and it was somewhere between level 5 and level 6. I chose to reset it to level 3, but it probably involved upward adjustments of voltage in that range between 1.168 and 1.20V.

Point is, though, at stock speeds, you're just as well to leave it on "auto" and trim the voltage as you desire. Since I had plans to overclock higher than 4.5, I wanted to find a lower LLC and then adjust it upward as needed.

There had been an Anand discussion around the time of Conroe and Kentsfield on the harmonic voltage spikes (drops and spikes) that occur between CPU loading and idling. LLC simply pushes the spike's peak higher as you raise it -- possibly past the VID -- and eventually the overclocker gets closer to the maximum VID, which runs up as high as 1.5V. (Especially if such a person is risk-prone.) At stock speed settings, even with "sync all cores," it doesn't amount to any potential problem, so leave it on Auto, I say.

Like I said, though, your default settings offer up temperatures that are acceptable under that level of stress. Either Wingman or I would be more concerned about temperature at higher clocks. You should have no such worries, really.
 

ithehappy

Senior member
Oct 13, 2013
540
4
81
You could either raise the voltage to 1.2, as would be suggested by the 4.3GHz speed, or run enough stress tests for long enough that you're satisfied with 1.18. But 1.168 at that speed seems to be a familiar number.

If I'm remembering this thread correctly, you wanted to run at base 4GHz, Turbo 4.2 and XMP, and your temperatures were in the low/mid 70s C for Prime95 or similar to start.

Yes, you are right. I want the stock clock rate on the CPU, no overclocking at all for the time being, just XMP. The only difference between my results in the OP and my last results in Post 27 is that for the OP I used the app HeavyLoad to stress-test the CPU, while for the last test I used Prime95 26.6. There is a good 10C difference between the two apps: if Prime95 shows 75C as the max temp for a test, then HeavyLoad will be 10C lower, ~65C. So basically now I can run the XMP settings and my temps don't go above 60-61C with HeavyLoad -- very close to full stock settings with XMP disabled. That's the point of this topic and what I wanted to achieve in the first place: no extra temperature for just running XMP, or only to a very low extent anyway. I didn't realise when I started the topic that the CPU also gets overclocked a bit with XMP.

It's really your choice to take comfort in the lower temperatures, but there is definitely a range of stability there beginning at 1.168V.
OK, thanks. I guess I will try inputting that number then, 1.168V is much lower than 1.180V :)


Point is, though, at stock speeds, you're just as well to leave it on "auto" and trim the voltage as you desire. Since I had plans to overclock higher than 4.5, I wanted to find a lower LLC and then adjust it upward as needed.

Now I don't really get this. You mean I can leave the settings on Auto and lower/adjust the core voltage, instead of choosing Manual?


Like I said, though, your default settings offer up temperatures that are acceptable under that level of stress. Either Wingman or I would be more concerned about temperature at higher clocks. You should have no such worries, really.
Thank you mate.
 

ithehappy

Senior member
Oct 13, 2013
540
4
81
With or without a dGPU? If you add a dGPU, then the wattage at idle, and especially at load, will be higher, and quite a bit higher, respectively.

Without a dGPU, a Skylake system can idle even lower than that. Even overclocked to 4.34, I think my G3900 with a 250X video card idles at around 56-60W, and under
CPU load (not GPU load), hits around 80-90W.
Now I have no idea what a dGPU is, LOL. I use my 970 with my system obviously, as my signature suggests.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,323
1,886
126
Now I have no idea what a dGPU is, LOL. I use my 970 with my system obviously, as my signature suggests.

Like Larry said -- your dGPU is your 970. I started OC'ing this rig of my own using only the iGPU. My GTX 1070 dGPU doesn't make much of a difference in CPU temperatures that I notice -- only in the total system power consumption, as Larry also said.

I don't think you're wrong in trimming the voltage as we discussed. It's just not a major issue for your system and its thermal performance. It becomes an issue once you start tweaking to find a 4.3 or 4.4 speed or even higher, especially if you're fishing for something with a lower-than-auto LLC level.

Lemme check my notes a minute . . . Yeah -- I needed to bump up the voltage from the lowest point because I dropped Load Line Calibration from "Auto" to level 3. If you leave LLC on "Auto," you should be able to go lower than the 1.210V setting that gave me 20 passes running LinX and 4 to 5 hours with OCCT:CPU. I'm just saying that at that 4.0-4.2 speed, all other things ceteris paribus, you could leave LLC on Auto and use that voltage even if you could trim it lower.

Also, something Wingman said about wattage, amps and voltage, and his discussion about cutting it too close or "finding a bare minimum": I think there's a point where the voltage may be just a little insufficient -- where the machine is straining to run a stress test and you see temperatures a tad higher than they would be at a slightly HIGHER voltage. It's something I'd always noticed while trying to add a margin of safety and find a point where all the GFLOPs in a certain LinX test varied by only a fraction.

You certainly won't damage anything by experimenting with it as long as you don't overvolt -- as in "above 1.40V." If you avoid BSODs, you're eliminating only a risk -- one which, for me, has never materialized as hard-disk corruption. Truth is, including one elusive intermittent problem I had on the 2600K sig rig, I can't count the number of times I got BSODs through its earliest history, but it never corrupted the disk.

But either the "Auto" settings (with excessive voltage) or the lower values you're testing now, with a 5 to 10mV margin of comfort, should be just fine.

You'll know with certainty if you can do one marathon test with any of the ball-buster programs we've discussed. Personally, I think that if it simply passes OCCT:CPU for six or seven hours, that should be enough. You decide . . .
 

wingman04

Senior member
May 12, 2016
393
12
51
Same as you, a Kill A Watt meter or whatever those units are called. 50 watts is really low, incredibly low; I have never seen a Skylake system consuming only 50 watts. The lowest I've seen was my friend's 6600K at 82W or something, on stock settings, at idle.

Power-saving options where -- BIOS or Windows? I have not touched any power-saving options.

In any case, as I am crazy, I kept doing these tests. I lowered the Core voltage to 1.18V, just to see whether it makes any difference or not, and for max temps it did! Almost by 5C!

Auto Voltage with XMP stock Intel:

http://i.imgur.com/8GN4WWg.png

Manual 1.18V with XMP stock Intel:

http://i.imgur.com/WHcIP4T.png

Prime95 26.6, SmallFFT
This is what my Kill A Watt meter looks like. Do you have this one? Also, do you set it to watts when you test?
http://www.p3international.com/products/p4400.html

I have a GTX 970 and it uses less power at idle than the GTX 570 -- 13.3 watts less.

Done with i5 2500K + GTX 570, SSD 840 EVO, PC only:

1.8 W power off
2.5 W sleep mode
80 W CPU idle @ 1.6 GHz
167 W CPU LinX @ 4.0 GHz
333 W CPU Crysis 3 @ 42 FPS

Done with i5 2500K + GTX 970, SSD 850 EVO, PC only:

1.8 W power off
2.5 W sleep mode
66.7 W CPU idle @ 1.6 GHz
84.7 W CPU idle @ 4.0 GHz, C1E/C3/C6 and EIST off
154 W CPU LinX @ 4.0 GHz
291 W CPU Crysis 3 @ 60 FPS, V-sync on

Done with i5 6600K + GTX 970, SSD 850 EVO, PC only:

55.7 W CPU idle @ 800 MHz

I live in the USA.
55 W × 8 hours idle per day = 440 watt-hours per day; × 365 = 160,600 watt-hours per year; ÷ 1,000 = 160.6 kWh per year; × $0.09 per kWh = $14.45 for a complete year.
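In case anyone wants to plug in their own numbers, here's that arithmetic as a quick sketch -- the wattage, hours and electric rate are just the figures quoted above, so substitute your own:

```python
# Annual cost of idle power draw, using the figures from the post above.
idle_watts = 55.0      # measured idle draw at the wall
hours_per_day = 8.0    # idle hours per day
usd_per_kwh = 0.09     # local electric rate

kwh_per_year = idle_watts * hours_per_day * 365 / 1000.0
cost_per_year = kwh_per_year * usd_per_kwh
print(f"{kwh_per_year:.1f} kWh/year -> ${cost_per_year:.2f}/year")
# prints: 160.6 kWh/year -> $14.45/year
```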
 

BonzaiDuck

Lifer
Jun 30, 2004
16,323
1,886
126
. . . . Probably no less for setting "CPU Current Capability" above 100%.

Several of us are still running i5/i7-2x00K systems, and among "us," some are easier with both the volts and the current capability. Never heard anyone complain about "drifting overclocks" or failure.

I just re-ran 20 passes of LinX on the 2700K. No change. Rock solid.

I had forebodings about overclocking now that the die-shrink has come down to 14nm. Sooner or later, I guess, either I -- or the community -- will know anything that needs to be known.

As I said, I thought I noticed that when I was edging up VCORE to get consistent GFLOPs under affinitized LinX. It seemed like suddenly, I found a "cool spot." After that, raising voltage would cause the thermometer to edge upward.

Whether or not one keeps the voltage within some rational limit, we're still running the speed out of spec in an overclock. I just hope I have as few problems with the Skylake as I've had with the Sandy systems. I don't think either of the processors in my signature have spent more than 3 hours of their lifespan running at stock speeds. Well -- 3 days would cover any inaccuracy about that.

This is what my Kill A Watt meter looks like. Do you have this one? Also, do you set it to watts when you test?
http://www.p3international.com/products/p4400.html

I have a GTX 970 and it uses less power at idle than the GTX 570 -- 13.3 watts less.

Done with i5 2500K + GTX 570, SSD 840 EVO, PC only:

1.8 W power off
2.5 W sleep mode
80 W CPU idle @ 1.6 GHz
167 W CPU LinX @ 4.0 GHz
333 W CPU Crysis 3 @ 42 FPS

Done with i5 2500K + GTX 970, SSD 850 EVO, PC only:

1.8 W power off
2.5 W sleep mode
66.7 W CPU idle @ 1.6 GHz
84.7 W CPU idle @ 4.0 GHz, C1E/C3/C6 and EIST off
154 W CPU LinX @ 4.0 GHz
291 W CPU Crysis 3 @ 60 FPS, V-sync on

Done with i5 6600K + GTX 970, SSD 850 EVO, PC only:

55.7 W CPU idle @ 800 MHz

I live in the USA.
55 W × 8 hours idle per day = 440 watt-hours per day; × 365 = 160,600 watt-hours per year; ÷ 1,000 = 160.6 kWh per year; × $0.09 per kWh = $14.45 for a complete year.

If the CPU voltage is decreased, the amperage increases at the same MHz or GHz frequency. With the voltage decrease, the heat in the transistors and traces will increase; that is true with all electronics -- even a simple LED light bulb needs to be a dimmable bulb to last.

Over the years I have seen some CPUs fail from undervolting, with the amperage increasing at the same clock.
 

wingman04

Senior member
May 12, 2016
393
12
51
. . . . Probably no less for setting "CPU Current Capability" above 100%.

Several of us are still running i5/i7-2x00K systems, and among "us," some are easier with both the volts and the current capability. Never heard anyone complain about "drifting overclocks" or failure.

I just re-ran 20 passes of LinX on the 2700K. No change. Rock solid.

I had forebodings about overclocking now that the die-shrink has come down to 14nm. Sooner or later, I guess, either I -- or the community -- will know anything that needs to be known.

As I said, I thought I noticed that when I was edging up VCORE to get consistent GFLOPs under affinitized LinX. It seemed like suddenly, I found a "cool spot." After that, raising voltage would cause the thermometer to edge upward.

Whether or not one keeps the voltage within some rational limit, we're still running the speed out of spec in an overclock. I just hope I have as few problems with the Skylake as I've had with the Sandy systems. I don't think either of the processors in my signature have spent more than 3 hours of their lifespan running at stock speeds. Well -- 3 days would cover any inaccuracy about that.
I volunteer in the Intel forum, and I see overclocked CPUs fail all the time; here is one: https://communities.intel.com/thread/105976
 

BonzaiDuck

Lifer
Jun 30, 2004
16,323
1,886
126
I volunteer in the Intel forum, and I see overclocked CPUs fail all the time; here is one: https://communities.intel.com/thread/105976

That's interesting. I wouldn't think that 200 MHz at that voltage would do any damage at all. It's an ITX motherboard, which probably has no bearing on the cause, but you never know. For voltage, if he knows what he's talking about, it's only a tenth of a volt higher than stock operation.

I'm not asking for any, but you would agree that a statistical sample or time series on these "failure events" would say more.

Maybe he had a "large cooler" on it like mine, put it in the SUV and went over a speed bump going 40mph. Speculation only yields a clearer count of holes in the sieve of possibilities.
 

wingman04

Senior member
May 12, 2016
393
12
51
That's interesting. I wouldn't think that 200 MHz at that voltage would do any damage at all. It's an ITX motherboard, which probably has no bearing on the cause, but you never know. For voltage, if he knows what he's talking about, it's only a tenth of a volt higher than stock operation.

I'm not asking for any, but you would agree that a statistical sample or time series on these "failure events" would say more.

Maybe he had a "large cooler" on it like mine, put it in the SUV and went over a speed bump going 40mph. Speculation only yields a clearer count of holes in the sieve of possibilities.
All you have to do is call Intel tech support/RMA services and you will get an earful about K-processor failures. Phone number: 1-916-377-7000
 

BonzaiDuck

Lifer
Jun 30, 2004
16,323
1,886
126
It makes you wonder why they produce "K" processors in the first place. It could make you wonder about QC at Intel, but that's not likely a newly-emergent factor.

Other "stuff" can happen, though. There are still a lot of folks who don't use UPS backup power, living where there's a high incidence of electrical storms. Who knows what risks the anecdotal user subjected that 6600K to?

So I'm either "taking a walk on the wild side," or I can let everyone else in the forum membership with 6700K's be guinea-pigs on this issue. And save my overclock profiles in BIOS for the interim. . . .

Personally, I cannot honestly avail myself of the "protection plan," since I got the processor from Silicon Lottery, and I lapped the IHS to bare copper! Old habits die hard. I suppose if I bork the chip, I'll just order a new one and send it to Silly Lotts to have it relidded. Even that would kill my warranty with Intel.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,206
126
I've pretty much always overclocked, and run my Intel CPUs at around 80C or so, and never had any failures. Though a friend that I sold a 3.6GHz Q6600 to had some issues after setting up the rig to run in an enclosed cabinet -- it kind of cooked itself, I think. I downclocked it to 3.3GHz or so for him, and it was stable again, but I'm going to be way more conservative in my OCs from now on if I'm OCing for someone else. (Preferably only for friends and family, so I can check up on the overclock stability every few months for them.)
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Wow. All of this talk is completely circular and pointless. Your CPU temperature is 100% fine. You can run your CPU at 75C. Reducing that temperature will gain you nothing, and it will make your processor run slower. There is no problem here of any kind. Just run it at XMP settings and be done with it.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,323
1,886
126
Wow. All of this talk is completely circular and pointless. Your CPU temperature is 100% fine. You can run your CPU at 75C. Reducing that temperature will gain you nothing, and it will make your processor run slower. There is no problem here of any kind. Just run it at XMP settings and be done with it.

I know I said it a couple of times, and so did others. The remaining issue was whether or not the OP could (unnecessarily) lower his voltage, get stability, and reduce temperatures some more (which don't really need further reducing, given his type of stress, the Noctua cooler, and the chip at stock speed settings).

Now, this is only second-hand information, but someone spoke of talking to ASUS tech support, who candidly said their boards on "Auto" run the chips at a higher voltage than required, so there's a 100% chance the board will work with the chip at stock settings.

VirtualLarry said:
I've pretty much always overclocked, and run my Intel CPUs at around 80C or so, and never had any failures. Though a friend that I sold a 3.6GHz Q6600 to had some issues after setting up the rig to run in an enclosed cabinet -- it kind of cooked itself, I think. I downclocked it to 3.3GHz or so for him, and it was stable again, but I'm going to be way more conservative in my OCs from now on if I'm OCing for someone else. (Preferably only for friends and family, so I can check up on the overclock stability every few months for them.)

I'm not electronically-educated -- I was a software and database guy. The indications I've picked up -- not exactly Fareed Zakaria and "Reliable Sources" -- suggest that Intel has made the Skylake "more resilient" to voltage, but that it's also more thermally sensitive because of the die shrink.

And back to Headfoot: Yes -- no point in reducing temperatures below some point. That's why I personally look at my cooling choices as calculated ones. Why invest $500 or $1000 to keep the CPU at 40C @ 4.8 GHz, when it runs fine at a maximum 68C and you have to overvolt the processor to get to 4.8 anyway?

Anyone can necro the thread, but I thought it was an interesting discussion. We certainly answered the OP's question.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,323
1,886
126
You still have to recognize the caveats that (a) voltage regulation has been moved back onto the motherboard components, (b) the advice is coming from Intel -- not "the board partner," and (c) ASUS themselves admitted to more-than-ample voltage defaults.

OF COURSE there are risks to running the processor outside its speed spec, and there are risks to using the motherboard to modify voltage settings.

Leaving amperage out of the equation for a moment: if the processor is "more resilient" to electromigration effects, one would also suspect that processor is more sensitive to thermal degradation than the earlier 32nm and 22nm generations.

Maybe we should just change the forum name to "CPUs." Can't say. Do I know that my warranty has been voided? Surely I do. That was my plan before I bought the processor . . .
 

BonzaiDuck

Lifer
Jun 30, 2004
16,323
1,886
126
No-no. You maybe misunderstood, and somebody can tell me that I misunderstood, but we'll sort it out. The OP noticed an increase in temperatures when he enabled XMP -- not an increase in the processor's voltage. When RAM is set to Auto, it will default to 2133 MHz and 1.20V vDIMM. When the user chooses XMP in BIOS, the RAM vDIMM voltage is reset to 1.35V -- the upper boundary of its spec.

I've done enough explorations since I put my rig together in September. I didn't need to change the Offset: Auto defaults it to 0.000V. I only needed to increase VCORE to find my "pretty-good" stable clockspeed. I didn't need to adjust VCCIO or VCCSA to run my DDR4-3200 kit at their spec settings. Yet, changing from 2133 Mhz to 3200Mhz using XMP meant that the processor automatically raised VCCIO by a hair -- at most 10 to 20 mV -- and I don't think VCCSA changes by itself on Auto.