
Water Cooling for 5820K

I used OCCT. IBT isn't really a good test: it unrealistically heats the CPU and draws far more power than any real load will, needlessly stressing the components. I stopped using it after my CPU passed a 24-hour stress test with IBT while OCCT reported errors within minutes. Other voltages are useful if you want to maximize memory or cache clocks, but that yields very little benefit; I set my cache clock at 3.5GHz without changing any voltages.
We-ull, Lepton! You're a scholar, a prince and a gentleman.

Thanks, I'm flattered.
 
I used OCCT. IBT isn't really a good test: it unrealistically heats the CPU and draws far more power than any real load will, needlessly stressing the components. I stopped using it after my CPU passed a 24-hour stress test with IBT while OCCT reported errors within minutes. Other voltages are useful if you want to maximize memory or cache clocks, but that yields very little benefit; I set my cache clock at 3.5GHz without changing any voltages.

Well, like I said: CPU:OCCT to find instability quickly. LinX to get some samples of 10 or 15 GFLOPS readings to see if voltage is sufficient. IBT for just a few runs (a half hour would do it) to gauge cooling improvements.

They all test different things in different ways. I'd had it pass CPU:OCCT for 4 hours, and Prime95 large-FFT would crash sooner than that -- an indicator that I should tweak the IMC voltage or something related. Those two test results were preceded by a Prime95 small-FFT run which would just "keep on ticking and take a licking" -- maybe 5 hours or longer.
 
After my very busy day of errands, I was sitting down to a "Law and Order" double-play with a deep plot.

It dawned on me, thinking about the last couple exchanges I'd had here with Lepton.

Everybody used Intel Burn Test for the Sandy chips, and probably the Ivy Bridgers. If there had been some problem with it running on the IB's, I missed seeing it, but -- complicated by the IHS and TIM -- the topic may have come up. I just don't know.

I decided to turn this around and think about it a different way. Some events in the purely fictional plot of Dan Brown's "Digital Fortress" novel seem to parallel this thermal "problem" with running some programs on the Haswells.

So I want to state clearly and emphatically that IBT may have been fine if only for thermal comparisons on the Sandy Bridge. The Digital Fortress viewpoint suggests that it's possible to write damaging code for a processor, or that some code could be damaging.

But it also seems like you could use any number of such stresser programs for the purpose of thermal comparisons. You're comparing temperatures: you're not trying to necessarily make the comparisons with the very highest temperatures that are possible.

If anyone wants to expand on these thoughts, I'd be interested to see the discussion.
 
I'll give some real-world examples at 1.3V and 4.3GHz; 4.4GHz unfortunately crashes after a couple of hours with OCCT but nowhere else. (Is 1.35V safe? I don't care much about longevity. I bought such a good mobo because I always wanted to upgrade to Broadwell-E; hopefully an 8-core version won't be relegated to the X models only. What I'd like to see is an 8-core K model and a 10-core X model.) OCCT 78C, Linpack 85C, and Dragon Age 3 56C.

Those programs are power viruses, Linpack being the biggest culprit; with the system fully loaded my power meter shows over 700W. That is DA: Inquisition plus Linpack on 4 cores. I found a spot where my FPS hovers around 45 frames per second, so it taxes the cards to the max -- well, at least the first one; the second isn't quite maxed out but almost -- and the FPS doesn't drop at all when I dedicate 4 threads to Linpack. The game still has 2 physical cores and 6 logical cores to itself, although Linpack maxes out its cores, so I'm not sure if HT helps any.

What that means is that 4 fast cores are still enough for all games, at least hyper-threaded ones, so the games have 8 threads at their disposal. I think the time has come when buying the i7 finally makes sense for games over the i5, especially considering that the cheapest 4C/8T Xeon is just a bit more expensive than the consumer i5s. I'm going to test with other games and other real-world programs, but I doubt I'll exceed 60C with games and probably not much more with actual programs. So I really wouldn't worry about any of those stress programs hitting the temperature limit, because they heat the CPU about 20C higher than anything else does.

Also, my first Titan hits 95C unless I set the fan to 75%, so I'm thinking really hard about water-cooling those. The counterargument is that I would get close to zero resale value for them. Are there any universal blocks for just the GPUs that I could reuse on future cards? If only SLI worked better and I could rely on support for tri-SLI, I would do it. Three Titans would be very good for a long time thanks to their huge frame-buffers.
ps. Can someone provide links to water blocks for the Titans?
As for damaging code: if your CPU or your mobo's VRMs aren't very robust, such code can easily damage your computer, because it creates an unrealistic load much higher than anything else, especially with AVX instructions, not to mention AVX2.
ps. Will a Titan Black work with my regular Titans? Because the only Titan I can find for sale is the Black version.
 
If temps are fine, then 1.3 - 1.35v is fine

Cooling Requirements
Depending on your ambient temperatures, full-load voltages over 1.25Vcore fall into water-cooling territory (dual-radiator). With triple radiator water-cooling solutions, using up to 1.35Vcore is possible. For air cooling the value is much lower, limiting total overclock, so plan your cooling investment appropriately.

http://rog.asus.com/365052014/overclocking/rog-overclocking-guide-core-for-5960x-5930k-5820k/
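To make those thresholds concrete, here's a tiny rule-of-thumb lookup. The 1.25V and 1.35V cutoffs are copied from the guide quoted above; ambient temperature shifts them, so treat it as a sketch, not gospel:

```python
# Rough cooling suggestion by full-load Vcore, per the ROG guide quoted above.
# Only the 1.25V/1.35V cutoffs come from the guide; the wording is a rule of thumb.
def cooling_suggestion(full_load_vcore: float) -> str:
    if full_load_vcore > 1.35:
        return "beyond the guide's range -- back off, or look at more exotic cooling"
    if full_load_vcore > 1.25:
        return "water-cooling territory, dual- to triple-radiator"
    return "dual-radiator water is comfortable; air only if you're well below this"

for vcore in (1.20, 1.30, 1.35):
    print(f"{vcore:.2f} V -> {cooling_suggestion(vcore)}")
```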
 

Thanks, 1.35V might get my 4.4GHz stable, and it won't overheat at that, for sure.
ps. I managed to get 800W from the wall with DA3 and 4 threads of Linpack. There's room for one more card and for overclocking them. Four cards would be fine without OC; four plus everything OCed would need a 1.5kW PSU. That's more than my heater!!! 4.3GHz or 4.4GHz isn't much of a difference; I think OCing the uncore could get me as much performance as that additional 100MHz.
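For anyone wondering how I'm thinking about PSU headroom from wall readings, here's a rough sketch; the efficiency figure and the PSU ratings below are assumed example numbers, not specs of my actual unit:

```python
# Wall draw is measured before PSU losses, so the DC load the PSU delivers is lower.
# 0.90 is an assumed efficiency (ballpark for a decent unit at high load), not measured.
WALL_DRAW_W = 800
ASSUMED_EFFICIENCY = 0.90

dc_load_w = WALL_DRAW_W * ASSUMED_EFFICIENCY
for psu_rating_w in (1200, 1500):   # example PSU sizes, not my hardware
    headroom_w = psu_rating_w - dc_load_w
    print(f"{psu_rating_w} W PSU: ~{dc_load_w:.0f} W DC load, "
          f"~{headroom_w:.0f} W headroom ({dc_load_w / psu_rating_w:.0%} of rating)")
```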
 
Thanks, 1.35V might get my 4.4GHz stable, and it won't overheat at that, for sure.
ps. I managed to get 800W from the wall with DA3 and 4 threads of Linpack. There's room for one more card and for overclocking them. Four cards would be fine without OC; four plus everything OCed would need a 1.5kW PSU. That's more than my heater!!! 4.3GHz or 4.4GHz isn't much of a difference; I think OCing the uncore could get me as much performance as that additional 100MHz.

At this point, and after seeing video of KarLiTos' rig (!!!), I can only stand back and watch.

Personally, I'd impose a limit on myself at around 1.3V on those cores, but IDontCare seemed to think the processors could take more voltage, or more than the limits that Intel USED TO publish -- ending with Nehalem.

And since I'm not an electronics-tech, I've only been informed that the traditional limit for a transistor is ~1.5V. So I'm beginning to wonder if there isn't more risk for simply "burning out the light-bulb," or from voltage/current leakage between the circuits in the processor. If the latter, then the lithography might provide a guideline for extrapolating spec-limits from older processors. Otherwise -- it could be higher.

I just don't know . . .

My own budgetary limits -- pretty much self-imposed -- don't apply here -- you guys are trying to trump the Maximum PC "Dream Machine of the Year." I wouldn't even bother with 2x SLI anymore. With the rate of technical change, it would leave me with the dire problem of selling used graphics cards in a feverish annual or biannual attempt to recoup the depreciated value of the investment.

But there's a lot to watch and learn from -- both ways I would hope. . .

Geez!! 800W??!! Or MORE?!! This is definitely Dream-Machine territory!! At LEAST it won't burn that much wattage in "business" applications, idle or sleep!!
 
Achieving complete stability is harder than I thought; 4.4GHz at 1.35V crashes very sporadically, so I'm trying 35x125 at 1.35V (4375MHz), memory at the default voltage clocked at 2.5GHz via XMP, and NB frequency at 3625MHz. If that's stable, then I'm going to try to max out the NB frequency -- I know it can reach over 4GHz, maybe even a 1:1 ratio with the core clock like SB -- and the memory should also be able to clock higher. Earlier I wasn't using the XMP profile and the CL timing was set to 17 cycles, while now it is at 15 cycles. What's strange is that any adjustment of BCLK with the 100MHz strap resulted in a system that was unable to boot at all. I didn't try changing the frequency with the 125MHz strap.
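In case the strap math isn't obvious: the core clock is just BCLK times the multiplier, which is why 4375MHz is the step between 4.3 and 4.4 on the 125MHz strap. A quick sketch of the steps each strap offers:

```python
# Core clock = BCLK (strap) x multiplier. List the steps each strap offers
# around the 4.3-4.4GHz range discussed above.
def clock_mhz(bclk_mhz: float, multiplier: int) -> float:
    # Effective clock is simply the strap times the ratio.
    return bclk_mhz * multiplier

for bclk in (100.0, 125.0):
    steps = [clock_mhz(bclk, m) for m in range(30, 46)]
    in_range = [f"{c:.0f}" for c in steps if 4200 <= c <= 4500]
    print(f"{bclk:.0f} MHz strap -> {', '.join(in_range)} MHz")
# 100 MHz strap -> 4200, 4300, 4400, 4500 MHz
# 125 MHz strap -> 4250, 4375, 4500 MHz
```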
BTW. I found a water block for the Titan that's not prohibitively expensive but there's only one available. Is it good? I want to buy it and another one but unfortunately the next won't be as cheap. Unless that water block is a complete disaster...

http://allegro.pl/xspc-razor-gtx-titan-780-780ti-blok-wodny-i4955114572.html

Other blocks are a third more expensive. Maybe I'll make another thread for Titan blocks... I think another thread is kind of redundant, but if you think making one is a good idea I'll do it, or I'll just buy that block along with a more expensive one.
Geez!! 800W??!! Or MORE?!! This is definitely Dream-Machine territory!! At LEAST it won't burn that much wattage in "business" applications, idle or sleep!!
Considering that is the complete power draw before any PSU losses, it's a bit more than half of what my PSU is capable of. I have an 800W Corsair lying around, and even that would be fine for my current system. I'm keeping it because if I ever run out of juice I'll just use two instead of buying an extremely expensive 1.5kW unit and dealing with the hassle of selling my current PSUs.
ps. Any instructions on how to turn on the PSU without actually plugging it into a mobo? I had instructions in a PDF on how to do it, but I somehow lost it, so if anyone has something like that please provide a link. I'd like to have that information ready.
 
ps. Any instructions on how to turn on the PSU without actually plugging it into a mobo? I had instructions in a PDF on how to do it, but I somehow lost it, so if anyone has something like that please provide a link. I'd like to have that information ready.

I don't know the particulars, but it's called the "green wire" trick and if I remember correctly -- involves connecting two pinouts on the PSU 20/24-pin plug. Some folks were simply using a paperclip. At least -- I think . . .

Do enough web-searches, and you'll find it.
 
I don't know the particulars, but it's called the "green wire" trick and if I remember correctly -- involves connecting two pinouts on the PSU 20/24-pin plug. Some folks were simply using a paperclip. At least -- I think . . .

Do enough web-searches, and you'll find it.

Yes, you connect that pin to any ground on the same cable (the black wires), and it will turn the PSU on (if it's switched on). To avoid any paper clips falling out, I just bought an inexpensive adapter. It's pretty much half an extension adapter with a wire that connects the two pins. Simple, but given it has the plug, it stays locked in.
 
I knew it was about connecting two pins, I just didn't know which ones -- but I do now. Thanks. It was totally beside the point of the thread and I don't need it right now, but it'll come in handy in the future, and why make an unnecessary thread when I can get my answer here? Back to the point: everything seems fine at 4375MHz, so everything points to that extra 25MHz being the straw that broke the camel's back.

ps. I wish I could count on NV not turning their back on customers who paid an unprecedented amount of money for their non-professional cards, and on them still working on drivers for Kepler cards -- especially making sure SLI works just as well as on Maxwell cards, which really aren't much of a leap, just an incremental improvement over GK110. The Titan certainly has what it takes to last a bit longer, at least until their 16FF/14nm flagship -- maybe not one but 2 or 3 of them. That 6GB of RAM is even more than their newest flagship has. I'm looking for a good offer on another Titan.
 
Any COM (black wire) will work with the PS_ON wire.
[Attached image: PSpoweron.jpg]


Are you trying to OC the memory at the same time?


I suggest staying on the 100MHz strap for daily OC.
 
I suggest staying on the 100MHz strap for daily OC.

Why? With 100MHz strap 4.3GHz is all I can get. What's wrong with the 125MHz strap?

ps. How is it possible that OCCT and Linpack both draw about the same amount of power (385W in my case, from the wall) but Linpack heats the CPU 6-8C more by the hottest-core measurement? Does OCCT draw more power from the system memory and other components, whereas Linpack only draws power from the CPU?
UPDATE: WOW, I managed to draw even more power from the system without using the graphics cards. I ran 6 threads of IBT and 6 threads of OCCT at the same time, and lo and behold, 430W(!!!) from the wall. The temp also increased by 4C compared to using IBT alone.
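For a rough sense of what that 6-8C gap could mean in watts, here's a back-of-the-envelope sketch; the thermal resistance is just an assumed round figure for a water loop, not something I've measured:

```python
# If total wall draw is the same but the CPU runs hotter, more of that power is being
# dissipated inside the CPU package. Temperature rise ~= package power x thermal resistance.
ASSUMED_LOOP_R_C_PER_W = 0.10   # assumed degC per watt for the CPU loop (round number)

for delta_t_c in (6, 8):
    extra_package_w = delta_t_c / ASSUMED_LOOP_R_C_PER_W
    print(f"{delta_t_c} C hotter at equal wall draw ~= {extra_package_w:.0f} W more in the "
          f"CPU package; the difference presumably goes to RAM/VRM/etc. under OCCT")
```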
 
Are you using EIST and C1E ? In other words, do you want your CPU to downclock and downvolt during idle state?
 
Are you using EIST and C1E ? In other words, do you want your CPU to downclock and downvolt during idle state?

Yes.

ps. I didn't know that you could make your CPU draw more power than Linpack using 1 thread per physical core. I know that using 2 threads per hyper-threaded core -- one for the real core and one for the virtual core -- results in both lower power consumption and lower performance. And here I managed to draw 50W more than with Linpack alone. How much of that increase is from components other than the CPU I don't know, but certainly not all of it, because the CPU temperature is also higher.
 
I guess the mark of good empirical science is an almost anal-retentive attention to detail. So it's interesting -- the different power-draw that occurs under different tests.

IN FACT! I'm a bit amazed at Lepton's daring! Intel Burn Test? Simultaneously with CPU:OCCT?!?! I never thought to try such a thing!
 
Lepton, can you tell me how your CPU behaves at idle with the 125MHz strap? Is it downvolting and downclocking?
 
Lepton, can you tell me how your CPU behaves at idle with the 125MHz strap? Is it downvolting and downclocking?

It certainly is downclocking, but I don't know if it is downvolting, because my RAID 0 with 2x3TB drives broke down and I lost over 4TB of data along with the ASUS monitoring software. I'm downloading it now to make sure.
 
It certainly is downclocking, but I don't know if it is downvolting, because my RAID 0 with 2x3TB drives broke down and I lost over 4TB of data along with the ASUS monitoring software. I'm downloading it now to make sure.

I'm trying to remember how this works, but I think EIST automatically down-volts just for the change in VID for the reduced clock. C1E also adjusts voltage. Is this right?

There are other methods of overclocking I haven't tried, so I'm not sure of the implications. And of course, I have a "more primitive" i7 generation.
 
I'm trying to remember how this works, but I think EIST automatically down-volts just for the change in VID for the reduced clock. C1E also adjusts voltage. Is this right?

There are other methods of overclocking I haven't tried, so I'm not sure of the implications. And of course, I have a "more primitive" i7 generation.

I don't remember; I'll check that tomorrow. I'm going to sleep, it's 5:35AM over here. As for a more primitive generation, there were no changes to desktop EIST or other power-saving techniques between Sandy Bridge CPUs and HW-E CPUs, so it should all work the same except for the separate power and clock plane of the uncore. Only mobile Haswell got new sleep states, and in general HW laptops are more power optimized.
 
Isn't ethylene glycol damaging to PC water-pumps? I know you like to joke . . .

Nope...

It's also a corrosion inhibitor.

The premixes most companies use are either ethylene glycol or propylene glycol.

I personally prefer the second over the first because it's less toxic.


I personally don't like premixes, though.
I'll use straight distilled water, unless I intend on mixing metals outside copper/brass.

However, I think the OP is worried about his warranty, which I have yet to see anyone file a claim on unless they were using a nickel-plated block.

You're not mixing any metals on a MO-RA + EK waterblock.
 
Nope...

It's also a corrosion inhibitor.

The premixes most companies use are either ethylene glycol or propylene glycol.

I personally prefer the second over the first because it's less toxic.


I personally don't like premixes, though.
I'll use straight distilled water, unless I intend on mixing metals outside copper/brass.

However, I think the OP is worried about his warranty, which I have yet to see anyone file a claim on unless they were using a nickel-plated block.

You're not mixing any metals on a MO-RA + EK waterblock.

We-ull, sah! That's enlightening. I had long ago thought that anti-freeze would be a good way to go, then somebody told me it was a bad idea.

I can't even remember if there was an advantage in the heat-capacity of the glycol, but I wondered about the possibility with custom-water-cooling.

Someone mentioned that certain volatile and flammable liquids -- maybe isopropyl or methanol -- would be great cooling agents, but worse than using hydrogen gas in the Hindenburg. But then how would those things affect pump parts and operation? That's definitely a detour that should have a barricade and a warning sign.
 