Data center cooling

imported_corsec

Junior Member
Dec 21, 2007
So, I hope this is the appropriate forum for this type of discussion. First post here.

A friend and I work at a large datacenter, and we got to talking about ways to increase the efficiency of cooling datacenters. One thing that came to mind was geothermal cooling, though not just a regular heat pump. What I was thinking of was basically digging a large conduit in the ground and using it to level out the temperature of the air running through it to roughly the average annual temperature (typically around 55F - 70F depending on geographic location). The problem is that to even get a number for what would be needed, I would have to calculate the length of the conduit based on thermal conductivity and other factors, which can all be influenced by things like soil moisture content and soil type.

Basically, I found this PDF file, which discusses it and I *think* gives the math for calculating it, but I don't know math like that. I was wondering if someone more familiar with this kind of math would be able to break it down differently for me. For example, I know some C programming, and if it is something that could be written programmatically then I might be able to understand it.

Anyway, thanks for any assistance. I just don't know enough about math to even really know where to begin with that.


Logan
 

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
www.markbetz.net
I can't help with the math, but I am familiar with systems similar to what you describe, in which a pool is built underground and the reservoir is used as the basis for a heat pump cycle.
 

Nathelion

Senior member
Jan 30, 2006
The math really does not look complicated - you'll need to be familiar with elementary thermodynamics to be able to extract meaning from the jargon, though.
This could be modeled fairly easily in a program like Mathcad, for example.
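
Since you mentioned C: below is a rough sketch of the simplest version of that calculation. It treats the buried pipe wall as if it sat at the soil temperature (so it ignores the soil conductivity and moisture effects the PDF gets into), uses the Dittus-Boelter correlation for the air-side heat transfer, and every input is a number I made up - treat the output as ballpark only.

/*
 * Rough earth-tube sketch: how long does a buried pipe need to be to pull
 * intake air down toward the soil temperature?  This treats the pipe wall
 * as if it were held at the soil temperature (i.e. it ignores the soil's
 * own thermal resistance, which moisture content and soil type would add),
 * and uses the Dittus-Boelter correlation for the air-side film coefficient.
 * Every input below is an illustrative guess, not a design value.
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;

    /* assumed inputs - change to suit */
    double T_in     = 95.0;    /* intake air temperature, deg F            */
    double T_soil   = 60.0;    /* deep soil temperature, deg F             */
    double T_target = 70.0;    /* desired air temperature at outlet, deg F */
    double D        = 0.5;     /* pipe inner diameter, m                   */
    double Q_cfm    = 2000.0;  /* airflow, cubic feet per minute           */

    /* approximate air properties near room temperature */
    double rho = 1.2;          /* density, kg/m^3       */
    double mu  = 1.8e-5;       /* viscosity, Pa*s       */
    double cp  = 1005.0;       /* specific heat, J/kg-K */
    double k   = 0.026;        /* conductivity, W/m-K   */
    double Pr  = 0.71;         /* Prandtl number        */

    /* flow conditions inside the pipe */
    double Q    = Q_cfm * 0.000471947;   /* CFM -> m^3/s       */
    double area = PI * D * D / 4.0;      /* cross section, m^2 */
    double v    = Q / area;              /* mean velocity, m/s */
    double mdot = rho * Q;               /* mass flow, kg/s    */
    double Re   = rho * v * D / mu;      /* Reynolds number    */

    /* Dittus-Boelter, air being cooled: Nu = 0.023 Re^0.8 Pr^0.3 */
    double Nu = 0.023 * pow(Re, 0.8) * pow(Pr, 0.3);
    double h  = Nu * k / D;              /* film coefficient, W/m^2-K */

    /* For a pipe whose wall sits at T_soil:
     *   T_out = T_soil + (T_in - T_soil) * exp(-h * PI * D * L / (mdot * cp))
     * Solve that for the length L that reaches T_target.
     * Only the temperature ratio matters, so deg F works here.            */
    double ratio = (T_target - T_soil) / (T_in - T_soil);
    double L = -(mdot * cp) / (h * PI * D) * log(ratio);

    printf("Re = %.0f, h = %.1f W/m^2-K\n", Re, h);
    printf("Pipe length needed: %.1f m (about %.0f ft)\n", L, L * 3.281);
    return 0;
}

Compile with something like gcc earthtube.c -lm. With those made-up inputs it lands around 190 ft of half-meter pipe; accounting for the soil's own resistance (and the ground warming up around the pipe over time) pushes that number up, which is what the more involved math in the PDF is for.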
 

spikespiegal

Golden Member
Oct 10, 2005
I like the 'pool sized' reservoir idea. Mix that water with salt and you have a helluva heat pipe.

I have two answers for the problem at hand, but I need to warn the poster that they are a bit off topic from his actual question. Basically, I work in a lot of different-sized data centers as a contractor, and it's an increasing pet peeve of mine how inefficiently their energy is managed. Even up here in Michigan I see corporate data center air conditioners working in mid-January to cool data centers, which I think is absurd. Also, data losses caused by SANs overheating and faulting their RAID controllers are far more widespread than EMC and their clones will admit to. You are darn correct to be thinking about the problem, because data center managers rarely consider a disaster plan that just involves the air conditioner going out, when it's one of the most damaging failures.

The classic scenario is:

- Power goes out
- Enterprise generator kicks in while the UPS blocks temporarily do their jobs
- Everything looks fine
- Nasty phone call to local power company
- Everything looks fine, but the data center is getting warmer
- Another nasty call to local power company
- Facilities engineer reassures CIO that the generator can run for days
- Data center reaching Amazon rain forest temps. Why?
- Facilities engineer suddenly remembers that the data center air conditioner *is not* on the generator circuit because it sucks too much juice.
- SAN's RAID unit starts writing parity errors across entire volumes, causing substantial data loss and a restore that takes days.
- IT staff that learned the lesson gets purged during economic downsizing, and new IT staff fresh from college (to save health care costs) repeats the entire scenario.

I've seen the above scenario repeated multiple times - at least twice a year at different companies.

(1) The second most efficient way to cool a data center is to remove vented heat from blades, computers, SANs, cabinets, etc., via an air conduit that then 'dumps' the hot air into the drop ceiling above the lab. Since hot air likes to rise anyway, you'll find moving it away and upward from electronic devices into the void above the server room ceiling easier than piping cold air into the room. I see very few data centers doing this because it looks strange (like a scene from Terry Gilliam's 'Brazil'), but it does work. Note that you want vent material that is as thermally non-conductive as possible, to keep as much heat inside the vent as you can.

(2) The first most efficient way to cool a data center is to not use power-hogging devices in the first place: migrate to more energy-efficient multi-core platforms and use virtual machines, IDE/SATA-based SANs, etc. Given that a rack of Netburst-based P4s can be consolidated onto a single quad-core (or often dual-core) processor running VMs at a fraction of the wattage, it's absurd not to consider this. I feel the same contempt for high-RPM SCSI-based SANs using drives smaller than half a terabyte.
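
If you want rough numbers on that, a dozen lines of C will do it. Every count, wattage, and rate below is a made-up placeholder, so substitute your own:

/* Back-of-envelope consolidation math: a rack of old P4 boxes versus a
 * couple of multi-core VM hosts, including a rough allowance for cooling.
 * Every count, wattage, and rate here is a placeholder - plug in yours.  */
#include <stdio.h>

int main(void)
{
    double old_count = 20.0, old_watts = 350.0;  /* rack of Netburst P4s   */
    double new_count = 2.0,  new_watts = 450.0;  /* quad-core VM hosts     */
    double cooling   = 0.7;     /* extra W of cooling per W of IT load     */
    double rate      = 0.10;    /* electricity, $ per kWh                  */
    double hours     = 24.0 * 365.0;

    double old_kw = old_count * old_watts * (1.0 + cooling) / 1000.0;
    double new_kw = new_count * new_watts * (1.0 + cooling) / 1000.0;

    printf("Old rack: %.1f kW, $%.0f per year\n", old_kw, old_kw * hours * rate);
    printf("VM hosts: %.1f kW, $%.0f per year\n", new_kw, new_kw * hours * rate);
    printf("Savings : $%.0f per year\n", (old_kw - new_kw) * hours * rate);
    return 0;
}

With those guesses the old rack draws several times the power (and cooling) of the consolidated pair - roughly $9,000 a year at ten cents per kWh - before you even count the SAN.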
 

imported_corsec

Junior Member
Dec 21, 2007
I absolutely agree, and I have seen the same situation with regard to the data center overheating. Fortunately for me, the datacenter I work in has done just what you proposed in option (2), so we are well on our way to achieving the goal of efficiency. We have also started using some cooling methods that I am not at liberty to discuss, but I would still feel safe saying they are significantly more efficient than traditional cooling. And yet I am here asking the questions I asked. The reason is that when you are talking about some rather large datacenters, greater than 10 megawatts, you begin to realize that even as efficient as you are, you are still pouring money into cooling. I believe it is massively inefficient to heat air or water and then cool it back down just so you can heat it again. Instead, why not just vent the heat and replace it with outside air, which is almost guaranteed to be cooler? You can also find ways to reduce its temperature further at a reasonable cost, and still not come close to the money and energy you are putting into cooling as it is.
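
To get a feel for the scale, here's a quick back-of-envelope check in C (since that's what I know) of the airflow that straight outside-air cooling would take. The 10 MW load and the 20 F temperature difference are assumptions, not our real numbers:

/* Sanity check on the "vent it and pull in outside air" idea: how much
 * airflow does it take to carry a given IT load at a given temperature
 * rise?  The 10 MW load and the 20 F delta-T are assumptions, not real
 * site numbers.                                                          */
#include <stdio.h>

int main(void)
{
    double load_mw = 10.0;   /* IT load, megawatts                        */
    double dt_f    = 20.0;   /* exhaust air minus outside air, deg F      */

    double btu_hr  = load_mw * 1.0e6 * 3.412;   /* watts -> BTU/hr        */
    double cfm     = btu_hr / (1.08 * dt_f);    /* standard-air rule of   */
                                                /* thumb: BTU/hr = 1.08 * */
                                                /* CFM * delta-T (deg F)  */

    printf("Airflow needed: %.2f million CFM (about %.0f m^3/s)\n",
           cfm / 1.0e6, cfm * 0.000471947);
    return 0;
}

With those assumptions it comes out to roughly 1.6 million CFM, which is the kind of hard number that makes the fan and ducting costs of outside-air cooling easier to argue about.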

The problem is that I am a peon, and the people above me (obviously) don't believe anything but hard data. They are also more interested in the short-term bottom line; nobody wants to look at a payoff ten years in the future, when you finally recoup your investment. It's somewhat understandable, but it is annoying that companies these days are completely unable to look at anything except the corporate wallet.

</vent>

Anyway, I am still trying to figure out the math and stuff so I can come up with solid numbers. Thanks for the replies guys.

Logan
 

wwswimming

Banned
Jan 21, 2006
I remember the term "ground effects" for this technology.
It's a fairly common technology in Finland.

"Improved heat pumps for detached houses" (or data centers)
http://www.tut.fi/units/me/ene...Heatpumps/heatpmp2.htm

*.pdf, "Low Exergy Systems for Heating and Cooling of
Buildings ? Case examples"
http://virtual.vtt.fi/annex37/...co%20tech%20krakow.pdf

"ground source heat pump" - another technical term
related to what you're proposing.
Another *.pdf, "GROUND-SOURCE HEAT PUMP PROJECT ANALYSIS"
http://www.retscreen.net/downl.../130/2/Course_gshp.pdf
 

G41184b

Senior member
Aug 12, 2000
Originally posted by: spikespiegal
...Also, data losses caused by SANs overheating and faulting their RAID controllers are far more widespread than EMC and their clones will admit to...
[classic over-heating scenario snipped - see the post above]

I've seen the above scenario repeated multiple times - at least twice a year at different companies.

While I certainly believe this is possible, I consider it unlikely that this is any sort of widespread occurrence. Do you have any data to back this up? SAN arrays automatically shut down at a temperature well below the point where drives begin to report heat-related errors.
 

spikespiegal

Golden Member
Oct 10, 2005
I consider it unlikely that this is any sort of widespread occurrence.

That's what my uncle said last year, before he lost 1.5 terabytes of Exchange store on a SAN after the building crew decided to shut off the roof A/C units during a retrofit over a holiday weekend. He's now unemployed - not because of the data loss, but because he never anticipated losing that much data without a catastrophic event shutting production down. Maybe you can reassure him it really shouldn't happen.

Do you have any data to back this up?

{christ}

No offense, but if you haven't been burned by an overheated SAN or RAID 5 controller in the past couple of years, you must not move around much.

EMC isn't going to publish this data, and none of their damn VARs will admit it's a problem, because it's a competitive industry and they assume a server farm will be perpetually bathed in 70F dry air until eternity. If you want to pay me for my time, I can send you the transcript of their moron in tech support trying to fix the problem and having us manually rebuild the arrays, which didn't work.

SAN arrays automatically shut down at a temperature well below the point where drives begin to report heat-related errors.

Pull my finger. ... And double parity ensures you'll never lose a volume either - pull my finger again. I was working at a hospital less than 3 years ago when we lost two of the four Dell (built by EMC) SANs over a 7-month period due to thermal issues, and we had gigs of corrupted data before we even got an alert.

 

Evadman

Administrator Emeritus, Elite Member
Feb 18, 2001
We had a discussion along the same lines a few years ago. Check out this thread, which has some cool thoughts on the subject.