
Pros & Cons of blade servers

Cooky
Assuming we're looking at everything except networking...what are the pros & cons of blade servers?

My colleague and I got into a debate over whether or not we should start using blade servers as much as we can.

We know there are some limitations with the blade switches (CGESM in our case), so that part's already been looked at.

In terms of heat & cooling, is it better to have blade servers or standalone?

I had thought that since we're able to pack 16 half-length blade servers into one enclosure, the total amount of heat dissipated would be less than having 16 standalone servers.
Is this true or am I totally wrong?

Also, we're constantly running out of space...we need to get rid of old servers to make room for new ones all the time.
So density is definitely a plus in our case.

What do you guys think?
 
I don't know much about blade servers, but since the primary consumer of power is the CPU, it would seem that the total power used by ten blades would be about the same as the power consumed by ten standalone servers (with the same CPUs). The total heat output would be the same.
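To put rough numbers on that (a quick sketch only; the 300 W per server is a made-up figure, while 3.412 BTU/hr per watt is the standard conversion):

```python
WATTS_TO_BTU_HR = 3.412  # 1 W of draw = ~3.412 BTU/hr of heat

def total_heat_btu_hr(num_servers: int, watts_per_server: float) -> float:
    """Total heat output, assuming essentially all input power becomes heat."""
    return num_servers * watts_per_server * WATTS_TO_BTU_HR

# Hypothetical figure: ~300 W per server under load.
print(f"10 blades:      {total_heat_btu_hr(10, 300):,.0f} BTU/hr")
print(f"10 standalones: {total_heat_btu_hr(10, 300):,.0f} BTU/hr")  # same total
```

Same CPUs, same draw, same heat; the enclosure just changes where it comes out.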
 
The problem is not the amount of heat per blade but the fact that it is so densely generated, so you have a lot of heat in a small space. Along the same lines are the dense power requirements: I have seen blade chassis that require 60A circuits (rough circuit math sketched below). These two items become problems because many data centers are only equipped to handle certain power and BTU densities.

Also, what limitations are you concerned about with blade switches? There are new versions of the CGESM that are much improved. http://www.cisco.com/en/US/pro..._sheet_c78-439133.html
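To get a feel for what a 60A feed actually buys you (a rough sketch; single-phase 208V and the usual 80% continuous-load derate are my assumptions, your electrical spec may differ):

```python
def usable_watts(volts: float, amps: float, derate: float = 0.8) -> float:
    """Usable power on a branch circuit, applying the usual 80%
    continuous-load derate (assumption: single-phase)."""
    return volts * amps * derate

print(f"60A @ 208V: {usable_watts(208, 60):,.0f} W usable")  # ~9,984 W
print(f"20A @ 120V: {usable_watts(120, 20):,.0f} W usable")  # ~1,920 W
```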
 
Originally posted by: nightowl
The problem is not the amount of heat per blade but the fact that it is so densely generated, so you have a lot of heat in a small space. Along the same lines are the dense power requirements: I have seen blade chassis that require 60A circuits. These two items become problems because many data centers are only equipped to handle certain power and BTU densities.

Is there a significant difference in the amount of heat generated by a cluster of blades vs. a cluster of rack servers? We have a decent-sized farm (no blades; mostly rack servers, our SAN, core Cisco gear, and a couple of towers), and our A/C is up 99.9% of the time, so I've never noticed a heat issue.

 
Heat is the biggest one. Blades take the same heat/power requirements and concentrate them into a much smaller space, so a blade rack produces MUCH more heat than a rack of standalones. If your data center can accommodate that, then they're great, but it isn't out of the norm to have to redo cooling, and even power, in your data center to support them. They also raise network design issues that should be addressed: you don't want a ton of L2 loops everywhere, so don't interconnect the blade switches; their uplinks should resemble an upside-down U.
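To put hypothetical numbers on the density difference (a sketch only; the 300 W per server and the 10U, 16-blade enclosure are assumed figures, adjust for your hardware):

```python
WATTS_TO_BTU_HR = 3.412

def rack_heat(servers: int, watts_each: float = 300.0):
    """Per-rack power draw (kW) and heat output (BTU/hr)."""
    watts = servers * watts_each
    return watts / 1000, watts * WATTS_TO_BTU_HR

# Hypothetical 42U rack:
# - standalone: 42 x 1U servers
# - blades: 4 x 10U enclosures of 16 half-length blades each = 64 servers
for label, servers in (("standalone rack", 42), ("blade rack", 64)):
    kw, btu = rack_heat(servers)
    print(f"{label:15s}: {kw:.1f} kW, {btu:,.0f} BTU/hr")
```

Roughly 50% more servers, power, and heat in the same footprint, which is exactly the density problem the cooling and circuits have to absorb.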
 
As nightowl and spidey07 note, the difference with blades will be the much higher heat density in the blade rack. You need much better heat transport (airflow), or that bunch of blades will get VERY hot, especially in the center of the group. The total heat energy generated in the server room will be near-identical to having an equivalent number of separate servers.
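You can estimate the airflow needed from the standard sensible-heat formula, BTU/hr = 1.08 x CFM x delta-T(F). A sketch (the 20F air temperature rise and the per-rack wattages are assumptions carried over from the example above):

```python
def required_cfm(watts: float, delta_t_f: float = 20.0) -> float:
    """Airflow (CFM) needed to remove `watts` of heat at a given air
    temperature rise, from BTU/hr = 1.08 * CFM * delta_T(F)."""
    return watts * 3.412 / (1.08 * delta_t_f)

print(f"standalone rack (12.6 kW): {required_cfm(12_600):,.0f} CFM")
print(f"blade rack (19.2 kW):      {required_cfm(19_200):,.0f} CFM")
```

If the room can't deliver that airflow to the blade rack, the heat just recirculates and the center of the enclosure cooks.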
 