Tiny + Energy Efficient + Well-Protected Server Build question

Shingi

Senior member
Oct 7, 2000
478
0
76
First off, I need to let you guys know that we are currently on a Dell PowerEdge 2900 with six 147GB 15K SAS drives in RAID 6, plus two 73GB 15K SAS drives in RAID 1 for boot. Yes, I want to make sure the boot array stays up and running as well.

I just want to ask whether what I have in mind actually exists. We're looking at maybe, just maybe, running something quiet, because this DANG thing is loud as hell and a power hog. We're in a small business office and can still hear it from the other room; it isn't screaming at us, but we can hear it humming away. We're also thinking about buying our own building, so energy efficiency comes into play, and a 950-watt supply is not energy efficient even if we're not at full load all the time.

Let's look at what we NEED. MUST HAVE, I should say.

*Low-power CPU (35W) - must support VT and ECC RAM. One under consideration is the G3220T.
AMD is welcome so long as it's low power with ECC support and VT. We don't need a speed demon; this will primarily be file serving plus maybe one or two small VMs, nothing more.
*Dual power supplies instead of a single power supply.
*Must support RAID 6, or have a PCIe x4 or x8 slot available so we can add a RAID 6 card.
*Must support hot swap. This is where I'm a little confused. I know hot swap has to be supported by the adapter, but I'd like something like our PowerEdge, where the caddies make hot-swapping easy instead of me unplugging and replugging cables. Do you guys know what I mean? (So, any good hot-swap case recommendations that house 8 bays?)

I want to limit each power supply to 150-250 watts, times two.
The build above shouldn't be using more than 150 watts at full load, I believe. If I'm wrong, let me know.
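
To sanity-check that 150-watt guess, here's the back-of-the-envelope math I'm going by (every per-component draw below is an assumption pulled from typical spec sheets, not a measurement):

```python
# Back-of-the-envelope full-load power budget for the proposed build.
# Every per-component figure is an assumed "typical" draw, not a measurement.
components_watts = {
    "CPU (35W TDP, G3220T-class)": 35,
    "Motherboard + ECC RAM": 25,
    "RAID 6 controller card": 20,
    "8x 3.5in 7200rpm drives @ ~8W": 64,
    "Fans, backplane, misc": 10,
}

total = sum(components_watts.values())
for part, watts in components_watts.items():
    print(f"{part:32} {watts:3d} W")
print(f"{'Total (before PSU losses)':32} {total:3d} W")

# Assuming ~85% PSU efficiency at this load:
print(f"Approximate wall draw at 85% efficiency: {total / 0.85:.0f} W")
```

If those guesses hold, the components land right around 150W and the wall draw stays under 200W, so 250W per supply leaves headroom.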

One thing very similar to this is the HP MicroServer, but it obviously falls short on redundant power supplies and hot swap. Still, the idea is to build something very much like that MicroServer.

One last thing regarding the build: we don't have to have SAS drives; WD Red is fine so long as we're on RAID 6, and we'll be using very small drives, I'm thinking 250GB. We don't need that much space, but if one drive fails, we need the rebuild to be quick, since we don't want to risk a second drive failing during the rebuild. All you geeks know that never happens, right? (sarcasm)
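
On the rebuild-time point, the reasoning is simple proportionality; here's a quick sketch (the 50 MB/s sustained rebuild rate is a pure assumption, real controllers vary a lot):

```python
# Rough RAID rebuild-window estimate: time to rewrite one replacement drive.
# The sustained rebuild rate is an assumption; real controllers vary widely.
def rebuild_hours(drive_gb, rebuild_mb_per_s=50):
    return (drive_gb * 1000) / rebuild_mb_per_s / 3600

for size_gb in (250, 1000, 4000):
    print(f"{size_gb:5d} GB drive -> ~{rebuild_hours(size_gb):4.1f} h exposure window")
```

Smaller drives shrink the window in which another failure can bite, which is the whole point of going 250GB.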

So CAN THIS BE BUILT? Is there a small micro-ATX server motherboard that can do this, along with the right case, and still stay within 150-250 watts?

All input is appreciated.
 
Last edited:

Gunbuster

Diamond Member
Oct 9, 1999
6,852
23
81
I would still stick with a racked Dell.

I've got an R720xd that's pretty quiet, rated to run at high temps (80+ degrees), and with 14 drives the iDRAC shows it usually drawing under 300 watts.

You can also check out the T620 for a tower.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
The PowerEdge x9xx series is getting pretty long in the tooth these days (Core 2-based Xeons) and predates any real focus on energy efficiency in servers. Anything Nehalem-based (or, better yet, Sandy Bridge) will be much quieter at idle.

Since this is for a business, I highly recommend going with an OEM server that has support. A tower server is a good idea, but towers don't typically have hot swap. Looking at rack servers, the R320 looks like a good bet. I wouldn't get too bent out of shape about TDP; newer CPUs scale down their power usage quite effectively. The same goes for PSU size: while PSUs are less efficient at lower loads, we're talking about a 5% difference, not a 50% difference. So don't sweat it if the smallest hot-swap PSU you can find is 350W.
 

Shingi

Senior member
Oct 7, 2000
478
0
76
Well, I was hoping for something along the lines of 30-60 watts loaded and 10-15 idle. Is there really no good hardware alternative besides going full scale back to a PowerEdge?
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
Well, I was hoping for something along the lines of 30-60 watts loaded and 10-15 idle. Is there really no good hardware alternative besides going full scale back to a PowerEdge?

You're not going to get that low based solely on the fact that you want to run a RAID array. You can go lower than the R320 by getting consumer gear, but it's not going to have the enterprise-centric features that you listed as requirements (lots of hotswap HDDs and redundant power supplies).

I really think you'd be surprised how much quieter the 12G servers are than your current 9G.
 

MichaelD

Lifer
Jan 16, 2001
31,528
3
76
I agree with what mfenn stated in post #5, above. I think your expectations are a bit unrealistic, given the feature set you want.

I've been lurking this thread from the beginning; I just haven't had time to reply. If all you wanted was a "NAS appliance" (drive-failure tolerance implied), you would do fine with a low-powered CPU, the OS running off an SSD, and a 4-drive RAID 6 array. I had this very configuration at home, running headless; probably 20 of 24 hours a day the drives were powered down and the CPU was in its lowest power state. The server (NAS) was on a UPS, etc.

But you're looking for a NAS (basically) that will also run VMs. You didn't say whether those VMs will be production or dev servers, but either way it's resource-intensive, and that draws power, which makes fans run, etc.

You're also looking for PSU redundancy and HD hot swap. Those two things right there place you squarely in the "big box" server realm, and noise is the price you pay for that, particularly with redundant PSUs.

Low-power servers have come a long way as far as power management and noise are concerned, especially if you're not running them hard 24/7. Based on your OP, you'd be best served by speccing out a Dell/HP as suggested.

Welcome to the Advanced Hardware Thinkers Club. Your membership gift is a card that simply states "There Is No Free Lunch." :D
 
Last edited:

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Well, I was hoping for something along the lines of 30-60 watts loaded and 10-15 idle. Is there really no good hardware alternative besides going full scale back to a PowerEdge?

Good luck with that. The RAID 6 controller you want will use more than 15 watts at idle by itself.

If you want to homebrew and get everything you listed, you'll likely pay a little less for the setup, but come the day something breaks (and it will), you'll likely be down for days or more while you source parts (and possibly get fired). Meanwhile, you can still find parts for that 2900 all over the place, even though they're pretty old now, 6-7+ years for most of them. Heck, even the Dell R610 series is "old" now.

Most cases I see also don't have enough 5.25" bays to load up with SAS hot-swap cages. At best, the 3.5" units get 4-5 drives into three 5.25" bays, and the 2.5" units get four drives into a single 5.25" bay. I have a couple in my test server; they're not really cheap, and they're a cabling mess. The cheap 2.5" SAS one I had was $60, and the 5.25" unit was ~$150. They're also "dumb," with no failed-drive indicators or LED-flash support.

So with a case with six 5.25" bays, you can maybe get ten 3.5" SAS bays into it. Some of the cheaper Dells will do 15 in a smaller case, and they had one unit with 25 2.5" drives in 2U of space.

And I haven't even gotten into the RAID controllers, or how you'd do redundant ATX PSUs...
 

Gunbuster

Diamond Member
Oct 9, 1999
6,852
23
81
Also, don't skimp on the DRAC (or whatever the non-Dells call it). The option to have full remote control of the box, even when it's powered down, is invaluable.
 

Knavish

Senior member
May 17, 2002
910
3
81
If you can afford to go home-built (in the sense of not needing third-party support or a warranty), then you could gain some redundancy by building two complete systems and coming up with a clever way of mirroring them. With two redundant systems (and an external backup), you could forgo the dual PSUs.

Also, what OS are you going to use? If you use Linux, you could skip the RAID card and let the OS handle the RAID.
 

Shingi

Senior member
Oct 7, 2000
478
0
76
I should've stated that we're running Server 2008. Really, it's just Server 2008 with AD and basic print and file serving. The VMs are nothing intensive, no number crunching; they'll primarily be used for maybe running a Linux Apache server for web serving. The main role of the server is NAS with security. We have to run Server 2008 because we need AD.

I guess the primary concern is data loss and downtime in the event of a power or drive failure, hence the RAID 6 and the redundant power supplies. Yes, we know that RAID is not a backup; we do a nightly backup to a secondary drive.

At this point I think I'll have to go see a 12G server in person and find out whether the noise level is okay.

On the note of letting Linux manage the RAID: I have no Linux knowledge, so it's not an option at this point, since I couldn't maintain the server. (I know, I know, don't shoot; I'll have to learn at some point, just not yet.)
My question is: how reliable is letting the Linux OS manage the RAID versus real hardware RAID?

BTW, thanks for all the input.
 
Last edited:

jaydee

Diamond Member
May 6, 2000
4,500
4
81
I think your storage situation is more complicated than it needs to be. I would buy something like two Crucial M550 1TB drives ($500 each) and put them in RAID 1 for the data drives (replacing six 147GB drives in RAID 6). For the boot drive, two 120GB M550s in RAID 1.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I think your storage situation is more complicated than it needs to be. I would buy something like two Crucial M550 1TB drives ($500 each) and put them in RAID 1 for the data drives (replacing six 147GB drives in RAID 6). For the boot drive, two 120GB M550s in RAID 1.

Why would you waste SSDs like that on a server? Who cares how long it takes to boot when it stays running all the time? Beyond that, there's no need for high-performance OS drives. Those Crucial drives also aren't rated for server use; they might be "okay" depending on the workload, but they may not be well suited to real RAID controllers, and servers don't use the built-in Intel chipset software RAID.

Nowadays, the smallest SAS drives in the 10K-15K range are 2.5-inch and 300GB or 450GB each. He could do RAID 6 with four or RAID 10 with four and gain more space and a lot more performance. Even that small group would likely outrun a gigabit connection.
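
Quick math on that (the 150 MB/s sequential per drive is an assumed ballpark for 10K/15K SAS, not a spec):

```python
# Usable capacity and rough aggregate throughput for small arrays.
# The 150 MB/s per-drive sequential figure is an assumed ballpark.
def usable_gb(level, n, drive_gb):
    if level == "raid6":
        return (n - 2) * drive_gb    # two drives' worth of parity
    if level == "raid10":
        return (n // 2) * drive_gb   # half the drives hold mirrors
    raise ValueError(level)

n, drive_gb, per_drive_mb_s = 4, 300, 150
for level in ("raid6", "raid10"):
    print(f"{level}: {usable_gb(level, n, drive_gb)} GB usable")

# 1 Gb/s Ethernet tops out around 125 MB/s:
print(f"~{n * per_drive_mb_s} MB/s aggregate reads vs ~125 MB/s gigabit")
```

Either layout gives 600GB usable from four 300GB drives, already more than his current six 147GB drives in RAID 6, and the spindles in aggregate easily saturate gigabit.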
 

jaydee

Diamond Member
May 6, 2000
4,500
4
81
Why would you waste SSDs like that on a server? Who cares how long it takes to boot when it stays running all the time? Beyond that, there's no need for high-performance OS drives. Those Crucial drives also aren't rated for server use; they might be "okay" depending on the workload, but they may not be well suited to real RAID controllers, and servers don't use the built-in Intel chipset software RAID.

Nowadays, the smallest SAS drives in the 10K-15K range are 2.5-inch and 300GB or 450GB each. He could do RAID 6 with four or RAID 10 with four and gain more space and a lot more performance. Even that small group would likely outrun a gigabit connection.

I don't know the specifics of that particular SSD's RAID compatibility; I guess I'm presuming it's fine.

Here are my reasons:

1. He's concerned about energy efficiency. When you're talking 6-10 hard drives spinning at 10-15K each, that adds up. Normal mechanical hard drives run somewhere around 6W idle and 9W load each, and 10-15K drives pull even more. The M550 tested at less than 1W idle and less than 4W load.

2. He's concerned about RAID rebuild time. Presuming the SSD is compatible with the RAID controller, the SSD will rebuild in a fraction of the time.

3. Longevity. I trust an SSD with good MLC and power-loss protection to outlast a mechanical hard drive.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I don't know the specifics of that particular SSD's RAID compatibility; I guess I'm presuming it's fine.

Here are my reasons:

1. He's concerned about energy efficiency. When you're talking 6-10 hard drives spinning at 10-15K each, that adds up. Normal mechanical hard drives run somewhere around 6W idle and 9W load each, and 10-15K drives pull even more. The M550 tested at less than 1W idle and less than 4W load.

2. He's concerned about RAID rebuild time. Presuming the SSD is compatible with the RAID controller, the SSD will rebuild in a fraction of the time.

3. Longevity. I trust an SSD with good MLC and power-loss protection to outlast a mechanical hard drive.

While power is important to consider here, 1 watt over a year of operation costs less than a dollar (assuming ~10 cents/kWh, roughly the recent national average). At that point I consider comparing 1W idle to 4W idle a non-issue.
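
The arithmetic, for anyone who wants to check it (the $0.10/kWh rate is the assumption):

```python
# Cost of one watt running 24/7 for a year, at an assumed $0.10/kWh.
hours_per_year = 24 * 365                        # 8760 h
rate_per_kwh = 0.10                              # assumed national average
cost_per_watt_year = hours_per_year / 1000 * rate_per_kwh
print(f"1 W for a year: ${cost_per_watt_year:.2f}")            # ~$0.88

# So the 3 W idle delta between those drives:
print(f"3 W delta for a year: ${3 * cost_per_watt_year:.2f}")  # ~$2.63
```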

RAID rebuild time is going to be restricted by the RAID controller, so the extra speed of an SSD rebuild will often be wasted.

As for longevity, there really hasn't been any industry indication that server SSDs last longer than rotational disks. MLC itself is not as write-tolerant as the disks are, so depending on the workload it may fail earlier, or significantly earlier, than a spinning disk. SLC is typically nearly an order of magnitude more write-tolerant but costs a lot more. MLC drives in servers typically overprovision by a very large amount to overcome this limitation. That's why a desktop drive like the M550 is around 7% overprovisioned while an Intel server MLC SSD is 35-55% overprovisioned: lifetime plus performance reasons.
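
For reference, overprovisioning is usually figured as spare raw NAND relative to user capacity; the drive figures below are illustrative, not exact specs for any particular model:

```python
# Overprovisioning = raw NAND beyond the advertised user capacity.
# The raw/usable figures below are illustrative, not exact drive specs.
def op_percent(raw_gib, usable_gb):
    raw_gb = raw_gib * 2**30 / 1e9   # binary GiB of NAND -> decimal GB
    return (raw_gb - usable_gb) / usable_gb * 100

print(f"Desktop 256 GB drive, 256 GiB NAND: ~{op_percent(256, 256):.0f}% OP")
print(f"Server 200 GB drive, 264 GiB NAND:  ~{op_percent(264, 200):.0f}% OP")
```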
 
Last edited:

jaydee

Diamond Member
May 6, 2000
4,500
4
81
While power is important to consider here, 1 watt over a year of operation costs less than a dollar (assuming ~10 cents/kWh, roughly the recent national average). At that point I consider comparing 1W idle to 4W idle a non-issue.

RAID rebuild time is going to be restricted by the RAID controller, so the extra speed of an SSD rebuild will often be wasted.

As for longevity, there really hasn't been any industry indication that server SSDs last longer than rotational disks. MLC itself is not as write-tolerant as the disks are, so depending on the workload it may fail earlier, or significantly earlier, than a spinning disk. SLC is typically nearly an order of magnitude more write-tolerant but costs a lot more. MLC drives in servers typically overprovision by a very large amount to overcome this limitation. That's why a desktop drive like the M550 is around 7% overprovisioned while an Intel server MLC SSD is 35-55% overprovisioned: lifetime plus performance reasons.

You're missing it on the power consumption. We're talking about a 5W difference between the best mechanical hard drive and an average SSD (6W vs 1W idle, 9W vs 4W load). At 5W per drive, four drives minimum is 20W, and that's the best case with the minimum number of drives. If he's set on six drives, that's 30W, and if we're talking 10K or 15K RPM mechanical drives, it's going to be more; we could easily be talking 40-50W depending on these factors. That's not only part of the cost equation, it could also influence the power supply selection and the amount of heat generated, and he wants this to be as small as possible.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
You're missing it on the power consumption. We're talking about a 5W difference between the best mechanical hard drive and an average SSD (6W vs 1W idle, 9W vs 4W load). At 5W per drive, four drives minimum is 20W, and that's the best case with the minimum number of drives. If he's set on six drives, that's 30W, and if we're talking 10K or 15K RPM mechanical drives, it's going to be more; we could easily be talking 40-50W depending on these factors. That's not only part of the cost equation, it could also influence the power supply selection and the amount of heat generated, and he wants this to be as small as possible.

I am not missing the point. It's pretty simple: 1 watt used = 1 watt of heat, and cooling, where needed, runs about 1.5x the watts consumed. So 50 watts works out to roughly $125 a year, about $11 a month. I highly doubt that's a deal breaker for any business that's actually healthy. And since he's coming from a 2900 series, he's going to come out ahead anyway; some of those machines idle at 450 watts.
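
Spelled out, with the same assumed electric rate and the 1.5x cooling overhead:

```python
# Yearly cost of a 50 W delta, including an assumed 1.5x cooling overhead.
delta_w = 50
total_w = delta_w * (1 + 1.5)                 # 125 W at the meter
kwh_per_year = total_w * 24 * 365 / 1000      # ~1095 kWh
print(f"Exact at $0.10/kWh: ~${kwh_per_year * 0.10:.0f}/year")   # ~$110
print(f"Rounding 1 W-year up to $1: ~${total_w:.0f}/year")       # the $125 above
```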

Trading $125 a year for a) support and b) reliability sounds like a fantastic deal to me.
 

jaydee

Diamond Member
May 6, 2000
4,500
4
81
I am not missing the point. It's pretty simple: 1 watt used = 1 watt of heat, and cooling, where needed, runs about 1.5x the watts consumed. So 50 watts works out to roughly $125 a year, about $11 a month. I highly doubt that's a deal breaker for any business that's actually healthy. And since he's coming from a 2900 series, he's going to come out ahead anyway; some of those machines idle at 450 watts.

Trading $125 a year for a) support and b) reliability sounds like a fantastic deal to me.

Support is an unknown, and I will not concede that a mechanical hard drive is more reliable than an SSD with quality MLC NAND and power-loss protection (be it the M550, Intel 730, Seagate 600 Pro, etc.) in a typical read-heavy file server.
 
Last edited:

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Support is an unknown, and I will not concede that a mechanical hard drive is more reliable than an SSD with quality MLC NAND and power-loss protection (be it the M550, Intel 730, Seagate 600 Pro, etc.) in a typical read-heavy file server.

If you won't concede it, then you have some evidence to support the claim, right? As far as I'm aware, the industry really doesn't agree that any SSD is more reliable than spinning disk right now.

These articles indicate that the failure rates are pretty close to the same right now, but SSD rates trend worse as the drives age, as in 4-5 years of operation.

http://www.zdnet.com/blog/storage/ssds-no-more-reliable-than-hard-drives/1483

http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html
 

jaydee

Diamond Member
May 6, 2000
4,500
4
81
If you won't concede it, then you have some evidence to support the claim, right? As far as I'm aware, the industry really doesn't agree that any SSD is more reliable than spinning disk right now.

These articles indicate that the failure rates are pretty close to the same right now, but SSD rates trend worse as the drives age, as in 4-5 years of operation.

http://www.zdnet.com/blog/storage/ssds-no-more-reliable-than-hard-drives/1483

http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html

First, you have one article pointing to the other, so you really only have one study here. Second, it's three years old, covering old SSDs.

My supporting evidence is from techreport.com, which has been running an ongoing endurance experiment with six different modern ~240GB SSDs. They just passed 600TB of data written without a failure on the five MLC drives, while the TLC-based Samsung 840 is losing capacity but recovering the data with its spare cells.

http://techreport.com/review/26058/the-ssd-endurance-experiment-data-retention-after-600tb/2
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
First, you have one article pointing to the other, so you really only have one study here. Second, it's three years old, covering old SSDs.

My supporting evidence is from techreport.com, which has been running an ongoing endurance experiment with six different modern ~240GB SSDs. They just passed 600TB of data written without a failure on the five MLC drives, while the TLC-based Samsung 840 is losing capacity but recovering the data with its spare cells.

http://techreport.com/review/26058/the-ssd-endurance-experiment-data-retention-after-600tb/2

Six drives do not a statistical analysis make; the sample set is way too small. I could set up a single HDD on my desk here and write to it for months, and what exactly would that prove? A 2% failure rate after two years from a data center running thousands of drives carries significantly more weight: out of 5,000 SSDs, 4,900 would still work, so it's not statistically improbable that Tech Report happened to get drives that last. It really proves nothing about drive reliability other than that these particular drives have worked this long; they might also be outliers, maybe out in the 2-3 sigma area. A handful of drives can't tell us that.
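
To put numbers on it (the 2% annualized rate comes from the articles above; the rest is straight binomial arithmetic):

```python
# How surprising is "no failures among 6 drives"? Not very.
# Assume each drive fails independently with probability p over the test.
p_fail = 0.02                     # ~2%/year, per the articles above

print(f"P(all 6 survive) = {(1 - p_fail) ** 6:.3f}")             # ~0.886

# Even at a 10x worse per-drive failure rate, it's far from improbable:
print(f"At 20% each: P(all 6 survive) = {(1 - 0.20) ** 6:.3f}")  # ~0.262
```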

Yes, the blog uses the main article as a reference, among others. Since the blog provides some additional references, it was quoted as well.
 
Last edited:

MichaelD

Lifer
Jan 16, 2001
31,528
3
76
My two (very non-technical) pennies here.

If I were running a mission-critical server, or was told to stand one up, and I was given a budget of, say, $50K per box, I still would not go with SSDs for storage.

SSDs are great. Fast as a scalded spider monkey, and they generate zero noise and almost zero heat; those last two points reduce the need for cooling, which cuts the noise even further. Great. But they haven't proven themselves at the enterprise level yet, reliability-wise, AFAIC. Not to mention the $/GB isn't even close to traditional storage.

Now, back On-Topic. ;)

I don't think it's possible to meet all of the OP's conditions as stated. He should buy one of the late-model Dells or HPs that have been mentioned and roll with that.
 

Gunbuster

Diamond Member
Oct 9, 1999
6,852
23
81
The new Dells are funny. My server closet went full rain forest when the AC glitched, and the DRAC on the 12G R720xd didn't even report near-critical temps when it was 90+ degrees. Everything else I have with a fan is louder than it, too.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
Support is an unknown, and I will not concede that a mechanical hard drive is more reliable than an SSD with quality MLC NAND and power-loss protection (be it the M550, Intel 730, Seagate 600 Pro, etc.) in a typical read-heavy file server.

I don't really see how support is an unknown. A drive is either supported by the vendor in a given server or it is not; if we're talking about a Dell or HP and you didn't buy the drive with the server, it's not supported.

The new Dells are funny. My server closet went full rain forest when the AC glitched, and the DRAC on the 12G R720xd didn't even report near-critical temps when it was 90+ degrees. Everything else I have with a fan is louder than it, too.

:thumbsup: to this. Dell went to some effort to make sure that all their 12G servers can run in "fresh air" conditions. I believe they even set up a rack and let it run through an entire Texas summer.

To demonstrate the extreme temperature tolerance of Dell Fresh Air-capable hardware, we built the Dell Fresh Air Hot House: an 8’ x 10’ outbuilding with no air conditioning that includes a rack running Dell Fresh Air-capable hardware. Located at Dell Headquarters in Round Rock, Texas, where summer temperatures can top 40°C (104°F), the Fresh Air Hot House uses 100 percent air-side economization and no chiller by pulling outdoor air into the structure and passing the hot air out with simple blowers.

http://www.dell.com/learn/us/en/555/power-and-cooling-technologies-best-practices
 

Shingi

Senior member
Oct 7, 2000
478
0
76
Thanks, guys, for all the help. I did go ahead and get a 12G Dell. Regarding the SSDs: while I do like the energy savings, the cost per GB is higher, and we didn't need a speed demon (although who doesn't like speed?). I'm also not completely sold on SSDs being enterprise-grade reliable. I did see the Tech Report post, and it's not NAND failure from repeated reads/writes that worries me; it's death by electrical storm. I know that even a physical hard drive can die from an electrical shock, but the actual internal platters could maybe be recovered, and I'm not sure I can say the same for NAND. Who knows, maybe I'm losing my touch, but NAND memory seems awfully fragile to me should it be hit by uncontrolled electricity.

It would be great if someone could actually simulate a test where SSDs in single and RAID setups get electrocuted by a major power surge (or several) and see whether they can be recovered. Of course this would be very expensive, because you'd have to kill the SSDs and then send them to the pros to try to revive the data. Obviously you'd also need a new motherboard, memory, video card, and whatnot, depending on what else died during the test. Then repeat the test with new SSDs (plus whatever else died) at least a couple of times to make the results more reliable and rule out oddballs, just to please AnandTechers. Who's with me?

Anand should contact the SSD companies that claim the best power-surge protection, have them send some samples, and run this test. What do you guys think?
 
Last edited:

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
ESD isn't what worries me about SSDs; it's controller bugs. Those are the more common cause of errors in any large enterprise deployment, and they're why "enterprise" SSDs cost more and lag behind consumer ones in features and performance. All that validation takes time and costs money.

So, how do you like the server? :)