Home file storage server components (advice please)


smitbret

Diamond Member · Joined Jul 27, 2006
_Rick_ said:
Finally, I say RAID 1, because RAID 5 beyond a certain number of disks becomes a gamble, and RAID 6 is usually not as well implemented in software. And whoever confused software RAID with mainboard RAID - smitbret it was, I think - needs to read up on the differences.

Sorry, Rick, but you are the one that doesn't understand the difference. RAID that is run from your mainboard is still software RAID; its implementation is just run from the BIOS rather than at the OS level. It uses resources like CPU cycles and system memory to run. Hardware RAID runs from add-in cards that aren't cheap, and the RAID functions are handled separately on the card.
 

smitbret

Diamond Member · Joined Jul 27, 2006
Case - Fractal Design XL R2
http://www.newegg.com/Product/Produc...82E16811352029 - $99.99

PSU - SeaSonic S12II 620 Bronze 620W
http://www.newegg.com/Product/Produc...82E16817151096 - $64.99

MB - MSI FM2-A85XA-G65 FM2 AMD A85X
http://www.newegg.com/Product/Produc...82E16813130663 - $99.99

CPU - AMD Richland A4-4000 Dual-Core
http://www.newegg.com/Product/Produc...82E16819113343 - $45.99

Memory - G.SKILL Ares Series 16GB (4 x 4GB)
http://www.newegg.com/Product/Produc...82E16820231547 - $153.99

HDD - Seagate Barracuda ST3000DM001 3TB
http://www.newegg.com/Product/Produc...82E16822148844 - $109.99 each

I threw this together with expandability in mind.

The motherboard has 8 SATA ports and several PCI-E slots for add-in cards as needs dictate.

The case has 8 internal bays, plus you could easily add a 5-in-3 cage.

The PSU has 8 SATA connectors and plenty of wattage headroom for expansion.

The 4 x 4GB is really only necessary if you are using a ZFS RAID setup. If you go with pretty much anything else, you could easily get by with 8GB or even 4GB.

Cheap dual-core APU. You could probably increase your initial investment and get an i3 setup with a lower TDP, but a similar MB and Intel CPU combo would be considerably more expensive.

I picked the Seagate HDDs strictly out of personal taste. I use them in my server and they are just FAST. Plus, they are quieter and run no hotter than the WD Greens in the same server.

9TB in RAID 5/RAIDz1 will come in under $1000 US.
9TB in RAID 6/RAIDz2 will be $1021 US.
9TB in RAID 1 would be about $1250
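
If you want to play with the drive math yourself, here's a rough sketch of how those totals come together. Prices are from the parts list above; the drive counts per layout are my assumptions (e.g. RAID 1 modeled as three mirrored pairs), so expect ballpark agreement rather than exact matches once shipping and an OS get added.

```python
# Rough cost/capacity sketch for the build above.
DRIVE_TB = 3           # Seagate ST3000DM001
DRIVE_PRICE = 109.99   # per drive
BASE_BUILD = 99.99 + 64.99 + 99.99 + 45.99 + 153.99  # case+PSU+MB+CPU+RAM

def usable_tb(n_drives, layout):
    """Usable capacity for a few common layouts (simplified)."""
    if layout == "RAID 5":    # one drive's worth of parity
        return (n_drives - 1) * DRIVE_TB
    if layout == "RAID 6":    # two drives' worth of parity
        return (n_drives - 2) * DRIVE_TB
    if layout == "RAID 1":    # mirrored pairs
        return (n_drives // 2) * DRIVE_TB
    raise ValueError(layout)

for layout, n in [("RAID 5", 4), ("RAID 6", 5), ("RAID 1", 6)]:
    total = BASE_BUILD + n * DRIVE_PRICE
    print(f"{layout}: {n} drives, {usable_tb(n, layout)} TB usable, ${total:.2f}")
```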

Don't forget to add a little more $$$ for an OS and/or OS Drive if you go that route.
 

_Rick_

Diamond Member · Joined Apr 20, 2012
smitbret said:
Sorry, Rick, but you are the one that doesn't understand the difference. RAID that is run from your mainboard is still software RAID; its implementation is just run from the BIOS rather than at the OS level. It uses resources like CPU cycles and system memory to run. Hardware RAID runs from add-in cards that aren't cheap, and the RAID functions are handled separately on the card.

Yes, and I was talking about pure software RAID, run at the OS level.
I never once mentioned mainboard/BIOS implementations, which are useless (I'd go so far as to say that even real RAID controllers on mainboards are useless, as hardware RAID has no performance or portability advantage over software RAID).
Therefore your rant was a bit pointless: I never recommended mainboard RAID, but rather a volume manager or mdadm to manage RAID, which is the most flexible, stable, and safe way to do it. You yourself propose it as well, from what I can tell.

CPU cycles for RAID are basically irrelevant on anything but the lowest-power ARM or Atom systems, even with RAID 6. Many CPUs come with quite fast XOR or even full-blown parity units, and any desktop CPU has plenty of power. I'm running two RAID 5s with encryption on top on my i5 650 and have never had a RAID process run into CPU limits. The same was true for the single-core Sempron I had before.
System memory is also a complete non-issue.
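
If you want to sanity-check the CPU cost on your own box, here's a toy numpy benchmark of a single parity step. This is my own rough sketch, not how md actually does it; the kernel's optimized C/SSE code will only be faster than this.

```python
# Time one simplified RAID 5 parity step: XOR two 128 MiB buffers.
import time
import numpy as np

CHUNK = 128 * 1024 * 1024  # 128 MiB per buffer
a = np.random.randint(0, 256, CHUNK, dtype=np.uint8)
b = np.random.randint(0, 256, CHUNK, dtype=np.uint8)

t0 = time.perf_counter()
parity = np.bitwise_xor(a, b)  # P = D1 xor D2
dt = time.perf_counter() - t0
print(f"XORed {CHUNK / 2**20:.0f} MiB in {dt * 1000:.1f} ms "
      f"-> {CHUNK / 2**30 / dt:.1f} GiB/s")
```

Even in Python this lands at memory-bandwidth speeds, far beyond what a home disk array can stream, which is the point.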

In fact, the only real limitation is bus bandwidth, but with PCIe 2.0 even that has been mostly addressed, and the added overhead isn't that big unless you run RAID 1s with a great many mirrors.


Also, with regard to your build proposition: such a large power supply is going to be a complete waste. A smaller PSU with a higher efficiency rating will save the odd watt during the long idle periods as well as under load, while being cheaper to buy. I'm spinning up 12 disks from 430W, and I think around 300W should be plenty for 8 disks or fewer.
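
For a back-of-the-envelope version of that sizing, something like this works. The per-drive wattages are assumptions in line with typical 3.5" drive spec sheets, not measurements of any particular drive; check your datasheet for real figures.

```python
# Rough spin-up power budget for a disk server.
# Typical 3.5" drives pull roughly 20-30 W for a second or two while
# spinning up and ~5-8 W once idle (assumptions, not measurements).
SPINUP_W = 25   # peak per drive during spin-up (assumption)
IDLE_W   = 7    # per drive once spinning (assumption)
SYSTEM_W = 80   # board + CPU + fans, generous (assumption)

for disks in (4, 8, 12):
    worst = SYSTEM_W + disks * SPINUP_W                     # all at once
    staggered = SYSTEM_W + (disks - 1) * IDLE_W + SPINUP_W  # one at a time
    print(f"{disks} disks: ~{worst} W all-at-once, "
          f"~{staggered} W with staggered spin-up")
```

Even the all-at-once worst case for 8 disks lands under 300W, which matches the sizing above.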
 

386DX

Member · Joined Feb 11, 2010
Windows Home Server software is $40; put together any parts you want. The only thing you need to worry about is how many SATA ports are on board. You can put together a decent server for under $500.

Can't go wrong with it.

+1. Some people are suggesting ridiculously overpowered setups (ZFS, 650W power supply, etc.) for what the OP needs. A simple WHS box would suffice. For purpose-built hardware, I'd recommend the HP MicroServer Gen8 for the OP:

http://www.newegg.com/Product/Produc...82E16859108028

Add a bit of RAM and the hard drive you want and you've got a nice, compact, low-power server. If you want more power, swap the CPU out for a low-power E3 Xeon. This is also a server chipset, not a desktop pretending to be a server. What's the difference? Well, you get ECC support, dual gigabit LAN, and most importantly iLO.

If you don't know what iLO is, it's a dedicated network port that allows complete remote access. That means once you give it an IP address you can manage the machine completely (turn it on/off, change BIOS settings, etc.) without having a keyboard, mouse, or monitor connected. Yes, this includes being able to install an entire operating system remotely. We do this often at my work on HP servers: complete remote installs of ESX hosts and VMs. Some Supermicro boards and Dell servers have a similar function under their own name.
 

dighn

Lifer · Joined Aug 12, 2001
ZFS is not really overpowered. It's just a bit heavy on the RAM, which is cheap anyway. What you get is an extremely robust file system that provides excellent redundancy, error detection & recovery (a bigger deal on large arrays like the OP's) and snapshot capability. It's not that difficult to set up either if you go with something like FreeNAS. There's really very little reason not to use ZFS. Just be aware that with ZFS you cannot add new disks to an existing array (vdev); you can only grow the pool by adding new arrays built from new disks.
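
If the error detection & recovery point sounds abstract, here's a toy sketch of the idea behind ZFS-style end-to-end checksumming. This is purely an illustration of the concept, not ZFS's actual on-disk format or API.

```python
# Toy illustration of block-level checksumming -- the idea behind ZFS's
# bit-rot detection. Concept only, not ZFS's real implementation.
import hashlib

def store(block: bytes):
    """'Write' a block, keeping a checksum alongside it."""
    return block, hashlib.sha256(block).digest()

def read(block: bytes, checksum: bytes) -> bytes:
    """'Read' a block, verifying it against the stored checksum."""
    if hashlib.sha256(block).digest() != checksum:
        # This is where ZFS would fetch a good copy from the mirror or
        # parity and rewrite the bad block ("self-healing").
        raise IOError("checksum mismatch: silent corruption detected")
    return block

data, csum = store(b"family photos, etc.")
read(data, csum)                        # clean read passes
try:
    read(b"family photos, etX.", csum)  # one flipped byte
except IOError as e:
    print("caught:", e)
```

A plain RAID layer can't do this: it only notices errors the drive itself reports, while end-to-end checksums catch silent corruption anywhere in the path.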
 

_Rick_

Diamond Member · Joined Apr 20, 2012
dighn said:
There's really very little reason not to use ZFS.

One reason would be lackluster Linux support, especially with regard to encryption. But unless you have specific Linux needs (driver support mostly) you can go with Solaris/BSDs and not worry about that.

The HP MicroServer looks a bit small to host 10+ TB of storage, and it has no encryption acceleration because it's based on a Celeron. The proprietary form factor makes it impossible or expensive to upgrade the platform, and the price isn't that great, especially considering what you can get for 50-100 dollars more and the penalty you currently pay for 4TB disks.
 

smitbret

Diamond Member · Joined Jul 27, 2006
_Rick_ said:
Yes, and I was talking about pure software RAID, run at the OS level.
I never once mentioned mainboard/BIOS implementations, which are useless (I'd go so far as to say that even real RAID controllers on mainboards are useless, as hardware RAID has no performance or portability advantage over software RAID).
Therefore your rant was a bit pointless: I never recommended mainboard RAID, but rather a volume manager or mdadm to manage RAID, which is the most flexible, stable, and safe way to do it. You yourself propose it as well, from what I can tell.

CPU cycles for RAID are basically irrelevant on anything but the lowest-power ARM or Atom systems, even with RAID 6. Many CPUs come with quite fast XOR or even full-blown parity units, and any desktop CPU has plenty of power. I'm running two RAID 5s with encryption on top on my i5 650 and have never had a RAID process run into CPU limits. The same was true for the single-core Sempron I had before.
System memory is also a complete non-issue.

In fact, the only real limitation is bus bandwidth, but with PCIe 2.0 even that has been mostly addressed, and the added overhead isn't that big unless you run RAID 1s with a great many mirrors.

Also, with regard to your build proposition: such a large power supply is going to be a complete waste. A smaller PSU with a higher efficiency rating will save the odd watt during the long idle periods as well as under load, while being cheaper to buy. I'm spinning up 12 disks from 430W, and I think around 300W should be plenty for 8 disks or fewer.

Glad we got the software RAID thing out of the way. Yes, we do agree that mainboard RAID is not very good and that true hardware RAID is really not practical for home use. There are so many better options that make the expense and frustration unnecessary.

I just picked that PSU because it had 8 SATA connectors. Usually I recommend the Corsair CX430 Builder Series, but it only has 4, and it really looks like he'll need more than that. I didn't want to add adapters in there, just to keep it tidy.

I'm curious why you don't like larger HDDs in RAID 5 or 6. I've been running an 8TB FlexRAID 5 server for about 9 months and I love it. I've set up unRAID and FreeNAS RAIDz1 but haven't used them for an extended period of time. I've never had trouble with any of them. I did read some articles on why RAID 5 is bad with HDDs larger than 1TB, but those ideas were largely debunked.
 

_Rick_

Diamond Member · Joined Apr 20, 2012
smitbret said:
I'm curious why you don't like larger HDDs in RAID 5 or 6. I've been running an 8TB FlexRAID 5 server for about 9 months and I love it. I've set up unRAID and FreeNAS RAIDz1 but haven't used them for an extended period of time. I've never had trouble with any of them. I did read some articles on why RAID 5 is bad with HDDs larger than 1TB, but those ideas were largely debunked.

It's not the size of the disk that matters (short of economic reasons: 3TB currently has the best cost-per-TB ratio, and bigger disks potentially mean a riskier rebuild, but I am not personally worried about that) but the number of disks and therefore the size of the array. Going for disks over 3TB is economically unwise, and going for RAID 5 with less than 1/6th redundancy is also getting scary, especially with larger disks, where a rebuild takes longer and the probability of a second disk failing terminally (and not just local read errors, which are the more common problem) keeps growing.
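
To put a rough number on that, here's a quick sketch. The 1e-14-per-bit URE rate is the usual consumer-drive spec-sheet figure and is itself an assumption; disagreement over the real-world rate is exactly what the "debunked" articles argue about.

```python
# Chance of hitting at least one unrecoverable read error (URE)
# while re-reading the whole array during a RAID 5 rebuild.
URE_PER_BIT = 1e-14  # consumer spec-sheet rate (assumption)

def p_rebuild_ure(data_read_tb):
    bits = data_read_tb * 1e12 * 8
    return 1 - (1 - URE_PER_BIT) ** bits

# A RAID 5 rebuild must read every surviving drive in full:
for n_drives, size_tb in [(4, 3), (6, 3), (6, 4)]:
    read_tb = (n_drives - 1) * size_tb
    print(f"{n_drives} x {size_tb}TB RAID 5 rebuild reads {read_tb} TB: "
          f"P(>=1 URE) ~ {p_rebuild_ure(read_tb):.0%}")
```

The probability climbs with both disk count and disk size, which is the point above; a second parity disk (RAID 6/RAIDz2) changes the picture, since a single URE during a rebuild is then no longer fatal.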

As for the economics: in the end it's a balancing act between the added cost of going with more expensive disks and going with a platform that more easily accepts more disks. Since I started out with a case that can take a large number of disks, I'm a proponent of going with more, but cheaper, disks, to fully exploit the advantages of RAID.

As for the PSU: the Seasonic S12G-450 is also rated Gold and comes with modular cables, allowing 8x SATA. But it looks like a pretty rare or new product; availability is pretty much non-existent in the EU.
There's also a 350W Triathlor Eco from Enermax that has 6x SATA and modular cabling. With a second cable set it might be an alternative, or you could use IDE(Molex)-to-SATA power adapters, which people tend to have lying about, as they used to come bundled with many mainboards and PSUs in the past.
 

smitbret

Diamond Member · Joined Jul 27, 2006
_Rick_ said:
The HP MicroServer looks a bit small to host 10+ TB of storage, and it has no encryption acceleration because it's based on a Celeron. The proprietary form factor makes it impossible or expensive to upgrade the platform, and the price isn't that great, especially considering what you can get for 50-100 dollars more and the penalty you currently pay for 4TB disks.

That's kind of my thought, too. You could get 10TB out of it with 3TB HDDs, but that would mean saying goodbye to redundancy. I don't get where a dual-core A4-4000 is ridiculously overpowered, either. ZFS is simply the best file system for protecting your files, and if the OP's friend is that serious about archival storage of photos...

The WHS 2011 idea isn't a bad one. I use it with FlexRAID and like the ability to use the Windows OS and software. It's nice to have internet access to the server, too. Too bad the built-in backup system won't let you back up anything more than 2TB, so you'd need to look into different options there.