Advise on home NAS/SAN device

CosmoJoe

Member
Jul 12, 2006
25
0
0
Greetings!

Hoping to get some feedback/experience. I am planning to piece together a DIY NAS device. Ideally, I would like a device/appliance that does nothing but serve files. My current setup at home for file storage is a Core 2 Quad box with two mirrored arrays via onboard RAID. The workstation runs Win2k8 Server and also acts as a VMware Server host (4 VMs currently), so it's doing quite a bit, and as a result it is extremely disruptive if I have to reboot this computer for any reason.

I would like to turn this workstation into a VMware ESX box in order to remove the OS overhead. File storage would be split off to a new computer/appliance. Since I am doing some home lab work with VMware, I am intrigued by the idea of using iSCSI storage for the VMs, provided the storage is protected by RAID redundancy (ESX is very picky about local RAID hardware).

So really, my quandary boils down to this: do I buy a case with hot-swappable drive bays, a budget CPU/motherboard, and a decent RAID controller, and then look at something like Openfiler as the OS, or do I go with an appliance? The appliance I was looking at that seemed to fit all of my needs was:
http://www.newegg.com/Product/Produc...82E16822122024 (Netgear ReadyNAS Pro). However, the main drawback with an appliance is that you are locked into the hardware, and in the case of this specific device it is incredibly expensive ($1500). The main advantage I can see is that you get decent support and, depending on the device, a user community you can turn to for additional help.

As an aside, has anyone here done much with iSCSI as it relates to running a VM on an iSCSI target versus local storage? Again, I am mainly looking at this as an option to be able to add more VMs down the road and have redundant, protected central storage, but I imagine there has to be a performance hit running your storage over the network versus locally.

Thanks in advance for any advice/feedback!
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
I usually recommend an appliance to people who are concerned with power usage; a PC really can't beat them on that. If you want performance at a lower cost, a PC is the way to go. Also consider looking at sites that sell refurbished servers. I have seen a lot of high-end servers going cheap. Ones like this are hard to beat:
http://www.geeks.com/details.asp?inv...OHDD-R&cat=SYS

For $88 you get:
Dual Intel Xeon 2.4 GHz single core processors
1 GB DDR RAM (8 GB maximum)
Intel 10/100/1000 82540EM Gigabit Ethernet controller
Single-channel Adaptec AIC-7901 wide Ultra-320 SCSI controller
Cooling fan built-in for hard drives
Removable hard drive bays

Add a SATA PCI card, some drives, and an OS, and it is ready to use.
 

CosmoJoe

Member
Jul 12, 2006
25
0
0
Thanks for the replies!

I did some reading on comparisons between FreeNAS and Openfiler, and I am leaning towards Openfiler if I go the route of a home-built system.

Modelworks~
That is a great suggestion about the older servers. I didn't even really consider that route :) I would ideally like a case with 8 (or more) hot-swappable bays.

I think at the end of the day I will probably just jump over to a site like Newegg, slap together some configurations with a decent RAID controller, and see how a PC pans out cost-wise compared to a NAS appliance.

It is such a pity... all of the decent appliances with 6+ bays are crazy expensive.
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
Another one to look at is EON ZFS Storage.

The big drawback with the old servers is that the performance sucks, and power consumption is probably going to be higher than that of an i3-530 based system.

If you want to go "crazy" you could get a Xeon X3430 and a Supermicro X8SI6-F (I haven't tested this yet, but my X8ST3-F is awesome with the onboard LSI 1068e). That gives you 14 SATA ports built in, 2x Intel NICs plus a Realtek KVM-over-IP NIC, and enough CPU to do just about anything you want on it within reason. The best value, by far, in the hot-swap enclosure world comes down to the Norco RPC-4220 and RPC-4020 cases. The enclosures aren't the sturdiest, but price-wise they cost about the same as 3x 5-in-3 hot-swap bays.
 

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
Be careful with an iSCSI solution that uses ietd. Right now there is an issue where using an ietd-based iSCSI target with vSphere causes data queue overflows, which crash the VMs.

When you build the NAS, I would give it a second Gb NIC devoted solely to serving your vSphere box. Start out by using NFS datastores instead of iSCSI and keep an eye out for ietd to be fixed (if it hasn't been already and I'm just behind the times).
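If you go the NFS route, the export on the Linux side is pretty simple. A rough sketch of what it might look like (the /srv/vmstore path and the 192.168.2.0/24 subnet are just placeholders; ESX needs rw plus no_root_squash to manage the VM files):

# /etc/exports -- hypothetical datastore export for the vSphere box
/srv/vmstore  192.168.2.0/24(rw,no_root_squash,sync,no_subtree_check)

# reload the export table after editing
exportfs -ra

Then just point a new NFS datastore at that path from the vSphere client.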

EDIT:

And you don't need to go nuts bulding a NAS. Openfiler or FreeNAS will run just fine on a simple system. Since the server will run 24/7 I would look at finding a good 45W dual core CPU and about 2GB of RAM. Software RAID will work fine as well. For your NFS share that you'll host VMs on, use a 512KB chunk size for the RAID volumes, then format it with XFS and use a large block size like 512KB for best performance. I would also mount the XFS volume using noatime to help the speed.
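Something like this is roughly what I mean for the VM volume (device names and the mount point are only examples; adjust for your drives):

# create the VM array with a 512KB chunk (mdadm's --chunk is in KB)
mdadm --create /dev/md1 --level=5 --raid-devices=4 --chunk=512 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# format with XFS and tell it about the RAID geometry (su = chunk size, sw = number of data disks)
mkfs.xfs -d su=512k,sw=3 /dev/md1

# mount with noatime, and put the same options in /etc/fstab
mount -o noatime /dev/md1 /srv/vmstore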
 
Last edited:

CosmoJoe

Member
Jul 12, 2006
25
0
0
Thanks for the info!

child of wonder~
Do you have any recommendations with regards to what type of RAID array to use for hosting VMs? The newer RAID controllers support a wide range of RAID levels, but I was looking mainly at a choice between RAID 5 and RAID 6.

Additionally, would it make sense to have an entirely separate RAID array for general file sharing?
 

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
RAID 10 is going to give the very best performance, but you're going to spend the most money on drives and will use up 3.5" bays fast.

Yes, I would have a pair of drives (2.5" drives inside a dual 2.5" to single 3.5" adapter work great for this) in RAID 1 for your NAS OS. Then a RAID array for your VMs and another array for your file share.

Having the VMs and file share on the same RAID array will hurt the performance of both. If the VMs need to do something heavily disk-intensive, you're going to get terrible file share throughput, and vice versa. VMs are very demanding on disks, so my advice would be RAID 10 or RAID 6 (compared to RAID 5 with a hot spare, RAID 6 puts more spindles to work and tolerates more drive failures).

Set up your NAS so it has one NIC dedicated to management and file sharing and one NIC for NFS/iSCSI. Give the NFS/iSCSI NIC an IP on a different subnet than you normally use so only it and the VMware box communicate on that wire.
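On a Debian-style NAS the storage NIC can just get a static address on its own subnet, something like this (interface name and addresses are only examples):

# /etc/network/interfaces -- dedicated storage NIC
auto eth1
iface eth1 inet static
    address 192.168.2.10
    netmask 255.255.255.0
    # no gateway on purpose -- only the VMware box lives on this subnet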

On your VMware box, dedicate one Gb NIC just to the vmkernel, which is what will talk to your NFS/iSCSI share. I'd also recommend having one NIC for the Service Console, one for VMotion, and at least one for the VM port groups. Of course, that may be overkill for your situation, so you can have the Service Console and VMotion share the same NIC.
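For reference, on classic ESX that vmkernel setup can be done from the service console with something along these lines (the same thing can be clicked together in the vSphere client; the vSwitch/port group names, NIC, and IP here are just placeholders):

# rough sketch: dedicate the second NIC to a storage vmkernel port
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "VMkernel-Storage" vSwitch1
esxcfg-vmknic -a -i 192.168.2.20 -n 255.255.255.0 "VMkernel-Storage"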
 

CosmoJoe

Member
Jul 12, 2006
25
0
0
Great info.

With regards to the VMs, nothing too demanding at all: two domain controllers running Win2008 R2 and two Win2003 servers, one running as a web server and the other as a MySQL database server (for a personal blog and phpBB). It is mostly a lab environment.

The main performance yardstick I need to meet is the ability to stream video to a few media center PCs.

As long as I have your ear, what are your thoughts on a good hardware RAID controller? I was looking at this:
http://www.newegg.com/Product/Produc...82E16816151023

I suspect this might be massive overkill; basically I just need something that will support at least 8 SATA drives and do hardware RAID. It looks like all the decent hardware cards sport an Intel IOP chip, and beyond that you pay for the number of ports, onboard RAM, etc.

EDIT: Bleh.. looks to be a bad time to shop for RAID cards, as it appears the next generation from many of the vendors is not out yet, but when it is, big price drops no doubt.
 
Last edited:

violupro

Banned
Jan 24, 2010
11
0
0
I run both FreeNAS and Openfiler. I run FreeNAS for a nice, fast, reliable file share.

I run all my VMs off Openfiler using iSCSI, and I couldn't be happier with it. I also mount that iSCSI target with Microsoft's free iSCSI initiator, which lets me index that store with various tools that cannot read network shares.
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
BTW, that Areca is a great card and highly desirable if you move to a big RAID array. The non-ix ARC-1600 series Areca cards do not have onboard expanders, and therefore they work with HP SAS Expanders. Practically speaking, my 8-port Areca 1680LP has over 40 drives connected at the moment and enough port space for 64 without using another expander.

I'd suggest that you look into a battery backup unit for that RAID card as well.

Realistically though, if you are looking to stream video, SATA + RAID 6 is going to be plenty so long as you have:
1. Intel Gigabit NICs on the server and client PCs (clients seem to be OK with Marvell too) plus a decent switch (a Dell PowerConnect or HP ProCurve will work)
2. A PCIe RAID card, or a PCI-X card in an actual PCI-X slot. RAID controllers in 32-bit PCI slots tend to do very poorly these days.

Also remember, with traditional hard drives, spindle count makes a big difference. I did the math one day and realized that it was costing me something like $47 per port to add a disk to my servers. Considering 2TB drives are settling into the $100/drive range, connectivity costs should not be underestimated, both in terms of current and future needs.
 

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
Honestly, you can get by with software RAID if you want to. Modern CPUs are so damn fast and RAM is so cheap that as long as you have a motherboard with several (6+) SATA ports you should be good to go. If you need more ports, there are eSATA expander bays and cheap PCI-E cards that add more SATA ports.
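Keeping an eye on a software RAID setup is easy too, for example (device name is just an example):

# quick health check of all md arrays
cat /proc/mdstat

# full detail on one array
mdadm --detail /dev/md0

# record the arrays so they assemble automatically at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf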

Right now I'm running a Debian 5.0 64-bit server with an AMD Athlon II X3 405e 45W CPU and 4GB RAM, running VMware Server 2.0.2. There are two 120GB 2.5" 5400RPM drives in RAID 1 for the OS, 3x 250GB drives in RAID 5 for the virtual machines, and 5x 1TB drives in RAID 5 for my file share. For VMs I have a 2008 R2 domain controller, a Windows 7 x64 box that works as a media streamer, and an Ubuntu 10.04 desktop for messing around with.

Host load average typically sits around 0.31 - 0.40 with about 750MB of RAM in use between the host OS and VMs.

Running hdparm on the virtual machine software RAID array while the 3 VMs are turned on gives me:

fs:~# hdparm -Tt /dev/md0

/dev/md0:
Timing cached reads: 3592 MB in 2.00 seconds = 1796.48 MB/sec
Timing buffered disk reads: 642 MB in 3.00 seconds = 213.82 MB/sec

However, if you want the absolute best performance and go with a hardware card, the one you posted will work great. A 3ware 9650SE-8LPML would also work well. Just make sure you get a battery backup unit for your RAID card and use a UPS so you don't lose data in the event of a power outage.
 

CosmoJoe

Member
Jul 12, 2006
25
0
0
You bring up some very good points. I think, based on your experience and feedback, I need to reconsider the software RAID route. I've done a little shopping around and there are Core i5 motherboards available, for example, with 10 SATA ports.

Furthermore, I admittedly have not fully researched the exact numbers for current software RAID versus dedicated hardware RAID, but I suspect that for my situation it might be a moot point anyhow. I am just not doing anything intensive enough to warrant the exorbitant cost of a hardware RAID solution. Right now I am getting between 30-40MB/s copying to my current RAID 1 file shares, which has been sufficient.

The other issue to iron out if I do go that route is how to handle redundant storage if I install ESX. ESX/ESXi seems to be very picky about disk controllers. In the lab we have here at work, for example, we have a Precision workstation, and I had to turn off the Intel RAID 1 in the BIOS for ESXi to see the disks.
I thought about giving Hyper-V a second look, but after my experience running VMware Server 2 for about a year, I really do not want to deal with a host OS and the headaches of rebooting it and having to bring down all the VMs running on it. Probably a good time to revisit the ESX compatibility list and see if there is something nice and low-cost in the controller department :)
 
Last edited:

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
Most motherboards with onboard RAID use "fake RAID," which means the OS relies on a driver to see the RAID array and all the RAID calculations are offloaded to the CPU. It's essentially software RAID, and ESX/ESXi can't see those RAID arrays.

If you'd like redundancy for ESX/ESXi, you'll need to use a hardware RAID card that is on the HCL at vmware.com. For a whitebox build, I would advise visiting http://www.vm-help.com//esx40i/esx40_whitebox_HCL.php and http://ultimatewhitebox.com/motherboard. There might be some motherboards with onboard hardware RAID that vSphere will recognize and use, but you'll have to do some research to find one. The same goes for which onboard NICs vSphere will see.

Research the motherboard for your vSphere server really well to make sure you don't run into any issues and to confirm which onboard NICs and RAID controllers will be recognized. You may find, however, that you need a hardware RAID card in order to install vSphere on a RAID 1 array.
 
Last edited: