
homebuilt NAS (for backup): HW recommendations?

jwalker46

Member
After reading discussions here about the "professional" (no-name hardware) NAS units being sold, I don't quite understand why it isn't better to build your own. I would like to back up three PCs to a server, to keep the backups physically separate from the originals. I will probably use 2-4 Seagate 7200.9 250GB HDDs (I would normally opt for the less expensive 1.5Gb/s SATA or IDE 7200.8 drives, but the price difference today is minimal). The OS will be Linux.

Questions follow.....naturally ;-)

What is the minimum CPU/mobo/RAM required for this purpose, i.e. to handle backups over a gigabit LAN without creating any bottleneck? (Can a Celeron or Sempron handle this task?)

Any suggestions on which Linux distro would be easiest to setup?

In general, any other hardware/network advice would be welcome.

Thank you !
 
For only three computers I think a cheap Celeron / Sempron processor will have no trouble keeping up with the backup. You're going to be limited by network or hard disk performance before CPU. I just built a system for this purpose with:

~2.5Ghz Celeron (socket 478)
1GB RAM
Gigabyte GA-8IKHXT server board
(integrated video, dual GigE, 2x 64bit PCI-X slots)
3Ware 4 port Raid Card (PCI-X card)
4x 400GB WD Raid-specific HDs (1.2TB RAID5 disk)

I realize the mobo + hardware RAID card is rather expensive, but I think RAID is required for a backup system. If I were going to build this on a lower budget, I'd try to set up a software RAID5 system using Linux and 4 HDs on the motherboard.
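For reference, here's roughly what that software-RAID5 route looks like under Linux using mdadm (the standard md RAID tool). The device names and filesystem choice below are examples, not from this post -- check your own device names before running anything like this, and note these commands need root and will destroy existing data on the drives:

```shell
# Create a 4-drive software RAID5 array from four SATA partitions
# (device names are examples -- verify yours first!)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Put a filesystem on the array and mount it
mkfs.ext3 /dev/md0
mkdir -p /backup
mount /dev/md0 /backup

# Check array health / rebuild progress
cat /proc/mdstat
mdadm --detail /dev/md0
```

The nice part of doing it in software is that the array isn't tied to any particular controller; if the motherboard dies, the drives can be moved to another Linux box and reassembled.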

In my experience, there is a certain overhead when copying a large amount of small files. If I copy one large file (100s of MB or more) on GigE, I can sustain rates well over 200Mbit/sec. If I try to copy a large group of small files (like user directories on a server) then my transfer rate is well under 100Mbit / sec. In this case I'm obviously not network-limited, and when I check the system my CPU is nowhere near 100% utilization.
I think that the process of copying a bunch of small files causes a LOT of extra disk utilization, so the disk becomes the bottleneck.
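One way to see that per-file overhead locally is to write the same number of bytes once as a single large file and once as thousands of small files. This is only a rough local-disk sketch (it says nothing about the network side, and timings will vary wildly by machine), but the gap is usually obvious:

```python
import os
import shutil
import tempfile
import time

def write_one_big(dirpath, total_bytes):
    # One large sequential write: a single open/close and one metadata update
    with open(os.path.join(dirpath, "big.bin"), "wb") as f:
        f.write(b"\0" * total_bytes)

def write_many_small(dirpath, total_bytes, file_size):
    # The same bytes spread over many files: each file costs an extra
    # open/close plus directory and inode updates -- that's the overhead
    chunk = b"\0" * file_size
    for i in range(total_bytes // file_size):
        with open(os.path.join(dirpath, "f%05d" % i), "wb") as f:
            f.write(chunk)

total = 20 * 1024 * 1024  # 20 MB either way

d1 = tempfile.mkdtemp()
t0 = time.time()
write_one_big(d1, total)
big_secs = time.time() - t0

d2 = tempfile.mkdtemp()
t0 = time.time()
write_many_small(d2, total, 10 * 1024)  # 2048 files of 10 KB each
small_secs = time.time() - t0

print("one big file: %.3fs, many small files: %.3fs" % (big_secs, small_secs))
shutil.rmtree(d1)
shutil.rmtree(d2)
```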

Linux Distro:
Try CentOS. It's the free rebuild of Red Hat Enterprise Linux. It's meant for ultra-stability rather than providing all the latest patches and performance tweaks for desktop use like Fedora Core, SuSE, and Mandrake. Of course, I think any of them would do the job well. Does anyone out there have comments on which would be the easiest to configure?? (Assuming you're relatively new to Linux)


-Knavish
 
I'd look at a MB with built-in video and a good gigabit networking + RAID implementation; RAID 5 if possible. I wouldn't bother with PCI-X, as that would take you into server country, raising your costs and reducing your choices. If I were doing hardware RAID, I'd go PCIe (4x).

nVIDIA has a good GigE networking implementation (except when the MB manufacturer screws it up and attaches it to the PCI bus), good onboard video of course, and a passable RAID, even RAID 5 on board, at least on the socket 939 NForce 430. It's up to you to compare nVIDIA's RAID against Linux software RAID; having onboard RAID 5 at least increases your options. (There are new chipsets coming out, NF 5, which will also have RAID 5 if you care to wait.)

I haven't had any problems with SATA I, II, Maxtor drives, and NForce 430 on my Asus A8N-VM CSM, and have had good networking results. I don't like the overclocking options, so probably wouldn't buy another one, and would buy an MSI instead.

However, I haven't had much luck with a couple of Linux distributions on this MB with a dual core. Fedora 4 x64 hung during bootup after installation. Fedora 5 x64 didn't show a mouse cursor. But I'm a Linux n00b, and had installed the first FC 5, so these problems might be fixable/fixed. FC 4 x64 ran fine in VMWare Server under 2003 x64, but that'd be a silly way to run Linux for performance.

I'd plan for 3-5 drives: OS on 1 IDE, the rest on SATA. If the IDE drive is large, it can also function as secondary backup for some critical files. I'd get a case that can handle that many drives with air cooling. Cases / cooling have been discussed a few times here -- search. Also watch out for power requirements for that many drives at startup. Some brand-new "450 Watt" PSUs come with as little as 15A on 12V; considering what the CPU, etc., draw, this can be insufficient to spin up the HDs.
 
Wouldn't the onboard CPU cache affect performance of the data transfer? Early on I was told this is important, and that it's why Xeons and Opterons have more than desktop CPUs. Maybe I heard wrong.
 
For a NAS unit, you should realize that 128MB of RAM is probably more than enough unless you're running Windows Server, which should be fine with 256MB. You should consider power consumption as well as stability and chipset support for the OS. Most professional NAS devices, like Snap servers and up, would surprise you with how low their system specs are. The only reason you'd need a powerful processor (like a 2GHz Celeron and up) is if you're running heavy-duty database applications.
 
Originally posted by: Madwand1
I haven't had any problems with SATA I, II, Maxtor drives, and NForce 430 on my Asus A8N-VM CSM, and have had good networking results. I don't like the overclocking options, so probably wouldn't buy another one, and would buy an MSI instead.

However, I haven't had much luck with a couple of Linux distributions on this MB with a dual core. Fedora 4 x64 hung during bootup after installation. Fedora 5 x64 didn't show a mouse cursor. But I'm a Linux n00b, and had installed the first FC 5, so these problems might be fixable/fixed. FC 4 x64 ran fine in VMWare Server under 2003 x64, but that'd be a silly way to run Linux for performance.

I've just installed FC5 on an A8N-VM CSM board ... it works OK, though you do need to install nVidia's driver to get the network running, and the onboard sound is currently hopeless. You can apparently compile the ALSA sound-server code by hand and get it to work, but that is a PITA. The onboard video also requires you to follow a few steps listed on nvforums.net to get the nvidia driver installed. You can use the line:
Option "HWCursor" "off"
in the Device section of the xorg.conf file to fix the invisible mouse cursor. (I'm 75% sure that's the right line)
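For anyone hunting for where that goes: the option belongs inside the Device section of /etc/X11/xorg.conf, something like the snippet below. The Identifier string is just an example; yours may differ, so match whatever your installer generated:

```
Section "Device"
    Identifier  "Videocard0"
    Driver      "nvidia"
    # Workaround for the invisible mouse cursor on this board
    Option      "HWCursor" "off"
EndSection
```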

Generally, though, I think that nVidia's support for Linux is good but not super-stable. For a cheap server I'd buy an Intel board and Celeron processor. (They're directly supported by the Linux kernel.) For a high-end server I'd go with an AMD chipset (8131).

That being said, you can run linux on an nvidia board without much trouble if you're willing to potentially throw out integrated network and sound.

------------

As for RAID cards, buy one if it's in your budget. I'm a fan of 3ware, though I'm sure other brands are good too. IMO if you're getting a $400 RAID card you might as well pay the extra $40 for 1GB of ram.


-Knavish


 
Yeah, after all, 1GB of RAM isn't that much anymore.

I remember back in the day when . . ..

(actually I don't, only been here since '89)
 
Originally posted by: Knavish
That being said, you can run linux on an nvidia board without much trouble if you're willing to potentially throw out integrated network and sound.

Thanks for your reply; after spending the time reading about Tuttle, I think I'd better try CentOS next. Your suggestions are useful in any case.

However, I wouldn't recommend the above to the OP -- getting a good gigabit implementation is IMO worth the effort here; and to me that means not accepting a downgrade to PCI unless absolutely necessary. FWIW, I've gotten raw networking benchmarks up to 110 MB/s (TTCP), and actual file transfers up to around 70 MB/s (RAID to RAID or cache to RAID). If you need to go Intel to get good onboard networking with Linux support, then do it, I'd say, rather than downgrading nVidia to PCI. You could get an add-on PCIe 1x gigabit card, but that'd add significantly to the cost.
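For anyone who wants to reproduce that kind of raw number before blaming their disks: the classic ttcp invocation looks like the sketch below (flags vary a little between ttcp builds, and the hostname is made up, so treat this as a starting point):

```shell
# On the receiving machine (the NAS), start a data sink:
ttcp -r -s

# On the sending PC, stream a test pattern at it
# ("nas-box" is a placeholder hostname):
ttcp -t -s nas-box

# ttcp prints the achieved throughput when the transfer ends.
# Compare that against real file copies: if ttcp is fast but
# copies are slow, the bottleneck is the disks, not the network.
```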
 
Originally posted by: bluestrobe
Wouldn't the onboard CPU cache affect performance of the data transfer? Early on I was told this is important and why xeons and opterons have more than desktop cpu's. Maybe I heard wrong.


That extra cache helps out when you're running code that will fit in the cache. 4MB of cache won't help you much when you're copying 10GB of files. When you're copying data you only look at it once -- from the network to the disk. Cache is only useful when you can either preload data for the CPU to crunch on or if the dataset you're working on will fit in the cache. Anyway, good drive controllers and network cards will copy their data directly to memory, so the CPU might not even see it. (Perhaps all it has to do is instruct the network card to copy its data to one memory address and the RAID card to read from that address and write to disk. -- I'm sure this is an oversimplification 🙂 )
 
Knavish,
Thanks for your comments. Re Linux, I understand that DSL and Puppy are easy to configure as servers, and will run quite well on Celeron or Sempron cpu's. I am puzzled, though, by your recommendation of Raid5. Why? The server exists to backup PCs, so I don't see the need for further redundancy, and since the backup will be daily, I don't see the need for faster writes, either. However, I may be missing something ;-) so please tell me!

Madwand1,
You may have better luck with another distro...many of the recent Asus boards have problems with Linux, all traceable to the nVidia drivers (see nVidia's forums or a Google search on the board + "linux"). Be sure that the distro runs the latest kernel (most distros don't update to a new kernel quickly). Also, you can try Mepis (or any other live CD) to see if you have any hardware/driver issues.
 
Originally posted by: jwalker46
Knavish,
Thanks for your comments. Re Linux, I understand that DSL and Puppy are easy to configure as servers, and will run quite well on Celeron or Sempron cpu's. I am puzzled, though, by your recommendation of Raid5. Why? The server exists to backup PCs, so I don't see the need for further redundancy, and since the backup will be daily, I don't see the need for faster writes, either. However, I may be missing something ;-) so please tell me!

Crap I wrote a reply to this but it looks like I lost it before submitting...

I like RAID5 b/c of the extra reliability, but it's probably excessive for a budget backup system. With RAID5 the backup server will survive if one HD crashes. That means you can survive even if a PC and the server each have a single HD crash. Since the chances of that happening are rather low, using a single drive is reasonable for an inexpensive backup. RAID5 can increase your write performance, but that's not a big deal when you're just backing up 3 PCs (like you said).

One convenient thing about RAID is that it will lump all your drives together as one big volume, so you don't have to worry about which drive to put what data on, if your backup system has a few drives.

I'd definitely recommend against using RAID0 -- it makes all your drives into one big volume and gives good read / write speeds, but it has no redundancy. A RAID0 system fails if any of its drives fail, so a 2-drive system has roughly 1/2 the reliability of a single HD, 4 drives roughly 1/4, etc.
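To put rough numbers on that: assume each drive independently has a 5% chance of failing over some period (an illustrative figure, not a real drive spec). RAID0 is lost if any drive fails; RAID5 survives exactly one failure. A quick sketch of the arithmetic:

```python
from math import comb

def raid0_fail(n, f):
    # RAID0 is lost if ANY of the n drives fails
    return 1 - (1 - f) ** n

def raid5_fail(n, f):
    # RAID5 survives zero or one failures; lost on two or more
    p_none = (1 - f) ** n
    p_one = comb(n, 1) * f * (1 - f) ** (n - 1)
    return 1 - p_none - p_one

f = 0.05  # hypothetical per-drive failure probability
print("single drive fails: %.1f%%" % (100 * f))
print("4-drive RAID0 fails: %.1f%%" % (100 * raid0_fail(4, f)))  # ~18.5%
print("4-drive RAID5 fails: %.1f%%" % (100 * raid5_fail(4, f)))  # ~1.4%
```

The "1/2 the reliability" rule of thumb in the post is the small-f approximation: for small per-drive failure rates, a RAID0 array's failure rate is about n times a single drive's.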

For a basic system you could easily get by with a budget processor (Celeron / Sempron), 512MB of RAM, a single HD, and a pretty cheap motherboard. Having integrated GigE is good, having integrated video might save you a few dollars, and at least 4x SATA ports would be good for future expandability. Again, I'd recommend against mixing nVidia hardware (most AMD motherboards) with Linux unless you want to fight with potential compatibility problems.

-Knavish
 
A good backup server is very tempting to use as a shared file server, and that's what I'd end up doing (with a secondary problem to be solved -- where are the shared files backed up?). I made a small suggestion on this earlier. RAID 5 would somewhat mitigate one of the concerns about such a server, although RAID != backup, because it only guards against HW failures, not software, security, or human ones.
 
Don't forget something for backups. DVD is probably the cheapest right now, although it can be a PITA to backup everything. 🙁

If this is for work, just buy a dell. The PERC controllers are supposed to be pretty good.
 
Originally posted by: n0cmonkey
Don't forget something for backups. DVD is probably the cheapest right now, although it can be a PITA to backup everything. 🙁

If this is for work, just buy a dell. The PERC controllers are supposed to be pretty good.

I have a 6 port SATA Dell Cerc RAID 5 controller in my server and it has been working flawlessly so far. 🙂
 
Originally posted by: n0cmonkey
Don't forget something for backups. DVD is probably the cheapest right now, although it can be a PITA to backup everything. 🙁

If this is for work, just buy a dell. The PERC controllers are supposed to be pretty good.

The Dell SCSI systems will run many $1000s, and for some reason they won't let you configure a SATA RAID system with drives larger than 250GB. I guess they don't want people to outfit a <$3000 system with >1TB of RAID5 disk space. That might lose them a few sales on $10k SCSI-based servers.

I've had good luck with SATA RAID systems by Penguin Computing, which let you put drives bigger than 250GB in the system. Unfortunately they aren't dirt cheap either 🙂
 
Originally posted by: Madwand1
I'd look at a MB with built-in video and a good gigabit networking + RAID implementation; RAID 5 if possible. I wouldn't bother with PCI-X as that would take you into server country, raising your costs and reducing your choices. If I was doing hardware RAID, I'd go PCIe (4x).

nVIDIA has a good GigE networking implementation (except when the MB manufacturer screws it up and attaches it to the PCI bus), good onboard video of course, and a passable RAID, even RAID 5 on board, at least on the socket 939 NForce 430. It's up to you to compare nVIDIA's RAID against Linux software RAID; having onboard RAID 5 at least increases your options. (There are new chipsets coming out, NF 5, which will also have RAID 5 if you care to wait.)
I recommend using Linux software RAID 5 over any of the nForce/SI/whatever fakeraid solutions. The performance and stability of the array will be much better. Plus, Linux support for the fakeraid solutions can be sketchy at best.

If you're going to do hardware, do it right and get an Areca - or a 3ware if they ever get their PCIe product out the door.

I haven't had any problems with SATA I, II, Maxtor drives, and NForce 430 on my Asus A8N-VM CSM, and have had good networking results. I don't like the overclocking options, so probably wouldn't buy another one, and would buy an MSI instead.

Don't OC your server!!! 🙂

I'd plan for 3-5 drives. OS on 1 IDE, rest on SATA. If the IDE drive is large, it can also function as secondary backup for some critical files. I'd get a case that could handle so many drives with air cooling. Cases / cooling have been discussed a few times here -- search. Also watch out for power requirements for that many drives on startup. Some brand new "450 Watt" PSU's come with as little as 15A on 12V, considering what the CPU, etc., take, this can be insufficient to start the HD's.

Get Zippy, Fortron, Seasonic, or PC&C PSU. The Zippy 460w is around $95 and will easily handle 4 drives plus the rest of the hardware.
 
Originally posted by: EatSpam
If you're going to do hardware, do it right and get an Areca - or a 3ware if they ever get their PCIe product out the door.

I'd rather get an LSI MegaRAID than anything by 3ware. Some OSes have free/open-source RAID management tools for MegaRAIDs thanks to some documentation from LSI, but 3ware seems to be fairly antagonistic towards the F/OSS community.
 
I have a 6 port SATA Dell Cerc RAID 5 controller in my server and it has been working flawlessly so far

Turn the fans off in the box and let the Cerc get hot for a few days. Say goodbye to your data.

I honestly wish you RAID 5 freaks would get a clue. RAID 5 is less fault-tolerant than RAID 1, not more. If your controller fails with RAID 5, you are *screwed*, and so is your data, and I've had RAID 5 controllers of all prices and types fail on me. AS/400s, Compaqs, IBMs, you name it. Go RAID 1 and be done with it, because you don't have to worry about the controller puking and it's simple to set up.

Also, I use 3ware quite a bit in my Windows servers, and just noted they seem to have pretty extensive Linux support on their site, including 64-bit drivers for a pile of different distros. So what's your problem? LSI whores out their chips to anybody who wants to stick them on a card. 3ware doesn't.
 
Originally posted by: spikespiegal
I have a 6 port SATA Dell Cerc RAID 5 controller in my server and it has been working flawlessly so far

Turn the fans off in the box, and let the Cerc get hot for a few days. Say goodbye data.

Try it with a RAID1 controller too, if you want. I'd back up first if I were you, though.

I honestly wish you RAID 5 freaks would get a clue. RAID 5 is less fault-tolerant than RAID 1, not more. If your controller fails with RAID 5, you are *screwed*, and so is your data, and I've had RAID 5 controllers of all prices and types fail on me. AS/400s, Compaqs, IBMs, you name it. Go RAID 1 and be done with it, because you don't have to worry about the controller puking and it's simple to set up.

And I've seen plenty of RAID1 and non-RAID setups get wiped out by hardware problems. A defective controller (or CPU, or sometimes bad RAM, or even a flaky power supply) can wreck ANY and ALL data on the drives, regardless of RAID protection. RAID (of any type) is an uptime solution for drive failure (and sometimes a performance improvement), NOT a replacement for backups or something that gives you immunity to any and all hardware problems.

Personally, my HTPC blew up a few months ago when my NB chipset fan died. The NB (obviously) overheated, and apparently the data going over the PCI bus was getting corrupted sporadically. When it finally crashed the filesystem was torn to shreds on both the system drive and my RAID array; I got some of it back with data recovery software, but a number of the files had been corrupted. No amount of RAID would have helped me there.
 
Matthias, your point is well taken that hardware problems can trash your data... but what alternative is there for backups? Realistically, if you need to back up 200GB of data, where are you going to do it? On DVDs?? The time spent per day doing the backup is not practical. If you have a better idea (than a NAS) for backup, please tell us. (I'm not arguing... just asking.)
 
I've never seen a bad RAID controller trash my arrays yet.

For this, I wouldn't buy a RAID controller; I would run Linux software RAID (mdadm) with RAID 5 for the backup array, and just a single, small (10GB) drive for the OS. Also, a large, fast, modern proc won't make a difference, as this is going to be disk/network bound, not CPU bound. Get an old P3 550 with 256 or 512MB of RAM, and then a high-quality NIC.
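On top of that array, the nightly job to pull the three PCs' data over could be a simple cron-driven rsync, along these lines (the hostnames, user, and paths below are invented for illustration; each PC needs an SSH account reachable from the server):

```shell
#!/bin/sh
# Nightly pull backup of three PCs onto the RAID array.
# Hostnames, user, and paths are examples -- adjust for your LAN.
DEST=/mnt/raid/backup

for host in pc1 pc2 pc3; do
    # -a preserves permissions and timestamps, --delete mirrors
    # removals; rsync only transfers changed files, so after the
    # first run the daily job is quick even over a modest link
    rsync -a --delete "backupuser@$host:/home/" "$DEST/$host/"
done
```

Drop a script like that in /etc/cron.daily (or an equivalent crontab entry) and the backups take care of themselves.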


For larger backups, I use my Onstream 50GB tape drive (for my desktop) and a DLT 640 (640 gigs compressed per tape) for small servers. For a large set of backups, a tape library/changer is worth its cost.
 