
NAS Build, Input Requested

I have a few smaller external HDs attached to my router, which I use mostly for backing up the computers in the house, as well as for streaming some media (which I don't have backed up elsewhere, but it's all replaceable if I need to).

But performance sucks. (~7MB/sec for wireless transfers and ~12MB/sec for wired.) The last full backup I did of my laptop over WiFi took 3 days.

Transfers between the computers on the network are around 15MB/sec for WiFi and 60+ for wired. So I'm thinking it's time.

Motherboard - FoxConn D70S-P
Case - MI-008 ITX case.
Hard Drives - 3x 2TB Toshiba DT01ACA200

Add some generic RAM and a few cables, throw FreeNAS on it (booting from a USB drive), and it should be good, right?

My questions/concerns: (Any other nitpicks are welcome too.)

1) Hard drive selection. I'm... pretty cheap. Is spending the extra money on WD Reds or a "NAS-oriented" HD really worth it? I have three computers with a combined total of about 1.5TB of data to back up, but after that, I'm just doing differential backups to the tune of a GB or two a day, and streaming the occasional movie. Not really heavy use. (At least it doesn't seem like heavy use to me.)

2) Is that a powerful enough CPU? Is 4GB of RAM enough?

3) ZFS is nice and all, but I understand it's also more error prone and RAM hungry. Since the motherboard doesn't support ECC RAM, should I stick with UFS?

4) The motherboard doesn't support RAID. ZFS storage pools would make that a moot point, but will FreeNAS do a software RAID-5 with UFS?
 
1) No idea

2) Previously I was running a Pentium 4 in a NAS, and I saw Samba performance at around 30-40MB/s. When I checked with top, I found the CPU completely maxed out. Now I use an Intel i5-2300 (Sandy Bridge) and I can get transfers around 100MB/s for a large file across a 1Gbit/s link, and the CPU will average 38% and peak at 45%, so near enough an entire core used. I don't know how your Celeron compares, but hopefully you can look that up and see the likely impact on performance.

3) I use ext4 (Linux) with software RAID. ZFS is by and large a good idea with its checksumming and expansion options, but when I made my array it certainly wasn't ready for use on Linux in a performant way. Depending on your backup software, you may find it already uses a form of error correction that allows it to recover from some errors, so you may not need to worry about protection at the filesystem level as well.

4) FreeNAS or Linux will allow the creation of a software RAID device, as incidentally can Windows if you want to try that. You shouldn't have any issues getting the RAID set up in software; hardware is certainly not required, nor do I think it's desirable, since a hardware failure on the motherboard could destroy the array, whereas a software array can be rebuilt on an entirely different machine.
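For reference, either flavor of software RAID mentioned here is a one-liner to create. A hedged sketch only; the device names (ada0-ada2, sdb-sdd) and the pool/array names are illustrative, not anything from this thread:

```shell
# ZFS single-parity pool (RAIDZ1, roughly RAID5) on FreeNAS/FreeBSD:
# zpool create tank raidz ada0 ada1 ada2

# Linux software RAID5 with mdadm, with ext4 on top:
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# mkfs.ext4 /dev/md0
```

Either way the array metadata lives on the disks themselves, which is exactly why it survives a motherboard swap.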
 
I have a MI-008, and putting three drives in there is asking a bit too much of it. Given that it seems like you want to save money, I would probably just get a MicroATX case like the Cooler Master N200 for $40 AR and a CX430 for another $40.

The Celeron on that board is an Ivy Bridge, so it should be reasonable even at such low clocks. Don't forget to get a SO-DIMM for the board like this G.Skill 4GB module.

I think the HDDs are fine. You'll be running RAID after all.

To answer your questions:

1) See above

2) Yes and yes

3) ZFS gives you more protection against dodgy equipment than UFS on top of software RAID. ZFS writes checksums, checks integrity on read, and will automatically fail out disks that have too many errors. ZFS and UFS + RAID are equally vulnerable to bit flips in memory. ZFS does love memory, but you can tune the size of the ARC (its in-memory read cache) so that it uses less. Obviously you'll take something of a performance hit because you're caching less, but it should be no worse than UFS + RAID.

4) Yes, you can easily do a software RAID5 in FreeNAS and then put UFS on top of that. I think ZFS is an overall better choice for a new installation though.
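If it helps to see the checksum machinery from point 3 in action, these are the stock ZFS commands; the pool name "tank" is illustrative:

```shell
# zpool scrub tank        # read every block and verify its checksum
# zpool status -v tank    # scrub progress plus per-disk READ/WRITE/CKSUM counts
# zpool clear tank        # reset the error counters after fixing the cause
```

FreeNAS can also schedule scrubs from the web UI, so you rarely need to run these by hand.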
 
Re: 1) I'm not sure either, but I've read about this a little bit for thinking about a similar project for myself. If you're going to be using RAID/unraid/zraid-whatever, it sounds like the TLER on the Reds and NAS oriented drives might be useful.

I think you used to be able to activate TLER on the Greens, but that functionality was disabled on the newer Greens.

Anyway, here's some stuff I've been reading in case you haven't seen it yet, and it's helpful:

Here's Ganesh's summary:
http://www.anandtech.com/show/6157/...iew-are-nasoptimized-hdds-worth-the-premium/5
Sort of ambiguous...low on concrete reasons for why he's sure it's a good deal...

Tom's finds that they definitely run cooler and draw less power than Greens:
http://www.tomshardware.com/reviews/red-wd20efrx-wd30efrx-nas,3248-8.html

PCPer gives some more concrete reasoning why you might be interested in TLER, although I don't have enough experience to evaluate the argument critically:
http://www.pcper.com/reviews/Storage/Western-Digital-Red-3TB-SATA-SOHO-NAS-Drive-Full-Review
The moral of this story is that typical consumer grade drives have data error timeouts that are far longer than the drive offline timeout of typical RAID controllers, and without some form of TLER, two bad sectors (totaling 1024 bytes) is all that's required to put multiple terabytes of data in grave danger.
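On the TLER point, smartmontools can query (and, on drives that allow it, set) the SCT error-recovery timeout directly. A sketch only; /dev/ada0 and the 7-second value are illustrative:

```shell
# smartctl -l scterc /dev/ada0        # show current read/write recovery timeouts
# smartctl -l scterc,70,70 /dev/ada0  # cap both at 7.0s (units are 100ms)
# The setting is typically lost on power cycle, so it has to be reapplied at
# boot, and many consumer drives reject the set command entirely.
```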
 
Re: 1) I'm not sure either, but I've read about this a little bit for thinking about a similar project for myself. If you're going to be using RAID/unraid/zraid-whatever, it sounds like the TLER on the Reds and NAS oriented drives might be useful.

Software RAID is generally much more tolerant of long timeouts than hardware RAID. ZFS is particularly OK with it. At work we run some ZFS boxes with 48 3TB Green drives and it doesn't drop them.
 
2) Probably. For a basic file server serving a few users, mostly handling media files and some documents, the RAM needs of ZFS are much exaggerated. You don't want to get 1GB, but 4GB should be fine. Start doing lots of random IO, deduping, compression, serving many concurrent users, etc., and you'll want more, though. It's kind of hungry, since it was designed to be the foundation of a business file server, but RAM is cheap enough that it's not a problem for a basic NAS these days (when 4GB is basically the least you'd put in anything...).

3) ZFS is not more prone to errors, but it is more prone to finding them. It is not immune to the problems ECC is meant for, not by a long shot, but it's a good step in the right direction, and it has been used to help diagnose bad hardware and software over the years it has been in use.

4) Unless you're using Windows, hardware RAID support is not a big deal.

Case: What about the Fractal Design Node 304?
 
1) Hard drive selection. I'm... pretty cheap. Is spending the extra money on WD Reds or a "NAS-oriented" HD really worth it? I have three computers with a combined total of about 1.5TB of data to back up, but after that, I'm just doing differential backups to the tune of a GB or two a day, and streaming the occasional movie. Not really heavy use. (At least it doesn't seem like heavy use to me.)

No, IMO. See my NAS build in my sig. All of my drives are Seagate 3TB drives. Nothing fancy. Unraid spins down any of the drives not in use....
 
Well, I pulled the trigger on the motherboard and RAM, as well as the bits and pieces (drive cage, sff psu, fans, etc) to build a plywood box I can bolt to the underside of my workbench.

I'll be figuring out how stuff works using older smaller spare drives I have sitting around, and then drop in some 2TB drives after another payday or two - definitely going with ZFS. Thanks for the advice, everybody.

I do have one semi-related question. On my old D-Link NAS box, I had borked UNIX permissions render a very large folder full of DVD rips unreadable. I was able to ssh into it and mount the shared file system as a local drive, chown/chmod, and all was well with the universe. If that happened again (or something similar), would FreeNAS let me do the same thing?

Thanks.
 
I do have one semi-related question. On my old D-Link NAS box, I had borked UNIX permissions render a very large folder full of DVD rips unreadable. I was able to ssh into it and mount the shared file system as a local drive, chown/chmod, and all was well with the universe. If that happened again (or something similar), would FreeNAS let me do the same thing?

Absolutely. You enable SSH access (actually it might be on by default) and do whatever command-line stuff you want. It's possible to break parts of the web UI if you change configuration files in ways that it cannot parse, but cp, mv, mkdir, chmod, chown, chgrp, etc. inside a share won't cause any problems.
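The repair itself is just standard shell work once you're in over SSH. A minimal sketch, with a throwaway directory standing in for the real share (on an actual box you'd operate on your dataset path, e.g. /mnt/tank/media, and the modes and the media user/group are illustrative):

```shell
share=$(mktemp -d)                 # stand-in for the real share path
mkdir -p "$share/dvds/movie1"
touch "$share/dvds/movie1/title.vob"
chmod 000 "$share/dvds/movie1/title.vob"   # simulate the borked permissions

# Directories need the execute bit to be traversable; plain files don't.
find "$share/dvds" -type d -exec chmod 775 {} +
find "$share/dvds" -type f -exec chmod 664 {} +
# As root you would also repair ownership, e.g.:
# chown -R media:media "$share/dvds"

stat -c '%a' "$share/dvds/movie1/title.vob"   # GNU stat; FreeBSD: stat -f '%Lp'
```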
 
Update: So, I got the hardware, been tinkering with the software config. The case is built except for a few finishing touches and a coat of paint. (Even with a printed template for pilot holes, I managed to misplace two of the four motherboard standoffs, dadgummit.)

And I still have to screw it to the underside of my workbench. (Where it will be pretty much out of sight/mind.) So right now it's sitting on my bedroom floor.

It's... pretty fast. Although speed varies a lot, depending on... factors. I'll try to make sense of the data and post a graph later.
 
Cool, glad it went together alright. What software config did you end up going with?

I ended up with FreeNAS and ZFS.

I had some weird performance issues with iSCSI though, so after a bit of research, and reading that iSCSI, FreeNAS, and ZFS have issues (recommendations on the FreeNAS forum included such wonderful tips as "use UFS," "add a dedicated SSD to share as an iSCSI device," and "get 64GB of RAM as a cache"), I scrapped it and went with a CIFS share. The backup takes a bit longer, but not a huge amount - and the limiting factor still appears to be my PC or Windows backup - task manager shows that the NIC is spending more time idle than not, and when it's running, it's only about 1/3 the speed of a simple drag->copy.

I manually did a couple of ZFS scrubs too - it crunched through the data at about 220MB/sec with CPU use in the mid 30%s, so the Celeron is doing its job well enough. But its idle temps, as reported by sysctl, are crazy-high (in the 70s C, which, while well below TJmax, is way higher than my 3570K or my Celeron 847 at load). The BIOS confirms it. I'll probably start by reapplying thermal compound and reseating the HSF, but not tonight.

Incidentally, I added an alias to my /etc/csh.cshrc file

alias temp 'sysctl -a | grep "dev.cpu.*.temperature"'

It gives me a nice little "temp" command I can type in at any prompt.
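If you ever switch your login shell to sh/bash, the alias syntax above won't carry over (it's csh/tcsh-specific); a rough equivalent there is a function. Note dev.cpu.*.temperature is a FreeBSD sysctl and only appears once the coretemp(4) module is loaded:

```shell
# sh/bash function doing the same thing as the csh alias above
temp() { sysctl -a | grep "dev.cpu.*.temperature"; }
```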
 
You're not alone in having performance issues with iSCSI on top of ZFS. It is basically horrible unless you have a ZIL device, and even having one only makes it less horrible; it definitely does not venture into the realm of "good". This is true on Linux, FreeBSD, and even Solaris, though Solaris 11 is much better in this regard.

CIFS or NFS is the way to go IMHO unless you absolutely 100% need block access.
 
I'm also using AFP for Time Machine, with my Macbook, although the performance is still pretty good. (Averaging around 80MB/sec writes on big files, 60 on more normal batches. Reads around the same.)

It looks like it's single threaded though - it completely takes over a single CPU core. Might even be CPU limited.

The CPU temps are better now:

There was a spacer around the die, under the HSF, that was supposed to distribute pressure to the chip instead of the die. Which is cool, except the die wasn't actually making contact with the heat sink. I removed the spacer and applied some A.S.3. I'm going to let it bake in for a day or two, but it's already idling at 70C instead of 80C, so I'm a little less freaked out.

I wonder if I can find an aftermarket HSF for this thing...
 
1GB of RAM for every 1TB of drive space on ZFS.

You have 3x2TB for a total of 6TB, so you should ideally use at least 6GB of RAM if you're not using a separate cache drive.
I'd go 8GB... as FreeBSD is a memory-heavy OS.

This is the optimal config if you're after speed.
 
I'm also using AFP for Time Machine, with my Macbook, although the performance is still pretty good. (Averaging around 80MB/sec writes on big files, 60 on more normal batches. Reads around the same.)

It looks like it's single threaded though - it completely takes over a single CPU core. Might even be CPU limited.

Netatalk, like most file servers, spawns one thread per client, so yeah, I can see you being CPU limited there. It's fast because, like CIFS, it doesn't insist on synchronous writes.

1GB of RAM for every 1TB of drive space on ZFS.

You have 3x2TB for a total of 6TB, so you should ideally use at least 6GB of RAM if you're not using a separate cache drive.
I'd go 8GB... as FreeBSD is a memory-heavy OS.

This is the optimal config if you're after speed.

That's not really true unless you're using compression and/or dedup. All you have to do is limit the size of the ARC to something like 75% of your physical RAM and you can run many TB of vanilla storage on 4GB of RAM. You won't get as much caching, but that's not critical.
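For concreteness, here is what that cap works out to on a 4GB box. The tunable name vfs.zfs.arc_max is the real FreeBSD loader knob, but the 75% figure is the rule of thumb from this post, not a hard requirement:

```shell
ram_bytes=$((4 * 1024 * 1024 * 1024))   # 4GB of physical RAM
arc_max=$((ram_bytes * 3 / 4))          # cap the ARC at 75%, leaving 1GB free
echo "$arc_max"                          # 3221225472 bytes (3GB)

# On FreeNAS/FreeBSD this would then be persisted as a loader tunable, e.g.:
# echo "vfs.zfs.arc_max=\"$arc_max\"" >> /boot/loader.conf
```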
 
That's not really true unless you're using compression and/or dedup. All you have to do is limit the size of the ARC to something like 75% of your physical RAM and you can run many TB of vanilla storage on 4GB of RAM. You won't get as much caching, but that's not critical.

That is interesting to know...
Still learning the complete ins and outs of FreeNAS.

I was told, though, that that was the best ratio... hence why I upgraded from 8GB to 16GB, since my NAS was handling a 6x4TB RAID-Z array.

Which is still SLOWWWWWWWWWWWWWWWW compared to my other arrays.
Well, I get 115-125MB/s on a R0 off a dedicated controller vs the 80MB/s my RAID-Z performs at.
 
Well of course RAIDZ (or any parity based RAID like RAID5) isn't going to perform as well as a straight up striped RAID0. The RAID0 simply has to do less work.

That being said, 6 drives should give you way more performance in either instance. You should easily be able to push ~400 MB/s on a RAID0 and ~80% of that on a RAIDZ or RAID5. How are you testing?
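One way to answer that is to take the network out of the picture with a local sequential write on the server itself. A minimal sketch; the 256MB size is illustrative (on a real pool use a file larger than RAM so the ARC can't absorb the whole write), and conv=fsync plus stat -c are GNU-isms:

```shell
testfile=$(mktemp)
# Write 256MB of zeroes and flush it to disk before dd reports its rate.
dd if=/dev/zero of="$testfile" bs=1M count=256 conv=fsync 2>&1 | tail -n 1
size=$(stat -c '%s' "$testfile")        # on FreeBSD: stat -f '%z'
rm -f "$testfile"
echo "$size bytes written"
```

If the local number is well above 125MB/s, the 1Gb link is the cap; if it's down near 80MB/s, the array (or controller) is.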
 
Well of course RAIDZ (or any parity based RAID like RAID5) isn't going to perform as well as a straight up striped RAID0. The RAID0 simply has to do less work.

That being said, 6 drives should give you way more performance in either instance. You should easily be able to push ~400 MB/s on a RAID0 and ~80% of that on a RAIDZ or RAID5. How are you testing?

Large file transfer tests...

Using a 1TB transfer moving from one server to another... the R0 will easily hold and push about 115-125MB/s max on the 1Gb Ethernet.

On the RAID-Z it seems to cap at around 60-80MB/s no matter what I do... that's even with LAGG enabled...

I figured it was probably a limitation of the onboard IO controller.
 