First up, pictures!
Inside Shot w/ Left-Angle SATA Data Cables
Close Up of Drives & Cables
New Server & Old Server Comparison
Yesterday I copied over ~1.2TB of data from both my main computer and my old file server. I wanted to test transfer speeds and get some data onto the new server before I did a drive swap. Transferring from the old server ran at ~10MB/s, limited by its 100Mb Ethernet connection (100Mb/s ÷ 8 = 12.5MB/s theoretical, so ~10MB/s after protocol overhead is about right). The new server had no trouble handling this transfer while running newsgroup downloads at the same time.
Today, I powered down the system and swapped the single 750GB drive for a 2TB one, giving me a total of 12TB of raw space. I booted back into napp-it, which showed a new drive and a degraded array, as expected. However, it didn't auto-replace and auto-expand the array; instead it complained about a bad/missing drive label on the new drive.
My guess? ZFS wants new drives to be clear of any existing partitions before they're added to a pool - probably a safeguard against accidentally overwriting a disk that's still in use. Either way, I just had to run "# zpool replace server c2t5d0p0" from the command line to kick off the rebuild. Altogether it took 51 minutes to resilver the array, with no errors and no data issues afterwards. The array size expanded as well!
ZFS RAIDz1 Online Expansion - 51 Minutes (1.2TB data)
Old Array: 5x2.0TB, 1x750GB = ~3.31TB (usable space)
New Array: 6x2.0TB = ~9.70TB (usable space)
(RAIDz1 keeps one drive's worth of parity, and a mixed-size vdev is limited by its smallest member - so the old array could only use 750GB of each 2TB drive.)
I also looked at the rates the internal disks ran at during a rebuild on both servers - the old server managed ~4.5MB/s, while the new server ran at ~64MB/s. My guess is that the old server's bottleneck is its RAID controller's CPU; it simply can't recalculate the new parity data any faster.
On the new server, I'm guessing 64MB/s is the write limit of the 2TB drives during a rebuild - CPU stats showed ~82% idle, so I'm clearly not hitting a CPU bottleneck. Overall, 51 minutes to resilver 1.2TB of data is pretty quick! More data means longer rebuild times, though - scaling linearly, a full ~9.7TB array would take roughly 7 hours to rebuild onto a new drive.
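If you want to watch the per-disk rates during a resilver yourself, either of these works (5 is the reporting interval in seconds; "server" is my pool name):

# zpool iostat -v server 5        <- per-vdev/per-disk read and write bandwidth for the pool
# iostat -xn 5                    <- Solaris extended per-device stats, incl. %b (busy)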
Finally, here are new benchmarks with the full 6x2TB RAIDz1 array! The percentage in parentheses is the speed change versus the previous benchmark with 5x2TB + 1x750GB.
Test - Speed - %CPU - (speed change vs. previous benchmark)
RAIDz1, 6x2TB Samsung F4EG
- Seq-Wr-Chr - 79 MB/s - 98 (-7%)
- Seq-Write - 293 MB/s - 34 (-5%)
- Seq-Rewr - 175 MB/s - 26 (-3%)
- Seq-Rd-Chr - 93 MB/s - 99 (0%)
- Seq-Read - 567 MB/s - 26 (32%)
- Rnd Seeks - 561.8 seeks/s - 1 (-11%)
I expected all my benchmarks to go up! However, since the 2TB drives were previously limited to 750GB each, they were most likely only using the outer part of their platters - the fastest part - so most numbers barely moved. Sequential read, however, went WAY up; swapping out that old 750GB drive definitely made a difference!
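For anyone wanting comparable numbers: these came from napp-it's built-in benchmark, which as far as I know runs bonnie++ under the hood. A roughly equivalent manual run looks like this (the path and size are just examples - size should be ~2x your RAM so caching doesn't skew results):

# bonnie++ -d /server/bench -s 32g -n 0 -u root

(-d = directory on the pool to test, -s = test file size, -n 0 skips the small-file create tests, -u = user to run as, since bonnie++ refuses to run as root without it)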
Summary
So - hard to complain about my new setup! It's anywhere from 10-20x faster than my old server, with 5x the capacity.
I'd recommend a ZFS-based storage system to anyone who doesn't mind the extra effort and research required. I wouldn't consider any of the hardware I picked up enterprise or performance grade - in fact, I'd say it's all relatively slow. But compare this server to commercial products on cost or performance, and it's a landslide victory IMO!