Discussion Consolidating/improving/simplifying storage: HW RAID to ZFS

Railgun

Golden Member
Mar 27, 2010
Howdy all. Been a while since I’ve posted here. Figured I’d pick your storage brains, which are far smarter than mine.

So I've decided to take the plunge into ZFS. I'm trying to learn as much as I can, and I hope there's some low-hanging fruit/easy wins here. I'm consolidating/upgrading/whatever you want to call it my storage setup; in particular, moving from a DAS to a NAS. The current DAS is a RAID 5 on an Areca 1880i with 4x 2TB WD Blacks, 128k stripe on 4k blocks. Current performance is below.

The NAS replacing it is 6x 3TB WD Reds behind an Areca 1882ix-24 in JBOD mode, configured as raidz2 with lz4 compression and 128k record size, with two datasets behind it on an SMB share. FreeNAS is a VM with 8 cores and 24GB memory at the moment. The host has 64GB, but as one can imagine, I have a few other VMs; I'm only dedicating another 8GB to one of them, so everything else is a bit dynamic. I'll eventually add another 64GB. As configured, the performance difference from the DAS, over a 10Gb link to the NAS, is below...

[attached screenshot: benchmark results]
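For reference, a hedged sketch of the zpool/zfs commands that would produce roughly the layout described above (pool/dataset names and da device numbers are placeholders, not my actual labels):

# Six-disk raidz2 pool; lz4 and 128k recordsize set at the pool level
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zfs set compression=lz4 tank
zfs set recordsize=128k tank    # 128k is also the ZFS default

# The two datasets that get shared over SMB
zfs create tank/media
zfs create tank/storage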

I have a Plex VM as well. Locally, that is to say over an SMB share between VMs within the same host, my sequential numbers double at QD8, with about a 25% boost at QD1 for reads. Not quite as dramatic on writes, but close. Random stays the same. So in this case, network latency is obviously to blame for a bit of it (~258us RTT).
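To rule the network in or out, a quick hedged check is raw TCP throughput with iperf3 (assuming it's available on both ends; the address is a placeholder):

# On the FreeNAS side
iperf3 -s

# From the client: 30-second run with 4 parallel streams
iperf3 -c 10.1.1.10 -t 30 -P 4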

I'm not concerned about the issues around these cards and SMART data at the moment. I'm simply looking at it from a performance perspective, and I intend to get a new HBA at some point.

I've seen references to cutting the VM's dedicated memory and bumping the memory dedicated to ARC for a pretty good boost, which is to say just getting a bigger cache in a manner of speaking, though I've not yet found details on how to do that. I'd assumed it used what it needs/can automatically. While the performance boost I already have with the current config is substantial, I'm curious to know how to get more with the configuration alone. I don't know of an easy way to provide the current configuration other than the above.
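From what I can tell (treat this as an assumption, not gospel), on FreeBSD-based FreeNAS the ARC grows on its own, but its ceiling can be checked and pinned with the vfs.zfs.arc_max tunable, added as a 'loader' type tunable under System > Tunables; the 16GiB value below is purely illustrative:

# Check the current ARC ceiling, in bytes
sysctl vfs.zfs.arc_max

# Loader tunable equivalent (applied at boot); 16GiB shown as an example
vfs.zfs.arc_max="17179869184"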

For the record, in this instance the NICs are Chelsio T4 LL versions on both sides (for now). The server is an ASRock EP2C602-4L/D16 with 2x E5-2697 v2 and 64GB ECC. The NIC in the server is in slot 2. Both are connected to a UniFi XG-6-POE. Haven't looked at any NUMA considerations, if they're even warranted in this scenario.
 

aigomorla

CPU, Cases & Cooling Mod / PC Gaming Mod / Elite Member
Super Moderator
Sep 28, 2005
FreeNAS and ZFS do not like dedicated RAID controllers, even in JBOD; they much prefer an HBA with passthrough.
I would not use that Areca controller with FreeNAS. You're going to get a lot of people yelling at you, and you may have some issues.
You should really look for something with an LSI 2008-variant chipset, or an LSI card flashed to IT mode, as an HBA.

Also, FreeNAS 11.2 has jails that let you virtualize, and I think it was said that it's better to run FreeNAS on top and then virtualize inside jails, instead of doing it the other way around. (Not 100% sure, but I think I read that somewhere, which is why they implemented it in 11.2 and the current 11.3.)

FreeNAS also requires a LOT of RAM: 8GB minimum, with an additional 1.5GB per TB recommended, as ZFS filesystems are heavily RAM-based.
It's typical of most FreeNAS servers to go crazy on RAM.
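As a worked example of that rule of thumb against the OP's pool: 6x 3TB is 18TB raw, so 8GB + 18 x 1.5GB comes out to roughly 35GB, which would make the current 24GB allocation a bit light.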


Lastly, I do not like Plex on FreeNAS, as it has no GPU support, and if you're going to be hosting 4K, you really want an NVENC-capable card to help with transcoding unless you have an extremely fast single-threaded CPU or a Skylake Intel iGPU with a high clock speed.
But that is just me, as my library has a lot of 4K content, and transcoding 4K to things like tablets can be a nightmare without NVENC or Quick Sync support.
 

KentState

Diamond Member
Oct 19, 2001
Honestly, with only six disks you are not going to get that good of IO; the write IOPS of a raidz2 vdev won't be any more than a single disk's. RAIDZ2 is fine if you are adding vdevs of the same size, but it will require six more disks if you want to add space. You may want to look at mirrors instead, since you can add those two at a time, and they will also give you higher IOPS.
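For illustration, the mirror layout being suggested would look something like this (pool name and device numbers are placeholders):

# Three striped two-way mirrors (RAID 10 style), built from six disks
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

# Later, grow the pool two disks at a time
zpool add tank mirror da6 da7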
 

Railgun

Golden Member
Mar 27, 2010
You should really look for something with an LSI 2008-variant chipset, or an LSI card flashed to IT mode, as an HBA.

I'm not concerned about the issues around these cards and SMART data at the moment... and intend to get a new HBA at some point.

Also, FreeNAS 11.2 has jails that let you virtualize, and I think it was said that it's better to run FreeNAS on top and then virtualize inside jails, instead of doing it the other way around. (Not 100% sure, but I think I read that somewhere, which is why they implemented it in 11.2 and the current 11.3.)

FreeNAS also requires a LOT of RAM: 8GB minimum, with an additional 1.5GB per TB recommended, as ZFS filesystems are heavily RAM-based.
It's typical of most FreeNAS servers to go crazy on RAM.

Not a lot I can do with respect to that at the moment. I'm using VMware as my hypervisor, and I'm not looking to run FreeNAS on bare metal and carve VMs out of that. As for memory, that will be expanded eventually.


Lastly, I do not like Plex on FreeNAS

I have a Plex VM as well.

To be clear, it's not under FreeNAS; it's on its own Win10 VM.

Honestly, with only six disks you are not going to get that good of IO; the write IOPS of a raidz2 vdev won't be any more than a single disk's. RAIDZ2 is fine if you are adding vdevs of the same size, but it will require six more disks if you want to add space. You may want to look at mirrors instead, since you can add those two at a time, and they will also give you higher IOPS.

Thanks. Essentially a RAID 10, then, in this instance. While I do lose capacity, I gain performance. I will take that into consideration, though again, it's a pretty good boost now from my starting point, and for this particular array I do want a bit more resilience. As a side note, my media storage has changed from the 8x 3TB disks above in a RAID 50 to 8x 6TB in raidz. But I see the challenge with respect to expansion, though it's not the worst process in the world, and I've gone several years with my current capacity, so I'd anticipate at least a good five more with this.

I guess for my use case it should be good enough. Other than the overall vdev configuration, I was more curious to know whether there were some underlying options that would help improve things as configured. Having looked at other resources and various benchmarks with different configurations, I hadn't seen differences, or even setups looking at the differences between, say, 2x 3x3TB vdevs in a pool as opposed to 1x 6x3TB, granted I can't do raidz2 in the former.

The 4x 2TB disks that will be decommissioned, in conjunction with a spare 8TB I have, will be used as a separate backup of the above, except for the media array. I've yet to determine how that will happen.

But thanks. It's food for thought.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
The best way to get performance out of raidz2 is with multiple vdevs. Data is distributed across them, but the distribution is partially based on the free capacity of each vdev.

For example, say you have three vdevs of 6x 3TB disks. As you add files, they are distributed fairly evenly, allowing for the aggregate IOPS of all vdevs. If you then add a fourth 6x 3TB vdev, that vdev will get the focus of writes until the vdevs start to balance out, meaning you won't get the aggregate IOPS. FreeNAS will also not auto-balance the existing files after adding space.
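A hedged sketch of that expansion, plus the usual manual workaround for rebalancing (ZFS only redistributes data when it is rewritten; pool/dataset names and devices are placeholders):

# Add a fourth 6-disk raidz2 vdev to the pool
zpool add tank raidz2 da18 da19 da20 da21 da22 da23

# Show per-vdev fill; new writes will favor the emptier vdev
zpool list -v tank

# Rewriting data re-stripes it across all vdevs, e.g. via send/receive
zfs snapshot tank/media@move
zfs send tank/media@move | zfs receive tank/media_rebalanced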
 

Shmee

Memory & Storage, Graphics Cards Mod / Elite Member
Super Moderator
Sep 13, 2008
Just for reference, my FreeNAS is on an old X58 board with a Xeon X5660 and 18GB of RAM. I have the OS on a 128GB SSD, with 3x 4TB Red Pros in RAIDZ. Seems plenty fast to me; my transfer rates are generally limited by the network connection (1Gb/s).
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
It's fairly easy with FreeNAS to max out a 1Gb/s connection, as even a single HDD can do well over 100MB/s writes. It's when you go to 10GbE+ that you start running into bottlenecks and balancing everything. I actually have a 40GbE adapter, and it was painfully obvious that it takes a lot of tuning to get it working. Luckily, most of the server parts are cheap on eBay, so you can experiment.
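For flavor, the FreeBSD-side knobs that tuning usually starts with look like this (values are illustrative only, not a recommendation):

# Bigger socket buffers for high-bandwidth links
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216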
 

gea

Senior member
Aug 3, 2014
It's fairly easy with FreeNAS to max out a 1Gb/s connection, as even a single HDD can do well over 100MB/s writes.

If you look at the specs, a modern hard disk is rated at up to 250 MB/s, so at first look you are right. But that performance only holds for a purely sequential workload, e.g. writing a large video file as a single user to an empty disk with a performance-optimized filesystem. More or less like the max speed of a car (in a free-fall test from a bridge).

If you do not use a single disk but create a ZFS RAID, you no longer have a sequential stream, as ZFS spreads data quite evenly over all vdevs/disks, which reduces performance. Typically you end up at around 80 MB/s, depending on RAM for caching, the disks, and the SMB server (the multithreaded Solarish SMB server is often a bit faster than Samba).

If you try to copy the content of your mail server, with many small files, as a single user to the same empty disk, you may land at 60 MB/s. If the disk is not empty, you are affected by fragmentation, which may drop you to 50 MB/s. If you are not the only user (concurrent use), you may end up at 20 MB/s, as then latency and write IOPS (around 300 for a mechanical disk, compared to 500,000 for an Intel Optane 90x) become the limit.

If you enable sync writes and copy a video stream as a single user to a single disk, you may end up at 5-10 MB/s on a disk rated at 250 MB/s.
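If you want to reproduce that effect, sync behavior is a per-dataset ZFS property (dataset name is a placeholder; sync=disabled also exists, but it trades data safety for speed, so it is for testing only):

# Force every write to be synchronous - the worst case described above
zfs set sync=always tank/test

# Back to the default, honoring only the application's sync requests
zfs set sync=standard tank/test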
 

Fallen Kell

Diamond Member
Oct 9, 1999
I just built something much like the OP, but I have a lot more RAM in my server (192GB). I also put in a dual-port 40GbE NIC (only using one port, though, since it will max out the PCIe lanes before the second one can be utilized for anything other than redundancy). You would be surprised how cheaply you can pick these up (that said, getting a network switch that supports it is a different story, but it can be relatively cheap). I also have an LSI-based card flashed to IT mode to support my disks.

With the added memory, I have not even looked at a dedicated ZIL (SLOG) or L2ARC. It is simply not needed for my use case (storage for my HTPC's DVR capabilities, so 1GB+ movie files).

If I were you, I would consider disabling the compression on your ZFS filesystems and seeing how that affects your performance. Given that you are limited to 8 CPUs (I am assuming because of VMware's limit, which is why I switched mine to XCP-ng after hitting that limit in VMware) for handling all the IOPS, network interrupts, and compression/decompression on the fly, you may see some real performance differences (I decided not to compress mine after such a test).
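A hedged sketch of that A/B test (dataset name is a placeholder; note the property only affects newly written blocks, so re-copy the test data after each change):

# Current setting and the ratio compression is actually achieving
zfs get compression,compressratio tank/storage

# Disable, re-run the benchmark, compare
zfs set compression=off tank/storage

# Put lz4 back if the CPU savings aren't worth it
zfs set compression=lz4 tank/storage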
 

Railgun

Golden Member
Mar 27, 2010
Thanks for the info. I'm not limited to 8 CPUs; that's just a starting point to see how the performance was, and no operation seems to come close to maxing them out, so I've got some headroom.

I've already changed things up a bit. I've swapped out the 1882 for an LSI 9305. I've created my media pool as 4x mirror vdevs, my DAS replacement as 3x mirror vdevs, and an NVR setup as a single mirror of 8TB disks.

It will be sufficient for the time being.

What I am having issues with for no apparent reason is as follows.

It was originally created with a single interface; all well and good there. The security setup is on a different VLAN. I've created the associated port group accordingly within ESXi and added this second interface to FreeNAS. The issue is simply that I lose the SMB shares to everything after this interface is added. I've bound SMB to both interfaces, and I've restricted access to the shares to their corresponding networks (details below). FreeNAS itself is still reachable, but SMB seems to have an issue here. The goal is to not route the video stream, keeping it all within the same subnet. I don't know if FreeNAS has an issue with creating interfaces after the fact, or whether there's some requirement to install with all the required interfaces up front. I can't imagine that's the case, but all things being equal...

My desktop CAN still reach it via IP. Interestingly, my Windows/Plex VM cannot. I've changed the permissions above so that the media and storage shares are restricted to 10.1.1.0/24. The NVR share is restricted to 10.1.5.0/24. The ACLs are open for the media share. Storage is restricted to a single user. The security share is restricted to a single user.

I've got to be missing something here as to why it seems to have an issue with this.
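For anyone hitting the same thing: under the hood, the FreeNAS SMB settings map to standard Samba parameters, so (as an assumption about where to look, not a confirmed fix) the effective config is worth checking with testparm from the FreeNAS shell. The relevant fragment would look roughly like this, with placeholder addresses:

[global]
    # SMB listens only on the storage-facing interfaces
    interfaces = 10.1.1.10/24 10.1.5.10/24
    bind interfaces only = yes

[nvr]
    # Per-share network restriction
    hosts allow = 10.1.5.0/24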

@Shmee I have the pre-SMR versions. As for these vs. the Pros, I wasn't going for absolute full-bore performance, taking the price/performance/capacity ratio a bit more seriously. I picked up all the 6TBs second-hand.
 

Railgun

Golden Member
Mar 27, 2010
FWIW, somehow my router was hanging on to an old IP that FreeNAS had during the changes, so the name was resolving to something that didn't exist. Cleared that, and it seems happy again.