Minimum requirements for a ZFS file server?

Winterpool

Senior member
Mar 1, 2008
830
0
0
I should like an archival file server -- data integrity, not speed, is the paramount goal. ZFS seems like the optimal foundation for such a server, but I've a number of concerns.

Foremost are the hardware requirements. Most enthusiasts will urge a capable cpu and 8+ GB of memory to run ZFS, but are these strictly necessary? I don't really care about superfast data transfers and responsiveness (though I do want sufficient bandwidth to play HD video on a client). I realise without 6+ GB of memory, ARC won't do me much good, but I'm not employing ZFS for that feature. What exactly would happen to a ZFS file server that only has, oh, 2 GB of RAM?

I am concerned with data integrity, and ZFS's checksums could use a respectable cpu, but would they completely overwhelm, say, an AMD K8 cpu, eg the Athlon X2 4800+? As you can probably guess by now, my preference would be to recycle obsolescent tech in this server.

Running ZFS pools of RAIDZ arrays will increase the cpu load even more, but again, how bad will it really be if it's a light use home server? At most this system is going to serve 1-3 clients (the vast majority of the time it will be serving 0!).

Additionally, is it still inadvisable to run ZFS on 4K-sector hard drives?

Note: if this discussion is better hosted under a different forum category, please migrate or advise as necessary.
 
Last edited:

Fallen Kell

Diamond Member
Oct 9, 1999
6,145
502
126
The system will crawl to a halt on 2GB of RAM. You need a MINIMUM of 1GB of RAM per terabyte of storage, on TOP of what is needed for the OS to run. So if you are looking at 4 terabytes of storage in ZFS, you will need over 5GB of RAM in total, since the OS itself needs at least 1GB to run. Really, I would not do this on less than 8GB of RAM.

As for CPU requirements, you will be OK with that processor if it really is a light-duty server. Also, make certain that you have a decent power supply and possibly a UPS. Unless you buy a small SSD for use as the ZFS ZIL log, you will need to turn the ZIL log off (as your performance will be horrible), but that leaves you open to data loss on a power failure: the system RAM is used as a read/write cache, and the ZIL is the backstop for those cached writes, so without the ZIL an unexpected power loss (or crash) means the filesystem loses every operation that was sitting in RAM and not yet committed to disk. I guess this comes down to how important the data you are storing on the ZFS system is.
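If you do go the SSD route, attaching it as a dedicated log device is a one-liner. A minimal sketch, with the pool name 'tank' and the device name purely as examples:

zpool add tank log ada2    # ada2 = the small SSD acting as the dedicated ZIL/SLOG device
zpool status tank          # the SSD should now show up under a separate 'logs' section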

You can run ZFS on 4k drives, however, you need to do some tuning when you create the zpool, and ALL drives in the pool need to use 4k sectors (you can NOT mix them with older 512byte, or with drives that use 4k but emulate 512byte to the OS). Simply google "ZFS 4k sector alignment" and you will find some how-tos on what you need to do.
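For what it's worth, the trick those how-tos usually describe on FreeBSD is forcing ashift=12 with gnop at pool-creation time. Roughly like this, with pool and device names as placeholders:

gnop create -S 4096 ada0                    # present ada0 as a 4K-sector device
zpool create tank raidz ada0.nop ada1 ada2  # create the pool on the 4K provider
zpool export tank
gnop destroy ada0.nop                       # the .nop device is only needed at creation
zpool import tank
zdb -C tank | grep ashift                   # should now report ashift: 12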
 
Last edited:

Winterpool

Senior member
Mar 1, 2008
830
0
0
I've seen this guideline for a GB of memory for every TB of disk storage, but where does this relationship derive from? If I'm randomly reading/writing, and the data in question is not repeated, how does caching help [it's 'brand new' data each transaction]. I suppose I'm asking what does all the RAM do exactly in ZFS... [Many of the cases I've seen online are enterprise scenarios that involve a good number of clients and presumably a good amount of repeated data.]

Found and commenced reading of the epic 50-page ZFS NAS thread on Ars Technica, and as I suspected, back in 2007-8, some users were employing 2-4 GB of RAM in their ZFS boxes, because that's what was affordable/sensible then. They seemed usable as file servers.

Mind you, I haven't even got a proper Gigabit LAN in my residence. I just bought my first Gb switch this spring!

Just found out what the ZIL is this morning. Have to give that some thought...
 

nenforcer

Golden Member
Aug 26, 2008
1,767
1
76
I own an old Sun LX50 as a "test" server and have wanted to try out ZFS on it.

However, the general consensus is that you want a 64-bit server, specifically to overcome the 32-bit 4GB RAM limitation, and that anything less for a multi-terabyte ZFS SAN server will hold you back.

It's not really an option for me with my 2 x 36GB SCSI hard drives with dual 32-bit Pentium 3 processors!
 

Fallen Kell

Diamond Member
Oct 9, 1999
6,145
502
126
The memory requirement is based on how ZFS itself operates. Writes are cached first into RAM because the system waits until a certain amount of data has been received before writing it to disk in 128k blocks, each of which also needs a checksum computed before it can be written out. A lot of other data about the zpool is also kept in RAM and used for keeping track of the health and performance of the disks. RAM is also used for background performance optimization such as data re-ordering (i.e. when it realizes that when you typically read data X, you will want to read data Y next).
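If you want to see where that memory actually goes on a FreeBSD/FreeNAS box, the ARC counters are exposed as sysctls. A rough way to peek (sysctl names as used by FreeBSD's ZFS port of that era):

sysctl kstat.zfs.misc.arcstats.size   # current ARC size in bytes
sysctl vfs.zfs.arc_max                # the ceiling the ARC is allowed to grow to
# on a small-RAM box you can cap it, e.g. vfs.zfs.arc_max="1G" in /boot/loader.conf (value is just an example)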
 

ethebubbeth

Golden Member
May 2, 2003
1,740
5
91
My NAS is running an AMD E-350 CPU with two 4x 3TB drive RAID-Z arrays (8TB usable storage per array, 16TB total). FreeNAS 8 is the operating system. I have 8GB of RAM in the system and didn't notice any additional slowdown when I added the 2nd array.

I know I'm not maxing out my gigabit ethernet link with file transfers... I'm typically getting 40-50 MiB/s. My guess is that the CPU is the limiting factor here, but it could be a crappy ethernet controller.
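One way to tell the NIC apart from the CPU/disks is a plain memory-to-memory network test with iperf, leaving ZFS out of the picture entirely (the hostname is a placeholder):

iperf -s              # run on the NAS
iperf -c nas.local    # run on the client; if this lands near 940 Mbit/s the network is fine and the bottleneck is elsewhere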

The E-350 is a dual-core 1.6 GHz CPU, and its architecture achieves roughly 80-90% of the per-clock performance of the old K8 architecture. The architecture of your processor, as well as its higher clock frequency, means it should beat the snot out of the E-350.
 

RAJOD

Member
Sep 12, 2009
57
0
61
I've seen this guideline for a GB of memory for every TB of disk storage, but where does this relationship derive from? If I'm randomly reading/writing, and the data in question is not repeated, how does caching help [it's 'brand new' data each transaction]. I suppose I'm asking what does all the RAM do exactly in ZFS...

You ask very good questions that nobody seems to have answered. I think people just guess, or spew something they read but never verified.

I have over 12 CPUs in my house, from 16-core rack servers to old P3s. I pulled out an A64 3200+ single core with 2 gigs of RAM to see for myself. I put an $8.00 gigabit LAN card in it too. Cheap card that uses the CPU as well.

I put two new WD RED 3 TB SATA drives into it.

I ran FreeNAS 8.3 and NAS4Free 9 on it. It actually ran OK: it did not blow up like some have said in other forums, it did not lose any data, and it streamed videos with ease.

I also compared ZFS to UFS on the same system. UFS RAID 1 was about 30 percent faster than ZFS in most things.

Just some numbers for ya.

UFS - file from NAS: 47 MB/s, to NAS: 40 MB/s
- content creation: 1.33 (this is due to the CPU)

ZFS - file from NAS: 38 MB/s, to NAS: 30 MB/s
- content creation: 0.9

It's not stellar, but it beats a WD NAS I have. Just run it, and when you get faster hardware you can always migrate to a new motherboard/CPU easily.

Just don't run parity; usually a dedicated RAID card does that, and if you don't have one then your CPU becomes the RAID card and has to do all the calcs. It will slow ya down, so stick with non-parity RAID 1 or 10 on an old system. I'd say UFS RAID 1 is the best for old hardware, but give it a try. It's not as bad as many have guessed it to be.
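For what it's worth, a plain ZFS mirror on an old box is just this (pool and disk names are only examples):

zpool create tank mirror ada1 ada2    # two-disk mirror, no parity math involved
zpool status tank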

I'll do the same tests with an Ivy Bridge CPU and 16 gigs of RAM to see how much difference it makes.

I tried a RAID rebuild and it took 8 hrs to synchronize the drives.
 
Last edited:

adairusmc

Diamond Member
Jul 24, 2006
7,095
78
91
I just upgraded my old NAS server this past week to an AMD C-60 board, and I was on the fence between going with 8GB of RAM and 16GB; I am glad I went with 16. I haven't gotten all of my data transferred yet, but it is already using over 8GB at times.

This is with six 2tb disks in raidz1.

Performance rocks though, much better than my Atom-based NAS with two UFS volumes.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
I should like an archival file server -- data integrity, not speed, is the paramount goal. ZFS seems like the optimal foundation for such a server
Indeed! In fact, I would say that ZFS is the only reliable and mature way to store files accessible to us mortals at this time. All other solutions have serious shortcomings or simply belong to a different era.

Foremost are the hardware requirements. Most enthusiasts will urge a capable cpu and 8+ GB of memory to run ZFS, but are these strictly necessary?
You should not go lower than 8GiB of RAM if you can avoid it. But ZFS is not CPU-intensive unless you will be making heavy use of features like compression, encryption or deduplication.

What ZFS wants, more than anything else, is lots of RAM. This enables ZFS to achieve higher performance; if you have 40 drives but only 4GiB of RAM, then you will only achieve about 10% of the potential performance. Any lower and ZFS can become unstable as well, unless properly tuned.

As for performance, your most immediate problem is high latency during transaction group cycles. Whenever a transaction group is written, your clients may see higher latencies when reading data, for example when streaming a video. This can lead to pauses or stuttering every x seconds. If you encounter this, either your disks are too slow to keep up with the transaction group cycles, you have too little RAM, or the system tuning is way off.
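On FreeBSD the transaction group interval is itself a tunable, so you can see or adjust how often those bursts happen. The value below is only an example; test before keeping it:

sysctl vfs.zfs.txg.timeout       # seconds between forced transaction group commits
sysctl vfs.zfs.txg.timeout=5     # a shorter interval means smaller, more frequent flushes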

I am concerned with data integrity, and ZFS's checksums could use a respectable cpu
You can use the slowest CPU you like. AMD Brazos is about the slowest you can get on the desktop, and even that works fine, though I do recommend you get something a bit more powerful. Intel chips are generally preferable to AMD because Intel offers better CPU performance; the better GPU performance of AMD is not very beneficial to a ZFS server.

As you can probably guess by now, my preference would be to recycle obsolescent tech in this server.
Generally this is not recommended. ZFS likes new hardware in particular because DDR2 memory is expensive while DDR3 is cheap. Newer hardware is also a lot more power efficient. Buying a new system may be free if you consider 24/7 power savings over a three year period.

Additionally, is it still inadvisable to run ZFS on 4K-sector hard drives?
It has never been inadvisable to use 4K sector harddrives with ZFS. The 'mother' platform of ZFS, Solaris, just has a weak infrastructure to deal with 4K sector harddrives. That is a weakness of the operating system - not ZFS.

ZFS itself is properly designed to cope with all sector sizes, from 1 byte to 128KiB. Sector sizes 512B, 4K and 8K are the most common.

I recommend you run a FreeBSD platform to utilise ZFS. It has the best ZFS implementation in my opinion.

The system will crawl to a halt on 2GB of RAM. You need a MINIMUM of 1GB of RAM per terabyte of storage
This '1GB per 1TB' rule of thumb is simply false. You need 1GB, you want 8GB+. You don't need a lot more memory when your disks get bigger. Instead, you need more memory to utilise the performance potential of faster drives or bigger arrays.

A better rule of thumb would be '1GB RAM per 100MB/s of performance' but such statements are too generic for me to promote.

Unless you buy a small SSD for use as the ZFS ZIL log, you will need to turn the ZIL log off (as your performance will be horrible)
NEVER TURN THE ZIL OFF!

The ZIL should also not impact performance that much, depending on usage. There are exceptions. Some clients use NFS with sync writes which means performance will suck without a dedicated log device, properly called a SLOG (separate log).

The ZIL is very much misunderstood. Many people think ZIL is a write cache. It is not. It is an intent log or 'journal'. NTFS uses a comparable feature. ZFS just allows storing the journal/ZIL on a dedicated device instead. In most cases, don't worry about it. But please, do not turn it off?! If you don't care about application consistency, you can turn sync writes off: zfs set sync=disabled tank. But never disable the ZIL!
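To be concrete: sync behaviour is a per-dataset property, so you can relax it only where losing the last few seconds of writes is acceptable ('tank/scratch' is a made-up dataset name):

zfs get sync tank                   # 'standard' is the default
zfs set sync=disabled tank/scratch  # only for data you can afford to lose on a crash
zfs set sync=standard tank          # everything else keeps the normal behaviour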

You can run ZFS on 4k drives, however, you need to do some tuning when you create the zpool, and ALL drives in the pool need to use 4k sectors (you can NOT mix them with older 512byte, or with drives that use 4k but emulate 512byte to the OS).
You can mix 4K and 512B drives just fine. You are right that tuning is recommended when you create the ZFS pool, but it is not strictly necessary, and in some cases it is better not to optimize for 'ashift=12'.

Optimal for 4K harddrives:
Mirror: all are optimal
Stripe: all are optimal
RAID-Z: 3, 5 or 9 disks (or 17 - but that is not safe with RAID-Z)
RAID-Z2: 4, 6 or 10 disks (or 18 - but that is not very safe with RAID-Z2)
RAID-Z3: 5, 7 or 11 disks (or 19)

If you stick to these optimal configurations, you will have more storage space available and generally better performance. The storage space issue is not very well known, by the way.
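For example, one of the optimal layouts above, a 6-disk RAID-Z2, would be created roughly like this (pool and disk names are only examples), and you can confirm the sector alignment the same way as with the gnop trick mentioned earlier:

zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5   # 4 data disks + 2 parity disks
zdb -C tank | grep ashift                                # 12 = 4K-aligned, 9 = 512-byte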

However, the general consensus is that you want a 64-bit server, specifically to overcome the 32-bit 4GB RAM limitation, and that anything less for a multi-terabyte ZFS SAN server will hold you back.
The 64-bit versus 32-bit question is not just about RAM addressing. ZFS was basically written for 64-bit operating systems; on a 32-bit OS it can even become unstable. Today, ZFS on 32-bit should be usable, probably on embedded platforms. Still, I strongly recommend not to use older 32-bit hardware like Pentium 4 and Athlon XP.

The memory requirement is based on how ZFS itself operates. Writes are cached first into RAM because the system waits until a certain amount of data has been received before writing it to disk in 128k blocks, each of which also needs a checksum computed before it can be written out.
Writes are buffered in RAM because ZFS is a transactional filesystem; the 128K recordsize doesn't really have much to do with it. In fact, other RAID5 engines have a much harder time: they have to buffer all writes and wait until they form optimally sized blocks before writing them out. RAID5 has to do read-modify-write cycles, whereas RAID-Z does all I/O in one pass.

Just don't run parity; usually a dedicated RAID card does that, and if you don't have one then your CPU becomes the RAID card and has to do all the calcs. It will slow ya down, so stick with non-parity RAID 1 or 10 on an old system.
You have been taught that RAID5 is slow because of the XOR parity calculations, right? Actually, that is one big myth! You can verify it yourself: when booting one of many Linux distributions, the md software RAID driver actually runs mini benchmarks to choose which parity algorithm works fastest on your computer. Generally, XOR is bottlenecked by memory bandwidth: over 10GB/s on modern systems. The truth is that XOR is one of the easiest tasks your CPU can do.
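You can see those boot-time benchmarks on any Linux box with the md driver loaded; the kernel logs the throughput it measured for each algorithm (output not shown here, since the numbers vary per machine):

dmesg | grep -iE 'xor|raid6'     # shows the per-algorithm MB/s the kernel measured at boot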

The REAL reason why RAID5 can be hard on the CPU is that all the splitting and combining of I/O requests to optimise for RAID5 full-stripe writes consumes lots of memory bandwidth, and that shows up as CPU usage.

AMD Brazos doing RAID-Z3 (triple level parity) would work just fine. Yes, it'll be a bit slower, but not that much.

Please also let me remind others that ZFS should never be used with a hardware RAID controller. You need non-RAID controllers to properly utilise ZFS. Using a controller like Areca or HighPoint simply to hand raw disks to ZFS is not going to work properly, in particular with regard to bad sectors and other timeouts.

I just upgraded my old NAS server this past week to an AMD C-60 board, and I was on the fence between going with 8GB of RAM and 16GB; I am glad I went with 16. I haven't gotten all of my data transferred yet, but it is already using over 8GB at times.
All filesystems (actually: the VMM infrastructure) will use as much RAM as possible for file caching. NTFS and Ext3 can just as easily use over 100 gigabytes of RAM if you have that much. All you need to do is read some data from your hard drive and all free RAM will be utilised as file cache.

You are right though; 16GiB of RAM is very much preferable. For ZFS, you want as much RAM as you can afford. Pairing a modern quad-core Intel chip with 32GiB of RAM is the practical maximum; however, in my assessment the proper balance for that much CPU power would be at least 128GiB of RAM.

ZFS preferring to have lots of RAM is often cited as a weakness. I see it as one of its stronger points. ZFS introduces a much more intelligent way of caching than the dumb last-access cache of virtually all other filesystems: it chooses what data to cache based on both recency and frequency, so it adapts to the most recent workload as well as less recent ones. This advantage is strengthened when using L2ARC SSD caching. The only requirement to make this work effectively is that ZFS runs 24/7.
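Adding an L2ARC device later is non-destructive and takes one command (the SSD device name is just an example):

zpool add tank cache ada7    # the SSD becomes a read cache; it can be removed again with 'zpool remove tank ada7'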
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
To add to the above, when looking for a controller, you want an HBA, not a RAID card, if you have to buy one. Almost all major RAID controller manufacturers also sell HBAs. They are cheaper since they are often not as powerful as RAID controllers, lacking the complex RAID engines, memory, and battery backups. Some controllers can perform both roles, sometimes noted as IR (Integrated RAID) and IT (Initiator Target) modes. But some controllers can't, and it's very important to examine the RAID controller that you plan on getting to see if it can be placed into IT mode or is simply an HBA. LSI 2008 and 2308 controllers are known to come in IR mode or IT mode, and can be flashed with LSI's firmware tool to either one. On the other hand, the PERC controllers that can often be purchased for cheap from ebay are RAID-ONLY controllers, with no JBOD, IT, or HBA functionality. Using such cards can cause a lot of problems for ZFS.
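On the LSI 2008/2308 cards you can check which firmware personality a controller is currently running with LSI's sas2flash utility before deciding whether it needs re-flashing; a quick check might look like this:

sas2flash -listall    # lists the adapters it finds along with their firmware versions (IT firmware is labelled as such)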

Also, as said above, memory is king for ZFS. While 2GB or so will work, it can give unpredictable results. I find 8GB to be a comfortable starting point for a bare-minimum dev box (low performance, but it operates predictably).

Personally, I have a ZFS box with 32GB of memory, 280GB of L2ARC SSD caching (3 120GB SSD disks limited to 96GB), and a 16TB (usable) ZFS mirror array (12 3TB 7200RPM SATA disks).

If you want just a bare archival array, start with 8GB of RAM, add a controller with enough ports for the number of disks you want, then add HDDs (in my opinion, 6 at a time) as RAID-Z2 arrays. I'd say in the enterprise market you'd be fine with RAID-Z, because there are 4-hour-return support contracts, but in the home market, when you drop a drive and go through an RMA, you could be looking at 2 weeks to get a hard drive replaced, and if you bought all your drives at the same time there's a very real chance you'll drop another hard drive in that window. A RAID-Z2 will give you that buffer zone while a hard drive is being replaced.

That is of course, unless you have a good backup system in place and don't mind losing in-between data. In those situations, a RAID-Z system can be perfectly acceptable and save you a hard drive.
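Either way, when a disk does die and the replacement finally arrives, the swap itself is straightforward (pool and device names are only examples):

zpool replace tank ada3 ada6    # ada3 = the failed disk, ada6 = its replacement
zpool status tank               # shows resilver progress; the pool stays online throughout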
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Nice additions!

Regarding RAID-Z versus RAID-Z2... most people think in terms of disk failures. I think about bad sectors.

RAID-Z means you are protected against bad sectors with all disks present, but you lose this protection when a disk is failed. During the rebuild with your replacement disk, you may discover the remaining disks have unreadable sectors. This may eat some files; though ZFS protects the all-important metadata with additional protection (ditto blocks).

RAID-Z2 is known to be able to cope with 2 disk failures, just like RAID6. However, I would argue that just as important might be the protection against bad sectors that is still present with one disk missing/failed. Thus, RAID-Z2 can protect your files more than RAID-Z can when rebuilding the array after a disk failure.

If you have solid backups, RAID-Z may offer enough protection (even RAID0 could be worth looking into). But if you want really good protection, RAID-Z2 with either 6 or 10 disks is one of the best configurations I can advise.
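After a rebuild (or a scrub), you can also see exactly which files, if any, were hit by unreadable sectors; 'tank' is a placeholder pool name:

zpool status -v tank    # the -v flag lists files with permanent (unrecoverable) errors
zpool clear tank        # reset the error counters once you have dealt with them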
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Indeed! In fact, I would say that ZFS is the only reliable and mature way to store files accessible to us mortals at this time. All other solutions have serious shortcomings or simply belong to a different era.
Indeed. MS is really dropping the ball, IMO, but they have enough inertia to not notice it, for now. BTRFS will be ready one day soon here, in just the next version, we mean it this time, but it's not all there, quite yet.

Generally this is not recommended. ZFS likes new hardware in particular because DDR2 memory is expensive while DDR3 is cheap. Newer hardware is also a lot more power efficient. Buying a new system may be free if you consider 24/7 power savings over a three year period.
Also, I can now say from some experience, P4/A64-era boards and such are starting to screw up en masse--more reason to hedge your bets on new gear. Before the server's lifespan is anywhere near up, that will be the case for Core 2 era parts as well. You might get lucky, but it's really not fun when you aren't, and the cost of a new CPU+mobo+RAM will surely be overshadowed by the cost of drives (and possible cards).

Server-grade hardware from then isn't as bad about that, but then there's the power consumption to look at. Modern non-gaming PCs can idle at <30W, not counting added components (like many HDDs), while even Core 2 era and older servers are going to have a hard time getting much under 100W. Actual load power will also be lower, even when TDPs are the same. Those improvements have been monumental (read: the home server room's window unit actually shuts off in the winter now! :)). Even with cheap electricity, the power consumption argument is compelling these days.

Today, ZFS on 32-bit should be usable, probably on embedded platforms. Still, I strongly recommend not to use older 32-bit hardware like Pentium 4 and Athlon XP.
2-3GB usable address space (less actual usable RAM), and potential highmem/lowmem and/or PAE issues? If you really need more than 1GB, just go 64-bit.

64-bit isn't needed just for >4GB of RAM. That's just the marketing line (also, it's simple--don't underestimate "sound bites" for conveying information). It's most needed to have breathing room in the page table trees, which starts going away as physical RAM approaches virtual RAM (the point at which there's not enough will vary by workload). When there isn't enough, the OS, or VM, or whatever you're concerned with at the time, will need to perform compaction and maybe defragging. The less headroom there is (on a typical Linux installation, FI--I don't do much *BSD--it's about 3GB per process, and typically 500-700MB for the kernel), the more often that'll need to be done, and the longer it will take each time. You can usually protect against race conditions, but some timeouts are inevitable, and not being served for a substantial fraction of a second could be very bad. In addition, on 32-bit with 2+GB of RAM and high memory utilization, you're almost guaranteed to run into problems where an application is starved for pages but has free actual RAM, or where the kernel gets stuck compacting over and over again from thrashing of mmapped contents.

All that stuff is also done in the background. With 64-bit, the background processes doing it almost always outpace the need for it (so slow-downs and panic-mode compactions and page-outs aren't needed), because there's plenty of virtual address space to go about mapping and remapping. You're using 8-16GB actual, possibly with a page file adding some, yet have 8-128TB of virtual space to comfortably work within (I'm not sure of FreeBSD's x86-64 limits, ATM).
 
Last edited:

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
Nice additions!

Regarding RAID-Z versus RAID-Z2... most people think in terms of disk failures. I think about bad sectors.

RAID-Z means you are protected against bad sectors with all disks present, but you lose this protection when a disk is failed. During the rebuild with your replacement disk, you may discover the remaining disks have unreadable sectors. This may eat some files; though ZFS protects the all-important metadata with additional protection (ditto blocks).

RAID-Z2 is known to be able to cope with 2 disk failures, just like RAID6. However, I would argue that just as important might be the protection against bad sectors that is still present with one disk missing/failed. Thus, RAID-Z2 can protect your files more than RAID-Z can when rebuilding the array after a disk failure.

If you have solid backups, RAID-Z may offer enough protection (even RAID0 could be worth looking into). But if you want really good protection, RAID-Z2 with either 6 or 10 disks is one of the best configurations I can advise.

Completely agree. This is exactly why weekly scrubs of the array are so important! Also, always export your pool before restarting your appliance. This is why all-in-one ARM-based NAS units are awful from a data-integrity standpoint, and also why traditional RAID in general is archaic compared with modern file systems like ZFS: no checksumming, constant read-modify-writes, and separation of the file system from the hardware (a good thing in fields like virtualization, but the opposite of what you want here).
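For reference, a scrub is a single command, and scheduling it weekly is one cron line (the schedule and pool name are only examples; FreeNAS also has a GUI task scheduler for this):

zpool scrub tank                        # start a scrub now; zpool status shows its progress
0 3 * * 0 root /sbin/zpool scrub tank   # /etc/crontab entry: every Sunday at 03:00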