ZFS Server Upgrade

seamusmcaffrey

Junior Member
Nov 5, 2012
10
0
0
Hello,

I currently run shared media drives for my HTPC off a Windows 7 tower and am transitioning to a dedicated ZFS box running FreeNAS.

My issue is that I have three FULL 2TB drives and only two empties, and I'm not sure how best to transition them to the ZFS box. I'd like to use RAIDZ1 or 2 here, but my understanding is that I need at least three drives just to set that up. That won't be possible for me, because I'd lose all the data on one of my full drives during the reformatting process.

I'm wondering how best to move the data here. Can I boot up the ZFS system and load drives into it one by one? I.e., start with one of the empties, move data over the network from the Windows shared drive to the new ZFS-formatted drive, then pop the now-empty disk into the machine, format it to ZFS, and add it to the pool? I'm not sure I'm explaining myself well here. I guess I just don't know whether, when I put new drives into the ZFS storage pool, I'll still be able to use RAIDZ, since it seems like you need at least three blank drives to even set up a pool as RAIDZ1, or five for RAIDZ2.

Just trying to figure out how best to move from the old system to the new, improved system! Thanks.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Do a full backup, reconfigure, and restore. Adding disks to ZFS will blow away any existing data on the drive, as ZFS won't want to touch an NTFS partition. The fact that you are going from Windows -> BSD pretty much requires wiping the drives for "NAS use."
 

seamusmcaffrey

Junior Member
Nov 5, 2012
10
0
0
Ah, yeah. That's what I figured/was afraid of. I don't have enough spare drives to back up all the data. I would need to have three blank drives, right?

Am I correct in assuming that if I can free up/move data around such that I can start with three drives in the ZFS system, I'll be able to add more drives to it afterward without losing the data that has been migrated to those three ZFS drives?

i.e.

1. Set up the ZFS system with 3 hard drives
2. Move data from the NTFS drives to the ZFS drives
3. Put the now-empty NTFS drives into the ZFS array
4. Format them from NTFS to ZFS and add them to the total disk pool? No data loss?
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
ZFS supports adding devices to a pool without loss of data, but the vdevs the pool is built from can't (yet) be expanded once they're created. So basically... if you took three 2TB drives and did RAID-Z2 with them (3 drives -> 1 vdev -> added to a pool) and copied all the data in, then built a new RAID-Z2 vdev out of the other three drives and added it to the pool, you would end up with a total pool capacity of 8TB, i.e. (2+2+2) per vdev -> 4TB x 2.
 

seamusmcaffrey

Junior Member
Nov 5, 2012
10
0
0
You're losing me a bit, but it sounds like I will NOT just be able to add the new drives into the body of the pool. My issue here is that I have a total of 5 drives (all the same size) that I want to get into this server; 3 of them are completely full of data, and all are NTFS.

I can borrow a drive from a friend so I can back up drives one by one and slot them into the ZFS machine, but it doesn't sound like I can do that. It sounds like I need a minimum of 6 drives on hand to get full use of this... I may have bitten off more than I can chew here.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
ZFS works on the concept of "zpools," which are pools of "vdevs."

A single zpool can be made up of a group of vdevs.

Each vdev is itself made up of block devices.

Redundancy is handled at the vdev level.

I hope you follow so far.

So to build a zpool, you need a vdev. I suspect that you want redundancy. The issue you have is that you don't have all 5 block devices (HDDs) available at once. ZFS does not quite yet support adding block devices to existing vdevs (they say "soon," whatever that means), so you have to build the vdevs once and live with that decision. If you had all 5 block devices available, you could add all 5, tell the vdev to use RAID-Z2, and that would build an 8TB vdev. You would then add that to the pool and get an 8TB pool.

Since you don't have all 5 available, you would likely build something like this: a 3-block-device vdev, add that to the pool, then later create a 2-block-device vdev and add that to the pool too. You would then end up with a 3-drive "RAID5" (4TB) plus a 2-drive mirror (2TB), for 6TB total.

Make sense, I hope? Now, if the vdev-expansion patch were mainlined you could just add devices and let it rewrite the data, but you would have to wait for that.
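
To make that concrete, the two-stage build is roughly the following at the command line (FreeNAS wraps this in its GUI; "tank" and the ada* device names are just placeholders):

  # stage 1: create the pool with a single 3-disk RAID-Z1 vdev (~4TB usable from 2TB disks)
  zpool create tank raidz ada1 ada2 ada3

  # ...copy the data off the old NTFS drives over the network...

  # stage 2: once the last two disks are free, add them as a mirrored vdev (~2TB more)
  zpool add tank mirror ada4 ada5

  # confirm the layout and the new capacity
  zpool status tank
  zpool list tank

Adding the second vdev just grows the pool; nothing already written to the first vdev gets touched.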
 

KillerBee

Golden Member
Jul 2, 2010
1,753
82
91
Have you tried compressing the files first? Maybe you can squeeze them down enough to fit everything on 2 drives.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
OP, I hope I can clarify what imagoon said... though I am typing this over lunch and may make a mistake somewhere. I'm new to ZFS myself. Anyway: think of a zpool as a convenience factor... it is basically a binding-together of vdevs, like herding cats into one big super-drive. The zpool itself provides no redundancy; that's the responsibility of each vdev.

Vdevs are where the real action occurs. They can be a single drive (not recommended, because you will have no redundancy for that vdev), a mirror, or a RAIDZ.

You need at least three drives to get to RAIDZ1 which is like RAID5, and four drives to get to RAIDZ2 which is like RAID6.

You can add vdevs to a zpool at will, but each vdev needs its own redundancy. Therefore you could theoretically add one RAIDZ1 vdev followed by another, but the granularity is THREE drives at a time which sucks.

Therefore, IMHO, the most painful solution is the best one in the long run: borrow backup space (maybe a friend's external drive, or a free trial of a cloud storage service if you have high upload speeds), back everything up, then format all your drives for ZFS and vdev them into whatever you want. (RAIDZ2 is recommended for large-TB arrays because you can suffer two drives going down and still be okay... but if you're hard pressed for space, RAIDZ1 is okay too... I personally used mirrored vdevs for simplicity and performance.) That way you wind up with one big vdev, stick it into a zpool, and you're done.

Otherwise you will be stuck doing some unwieldy split-vdev type of thing, with a granularity of 3 drives minimum if you want to use RAIDZ1.
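
If you do go the full-backup route, the end state is simple - something like this sketch, with placeholder pool/device names:

  # one 5-disk RAID-Z2 vdev: any two disks can fail, ~6TB usable from five 2TB disks
  zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5

  # a dataset to copy the media back into once the pool exists
  zfs create tank/media

One vdev, one pool, done.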

 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Thanks for the clarification / correction (RAID Z2 vs Z1)

My ZFS is rusty. It is cool tech but doesn't have a whole lot of "value" for me since I mostly run Windows / Red Hat. This is the main reason I am rather giddy about Storage Spaces and ReFS. It is essentially the same idea: the storage pool lets you group the devices (like vdevs/pools), and ReFS gives you the resilience. It even has similar performance issues, such as parity calculations driving CPU usage high the way RAID-Zx does.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I have heard mixed things about ReFS performance... is it even available for Win8 peons or is it only available for Win Server 2012?

Personally I will probably eventually switch over to btrfs a decade from now, once it's native to Linux and its bugs are worked out.

 

KillerBee

Golden Member
Jul 2, 2010
1,753
82
91
Personally I will probably eventually switch over to btrfs a decade from now, once it's native to Linux and its bugs are worked out.

According to this graph, by 2023 we should all have ~300TB per disk:
http://en.wikipedia.org/w/index.php?title=File:Hard_drive_capacity_over_time.svg



Maintaining backups is going to be a bitch! :)

Haven't found a graph showing SSD capacity over time - I guess they will overtake spinning HDs eventually.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I have heard mixed things about ReFS performance... is it even available for Win8 peons or is it only available for Win Server 2012?

Personally I will probably eventually switch over to btrfs a decade from now, once it's native to Linux and its bugs are worked out.

From what I could tell, ReFS performance on its own wasn't horrible, but adding in Storage Spaces parity across 3 disks killed it, dropping to around 25MB/s where the original testing was around 90-ish MB/s.

Server 2012 only at the moment.
 

seamusmcaffrey

Junior Member
Nov 5, 2012
10
0
0
I think I understand how it all fits together, and I did end up going with the "painful" solution. My drives are with a friend who is backing them up to his hardware RAID array at the moment.

I guess I just don't understand why it isn't possible to add more drives to a pool after the pool has been created. It seems like not having that functionality really kills the usefulness of the system - what if I buy more storage in the future that I want to add to the pool? I'm pretty sure another friend set up that kind of functionality using unRAID on a Linux box, and I thought ZFS/FreeNAS was supposed to be more advanced while staying simple, or something. Clearly a noob here, and for that I apologize.

All I'm really trying to accomplish here is some sort of software RAID that will have redundancy (without mirroring, because I don't have that many drives and don't want to buy more at the moment) and also allow me to add more storage to it without complications...

I really do appreciate all the help here; you guys' experience/knowledge is invaluable.
 

seamusmcaffrey

Junior Member
Nov 5, 2012
10
0
0
Are there ZFS builds besides FreeNAS that will allow me to add drives into the pool one at a time, without building a new RAIDZ vdev?
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I think I understand how it all fits together, and I did end up going with the "painful" solution. My drives are with a friend who is backing them up to his hardware RAID array at the moment.

I guess I just don't understand why it isn't possible to add more drives to a pool after the pool has been created. It seems like not having that functionality really kills the usefulness of the system - what if I buy more storage in the future that I want to add to the pool? I'm pretty sure another friend set up that kind of functionality using unRAID on a Linux box, and I thought ZFS/FreeNAS was supposed to be more advanced while staying simple, or something. Clearly a noob here, and for that I apologize.

All I'm really trying to accomplish here is some sort of software RAID that will have redundancy (without mirroring, because I don't have that many drives and don't want to buy more at the moment) and also allow me to add more storage to it without complications...

I really do appreciate all the help here; you guys' experience/knowledge is invaluable.

You can always add more drives to the pool. You can't add drives to a vdev. If you create a new vdev, you can add that to the pool and extend the pool.

ZFS was designed for resilience and to combat bit rot by placing checksums all up and down the structure, while doing it on cheap hardware. You need some pretty high-end RAID gear to even get "close" to what ZFS offers in long-term storage stability.

Bit rot has become a serious issue; even MS has come up with ReFS for the same reasons.
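
Those checks are also something you can run on demand: a scrub walks the whole pool, verifies every block against its checksum, and repairs anything bad from parity or a mirror copy. Roughly (pool name is a placeholder):

  # read and verify every block in the pool, repairing bad copies where redundancy allows
  zpool scrub tank

  # shows scrub progress plus per-device read/write/checksum error counts
  zpool status tank

It's worth scheduling a scrub to run regularly (weekly or monthly) so rot gets caught while the redundancy is still intact.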
 

_Rick_

Diamond Member
Apr 20, 2012
3,935
68
91
Simple answer:
Always have a backup.
If that requires adding additional drives, so be it.
3TB drives currently have the best price/TB ratio.
 

seamusmcaffrey

Junior Member
Nov 5, 2012
10
0
0
You can always add more drives to the pool. You can't add drives to a vdev. If you create a new vdev, you can add that to the pool and extend the pool.

But if I add a single drive to the pool, it will not be redundant with the RAIDZ1 setup, correct?
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
But if I add a single drive to the pool, it will not be redundant with the RAIDZ1 setup, correct?

Correct. The limitation you are talking about (not being able to alter vdevs) is typical of everything but high-end enterprise gear that can do redistribution and the like. ZFS is supposed to get that feature someday.
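
You can actually see that trade-off if you try it - ZFS warns you about the mismatch (placeholder names again):

  # adding a bare disk as its own vdev: zpool complains about the mismatched
  # replication level and makes you force it with -f
  zpool add tank ada6      # refused: pool uses raidz, new vdev is a plain disk
  zpool add -f tank ada6   # works, but that disk has no redundancy of its own

And since writes get striped across all the vdevs in a pool, losing that single disk later can take the whole pool down with it.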
 

seamusmcaffrey

Junior Member
Nov 5, 2012
10
0
0
Alright, thank you very much. Sorry it takes me so long to understand...

Looks like I'll look into unRAID instead, then...
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
It bears mentioning that there is a way to upgrade the size of an existing vdev one disk at a time, sort of: upgrade to higher-capacity disks.

For example, a RAIDZ1 with three 2TB drives can gradually turn into a RAIDZ1 with three 3TB drives if you swap them out one at a time and let each one resilver. Same number of physical hard drives, so the vdev rules are not violated, but the disks are denser, so you get more capacity per disk and thus per vdev.
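
That swap-one-at-a-time upgrade looks roughly like this (placeholder names; assumes your ZFS version supports the autoexpand pool property):

  # let the vdev grow on its own once every member disk has been replaced
  zpool set autoexpand=on tank

  # swap the first 2TB disk for a 3TB one and wait for the resilver to complete
  zpool replace tank ada1 ada6
  zpool status tank          # watch resilver progress before touching the next disk

Repeat for each disk in the vdev; the extra capacity only shows up after the last replacement finishes resilvering.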

If you do mirroring you can simply add two drives at a time (as a mirrored vdev).

btrfs, ReFS (maybe - I need to read more about it), and ZFS are your top choices for block checksums and anti-bit-rot stuff, but btrfs is still being developed and Microsoft's ReFS is not available unless you want to shell out $425 for a Server 2012 license.

According to this graph, by 2023 we should all have ~300TB per disk:
http://en.wikipedia.org/w/index.php?title=File:Hard_drive_capacity_over_time.svg



Maintaining backups is going to be a bitch! :)

Haven't found a graph showing SSD capacity over time - I guess they will overtake spinning HDs eventually.

Kinda puts things into perspective lol.
 

seamusmcaffrey

Junior Member
Nov 5, 2012
10
0
0
I'm kind of okay with paying for it if it does what it's supposed to do, I guess. Frustrating, but it is what it is.
 

seamusmcaffrey

Junior Member
Nov 5, 2012
10
0
0
OK... after extensive reading and comparison, I think I'm back on the ZFS wagon. Partially I don't feel like paying for unRAID, partially I feel like the data is better protected with ZFS.

So if I set up FreeNAS with my five 2TB drives, what is the best config? RAIDZ1 or RAIDZ2? And how much storage do I get with each of those? (I.e., I assume RAIDZ2 takes two drives' worth of parity, leaving me with 6TB for storage, but can I even put 5 drives in a RAIDZ1?)
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I would suspect "RAID10" (striped mirrors) would perform better. Anything that keeps the parity XOR off the CPU will be faster. Then again, if the ZFS box has a decent CPU it might be irrelevant.
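
For reference, the three layouts for five 2TB drives come out roughly like this (placeholder device names; usable space will be a bit less after ZFS overhead):

  # RAIDZ1: one disk of parity, ~8TB usable, survives one disk failure
  zpool create tank raidz ada1 ada2 ada3 ada4 ada5

  # RAIDZ2: two disks of parity, ~6TB usable, survives any two disk failures
  zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5

  # "RAID10": two striped mirror pairs, ~4TB usable, with the fifth disk as a hot spare
  zpool create tank mirror ada1 ada2 mirror ada3 ada4
  zpool add tank spare ada5

RAIDZ1 gives the most space, RAIDZ2 the most safety, and the mirrors generally the best performance.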