Go Back   AnandTech Forums > Hardware and Technology > Memory and Storage

Forums
· Hardware and Technology
· CPUs and Overclocking
· Motherboards
· Video Cards and Graphics
· Memory and Storage
· Power Supplies
· Cases & Cooling
· SFF, Notebooks, Pre-Built/Barebones PCs
· Networking
· Peripherals
· General Hardware
· Highly Technical
· Computer Help
· Home Theater PCs
· Consumer Electronics
· Digital and Video Cameras
· Mobile Devices & Gadgets
· Audio/Video & Home Theater
· Software
· Software for Windows
· All Things Apple
· *nix Software
· Operating Systems
· Programming
· PC Gaming
· Console Gaming
· Distributed Computing
· Security
· Social
· Off Topic
· Politics and News
· Discussion Club
· Love and Relationships
· The Garage
· Health and Fitness
· Merchandise and Shopping
· For Sale/Trade
· Hot Deals with Free Stuff/Contests
· Black Friday 2014
· Forum Issues
· Technical Forum Issues
· Personal Forum Issues
· Suggestion Box
· Moderator Resources
· Moderator Discussions
   

Reply
 
Thread Tools
Old 11-05-2012, 10:57 AM   #1
seamusmcaffrey
Junior Member
 
Join Date: Nov 2012
Posts: 10
Default ZFS Server Upgrade

Hello,

I currently run a server of shared media drives for my HTPC through a windows 7 tower and am transitioning to a dedicated ZFS box using FreeNAS.

my issue is i have 3 FULL 2tb drives and only two empties, and i'm not sure how best to transition them to the ZFS box. i'd like to use RAIDZ1 or 2 here, but my understanding is i need to have three drives just to set that up. that won't be possible for me because then i'd lose all the data on one of my drives during the reformatting process.

i'm wondering how best to navigate data here. can i boot up the ZFS system and load drives into it one by one? i.e. start with one of the empties, move data using network from teh windows shared drive to the new ZFS-formatted drive, then pop that disk into the machine and format it to ZFS and add it to the pool of disks? i'm not sure i'm explaining myself well here. i guess i just don't know whether, when i put new drives into the storage pool for ZFS, if i'll still be able to use RAIDZ or not since it seems like you need at least 3 blank drives to even set up a pool as raidz1 or 5 for raidz2.

just trying to figure out how best to move from old system to new, improved system! thanks.
seamusmcaffrey is offline   Reply With Quote
Old 11-05-2012, 01:07 PM   #2
imagoon
Diamond Member
 
imagoon's Avatar
 
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,787
Default

Do a full backup, reconfigure and restore. Adding disks to ZFS will blow away any existing data on drive as ZFS won't want to touch an NTFS partition. The fact that you are going from Windows -> BSD pretty much requires wiping the drives for "NAS use."
imagoon is offline   Reply With Quote
Old 11-05-2012, 01:14 PM   #3
seamusmcaffrey
Junior Member
 
Join Date: Nov 2012
Posts: 10
Default

ah, yeah. that's what i figured/was afraid of. i don't have enough spare drives to back up all the data. i would need to have three blank drives, right?

am i correct in assuming if i can free up/move data around such that i can start with three drives in the ZFS system, i'll be able to add more drives to it afterward without losing data that has been migrated to those three ZFS drives?

i.e.

1. set up ZFS system with 3 hard drives
2. move data from NTFS drives to the ZFS drives
3. put now empty NTFS drives into ZFS array
4. format NTFS to ZFS and add to total disk pool? no data loss?
seamusmcaffrey is offline   Reply With Quote
Old 11-05-2012, 01:24 PM   #4
imagoon
Diamond Member
 
imagoon's Avatar
 
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,787
Default

ZFS supports adding devices to pool without loss of data but creating the vdevs the pool works with can't (yet) be expanded. So basically... If you had 3 TB drives and did RAID-Z2 with those 3 drives ie -> 3 drives -> 1 vdev -> add to a pool, copied all the data in. Then build a new vdev of RAID-Z2 out of the other 3 drives and added them to the pool, you would end up with a total pool capacity of 8TB. IE (2+2+2) / z 2 -> 4TB x 2.
imagoon is offline   Reply With Quote
Old 11-05-2012, 01:33 PM   #5
seamusmcaffrey
Junior Member
 
Join Date: Nov 2012
Posts: 10
Default

you're losing me a bit but it sounds like i will NOT just be able to add the new drives into the body of the pool.. my issue here is i have a total of 5 drives (all same size) that i want to get into this server; 3 of them are completely full of data, all are NTFS.

i can borrow a drive from a friend so as to back up data one by one and slot into the ZFS machine, but it doesn't sound like i can do that. it sounds like i need a minimum of 6 drives on hand to get full use of this... i may have bitten off more than i can chew here.
seamusmcaffrey is offline   Reply With Quote
Old 11-05-2012, 01:49 PM   #6
imagoon
Diamond Member
 
imagoon's Avatar
 
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,787
Default

ZFS works on the concept of "zpools" which is a pool of "vdevs."

A single zpool can be comprised of a group of vdevs.

Each vdev itself is comprised of block devices.

Redundancy is based at the vdev level.

I hope you follow at this point.

So to build a zpool, you need a vdev. I suspect that you want redundancy. The issue you have is that you don't have all 5 block devices (HDD's) available. ZFS does not quite yet support adding block devices to the vdevs (they say "soon" whatever that means) so you have to build the vdevs once and live with that decision. If you had all 5 block devices, you could add all 5, tell the vdev to use RAID-Z2 which would then build a vdev of 8TB. You then would add that to the pool and get an 8TB pool.

Since you don't have all 5, you would likely build something like: 3 block device vdev, add that to the pool, then create a 2 block device vdev and then add that to the pool. You would then end up with 3 drive "raid5" (4TB) and 2 drive Mirror (2TB) for 6TB.

Make sense I hope? Now if the vdev patch was main line you could just add devices and let it rewrite but you would have to wait for that.
imagoon is offline   Reply With Quote
Old 11-05-2012, 01:55 PM   #7
KillerBee
Golden Member
 
KillerBee's Avatar
 
Join Date: Jul 2010
Posts: 1,098
Default

Have you tried compressing files first - maybe can squeeze them down enough to fit everything on 2 drives?
KillerBee is offline   Reply With Quote
Old 11-05-2012, 02:51 PM   #8
blastingcap
Diamond Member
 
blastingcap's Avatar
 
Join Date: Sep 2010
Posts: 5,865
Default

OP, I hope I can clarify what imagoon said... though I am typing this over lunch and may make a mistake somewhere. I'm new to ZFS myself. Anyway: think of a zpool as a convenience factor.. it is basically a binding together of vdevs. Like herding cats into one big super-drive. The zpool provides no redundancy; that's the responsibility of each vdev.

Vdevs are where the real action occurs. They can be single drives (not recommended because you will have no redundancy for that vdev), mirror, or RAIDZ.

You need at least three drives to get to RAIDZ1 which is like RAID5, and four drives to get to RAIDZ2 which is like RAID6.

You can add vdevs to a zpool at will, but each vdev needs its own redundancy. Therefore you could theoretically add one RAIDZ1 vdev followed by another, but the granularity is THREE drives at a time which sucks.

Therefore, imho, the most painful solution is the best one in the long run, which is to borrow backup space (maybe a friend's external drive or free trial of a cloud storage service or something, if you have high upload speeds) and back everything up, then format all your drives to ZFS and vdev them into whatever you want (RAIDZ2 is recommended for large-TB arrays because you can suffer two drives going down and still be okay... but if you're hard pressed for space RAIDZ1 is okay too... I personally used mirrored vdevs for simplicity and performance). That way you wind up with one big vdev and stick it into a zpool and you are done.

Otherwise you will be stuck doing some unwieldy split vdev type of thing and have granularity of 3 drives minimum if you want to use RAIDZ1.

Quote:
Originally Posted by imagoon View Post
ZFS works on the concept of "zpools" which is a pool of "vdevs."

A single zpool can be comprised of a group of vdevs.

Each vdev itself is comprised of block devices.

Redundancy is based at the vdev level.

I hope you follow at this point.

So to build a zpool, you need a vdev. I suspect that you want redundancy. The issue you have is that you don't have all 5 block devices (HDD's) available. ZFS does not quite yet support adding block devices to the vdevs (they say "soon" whatever that means) so you have to build the vdevs once and live with that decision. If you had all 5 block devices, you could add all 5, tell the vdev to use RAID-Z2 which would then build a vdev of 8TB. You then would add that to the pool and get an 8TB pool.

Since you don't have all 5, you would likely build something like: 3 block device vdev, add that to the pool, then create a 2 block device vdev and then add that to the pool. You would then end up with 3 drive "raid5" (4TB) and 2 drive Mirror (2TB) for 6TB.

Make sense I hope? Now if the vdev patch was main line you could just add devices and let it rewrite but you would have to wait for that.
__________________
Quote:
Originally Posted by BoFox View Post
We had to suffer polygonal boobs for a decade because of selfish corporate reasons.
Main: 3570K + R9 290 + 16GB 1866 + AsRock Extreme4 Z77 + Eyefinity 5760x1080 eIPS

Last edited by blastingcap; 11-05-2012 at 02:54 PM.
blastingcap is offline   Reply With Quote
Old 11-05-2012, 03:11 PM   #9
imagoon
Diamond Member
 
imagoon's Avatar
 
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,787
Default

Thanks for the clarification / correction (RAID Z2 vs Z1)

My ZFS is rusty. It is a cool tech but doesn't have a whole lot of "value" for me since I mostly run Windows / Redhat. This is the main reason I am rather giddy about storage spaces and ReFS. It is essentially the same thing. The storage spool lets you group the devices (vdevs/pools) and ReFS gives you the resilience. It even has similar performance issues such as RAID-Zx driving CPU usage high etc.
imagoon is offline   Reply With Quote
Old 11-05-2012, 05:05 PM   #10
blastingcap
Diamond Member
 
blastingcap's Avatar
 
Join Date: Sep 2010
Posts: 5,865
Default

I have heard mixed things about ReFS performance... is it even available for Win8 peons or is it only available for Win Server 2012?

Personally I will probably eventually switch over to btrfs a decade from now once it's native to linux and its bugs are worked out.

Quote:
Originally Posted by imagoon View Post
Thanks for the clarification / correction (RAID Z2 vs Z1)

My ZFS is rusty. It is a cool tech but doesn't have a whole lot of "value" for me since I mostly run Windows / Redhat. This is the main reason I am rather giddy about storage spaces and ReFS. It is essentially the same thing. The storage spool lets you group the devices (vdevs/pools) and ReFS gives you the resilience. It even has similar performance issues such as RAID-Zx driving CPU usage high etc.
__________________
Quote:
Originally Posted by BoFox View Post
We had to suffer polygonal boobs for a decade because of selfish corporate reasons.
Main: 3570K + R9 290 + 16GB 1866 + AsRock Extreme4 Z77 + Eyefinity 5760x1080 eIPS
blastingcap is offline   Reply With Quote
Old 11-05-2012, 06:41 PM   #11
KillerBee
Golden Member
 
KillerBee's Avatar
 
Join Date: Jul 2010
Posts: 1,098
Default

Quote:
Originally Posted by blastingcap View Post

Personally I will probably eventually switch over to btrfs a decade from now once it's native to linux and its bugs are worked out.
According to this graph by 2023 we should all have @300TB per disk
http://en.wikipedia.org/w/index.php?..._over_time.svg



Maintaining backups is going to be a bitch!

Haven't found a graph showing SSD capacity over time - guess they will overtake spinning HD's eventually

Last edited by KillerBee; 11-05-2012 at 07:23 PM.
KillerBee is offline   Reply With Quote
Old 11-05-2012, 06:57 PM   #12
imagoon
Diamond Member
 
imagoon's Avatar
 
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,787
Default

Quote:
Originally Posted by blastingcap View Post
I have heard mixed things about ReFS performance... is it even available for Win8 peons or is it only available for Win Server 2012?

Personally I will probably eventually switch over to btrfs a decade from now once it's native to linux and its bugs are worked out.
From what I could tell, ReFS performance wasn't horrible, but adding in storage spaces with parity and 3 disks killed it down to around 25MB/s when the original testing was around 90ish MB/s.

2012 only atm.
imagoon is offline   Reply With Quote
Old 11-06-2012, 06:45 AM   #13
seamusmcaffrey
Junior Member
 
Join Date: Nov 2012
Posts: 10
Default

i think i understand how it all fits together; and I did end up going with the "painful" solution. my drives are with a friend who is backing them up to his hardware RAID array at the moment.

i guess i just don't understand why it isn't possible to add more drives to a pool after the pool has been created. it seems not having that functionality really kills the usefulness of the system- what if i buy more storage in the future that i want to add to the pool?... i'm pretty sure another friend set up that functionality using unRAID on a linux box, and i thought ZFS/FreeNAS was supposed to be more advanced in a simplistic way or something. clearly a noob here, and for that i apologize.

all i'm really trying to accomplish here is some sort of software raid that will have redundancy (without mirroring because i dont have that many drives and don't want to buy them at the moment) and also allow me to add more storage to it without complications...

i really do appreciate all the help here, your guys' experience/knowledge is invaluable.
seamusmcaffrey is offline   Reply With Quote
Old 11-06-2012, 07:21 AM   #14
seamusmcaffrey
Junior Member
 
Join Date: Nov 2012
Posts: 10
Default

are there ZFS builds besides FreeNAS that will allow me to add drives into the pool one at a time and without building a new RAIDZ vdev?
seamusmcaffrey is offline   Reply With Quote
Old 11-06-2012, 08:21 AM   #15
imagoon
Diamond Member
 
imagoon's Avatar
 
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,787
Default

Quote:
Originally Posted by seamusmcaffrey View Post
i think i understand how it all fits together; and I did end up going with the "painful" solution. my drives are with a friend who is backing them up to his hardware RAID array at the moment.

i guess i just don't understand why it isn't possible to add more drives to a pool after the pool has been created. it seems not having that functionality really kills the usefulness of the system- what if i buy more storage in the future that i want to add to the pool?... i'm pretty sure another friend set up that functionality using unRAID on a linux box, and i thought ZFS/FreeNAS was supposed to be more advanced in a simplistic way or something. clearly a noob here, and for that i apologize.

all i'm really trying to accomplish here is some sort of software raid that will have redundancy (without mirroring because i dont have that many drives and don't want to buy them at the moment) and also allow me to add more storage to it without complications...

i really do appreciate all the help here, your guys' experience/knowledge is invaluable.
You can always add more drives to the pool. You can't add drives to a vdev. If you create a new vdev, you can add that to the pool and extend the pool.

ZFS was designed for resilience and to combat bit rot by placing checks all up and down the structure while doing it on cheap hardware. You need some pretty high end RAID gear to even get "close" to what ZFS offers in long term storage stability.

It has become a serious issue, even MS has come up with ReFS for the same reasons.
imagoon is offline   Reply With Quote
Old 11-06-2012, 08:48 AM   #16
_Rick_
Diamond Member
 
_Rick_'s Avatar
 
Join Date: Apr 2012
Posts: 3,383
Default

Simple answer:
Always have a backup.
If that requires adding additional drives, so be it.
3TB drives currently have the best price/TB ratio.
_Rick_ is offline   Reply With Quote
Old 11-06-2012, 09:52 AM   #17
seamusmcaffrey
Junior Member
 
Join Date: Nov 2012
Posts: 10
Default

Quote:
Originally Posted by imagoon View Post
You can always add more drives to the pool. You can't add drives to a vdev. If you create a new vdev, you can add that to the pool and extend the pool.
but if i add a single drive to the pool, it will not be redundant with the RAIDZ1 set up, correct?
seamusmcaffrey is offline   Reply With Quote
Old 11-06-2012, 10:41 AM   #18
imagoon
Diamond Member
 
imagoon's Avatar
 
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,787
Default

Quote:
Originally Posted by seamusmcaffrey View Post
but if i add a single drive to the pool, it will not be redundant with the RAIDZ1 set up, correct?
Correct. The limitation you are talking about (not being able to alter vdevs) is typical of everything but the high end enterprise gear that can do redistribution and the like. ZFS is supposed to get that feature someday.
imagoon is offline   Reply With Quote
Old 11-06-2012, 10:45 AM   #19
seamusmcaffrey
Junior Member
 
Join Date: Nov 2012
Posts: 10
Default

alright, thank you very much. sorry it takes me so long to understand...

looks like i'll look into unraid instead then...
seamusmcaffrey is offline   Reply With Quote
Old 11-06-2012, 11:29 AM   #20
imagoon
Diamond Member
 
imagoon's Avatar
 
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,787
Default

Quote:
Originally Posted by seamusmcaffrey View Post
alright, thank you very much. sorry it takes me so long to understand...

looks like i'll look into unraid instead then...
You do have to pay for unRAID when using more than 3 disks.
imagoon is offline   Reply With Quote
Old 11-06-2012, 12:20 PM   #21
blastingcap
Diamond Member
 
blastingcap's Avatar
 
Join Date: Sep 2010
Posts: 5,865
Default

It bears mentioning that there is a way you can upgrade the size of an existing vdev one disk at a time, sort of: upgrade to higher-data disks.

For example, a RAIDZ1 with three 2TB drives can gradually turn into a RAIDZ1 with three 3TB drives if you swap them out one at a time and let them resilver. Same number of physical hard drives so vdev rules are not violated, but more data-dense so you get more data per disk, and thus per vdev.

If you do mirroring you can simply add two drives at a time (as a mirrored vdev).

btrfs, ReFS (maybe, I need to read more into this) and ZFS are your top choices for block checksums and anti-bitrot stuff, but btrfs is still being developed and Microsoft's ReFS is not available unless you want to shell out $425 for a license.

Quote:
Originally Posted by vtx1300 View Post
According to this graph by 2023 we should all have @300TB per disk
http://en.wikipedia.org/w/index.php?..._over_time.svg



Maintaining backups is going to be a bitch!

Haven't found a graph showing SSD capacity over time - guess they will overtake spinning HD's eventually
Kinda puts things into perspective lol.
__________________
Quote:
Originally Posted by BoFox View Post
We had to suffer polygonal boobs for a decade because of selfish corporate reasons.
Main: 3570K + R9 290 + 16GB 1866 + AsRock Extreme4 Z77 + Eyefinity 5760x1080 eIPS
blastingcap is offline   Reply With Quote
Old 11-06-2012, 12:23 PM   #22
seamusmcaffrey
Junior Member
 
Join Date: Nov 2012
Posts: 10
Default

i'm kind of okay with paying for it if it does what it's supposed to do i guess. frustrating but, it is what it is.
seamusmcaffrey is offline   Reply With Quote
Old 11-07-2012, 08:04 AM   #23
seamusmcaffrey
Junior Member
 
Join Date: Nov 2012
Posts: 10
Default

OK... after extensive reading and comparison i think i'm back on the ZFS wagon. partially don't feel like paying for unRaid, partially feel like the data is better protected with ZFS.

so if I set up FreeNAS with my 5 2TB drives; what is the best config? raidz1 or raidz2? and what are the available storage options for both of those (i.e. i assume raidz2 is taking 2 parity drives, thus leaving me with 6tb for storage, but can i even put 5 drives in a raidz1?)
seamusmcaffrey is offline   Reply With Quote
Old 11-07-2012, 10:06 AM   #24
blastingcap
Diamond Member
 
blastingcap's Avatar
 
Join Date: Sep 2010
Posts: 5,865
Default

RAIDZ2 all the way.

http://www.zdnet.com/blog/storage/wh...ng-in-2009/162

http://www.zdnet.com/blog/storage/wh...ng-in-2019/805

Some prefer mirrored pairs vdevs (and at least two such vdevs to get RAID10) for various reasons though, including simplicity and speed.

http://www.techrepublic.com/blog/dat...ou-choose/2689
__________________
Quote:
Originally Posted by BoFox View Post
We had to suffer polygonal boobs for a decade because of selfish corporate reasons.
Main: 3570K + R9 290 + 16GB 1866 + AsRock Extreme4 Z77 + Eyefinity 5760x1080 eIPS

Last edited by blastingcap; 11-07-2012 at 10:16 AM.
blastingcap is offline   Reply With Quote
Old 11-07-2012, 10:24 AM   #25
imagoon
Diamond Member
 
imagoon's Avatar
 
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,787
Default

I would suspect the "RAID10" would be better performing. Anything to keep XOR off the CPU would make it faster. Then again if the ZFS box has a decent CPU it might be irrelevant.
imagoon is offline   Reply With Quote
Reply

Thread Tools

Posting Rules
You may not post new threads
You may not post replies
You may not post attachments
You may not edit your posts

BB code is On
Smilies are On
[IMG] code is On
HTML code is Off

Forum Jump


All times are GMT -5. The time now is 11:01 AM.


Powered by vBulletin® Version 3.8.7
Copyright ©2000 - 2014, vBulletin Solutions, Inc.