ZFS and 512n vs 512e vs 4kN drives


FlawlessMind

Junior Member
Dec 3, 2015
@Essence_of_War Since you suggested getting 4kN drives earlier, do you think there is any ZFS performance difference that justifies the extra cost? My understanding is that 512e drives would be used as 4K drives, and new vdevs built from them would automatically be created with ashift=12, so ZFS would do 4K reads and writes anyway.

Also, do you know what exactly happens when a URE is hit during re-mirroring? You said these are loud errors, which makes sense; I just wonder what the exact behaviour is and what output you get from ZFS.

~Cheers~
 

Essence_of_War

Platinum Member
Feb 21, 2013
@Essence_of_War Since you suggested getting 4kN drives earlier, do you think there is any ZFS performance difference that justifies the extra cost? My understanding is that 512e drives would be used as 4K drives, and new vdevs built from them would automatically be created with ashift=12, so ZFS would do 4K reads and writes anyway.

It's not just new vdevs, it's also a question of replacing drives in existing vdevs. I think it would probably be OK to just get the 512e drives and then create your pool with ashift=12. The only issue I can think of off-hand might be related to zvols. Basically, if you have a lot of small files on a zvol backed by raidz1/2/3 vdevs with ashift=12 and the default volblocksize, you can get some rather shocking space usage. The default volblocksize works well for ashift=9, but not so well for ashift=12.
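To put a number on the ashift side of that (this is my own quick sketch, not something from the drive specs): ashift is just the base-2 log of the allocation size ZFS uses, so 512-byte sectors give ashift=9 and 4K sectors give ashift=12.

import math

# Quick sketch (mine): ashift is log2 of the sector/allocation size,
# so 512-byte sectors -> ashift=9 and 4096-byte sectors -> ashift=12.
def ashift_for_sector(sector_bytes):
    return int(math.log2(sector_bytes))

print(ashift_for_sector(512))   # 9  -> 512n (or 512e addressed at its logical size)
print(ashift_for_sector(4096))  # 12 -> 4kN (or 512e treated as a 4K drive)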

https://github.com/zfsonlinux/zfs/issues/548

formula = (disks_in_vdev - raidz_level) * 2^ashift
suggested default volblocksize = max(2^(4 + ashift), formula[vdev0], formula[vdev1], ...)
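Here's how I read that suggestion, as a quick Python sketch (the function names are mine, and note the actual ZFS default volblocksize is a fixed 8K rather than something computed per pool like this):

def vdev_min_alloc(disks_in_vdev, raidz_level, ashift):
    # (data disks per stripe) * 2^ashift -- the "formula" above.
    return (disks_in_vdev - raidz_level) * 2 ** ashift

def suggested_volblocksize(vdevs, ashift):
    # vdevs is a list of (disks_in_vdev, raidz_level) tuples.
    return max(2 ** (4 + ashift),
               *(vdev_min_alloc(d, z, ashift) for d, z in vdevs))

# A single 6-disk raidz2 vdev:
print(suggested_volblocksize([(6, 2)], ashift=9))   # 8192  -> lines up with the 8K default
print(suggested_volblocksize([(6, 2)], ashift=12))  # 65536 -> 64K, far bigger than 8K

Which is really just the point above in numbers: the stock 8K volblocksize is a comfortable fit at ashift=9 but ends up much too small once you move to ashift=12.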

Also, do you know what exactly happens when a URE is hit during re-mirroring? You said these are loud errors, which makes sense; I just wonder what the exact behaviour is and what output you get from ZFS.

Yeah, unlike something like mdraid (not to pick on mdraid excessively, it's great!), resilvering a raidz1/2/3 isn't an all-or-nothing operation that a single URE can ruin. If you hit a URE during a resilver, 'zpool status' will tell you which specific files are affected (corrupted) by the dead block, but you can otherwise go about your business as usual and restore those files from backup or what have you.

Here's an example of what that might look like:

https://blogs.oracle.com/erickustarz/entry/damaged_files_and_zpool_status
 

frowertr

Golden Member
Apr 17, 2010
Even I have to admit that's pretty cool, and I'm generally not on the ZFS bandwagon for SMB or home use.
 