Windows RAID5 vs. Solaris ZFS/RAIDZ

Discussion in 'Memory and Storage' started by daniel1113, Oct 10, 2008.

  1. daniel1113

    daniel1113 Diamond Member

    Joined:
    Jun 6, 2003
    Messages:
    6,448
    Likes Received:
    0
    Right now I am building a fileserver and am torn between using a RAID5 array with Windows Server and a ZFS/RAIDZ pool with Solaris (I am not looking at hardware RAID options for many reasons which I'm not going to explain here). While I could do a lot more with a Windows box (Exchange server, IIS, etc.), I can't get over how logical and simple ZFS/RAIDZ is in comparison to RAID5 in Windows, especially in terms of data mobility.

    If I understand everything correctly, I can basically remove my HDDs, attach them to any Solaris system, and import the ZFS pool right away, since all the necessary information is stored on the disks themselves. It doesn't matter which SATA controller the disks are connected to as long as it is supported by the OS.
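
    For illustration, moving the pool would look roughly like this (a sketch; the pool name "tank" is a placeholder):

    # zpool export tank        (on the old system, cleanly releases the pool)
    # zpool import             (on the new Solaris box, lists pools found on the attached disks)
    # zpool import tank        (imports the pool by name)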

    I don't get the same impression when it comes to RAID5 arrays in Windows. While I can't really say for sure, it just seems like there is more to it in Windows.

    So, am I missing something when it comes to RAID arrays in Windows, or is ZFS really that much of an improvement?

    Thanks in advance.
     

  3. syadnom

    syadnom Member

    Joined:
    May 20, 2001
    Messages:
    144
    Likes Received:
    0
    Windows software RAID is done with dynamic disks, which can be imported on any other Windows machine running XP, 2k3, Vista, WSS, etc. It's a straightforward process, so you shouldn't have any issues with it.

    ZFS is equally mobile between Solaris, OpenSolaris, FreeBSD, OS X, and Linux (under FUSE).

    On native platforms (i.e., not Linux/FUSE), ZFS is faster than NTFS. ZFS is also MUCH faster at RAID-Z than Windows is at software RAID5; in fact, ZFS will usually be faster at RAID-Z2 (like RAID6) than Windows is at RAID5.

    Additionally, ZFS is more flexible (a command sketch follows this list):

    You can add devices to a RAIDZ pool; you cannot do that with RAID5 without rebuilding the array.
    You can have separate transaction log (ZIL) disks. This means you can use a small but very fast SSD to absorb writes and let ZFS flush to the slower hard disks in a more organized and faster manner.
    You can have a local read cache (L2ARC) on SSDs as well.
    ZFS can export NFS and iSCSI natively.
    ZFS can compress volumes with lzjb or gzip, which is great for document stores. Make sure to get a smokin' fast CPU for this: quad cores are nice, but single-threaded lzjb or gzip will do better on higher-GHz CPUs, so consider a 3 GHz dual over a 1.6 GHz quad.
    ZFS doesn't have the RAID5 write hole, where a RAID5 array can lose data when power is lost because all uncommitted parity writes are lost.
    ZFS has RAID5-like and RAID6-like modes called raidz and raidz2 respectively. They are similar in function, but ZFS adds more robust functionality on top.
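
    A rough sketch of what this looks like in practice (the pool name "tank", the dataset "tank/docs", and the cXtYdZ device names are placeholders; separate log and cache devices assume a ZFS version that supports them):

    # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0   (RAID-Z pool across four disks)
    # zpool add tank log c2t0d0                              (small, fast SSD as the intent log)
    # zpool add tank cache c2t1d0                            (SSD as a read cache)
    # zfs create tank/docs
    # zfs set compression=gzip tank/docs                     (compress the document store)
    # zfs set sharenfs=on tank/docs                          (export it over NFS natively)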

    All things being equal, ZFS will be somewhat faster. In ideal circumstances, ZFS will smoke the Windows server, because it can use very fast cache disks to essentially hide the slow disks.

    I should also note that I would suggest Linux software RAID with LVM on top long before Windows software RAID.

    Good luck
     
  4. daniel1113

    daniel1113 Diamond Member

    Joined:
    Jun 6, 2003
    Messages:
    6,448
    Likes Received:
    0
    Thanks for the response. You covered most of the stuff I've read about ZFS, and I am definitely convinced that it's the ideal filesystem.

    You mentioned that devices can be added to RAIDZ, but everything I've read so far makes it sound like that is not possible (at least not yet). In order to add devices, one must either destroy the ZFS pool and start over or simply create another pool, in which case you would have two parity disks (one for each pool). I was also given the impression that this feature is being developed as we speak, so perhaps it has been completed and I just don't know about it.
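
    For what it's worth, the distinction (as of this writing) seems to be that you cannot grow an existing raidz vdev by adding single disks to it, but you can add a whole new raidz vdev to the same pool, and the pool grows; each vdev then carries its own parity. A sketch, with placeholder device names:

    # zpool add tank raidz c3t0d0 c3t1d0 c3t2d0   (adds a second raidz vdev to the existing pool "tank")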

    Now, if only ZFS could be used with Windows Server... I'd be one happy guy.
     
  5. Nothinman

    Nothinman Elite Member

    Joined:
    Sep 14, 2001
    Messages:
    30,672
    Likes Received:
    0
    I wouldn't say that you could do more, just different stuff, with a Windows box. While Exchange is nice for a company that wants shared calendars, it's overkill for home, and everything else can be done with other mail servers. All you really need is a good IMAP server like Cyrus or Courier and a webmail interface like RoundCube or Horde IMP and you're pretty much set.

    Sure, dynamic disks can be imported on any version of Windows that supports them, but you can only use RAID levels that the OS supports, and client OSes like XP don't support any form of redundant RAID. So while you could import the disks, you wouldn't be able to activate the array.
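
    For reference, the import itself is done through Disk Management or diskpart; a sketch, assuming the foreign dynamic disk shows up as disk 1:

    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> import          (imports the foreign disk group containing the selected disk)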

    This is probably the route I would go just because I wouldn't want to use Solaris.
     
  6. daniel1113

    daniel1113 Diamond Member

    Joined:
    Jun 6, 2003
    Messages:
    6,448
    Likes Received:
    0
    By "more" I meant that I could do more as I am a .NET/MSSQL developer and do a lot with Microsoft apps, including Exchange server. I realize there are many other alternatives that are just as viable, but not for what I am trying to accomplish. It's for this very reason that I am trying to justify using Windows Server over Solaris or a Linux distro. However, I think I'm pretty much sold on ZFS/RAIDZ.
     
  7. leobaby

    leobaby Junior Member

    Joined:
    Oct 15, 2008
    Messages:
    1
    Likes Received:
    0
    How well would Samba work for a solution like this? Would it be better to use NFS?
     
  8. kzrssk

    kzrssk Member

    Joined:
    Nov 13, 2005
    Messages:
    111
    Likes Received:
    0
    I wonder how well it would work if you ran Windows Server 2003 with VMware or VirtualBox and ran whatever flavor of Solaris in a VM, but connected it directly to your hard drives. I've been thinking of trying this for a while, but don't know how crazy it sounds, or if it's even possible. What do y'all think?

    EDIT: Or perhaps the other way around: run Windows in VirtualBox on Solaris. Then I suppose you wouldn't have so much VM<->I/O translation overhead, and you could completely virtualize the Windows VM's storage to a local file...
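
    On the "connect it directly to your hard drives" part: VirtualBox can hand a guest a raw physical disk through a generated VMDK descriptor; a sketch, assuming a Windows host where the data disk is PhysicalDrive1:

    VBoxManage internalcommands createrawvmdk -filename solaris-disk.vmdk -rawdisk \\.\PhysicalDrive1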
     
  9. Nothinman

    Nothinman Elite Member

    Joined:
    Sep 14, 2001
    Messages:
    30,672
    Likes Received:
    0
    Either way you're stuck dealing with Solaris at some point, which is a major con. I know VMware lets you give a guest a full block device; I don't know about VirtualBox, though.
     
  10. kzrssk

    kzrssk Member

    Joined:
    Nov 13, 2005
    Messages:
    111
    Likes Received:
    0
    Still, the gains from ZFS are so freakin' attractive though. If someone made a Windows ZFS driver, I'd be in heaven. Guess I better start learning C or something.

    Or I suppose I could be a tool and buy an XServe when Snow Leopard comes out.
     
  11. Nothinman

    Nothinman Elite Member

    Joined:
    Sep 14, 2001
    Messages:
    30,672
    Likes Received:
    0
    IMO the tradeoffs aren't worth it. I'd much rather use Debian, Linux software RAID and LVM than Solaris and anything.
     
  12. taltamir

    taltamir Lifer

    Joined:
    Mar 21, 2004
    Messages:
    13,578
    Likes Received:
    0
    In addition to all of that, ZFS has space, file-size, filename-length, and similar limits so high that they are practically unreachable for at least the next 20 years.

    Also, ZFS has end-to-end checksumming, protecting you from bit rot, bit flips from cosmic rays, and random write errors (a typical server-class SCSI HDD will write roughly one bit wrong per TB of data it writes; a regular drive will do so more often). With raidz and raidz2 you can actually repair damaged files in such a case, while on RAID5 or RAID6 you would just have a corrupt file. (I had a few corrupt files on RAID5; three months after building my raidz2 I had a single flipped bit, which ZFS corrected, so no lost file there.)
    If you just use a single drive you get notification of errors; if you have any sort of redundancy (a mirror, raidz, or raidz2) you can actually recover corrupt files, rather than just being protected against whole-drive failures.
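
    The checking and self-healing can also be run on demand; a sketch, assuming the pool is named "tank":

    # zpool scrub tank        (reads everything, verifies checksums, repairs from redundancy where possible)
    # zpool status -v tank    (shows scrub progress and lists any files with unrecoverable errors)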

    I have tested this across various operating systems... do a clean format of the OS drive, install a new OS, and you get the array back as simply as typing:
    # zpool import -f tank

    I recommend you use the LATEST OpenSolaris, found here:
    www.genunix.org
    I would avoid the global (all-languages) version, because it takes SIGNIFICANTLY longer to install due to the heavier compression used to cram all the languages onto one disc.

    learn how to use ZFS itself here:
    http://opensolaris.org/os/community/zfs/

    And learn how to share the files over SMB (for Windows access) using the Solaris CIFS service here:
    http://www.genunix.org/wiki/in...e_Solaris_CIFS_Service
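
    As a rough sketch of what the CIFS side looks like (the dataset name is a placeholder and the exact steps vary by build; the wiki page above is the authoritative walkthrough):

    # svcadm enable -r smb/server        (starts the in-kernel CIFS service and its dependencies)
    # zfs set sharesmb=on tank/share     (shares a dataset over SMB)
    # smbadm join -w WORKGROUP           (joins a workgroup for simple home use)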