jwilliams4200
Senior member
- Apr 10, 2009
But what about my 3-vdev RAID1 pairs suggestion? That isn't a terrible choice at all.
It is only 50% efficient for your drive capacity. I cannot believe you actually asked me what I meant by that.
The process you described is NOT online capacity expansion, which refers to the capability to add one or more drives to an existing RAID device, and have the RAID expanded to the new drive(s).
I am still not understanding how you are getting that figure. The capacity depends on the RAID level and array size.
And ZFS actually can do the form YOU described. Its drawback is that it cannot do online capacity shrinking in any form (no vdev removal, no parity reduction, no array shrinking).
I can only assume you are incorrectly describing your zpool.
I thought you meant that you had a zpool that consisted of 6 drives, in 3 mirrored vdev pairs. In other words, if you had 6 1TB drives, then you would be able to store only 3TB of data. 3/6 = 50% efficiency.
In contrast, a 6-drive dual-parity (distributed parity) RAID of 1TB drives can store 4TB, so the efficiency is 66.7% (4/6).
I cannot believe I had to explain that.
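The arithmetic behind those two efficiency figures can be sketched in a few lines (a minimal sketch of the 6×1TB comparison described above):

```python
# Usable-capacity efficiency for the two 6-drive layouts discussed above.
DRIVE_TB = 1
TOTAL_DRIVES = 6

# 3 mirrored pairs: each 2-drive mirror stores only one drive's worth of data.
mirror_usable = 3 * DRIVE_TB                          # 3 TB
mirror_eff = mirror_usable / (TOTAL_DRIVES * DRIVE_TB)

# 6-drive dual-parity (RAID6/RAIDZ2): two drives' worth of space goes to parity.
raidz2_usable = (TOTAL_DRIVES - 2) * DRIVE_TB         # 4 TB
raidz2_eff = raidz2_usable / (TOTAL_DRIVES * DRIVE_TB)

print(f"mirrors: {mirror_usable} TB usable, {mirror_eff:.1%}")   # 50.0%
print(f"RAIDZ2:  {raidz2_usable} TB usable, {raidz2_eff:.1%}")   # 66.7%
```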
3. You said that doing an in-place upgrade gives me 50%. That is, you said that taking my 5-drive RAID6 array and replacing it drive by drive, swapping the 750GB drives for bigger ones (say 3TB), gives me 50% efficiency. Because you were too huffy to explain yourself, we both wasted time on what was a simple grammar error (you were actually saying that the hypothetical setup is 50% efficient, not that the upgrade of the RAID6 array is a 50% efficient upgrade).
Wrong again. ZFS is incapable of expanding a RAIDZx vdev.
4. While it is true that my hypothetical setup is 50% efficient rather than 66.7%, it is far better overall because you can perform rolling upgrades 2 drives at a time. You end up paying less money to handle your growing space needs, and you get more flexibility.
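A back-of-the-envelope version of that rolling-upgrade argument (a sketch only; it reuses the 750GB-to-3TB numbers from the thread, and the point is drives bought per upgrade step, not the exact capacities):

```python
# Drives you must buy before ANY new usable space appears, per upgrade step.
OLD_TB, NEW_TB = 0.75, 3.0   # 750GB drives replaced by 3TB drives

# Mirrored pairs: replace one 2-drive vdev; that vdev's usable space grows.
mirror_drives_per_step = 2
mirror_gain_per_step = NEW_TB - OLD_TB        # 2.25 TB usable added

# 5-drive RAID6/RAIDZ2: no extra usable space until ALL five members are
# replaced; usable space then goes from 3*OLD to 3*NEW.
raid6_drives_per_step = 5
raid6_gain_per_step = 3 * (NEW_TB - OLD_TB)   # 6.75 TB usable added, all at once

print(mirror_drives_per_step, mirror_gain_per_step)
print(raid6_drives_per_step, raid6_gain_per_step)
```

The mirror layout trades efficiency for granularity: each step needs only two new drives, while the RAID6 array makes you buy five before you see a single extra byte.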
You know, you made errors more than twice as often as I did in this last discussion, and I didn't start every post with WRONG AGAIN...
Please try to be a little more civil here.
Wrong again. I was referring to your comment about adding a mirror vdev to an existing pool with a RAIDZ2 vdev. I said that was only 50% efficient for the added capacity.
But either is far worse than a snapshot RAID setup for a media server, which is what this thread is about.
Say it with me now. ZFS is a terrible choice for a media server.
I made no errors. You made multiple errors and are now wrongly accusing me of making errors.
Please try not to post so much misinformation and nonsense here.
jwilliams4200 said:
With ZFS, the most you can do is add another RAID vdev to your existing zpool.
taltamir said:
1. You can do online capacity expansion on ZFS. Simply swap each drive in turn with a bigger one and then perform a resilver.
1. Who would be crazy enough to waste their time doing that? And who wants to upgrade 5 drives at a time?
taltamir said:
Not me, that is why I am not doing it. But you said IMPOSSIBLE, not "inconvenient"
It happens, people make mistakes.
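For reference, the swap-and-resilver procedure quoted above would look roughly like this on ZFS (an admin sketch only, not runnable outside a system with a real pool; the pool name `tank` and the device names are hypothetical, and `autoexpand` must be on for the extra space to appear):

```shell
zpool set autoexpand=on tank             # let the vdev grow once all members are bigger
zpool replace tank old-disk1 new-disk1   # swap one drive; this starts a resilver
zpool status tank                        # wait for the resilver to finish
# ...repeat the replace/wait cycle for each remaining drive in the vdev...
zpool list tank                          # the extra capacity appears after the last swap
```

Note that this matches taltamir's point: it works, but every drive in the vdev must be replaced before any new capacity shows up.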
As far as your first quote goes. You misread my own post.
You used 1 & 2 to denote your replies. But what you labeled "2" was actually a reply to an unnumbered hypothetical I posted later in that post, based on points 1 & 2.
...
As for your second point: having a setup slightly superior to ZFS (which it might be; I need to look into that specific one more) doesn't make the second-best option a "terrible choice".
...
You made errors. For example:
2. You can add a 2-disk RAID1 vdev to an existing 5-disk RAID6 array, for example.
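Adding a mirror vdev to an existing pool is a one-liner on ZFS (a sketch only; `tank` and the device names are hypothetical, and this cannot run outside a system with a real pool):

```shell
# Appends a new 2-disk mirror vdev to the pool; data is then striped
# across the existing RAIDZ2 vdev and the new mirror.
zpool add tank mirror /dev/sdc /dev/sdd
```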
ZFS is an even worse choice for a media server than most other distributed-parity RAIDs, since most others (mdadm, most hardware RAID) allow online capacity expansion (OCE). With ZFS, the most you can do is add another RAID vdev to your existing zpool.
Wrong again. And again. And yet again. Do you ever write anything without errors?...
I did a little reading about "snapshot RAID", and it seems to me that a redundant storage system that only becomes redundant after a cron job runs is a fundamentally flawed idea.
copy-on-write, full end-to-end checksumming, automatic scrubbing, instant snapshots,
COW: ooh, as if COW provides a great benefit for a media server. Actually, it is at best neutral, and at worst counterproductive for a media server, since it can cause filesystem fragmentation, which slows reads.
end-to-end checksums: this has always been a false claim of ZFS zealots. Since ZFS does not allow you to manually enter a checksum for a new file you have just added, it is clearly NOT end-to-end; in reality, if your data gets scrambled before or while being copied onto ZFS, you will not know it. So the real claim is that ZFS keeps checksums on all your data. Nice. Except so does snapraid (and flexraid, and disparity, and btrfs...)
instant snapshots: not needed for a media server filesystem. They can be useful for an OS filesystem, but of course, snapshots are hardly unique to ZFS.
o new drives can be easily added, 1 drive at a time
o works with HDDs of different capacities, utilizing all space on each drive
o no data migration necessary since it just uses drives formatted with your chosen filesystem
o if you lose more drives than you have parity, you only lose the data from the dead drives
o power efficient: during movie playback, only one drive needs to spin up
o works with your existing OS, no need to change to a different OS
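As a concrete illustration of the list above, a minimal snapraid setup looks something like this (a sketch only; the paths and disk names are hypothetical, and each data disk keeps its own ordinary filesystem):

```
# /etc/snapraid.conf -- one parity drive protecting two data drives
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
exclude *.tmp
```

Parity is then updated by a scheduled run, e.g. a nightly crontab entry like `0 3 * * * snapraid sync` — which is exactly the trade-off jwilliams4200 objects to above: files added since the last sync are unprotected until the next run.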