Drobo alternatives?

alizee

Senior member
Aug 11, 2005
501
0
86
I'm looking at a drobo, but they are very pricey (especially the 8-bay Drobo Pro). There are certain features of it, though, that I really like, and I'm wondering if I can get something similar with ZFS or something else. The features I'm looking for:

1) Appears as one large volume (like RAID or concatenation).

2) Easily expandable. This seems like the hardest part. I'm using Drive Extender on my WHS, and it works fine, but I guess it's going away in v2 unless Microsoft has a change of heart. RAID 5 and 6 don't offer a similar feature, right? If I wanted to expand those, I would have to back up, destroy the RAID, and then create a new one, is that right? They also require that I use same-size hard drives, lest I lose the extra space...

3) Redundancy. Not a completely necessary feature, but it would make things easier in single drive failures.

Should I just suck it up and buy a drobo?

Thanks for the help!
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
The base Drobo is USB or iSCSI, so I suppose you can use whatever filesystem you want on it, since it's block storage. The FS models add a filesystem, but not ZFS.

Single-drive failures are usually never the problem; it's the multi-drive failures that really kick you in the balls.
 

Lorne

Senior member
Feb 5, 2001
873
1
76
We just started using the Drobo Elite and I am 50/50 on it.
Notes for you:
1. It does not work over an open network; their documentation is misleading. You have to have a host system, and here's something their notes don't tell you: set it up the first time with the USB cable (you have to), then disconnect the USB to get the iSCSI to work, or it uses the USB all the time.
2. Their 2TB XP/2K compatibility mode is a bit flaky, and some older software does not like it: it reports back zero space available across the network and you get "no space available to save" issues. You can get that with all modes in some cases.
3. It's a JBOD system even in RAID mode, and you have no direct control of any of the drives.

As of yet I have no idea how their protection scheme or recovery works.

In my opinion (I was going to set up something similar), I would build my own with Win2008 and possibly save money, but I know I would have control and a far better, cheaper ability to upgrade in the future.
 
Dec 26, 2007
11,782
2
76
I'm with you, OP, and basically in the exact same spot. Where I'm leaning is building my own server, based on comments from the thread in the OS subforum about DE/Vail (http://forums.anandtech.com/showthread.php?t=2122405).

UnRaid
FlexRaid
FreeNAS
Nexentastor

All of those are options I'm looking into, but I haven't decided on any yet. Unfortunately, none of them offer what WHS does in terms of being a more complete server product (add-ins, built-in features, etc.); all the rest appear to be JUST file server products. What I'm really looking for is a server-type OS (say, Server 2008 R2 or something else) that supports add-ins like a torrent client and a music/video server (for streaming, although I can just load files off the file server instead), among others I can't remember right now.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Drobo is RAID 3: one drive contains all the ECC needed to reconstruct the rest of the drives, and it uses some sort of leveling to allow you to have different-size drives.

I have the QNAP with NTFS support, and the NTFS and USB stacks run in userland, so they are useless. If you need NTFS support, you need a block storage device or a Windows-based device; otherwise you will get a half-azz implementation that doesn't fully follow the latest (2008 R2/Win7) spec. SMB != NTFS; SMB is a network protocol. The underlying filesystem is very important too, for file locking and permissions.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
2) Easily expandable. This seems like the hardest part. I'm using Drive Extender on my WHS, and it works fine, but I guess it's going away in v2 unless Microsoft has a change of heart.

The reason MS is doing away with it is that the current implementation is fundamentally flawed in a way that causes data loss, and the new and improved implementation that fixes that flaw isn't stable yet and isn't going to be in time for release (unless they postpone the release).

I suggest you just use ZFS on FreeNAS. But then again, I am cheap. Do you think the features Drobo gives that ZFS on FreeNAS doesn't are worth the extra money? I don't, but if you do, then go ahead and buy it; it is your choice.

Keep in mind, though, that there are features of ZFS that Drobo does not have (end-to-end checksumming).
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
How does ZFS deal with a variety of drives? QNAP and Drobo ignore the drives' error correction (or lack of it) and deal with issues on their own. This stabilizes systems with asymmetric error correction, mixed drive types, etc.

Drive 1 reports an error immediately (TLER=0 does nothing).
Drive 2 takes 60 seconds and then sends nothing.
Drive 3 takes 7 seconds and then tells the system it's done.

Drobo and QNAP can handle all 3 cases mixed. How does ZFS fare?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
ZFS doesn't have any problem with or without TLER; none of the 3 situations is an issue for ZFS (while being a big deal for many "standard" RAID controllers).
Regardless of what the drive reports, ZFS can repair the error on the fly (fixing the data on the disk as well), as long as there is redundancy.
 

alizee

Senior member
Aug 11, 2005
501
0
86
Thanks for everybody's replies; I think I'm headed in the right direction.

I suggest you just use ZFS on FreeNAS. But then again, I am cheap. Do you think the features Drobo gives that ZFS on FreeNAS doesn't are worth the extra money? I don't, but if you do, then go ahead and buy it; it is your choice.

I don't really think Drobo is worth the extra money, but the one killer feature of it (and Drive Extender, and maybe Greyhole) is the ability to add drives on the fly and to mix and match drive sizes. If ZFS can do that, then my project this weekend is installing FreeNAS, for reals. Or is there a better way to do it than ZFS?
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
The reason MS is doing away with it is that the current implementation is fundamentally flawed in a way that causes data loss...
You've mentioned this a couple of times. Do you have any links pointing to discussions of this idea? I'd honestly like to find an in-depth discussion of the topic.

As I'm sure you know, MS spent a couple of months going over the first Drive Extender in H1 2008 because of issues with data loss under certain circumstances. MS claimed to have found the problem, corrected it with PP1, and there's been no repeat of that particular issue. What other data loss issues have you seen or heard about?

Thanks.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Here is your link: http://arstechnica.com/microsoft/news/2010/11/has-microsoft-just-ruined-windows-home-server.ars

I don't really think Drobo is worth the extra money, but the one killer feature of it (and Drive Extender, and maybe Greyhole) is the ability to add drives on the fly and to mix and match drive sizes. If ZFS can do that, then my project this weekend is installing FreeNAS, for reals. Or is there a better way to do it than ZFS?
What do you mean by "on the fly"? With ZFS you can add as many drives as you want to a pool at any time, but you cannot remove drives from a pool once added. Thus I recommend that you stick to one vdev per pool (a vdev is either a single disk or a RAID array, so a 5-disk RAID 5 array would be one vdev and a 4-disk RAID 10 array a second vdev; you can make one single pool of the two, but it is a bad idea because it cannot be shrunk).
I hardly consider that a killer feature... end-to-end checksumming? That is a killer feature.

As for mismatched drives: you can use them, in some ways. A pool can contain a bunch of vdevs of whatever size or type you want. If you were to RAID 1 a 1TB and a 2TB drive (let's call it vdev1), then vdev1 will have 1TB of available space. If you want, you can pool vdev1 with vdev2, where vdev2 is a 3x750GB RAID 5 array (1.5TB of data), for a total of 2.5TB. If you were to replace the 1TB drive in vdev1 with a 2TB drive, you would then have 2TB in vdev1. If you were to replace one drive with a 3TB drive, let it "heal", then replace the other with a 3TB, vdev1 would expand to 3TB.
It is rather smart and robust; it just lacks shrinking capability at the moment.
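Sketched as zpool commands, a mismatched-drive layout like the one above would look roughly like this (device names such as c1t0d0 are placeholders, and exact syntax can vary between Solaris and FreeBSD/FreeNAS builds):

```shell
# vdev1: mirror of a 1TB and a 2TB drive -> 1TB usable (the smaller disk wins)
zpool create tank mirror c1t0d0 c1t1d0

# vdev2: 3x750GB raidz -> ~1.5TB usable; pool total ~2.5TB
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0

# Replace the 1TB mirror member with a 2TB drive; after the resilver
# completes, vdev1 grows to 2TB (newer builds may need autoexpand=on first)
zpool replace tank c1t0d0 c3t0d0
zpool list tank
```

These are sketches only; they need a live ZFS system with spare disks to run.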
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
I read that article a few days ago. At the time, my impression was that the author had never actually USED Windows Home Server. His familiarity and understanding of the product appeared to be quite limited.

The quote below, which is the only place that I can find that mentions data loss, comes from the late-2007 data corruption problem that was resolved by MS with WHS PP1 in mid-2008:

"This worked, more or less, but it wasn't entirely robust. Data loss bugs cropped up when, for example, using Office documents on the pooled share afflicted Windows Home Server, and these persisted long after the product's initial release. To this day, Drive Extender can change the timestamps of files that it duplicates. These flaws are likely the reason that the technology was never used in any other products."

The timestamp issue, which can occur if files/folders are moved from disk to disk under certain conditions, is a recognized potential problem, but users have suggested that it only affects folder timestamps. That's a fairly common issue in several versions of Windows and isn't normally considered a showstopper.
https://connect.microsoft.com/Windo...igrator-changes-date-timestamp-on-directories
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
I'm looking at a drobo, but they are very pricey (especially the 8-bay Drobo Pro). There are certain features of it, though, that I really like, and I'm wondering if I can get something similar with ZFS or something else. The features I'm looking for:

1) Appears as one large volume (like RAID or concatenation).

2) Easily expandable. This seems like the hardest part. I'm using Drive Extender on my WHS, and it works fine, but I guess it's going away in v2 unless Microsoft has a change of heart. RAID 5 and 6 don't offer a similar feature, right? If I wanted to expand those, I would have to back up, destroy the RAID, and then create a new one, is that right? They also require that I use same-size hard drives, lest I lose the extra space...

3) Redundancy. Not a completely necessary feature, but it would make things easier in single drive failures.

Should I just suck it up and buy a drobo?

Thanks for the help!

1 and 3 are handled well in ZFS (I use Nexentastor; I went from knowing nothing to installing it in about an hour). The thing with the free Nexentastor is that it now has an 18TB limit. So ideally you would build out the entire 18TB with redundancy right now. However, if you chose to create a 9TB array, you could "expand" it by adding another 9TB array, but they would be managed as two arrays, so it isn't as if you made that one array 18TB. IMHO it doesn't really matter unless you need one single share to exceed 9TB, in which case I assume you would create an 18TB array in the first place.
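The "two arrays, one pool" expansion described here is, in zpool terms, just adding a second raidz vdev to the existing pool; shares on top see a single larger filesystem. A rough sketch (device names are placeholders):

```shell
# Pool "tank" already holds one raidz vdev; add a second raidz vdev
# to grow the pool in place.
zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0

# One pool, one namespace: existing filesystems and shares simply
# see the additional free space.
zpool list tank
zfs list tank
```

This assumes a running ZFS system; it is a sketch of the expansion step, not a complete recipe.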

I used to use WHS for my storage, and this thing has shown me just how retarded WHS is for storage. I still use WHS; however, I use it only for PC/Mac backups and for storing my Windows Media Center recorded TV shows. For any real storage I recommend ZFS any day of the week.

The whole thing is also much cheaper than commercial solutions. Just go to the site, get the Supermicro server board of your choice, and put ECC RAM in it. Buy 10-11 2TB HDDs and you're done. Quite inexpensive for how good it is (better than commercial solutions that cost $5-7 thousand).

If you ever need more than 18TB it is cheaper to build a second machine. In fact the cost is negligible compared to any other solution. All the entry NAS machines cost $1K+ and they are primitive.


EDIT: I realize I got the terminology wrong; a vdev and an array are two different things. I guess that means you could have a single 18TB share if you wish, upon expansion. The only thing, I suppose, is that you will need to decide how many parity drives you want each time you expand (1, 2, or 3, depending on how many drive failures you wish to protect against).
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I read that article a few days ago. At the time, my impression was that the author had never actually USED Windows Home Server. His familiarity and understanding of the product appeared to be quite limited.

The quote below, which is the only place that I can find that mentions data loss, comes from the late-2007 data corruption problem that was resolved by MS with WHS PP1 in mid-2008:

"This worked, more or less, but it wasn't entirely robust. Data loss bugs cropped up when, for example, using Office documents on the pooled share afflicted Windows Home Server, and these persisted long after the product's initial release. To this day, Drive Extender can change the timestamps of files that it duplicates. These flaws are likely the reason that the technology was never used in any other products."

The timestamp issue, which can occur if files/folders are moved from disk to disk under certain conditions, is a recognized potential problem, but users have suggested that it only affects folder timestamps. That's a fairly common issue in several versions of Windows and isn't normally considered a showstopper.
https://connect.microsoft.com/Windo...igrator-changes-date-timestamp-on-directories

Yet just about a week ago someone in this very forum was complaining about data loss on WHS. People making such complaints is a fairly regular occurrence. Your data and your choice; do what you want.
 

alizee

Senior member
Aug 11, 2005
501
0
86
What do you mean by "on the fly"? With ZFS you can add as many drives as you want to a pool at any time, but you cannot remove drives from a pool once added. Thus I recommend that you stick to one vdev per pool (a vdev is either a single disk or a RAID array, so a 5-disk RAID 5 array would be one vdev and a 4-disk RAID 10 array a second vdev; you can make one single pool of the two, but it is a bad idea because it cannot be shrunk).
I hardly consider that a killer feature... end-to-end checksumming? That is a killer feature.

This is what I mean by on the fly (I guess I should have said ad hoc). From the Ars Technica article:
For a home fileserver, this is obviously a very handy capability. It allows simple ad hoc expansion of storage—no RAID rebuilding, no need to match disk capacities, no need to stick to any drive interface—and does so without the inconvenience of multiple drives, each of which has to have its free space managed manually.

Can I do this with ZFS? For example, in my WHS I currently have 2x2TB and 1x1TB hard drives for a ~5TB share before redundancy. I just purchased 2 more 2TB drives on Black Friday, and adding them to the share is as easy as installing them and then using the console to add them, giving me ~9TB before redundancy. Is there something similar on FreeNAS, Nexentastor, etc.? It doesn't have to be that easy; I merely want the same functionality.


As for use of mismatched drives... you can in some ways. a pool can contain a bunch of vdev of whatever size or type you want. if you were to do something like RAID1 a 1TB and 2TB drive (lets call it vdev1) then vdev1 will have 1TB of available space. If you want you can pool vdev1 with vdev2 where vdev2 is 3x750GB raid5 array (1.5TB of data) for a total of 2.5TB. If you were to replace the 1TB drive in vdev1 with a 2TB drive you would now have 2TB in vdev1. If you were to replace one with a 3TB drive, let it "heal", then replace the other with 3TB you will have vdev1 expand to 3TB.
it is rather smart and robust it just lack shrinking capability at the moment.

Can you give me an explanation of what a vdev is? It, along with "pool", is a term I've read a lot about since I really got started looking into ZFS.

vdev = 1 drive OR 1 raid set
pool = 1 or more vdevs

Is that right?

The missing "shrink" capability doesn't bother me at all, I just imagine I will add drive after drive when my storage capacity is at 80% or so, just buying whatever is close to the largest capacity at the time (i.e., I don't want to be buying 2TB hard drives when 4TB is affordable so my RAID sets match).

I guess I'll be giving some OS with ZFS a try and see if it will work for me.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
vdev = 1 drive OR 1 raid set
pool = 1 or more vdevs

Is that right?
Yes

The missing "shrink" capability doesn't bother me at all, I just imagine I will add drive after drive when my storage capacity is at 80% or so, just buying whatever is close to the largest capacity at the time (i.e., I don't want to be buying 2TB hard drives when 4TB is affordable so my RAID sets match).

I guess I'll be giving some OS with ZFS a try and see if it will work for me.

If you add them as individual drives, then this is a problem, because you cannot remove or replace a drive without losing data. Think about it for a second: you have a pool of 3 drives, a 750GB, a 1TB, and a 2TB. You decide to get a fourth, 2TB drive and take out the 750GB. Well, you can't. You can put in the extra 2TB drive, but taking out the 750GB one means losing whatever data is on it.

But if you have sensible vdevs (say, raid arrays of various sorts) then the issue is not as bad, you can replace them drive by drive without data loss.

so for example if you have the following config:
pool name: tank
vdev1: 2x750GB RAID1
vdev2: 2x1TB RAID1

You can upgrade capacity by replacing one of the drives in vdev1 with a 2TB drive and letting it resilver; when that's done, replace the other and let it resilver, and vdev1 will automatically expand to a 2x2TB RAID 1 array (all of it available to the pool).

You can also add vdev3: 2x2TB RAID 1.

What you can't do is add a vdev3 of 2x2TB RAID 1 and then remove vdev1 from the pool, even though you have the space to.

You shouldn't be using individual-drive vdevs anyway. All vdevs should have redundancy (RAID 1, RAID 5, RAID 6...), because redundancy allows you to enjoy a feature currently unique to ZFS: checksumming with automatic data repair. Random errors are very real, and ZFS is the only file system that can both catch and fix them, but it can only fix them if you have redundancy.
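The upgrade path in the example above, written out as zpool commands (disk names like da0 are hypothetical; the exact device naming depends on the OS):

```shell
# Pool "tank": vdev1 = 2x750GB mirror, vdev2 = 2x1TB mirror
zpool create tank mirror da0 da1 mirror da2 da3

# Grow vdev1 by replacing each 750GB disk with a 2TB disk, one at a
# time, letting each resilver finish before touching the next drive.
zpool replace tank da0 da4
zpool status tank              # wait for the resilver to complete
zpool replace tank da1 da5
# After the second resilver, vdev1 offers 2TB to the pool
# (some builds require: zpool set autoexpand=on tank)

# Adding a third mirror vdev is allowed...
zpool add tank mirror da6 da7
# ...but there is no command to remove vdev1 from the pool afterwards.
```

Again, a sketch only; it needs real (or virtual) disks on a ZFS-capable OS.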
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
yet just about a week ago someone in this very forum was complaining about dataloss on WHS. People making such a complaint is a fairly regular occurance. Your data and your choice, do what you want.
I expect I read every thread on these Forums about WHS. monkey333 recently had a drive failure in WHS, but lost no data (at least none that was set for redundancy). I don't recall any threads here in the past two weeks where data was lost on a Windows Home Server. In fact, I don't recall ANY threads here where data was lost on a WHS server set for redundancy. I'm not saying it never happens. S**t happens.

I really don't care WHAT kind of server people use, as long as they keep backups. My standard recommendation for people needing a home file server is to keep the shared data on a DESKTOP computer and use Windows Home Server to back up the desktop. I'm not a big fan of storing data on ANY single disk or any redundant array without backups.

BTW, haven't heard from pjkenned for a while, but earlier this year he was showing his WHS server(s) with 30 TB of data and had just moved to 60 TB of storage. I'm guessing that, at least at that point, WHS was working fairly well for him as a storage device. I don't know what's happened since.
http://www.servethehome.com/big-whs-update-60tb-edition/
 

RaiderJ

Diamond Member
Apr 29, 2001
7,582
1
76
Yes



If you add them as individual drives, then this is a problem, because you cannot remove or replace a drive without losing data. Think about it for a second: you have a pool of 3 drives, a 750GB, a 1TB, and a 2TB. You decide to get a fourth, 2TB drive and take out the 750GB. Well, you can't. You can put in the extra 2TB drive, but taking out the 750GB one means losing whatever data is on it.

But if you have sensible vdevs (say, raid arrays of various sorts) then the issue is not as bad, you can replace them drive by drive without data loss.

so for example if you have the following config:
pool name: tank
vdev1: 2x750GB RAID1
vdev2: 2x1TB RAID1

You can upgrade capacity by replacing one of the drives in vdev1 with a 2TB drive and letting it resilver; when that's done, replace the other and let it resilver, and vdev1 will automatically expand to a 2x2TB RAID 1 array (all of it available to the pool).

You can also add vdev3: 2x2TB RAID 1.

What you can't do is add a vdev3 of 2x2TB RAID 1 and then remove vdev1 from the pool, even though you have the space to.

You shouldn't be using individual-drive vdevs anyway. All vdevs should have redundancy (RAID 1, RAID 5, RAID 6...), because redundancy allows you to enjoy a feature currently unique to ZFS: checksumming with automatic data repair. Random errors are very real, and ZFS is the only file system that can both catch and fix them, but it can only fix them if you have redundancy.

Exactly what I've been looking for: how to increase the capacity of a storage system on the fly with a set number of storage drives. I figured this would mean a temporary loss of redundancy while rebuilding the array, which shouldn't matter in the short term, as theoretically I'd have a proper backup in place anyway.

Check my logic here and see if I'm correct in my thinking: I have a 4x1TB RAIDZ array (akin to RAID 5), which currently provides ~3TB of usable space. I purchase 4x2TB drives as replacements.

Pool Name: tank
vdev1: 4x1TB RAID 5


  1. Replace a single 1TB drive with a new 2TB drive
  2. Let the array resilver with the new 2TB drive
  3. Repeat steps 1 & 2 until all four drives have been replaced
  4. ??? Automatic step here ???
  5. The pool "tank" now will have ~6TB of storage space
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
I'm looking at a drobo, but they are very pricey (especially the 8-bay Drobo Pro). There are certain features of it, though, that I really like, and I'm wondering if I can get something similar with ZFS or something else. The features I'm looking for:

1) Appears as one large volume (like RAID or concatenation).

2) Easily expandable. This seems like the hardest part. I'm using Drive Extender on my WHS, and it works fine, but I guess it's going away in v2 unless Microsoft has a change of heart. RAID 5 and 6 don't offer a similar feature, right? If I wanted to expand those, I would have to back up, destroy the RAID, and then create a new one, is that right? They also require that I use same-size hard drives, lest I lose the extra space...

3) Redundancy. Not a completely necessary feature, but it would make things easier in single drive failures.

Should I just suck it up and buy a drobo?

Thanks for the help!

The proprietary data storage system on the Drobo concerns me a little. My personal preference is to use a standard (albeit a de facto standard) in case of catastrophic system failure, so I chose a NAS box that uses a standard Linux RAID/filesystem. I've tested my "oh crap" recovery plan, which involves pulling out the drives, plugging them into a PC, and booting a Linux live CD. It worked fine: the RAID and filesystems were detected and mounted automatically, and all 4 TB of data was accessible.

ZFS is probably the most reliable file system available at the moment, but it has a few disadvantages:
- Potentially quite slow. Benchmarks show it to be substantially slower than a straight RAID. Whether this is an issue is debatable; you may not require high performance for a home server.
- Limited upgradability of storage space. There is no way to add a drive to a ZFS RAID-Z vdev (you can't go from 6 drives to 8 in RAID-Z); you have to destroy the vdev and rebuild it with the extra drive. NB: you can replace the drives one at a time with larger drives, and the new capacity becomes available once the last drive is replaced.
- Limited OS support, which may be an issue if you want your server to run other apps and services.

So:
1. This is a standard feature of all NAS devices: Drobo, and cheaper ones that use standard Linux features.

2. Drobo has the best expandability: additional space becomes available as soon as a drive is installed, and redundancy is maintained.
However, standard NAS units allow individual drives to be added when required. (E.g., when I bought my NAS, one hard drive was DOA, so I built the RAID with 3 drives and dropped the 4th in after the RMA came through; it was added to the RAID while the system remained online through the whole process: no need to shut down, all files remained accessible, and downloads continued during the upgrade.) The only catch, now that the drive bays are full, is that I can't upgrade the capacity unless I replace all the drives (a process that will take about a week: replace a drive, wait 24 hours for the system to stabilise, replace the next drive, wait...). With Drobo, some additional capacity becomes available once the 2nd replacement is installed. Hopefully it is apparent that during the upgrade process there is reduced (or no) redundancy, so this represents a vulnerable period for the data, but that is the case whatever system you use.

3. Redundancy is a standard feature on all NAS devices (via RAID 1, RAID 5, RAID 6, proprietary systems like Drobo's, or sophisticated filesystems like ZFS).
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Exactly what I've been looking for: how to increase the capacity of a storage system on the fly with a set number of storage drives. I figured this would mean a temporary loss of redundancy while rebuilding the array, which shouldn't matter in the short term, as theoretically I'd have a proper backup in place anyway.

Check my logic here and see if I'm correct in my thinking: I have a 4x1TB RAIDZ array (akin to RAID 5), which currently provides ~3TB of usable space. I purchase 4x2TB drives as replacements.

Pool Name: tank
vdev1: 4x1TB RAID 5


  1. Replace a single 1TB drive with a new 2TB drive
  2. Let the array resilver with the new 2TB drive
  3. Repeat steps 1 & 2 until all four drives have been replaced
  4. ??? Automatic step here ???
  5. The pool "tank" now will have ~6TB of storage space

You got it. It would be a good idea to perform something called a "scrub" before starting, though; that scans all data for corruption and fixes any errors it finds (to date, mine has found and fixed 3 errors during scrubs). If errors are only found while resilvering, there may not be enough parity left to fix those individual files.
Also, after every drive you replace, you should actually type a command: zpool replace, which is explained here: http://docs.sun.com/app/docs/doc/817-2271/gbcet?l=Ja&a=view

ZFS does not assume that the new drive in the slot is meant to replace the old one, because you might want to use a drive in a different spot, so you have to tell it which drive to rebuild with. It is possible to enable something called "autoreplace", which will automatically replace a drive with any new drive plugged into the same location.
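Putting that advice together as a command sequence (device names are hypothetical; see the linked Sun documentation for zpool replace):

```shell
# 1. Scrub first so parity is known-good before any drive is pulled.
zpool scrub tank
zpool status -v tank        # wait for "scrub completed"; check for errors

# 2. Swap in the larger disk, then tell ZFS which drive to rebuild onto.
zpool replace tank c0t1d0 c0t5d0
zpool status tank           # wait for the resilver, then repeat per disk

# Optional: replace automatically when a new disk appears in the same slot.
zpool set autoreplace=on tank
```

These commands only make sense on a machine with a real ZFS pool; they are a sketch of the procedure, not something to paste blindly.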
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
ZFS may be slower than straight RAID, but it walks all over WHS. I used to get stuttering when playing Blu-rays from the WHS (and I tested after running that drive "equalizer" program, after defragmentation, after a server restore, with both versions of Vail, after tons of network troubleshooting, etc.). When I put the Blu-rays on the ZFS system I got smooth playback and never any stuttering. A 6-drive RAID-Z1 configuration saturates a gigabit port easily.

BTW, even though it is a moot point now, Vail had pathetic performance, about half that of WHS v1 at best. I think that's why they discontinued DE in Vail: they just couldn't create a software RAID that performed half-way decently.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Which of these does ZFS (on OpenSolaris) handle?

1. Drives with TLER=0
2. Drives with TLER=7
3. Drives with TLER=30/60/90