What are some recommendations for good drive pooling software for Windows Server? I don't need RAID.
Another vote for Stablebit Drivepool. I've had it running for 2 years on a WHS 2011 box and it's rock solid.
For unduplicated files, StableBit DrivePool forwards all file I/O to the individual disk that the file is on.
- If the file exists on more than one disk (i.e. is duplicated), then:
  - All file modification requests (writing to the file, setting its attributes, etc.) go to all of the disks that the file is on.
  - Read requests go either to the first disk the file is on (when read striping is disabled) or to one of the disks the file is on, as chosen by the read striping algorithm.
- A directory listing operation works by querying all of the disks in parallel. This forces NTFS to read the MFT directory indexes on every disk where the listed directory exists. These indexes can be cached and can theoretically be served entirely from RAM.
- Opening a file is similar to a directory listing. NTFS queries its directory indexes on all of the disks in the pool in order to locate the disks that contain the file. This also happens in parallel and can be served by the system cache.
So in short, StableBit DrivePool may spin up disks in many circumstances; I haven't really done any testing to see how often that happens.
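Not official code by any means, since DrivePool is closed source, but here is a rough Python sketch of those forwarding rules as I read them. Every name in it is made up for illustration, and random choice stands in for whatever the real read-striping heuristics are:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Hypothetical model of the forwarding rules described above.
# StableBit DrivePool is closed source; none of these names are real APIs.

class PooledFile:
    def __init__(self, name, disks):
        self.name = name
        self.disks = disks  # every physical disk holding a copy of this file

def modify(file, request):
    # Writes, attribute changes, etc. are forwarded to *all* disks
    # that hold a copy of the file.
    for disk in file.disks:
        disk.apply(request)  # hypothetical per-disk operation

def read(file, read_striping=False):
    if read_striping:
        # The real read-striping algorithm presumably weighs disk load and
        # similar factors; random choice is just a stand-in for that here.
        return random.choice(file.disks).read(file.name)
    # Striping disabled: always read from the first disk the file is on.
    return file.disks[0].read(file.name)

def list_directory(path, pool_disks):
    # A listing fans out to every pool disk in parallel and merges the
    # results; NTFS may answer each query from cached MFT indexes.
    with ThreadPoolExecutor() as pool:
        listings = pool.map(lambda d: d.list(path), pool_disks)
    return sorted(set().union(*listings))
```

The directory-listing and file-open paths are what make spin-ups hard to predict: any operation that fans out like that can touch every disk in the pool.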
It gives you the convenience of seeing a large, contiguous storage space ("drive") that doesn't require you to actively manage the physical locations of files.
how is this better than just adding a new HD, and assigning a drive letter?
The other thing I don't like is the idea of having to wait for drives to spin up within the pool. Again, with files scattered across drives and no way of knowing where they're located, I can see how browsing might incur multiple waits while drives spin up.
I'm using a combination of StableBit DrivePool and the open source SnapRAID software. StableBit DrivePool is there for hard drive pooling; SnapRAID is there for data integrity. You can read up on SnapRAID, but what it does, essentially, is emulate snapshot-based RAID 4 in software. In other words, it's an executable that you run, and it calculates parity information over your data at your request. If one of your drives dies or gets corrupted, all you need to do is update the configuration file and tell the executable to rebuild the missing data. I've tested it by deleting a couple of files and telling it to rebuild, and it works.

I keep going back and forth about whether or not drive pooling would be desirable for me. One of the things that concerns me is the loss of a hard drive. I can easily replace any file I like, so I don't need RAID or even a backup of the content. But if I were using a drive pool and didn't know which drives contained which files, losing a drive would mean chaos trying to replace individual files scattered across drives. Right now, if I lose drive two and with it all of my 1970s shows, I know to download all of Barney Miller or The Rockford Files again.
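Going back to the SnapRAID setup a couple of posts up: a minimal configuration might look something like the sketch below. The drive letters, paths, and disk names are invented, and I'm going from memory of SnapRAID's documented directives, so check the manual before copying anything:

```
# snapraid.conf (hypothetical example)
parity E:\snapraid.parity
content C:\snapraid\snapraid.content
content D:\snapraid.content
disk d1 D:\
disk d2 F:\
disk d3 G:\
```

Running `snapraid sync` then computes the parity on demand, and after replacing a dead drive and pointing its entry in the config at the new location, `snapraid fix` rebuilds the missing files.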
FlexRAID?
That may be it, but I thought they sold the pooling software and RAID as two separate products. I don't see that on their web site, but they may have changed their model a little.
You can buy them separately, but I suspect that most get the whole package.
Personally, I chalk FlexRAID up as one of my top 5 tech purchases of the last 5 years.
With FlexRAID's snapshot RAID (using RAID-F), how do you determine how many drives are needed for the parity data? I'd be looking at putting 11 drives with 23 TB into the drive pool, or possibly creating two pools, with 4 drives and 7 drives, respectively.
Are there any hard rules for the size of parity drive(s), such as they must be at least as large as the largest data drive?
Not much of that explanation sounds plausible to me, but maybe I really don't understand.
If it's to be able to reconstruct data in the event of a drive failure (not just detect an error) then it needs to store much more than just checksums.
I would think there has to be a limit to how much data a drive with X bytes of storage on it can protect. I can't see how a single 3TB drive can store enough information to protect 20 x 3TB data drives, for instance.
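For what it's worth, the usual answer is that snapshot parity along these lines only promises recovery from one failed drive at a time, and that constraint is exactly what makes the sizing work: parity is computed positionwise (XOR across the same offset on every data drive), so the parity drive only needs to be at least as large as the largest data drive, not as large as the sum. A toy Python demonstration, with a few random bytes standing in for terabytes (the math is identical at any scale):

```python
import os
from functools import reduce

# Toy model: 20 equally sized data "drives" protected by ONE parity "drive".
# 16 bytes stand in for 3 TB; the positionwise XOR works the same either way.
drives = [os.urandom(16) for _ in range(20)]

# Parity is the byte-by-byte XOR across all drives, so it is exactly the
# size of one drive, no matter how many drives contribute to it.
parity = bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*drives))

# Simulate losing any single drive...
lost = 7
survivors = drives[:lost] + drives[lost + 1:]

# ...and rebuild it: XOR of the parity with all surviving drives cancels
# everything except the lost drive's bytes.
rebuilt = bytes(reduce(lambda a, b: a ^ b, column)
                for column in zip(parity, *survivors))
assert rebuilt == drives[lost]  # the lost drive is recovered exactly
```

The trade-off is that if two drives fail at once, a single parity drive can't recover both; that is the real limit, not raw capacity.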