Originally posted by: drag
What you're probably looking for is something more like Linux's Logical Volume Manager support, commonly called LVM.
So say you have a logical volume on one disk that's 20 gig, and you've maxed out that drive. You can add a second 80 gig drive, combine it with the existing volume, and end up with a single 100 gig logical volume. It also lets you resize volumes and all that.
Actually it's quite nice, and does away with having to deal with crap like partitions and /dev/sda and stuff like that. A much higher level of abstraction than what you'd normally deal with.
Of course, if you don't mirror or put your setup on a RAID 5 array, you risk losing your data when you span multiple drives, since one dead disk takes the whole spanned volume with it.
logical volume howto
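The drive-spanning example above boils down to a few LVM2 commands. A rough sketch (the device /dev/sdb, the volume group vg0, and the volume name "data" are all made up for illustration; run as root, and the last step assumes an ext2/ext3 filesystem):

```shell
# Turn the new 80 gig disk into an LVM physical volume
pvcreate /dev/sdb

# Add it to the existing volume group (here called "vg0")
vgextend vg0 /dev/sdb

# Grow the existing logical volume by the new 80 gig
lvextend -L +80G /dev/vg0/data

# Grow the filesystem itself to fill the bigger volume
# (resize2fs for ext2/ext3; other filesystems have their own tools)
resize2fs /dev/vg0/data
```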
If you want something bigger and badder than just a single machine and you want a cluster, you can take a look at Redhat's Global File System (GFS).
It was developed to let many Redhat servers sit on top of SAN storage and present a single cluster filesystem image.
It's a 64-bit filesystem, scalable from one to hundreds of servers, and it takes care of moving data around for higher performance and higher reliability.
It's completely open source, but it was only recently introduced by Redhat, so Redhat's commercial servers are going to be the best and easiest way to set it up. Very expensive, but it's completely top-notch 100% hardcore best-of-the-breed industrial strength blahblahblah setup. (at least that's the claim)
redhat's stuff on GFS
Redhat says it's great with gigantic Oracle databases.
I'd never know it though; I don't have a couple dozen SANs to try it out on.
😉
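For flavor, bringing up a GFS volume looks something like this (the cluster name, filesystem name, device, and journal count are all invented for the example; check Redhat's GFS docs for the exact syntax on your version, so treat this as a sketch):

```shell
# Make a GFS filesystem using the DLM lock manager, tagged for
# cluster "mycluster", with one journal per node (2 nodes here)
gfs_mkfs -p lock_dlm -t mycluster:storage -j 2 /dev/vg0/gfslv

# Then mount it on each node in the cluster
mount -t gfs /dev/vg0/gfslv /mnt/storage
```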
Lineox is a Redhat enterprise clone that uses Redhat's sources to support GFS, but is quite a bit more affordable. Like 10-20 bucks vs. 5000.
Then there are other things besides those you could try out.
InterMezzo is a newish clustering filesystem.
Coda is another one.
Then there's always OpenAFS.
All of those are supported by Linux. Coda and InterMezzo aren't production-ready, I believe; they're mostly academic exercises. OpenAFS is a bit limited compared to the full AFS stuff you can get from places like IBM, but it is itself production-class. It's used in real-life type stuff at various universities and campuses.
Plus OpenAFS has a Windows client that works well enough.
openAFS website
These clustered filesystems would allow you to create one single big network filesystem. My knowledge of them is quite limited though.
Then if you don't want to get fancy, you could set up several machines with several harddrives each and use something simple like NFS. You mount several NFS shares to subdirectories on your main Linux server.
Those subdirectories are inside a single directory that you share out to your Windows client using SMB....
Something like:
harddrive 1 gets mounted /served/harddrive1
harddrive 2 gets mounted /served/harddrive2
...
harddrive 6 gets mounted /served/harddrive6
Then you have remote mounts using NFS to your main server,
harddrive 1 /served/remotedrive1
...
harddrive 3 /served/remotedrive3
So that would give you a total of 9 drives mounted under your /served directory.
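On the main server, the three remote mounts could go in /etc/fstab like this (the hostnames box2 and box3 and the export paths are made up; I'm assuming each secondary box exports its drive directory over NFS):

```
# remote NFS exports from the secondary server(s)
box2:/served/harddrive1  /served/remotedrive1  nfs  defaults  0 0
box2:/served/harddrive2  /served/remotedrive2  nfs  defaults  0 0
box3:/served/harddrive3  /served/remotedrive3  nfs  defaults  0 0
```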
For speed I would probably have 2 ethernet ports in your main server: one to the larger network with your client(s), and the other to the secondary server(s). Just to keep the traffic separate.
Then just share out the /served directory to your client machine(s), and you have a single mount point that accesses all 9 harddrives.
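The Samba side is then a single share in smb.conf, something like this (the share name and user are my invention; adjust paths and access control to taste):

```
[served]
   path = /served
   browseable = yes
   writable = yes
   valid users = youruser
```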
Of course NFS/SAMBA lacks the advanced features that are used in stuff like GFS or OpenAFS, but it is much easier to deal with. (You'd run into problems with file locks and multiple people accessing the same file and that sort of thing.)
Of course, if you want to stick everything into just one big machine with bunches of RAID cards and whatnot, then LVM is the way to go, and you avoid all that network filesystem and clustered filesystem stuff.