Originally posted by: Brazen
Originally posted by: hasu
Originally posted by: drag
You'll probably want to stick with just using SMB protocol. It's fast enough, it's easier, and it has much less chance of something going wrong.
Of course! For my home network I tried both Samba on Debian and FreeNAS. I liked the simplicity and user interface of FreeNAS, and that's now my file server (being re-built to get a nicer enclosure and to boot from a CF card). With a 1GHz Celery the machine consumes only 45W! Couldn't be happier!
We are in fact getting a new SAN at work and I was curious to know the difference. Sometimes you're too far removed from all this unless you actually work with servers. You won't even get a chance to try them out!
There is also Fibre Channel SAN, which is what I admin at my job.
🙂 It's a lot like iSCSI but much faster and much, much more expensive. I think it's a waste, though, for a small or medium sized business. We use it because at the time VMware ESX only supported Fibre Channel SANs, but now ESX supports iSCSI. A talented admin (which I think I am) could set up an iSCSI SAN using Linux boxen and gigabit hardware that would work just as well for us.
Some day I would like to set up a redundant iSCSI SAN behind the free VMware Server boxen, using GFS for shared storage
😀
If you use bonded gigabit ethernet with good switches and good ethernet cards, and everything supports jumbo frames, you can get damn near Fibre Channel performance and very close to local-storage performance.
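A rough sketch of what that bonding setup might look like on a Debian-era box (the interface names, IP address, and round-robin mode here are my own assumptions for illustration):

```shell
# Hypothetical example: bond eth0 and eth1 round-robin, then enable jumbo
# frames. Both the switch and the NICs must support an MTU of 9000.
modprobe bonding mode=balance-rr miimon=100
ifconfig bond0 up
ifenslave bond0 eth0 eth1
ifconfig bond0 192.168.10.5 netmask 255.255.255.0
ifconfig bond0 mtu 9000
```

Note that balance-rr is the mode that actually stripes a single iSCSI session across both links; 802.3ad/LACP bonding only balances per-flow, so one session would stay on one link.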
If you do something crazy like stripe 3 iSCSI shares across 3 ethernet ports, you can actually get a sizable improvement over local storage speeds.
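One way to do that striping, assuming each iSCSI session comes in over its own NIC and the three LUNs show up as /dev/sdb through /dev/sdd (device names made up):

```shell
# Software RAID0 across three iSCSI block devices; each member rides its
# own gigabit port, so sequential throughput adds up across all three.
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/fast
```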
It's kinda cool all the different sort of things you can do with this. Especially when you start to throw virtualization into it.
For example... for a while I was running iSCSI and doing network booting with 100% remote storage. On my file server I was running software RAID5 on 4 disks, with LVM on top.
I would export different logical volumes as iSCSI drives over to my desktop. I'd use them for my home and root filesystems, and the extra ones became drives for machines running in virtual environments. So that worked out pretty slick.
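With the iSCSI Enterprise Target that was common back then, exporting logical volumes like that is just a couple of stanzas in /etc/ietd.conf (the target IQNs and LV paths below are invented for illustration):

```shell
# /etc/ietd.conf fragment -- one target per logical volume
Target iqn.2007-01.lan.fileserver:desktop-home
        Lun 0 Path=/dev/vg0/desktop_home,Type=blockio
Target iqn.2007-01.lan.fileserver:desktop-root
        Lun 0 Path=/dev/vg0/desktop_root,Type=blockio
```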
Or how about a VM environment built for high availability and load balancing on a budget?
So your hardware would be...
2 Linux boxes to act as shared storage, both using RAID 10 in identical setups configured as a sort of mirror. You're using a lot of disks for little storage, but remember, this is for high availability. These have 2 NIC ports each.
Then you have 2 switches. Both the same, both provide good performance.
Then you have your Xen/Linux boxes. Say 5 of them. They have 3 NIC ports each.
All this is on one rack.
So you take the two ports from each storage PC: one NIC goes to one switch, the other NIC goes to the other switch, and they are bonded.
Then you have your Xen/Linux boxes. 1 NIC port from each is for the external network. The other 2 ports go to their respective switches.
The Xen/Linux boxes boot up from their own local disks. The storage boxen export their entire RAID arrays as one big GNBD or iSCSI LUN each.
You use CLVM (cluster-aware LVM) to set those exports up as volume groups and mirror logical volumes between them. (Or maybe something other than CLVM mirroring, like DRBD. Not sure...)
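Roughly, with clvmd already running on the hosts and the two exports imported as (say) /dev/gnbd0 and /dev/gnbd1, the mirrored volume group would look something like this (all names assumed):

```shell
pvcreate /dev/gnbd0 /dev/gnbd1
vgcreate vg_san /dev/gnbd0 /dev/gnbd1
# One mirror leg on each storage box; --corelog keeps the mirror log in
# memory so no third device is needed (the cost: a full resync after a crash)
lvcreate --mirrors 1 --corelog -L 20G -n vm_disk1 vg_san
```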
Then the Xen/Linux hosts use those LVs as disks for the virtual machines. (Although I am not sure if live migration features would work in a situation like that.)
Then you use Linux-HA to keep track of the state of the VMs. If a Xen/Linux host goes down, another Xen/Linux host will start up its VMs itself. Also, the Xen/Linux hosts only expose the external NIC to the VMs, so that if one of those VMs gets rooted, the attacker still has no access to your vulnerable storage-array networks.
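In heartbeat-v1 terms that could be as simple as one /etc/ha.d/haresources line per guest, where "xenvm" is a little init-style wrapper script you would have to write yourself (it would do "xm create" on start and "xm destroy" on stop). The node names and script name here are hypothetical:

```shell
# Preferred owner listed first; the surviving node takes over on failure
xen1 xenvm::webserver
xen2 xenvm::mailserver
```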
So you have redundant everything. 2xRAID10 arrays mirrored themselves. Two switches. 2 networks.
You could lose up to 4 drives (even 6 drives, depending how it goes) or one entire storage box. You could lose a switch or whatever. You could lose all your Xen boxen except one... all at the same time. And if you have everything tested and configured properly (so you do things like kill off less critical VMs to free up RAM), you will still end up with all your critical services running, even if it's at a much reduced capacity.
So you're getting multimillion-dollar-level availability on a 10-20 grand budget, and you get pretty good performance and cost effectiveness when everything is healthy.
Or imagine you're into having very quiet computers. Like if you're an audiophile, or need an environment where things have to be dead quiet. Hard drives can be a pain in the neck... the only really quiet ones are notebook drives, which are expensive and don't have much capacity.
So you can do something like put a big storage array in the basement using some loud, cheap Dell server with dual CPUs. Set it up as a MythTV box and other such things. On your silent PC you can take a 2-4 gig flash drive, give 256 megs to your /boot partition and the rest to a swap partition, and then boot up over NFS or iSCSI.
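For the NFS-root variant, the silent box's kernel (with the NIC driver and NFS-root support compiled in) just needs a command line along these lines; the server address and export path are examples:

```shell
# Appended to the kernel line in /boot/grub/menu.lst on the flash drive
root=/dev/nfs nfsroot=192.168.1.10:/exports/silentpc ip=dhcp
```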
Could you imagine having a computer so quiet that the only way you know it's on is by looking at the keyboard LEDs? It would be perfect for an HTPC or a music appliance or something like that.