Originally posted by: RebateMonger
Originally posted by: nweaver
I'm not sure about tons of people; I actually don't know anyone who runs a business server on Windows S/W RAID....
I experimented with it briefly, but decided there were too many negatives, even with RAID 1, to make it useful for my Windows servers. If I need an ultra-economy job, I just buy a $50 IDE or SATA hardware/software RAID card for RAID 1. If I need RAID 5 with Windows, I install a full-hardware RAID card.
I understand that there are a lot of Linux software-RAID aficionados...but that's Linux RAID.
Yep. Linux software RAID is pretty fantastic as far as these things go. Faster than most hardware RAID in terms of I/O, and occasionally even in CPU usage.
The downside is that you saturate your PCI bus bandwidth, which limits your scalability and makes the box useful for little more than pure storage. Sort of like turning your computer into a very fancy hard-drive controller.
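If you want to try it, the usual tool is mdadm. A minimal sketch of setting up a RAID 5 array (the device names here are just placeholders, adjust for your own disks):

    # create a 4-disk RAID 5 array out of existing partitions
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # watch the initial resync and check array health afterwards
    cat /proc/mdstat
    mdadm --detail /dev/md0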
There is even an 'iSCSI Enterprise Target' (yes, that's the actual name of the project) for Linux now that aims for high-speed emulated I/O. It's faster than the Intel-supplied open source target, since that one was designed more for testing than real-world usage. I don't know how well it works; I haven't played around with it much myself.
http://iscsitarget.sourceforge.net/
Think of 'target' as 'server' and 'initiator' as 'client'.
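Configuration on the target side is mostly one file. A rough sketch of what an /etc/ietd.conf entry looks like (the IQN, device path, and credentials here are made up for illustration):

    Target iqn.2006-01.com.example:storage.disk1
        # export a whole block device (e.g. a software RAID array) as LUN 0
        Lun 0 Path=/dev/md0,Type=fileio
        # optional CHAP credentials the initiator has to present
        IncomingUser someuser somepassword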
But, as you can imagine, having half a dozen or so Linux storage boxes running software RAID and hosting iSCSI target services on a nice gigabit LAN (good quality switches, jumbo frames, and all that), connected to various application servers that do the real work, is about as close as you're going to get to a SAN using regular commodity hardware.
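The jumbo-frames part is just an MTU setting on each box, assuming your NICs and switches actually support 9000-byte frames (interface name is an example):

    # bump the MTU on the dedicated storage network interface
    ifconfig eth1 mtu 9000 up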
(Of course, real hardware RAID for SATA devices isn't that expensive either. If you're going to throw down $5-10k on something like that, then it makes sense to upgrade to hardware RAID. But whatever.)
Even Windows would work out pretty well. Microsoft has an iSCSI initiator add-on for their servers that is available at no cost. And since iSCSI works at the block level, there is nothing stopping you from formatting the Linux-hosted volumes as NTFS file systems.
Microsoft iSCSI software initiator:
http://www.microsoft.com/downloads/deta...-4585-b385-befd1319f825&DisplayLang=en
Supported Operating Systems: Windows 2000 Service Pack 4; Windows Server 2003; Windows Server 2003 Service Pack 1; Windows Server 2003 Service Pack 1 for Itanium-based Systems; Windows Server 2003, Datacenter Edition for 64-Bit Itanium-Based Systems; Windows Server 2003, Datacenter x64 Edition; Windows Server 2003, Enterprise Edition for Itanium-based Systems; Windows Server 2003, Enterprise x64 Edition; Windows Server 2003, Standard x64 Edition; Windows Small Business Server 2003; Windows XP 64-bit; Windows XP Professional 64-Bit Edition (Itanium); Windows XP Professional 64-Bit Edition (Itanium) 2003; Windows XP Professional Edition; Windows XP Service Pack 1; Windows XP Service Pack 2
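Once the initiator is installed you can point it at a Linux target either through the control-panel GUI or from the command line. A rough sketch with the bundled iscsicli tool (the IP and IQN are made-up examples):

    iscsicli QAddTargetPortal 192.168.1.10
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2006-01.com.example:storage.disk1

After that the new disk shows up in Disk Management and can be partitioned and formatted as NTFS like any local drive.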
All modern Linux distros should have a software initiator available by default, since the kernel-side support is already there. The iSCSI-related userspace utilities may be a different matter.
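With the open-iscsi tools (package names vary by distro), discovery and login look roughly like this (target IP is a placeholder):

    # ask the target box what it exports
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    # log in to the discovered targets
    iscsiadm -m node --login
    # the exported LUNs then show up as ordinary /dev/sd* block devices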
Keep in mind that I haven't played around with this much myself. I just think it's interesting.
Also keep in mind that with remote storage you can end up in nasty deadlocks when your systems begin exhausting their memory: the kernel needs to write out dirty pages to free memory, but pushing them out over the network needs memory itself. This is an unsolved problem. Local storage is a bit safer in that regard.
This next link is mostly Linux-related, but it's useful for Windows folks too, because buying real hardware RAID for SATA can get confusing. It's a chart of 'real' hardware SATA RAID vs 'fakeraid', device by device.
http://linuxmafia.com/faq/Hardware/sata.html
A lot of hardware manufacturers are not as honest or open as they should be about disclosing whether their product is REAL hardware RAID (where the card has its own co-processor) or BIOS-assisted RAID (so-called fakeraid), where the BIOS presents something that looks like a RAID controller to your system but all the work is actually done in special drivers on the host CPU. Sort of like a winmodem vs. a 'real' modem.
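One quick sanity check, at least on the Linux side, is to look at what the card actually identifies itself as. This is a rough heuristic, not a guarantee:

    # a real hardware RAID card usually shows up as a RAID controller from
    # a vendor like 3ware, Areca, LSI, or Adaptec
    lspci | grep -i raid
    # a fakeraid chip usually shows up as a plain SATA/IDE controller,
    # with the "RAID" part living in the BIOS and the driver
    lspci | grep -i -E 'sata|ide'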
edit:
It would be VERY interesting to see what impact the pretty-much-unlimited bandwidth of PCI Express's switched, point-to-point links has on software RAID scalability, CPU usage, and speed. I can see a time in the not-too-distant future where spending a bit more money on an extra dual- or quad-core CPU would be preferable to going out and getting a 'real' RAID device.
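Some rough back-of-envelope numbers (approximate, and they depend on the exact PCI/PCIe flavor):

    classic PCI, 32-bit / 33 MHz:   ~133 MB/s, shared by every card on the bus
    4 SATA drives at ~60 MB/s each: ~240 MB/s of raw disk bandwidth -> the old bus chokes
    PCIe 1.0 x4 slot:               ~1 GB/s each direction, per slot, not shared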