Windows 2003 software RAID-5

Tran

Member
Jan 29, 2001
46
0
0
My computer setup:
AMD XP1600 Socket A
Abit KG-7
768 MB RAM
1x 120 GB IDE
4x 320 GB SATA
PCI SiI3114 SATA controller
Windows 2003

The problem:
I have the 4x 320 GB disks set up in a Windows 2003 software RAID-5 array, which is extremely slow.
It starts off at ~20 MB/s and after about 10 seconds slows down to 1 MB/s.
It doesn't matter whether I'm copying over the network or from the system disk (the 120 GB IDE drive).
These speeds are AFTER the array has synced and is labeled as healthy in Disk Management.
I have tried deleting the array and setting it up again as a spanned or striped array, and the speeds with that setup are fine, so it's not the PCI SATA controller that is limiting me.
I'm using NTFS formatted with the default cluster size.
The Win2003 setup is pretty much brand new; I haven't changed any settings, so they are all at their default values.

Any ideas? (OTHER than buying a hardware RAID controller, which is not an option.)
 

btha

Junior Member
Jul 28, 2005
6
0
0
When I go to Device Manager I can't control the DMA mode anywhere, but when I check the properties for the controller I can flip through the devices connected to it and they all say Ultra DMA Mode 6.

There is no option to change that, however.

Besides, the drives work fine when I set up a striped/spanned array.

PS: this is the original poster.
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
For the money you could have saved buying Windows XP instead of Server 2003, you could have gotten a nice LSI MegaRAID SATA card. Regardless, I would scrap the SiI3114 and get an LSI MegaRAID.
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
Have you tried an actual performance benchmark (rather than just copying files)? Windows Explorer is notoriously bad about reporting accurate speed and time remaining on file operations.

How are the devices physically connected? Are you sure you aren't saturating the PCI bus or some other single channel?
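
(For anyone who wants a number that doesn't come from the Explorer copy dialog: a small script that writes fixed-size chunks and prints MB/s once a second will show whether throughput genuinely collapses or whether the initial burst is just the write-back cache filling up. A rough sketch only; the drive letter, file name, and sizes are placeholders:)

import os
import time

TARGET = r"E:\bench.tmp"    # placeholder: any path on the RAID-5 volume
CHUNK = 4 * 1024 * 1024     # write in 4 MB chunks
TOTAL = 2 * 1024 ** 3       # write 2 GB in total

buf = os.urandom(CHUNK)
written, last_time, last_bytes = 0, time.time(), 0

with open(TARGET, "wb") as f:
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
        now = time.time()
        if now - last_time >= 1.0:
            rate = (written - last_bytes) / (now - last_time) / 2 ** 20
            print(f"{written // 2 ** 20:5d} MB written, {rate:6.1f} MB/s")
            last_time, last_bytes = now, written
    f.flush()
    os.fsync(f.fileno())    # force the tail of the file out of the OS write cache
os.remove(TARGET)

The fsync at the end matters; without it the last stretch of the file can still be sitting in RAM when the script exits, which flatters the average.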
 

btha

Junior Member
Jul 28, 2005
6
0
0
I have used 2 different FTP clients for testing as well, but that's over the network (that should be 100% fine though; I get 10 MB/s when I write to the 120 GB disk).

I thought at first that I was saturating the PCI bus because I had 4 disks hanging off one PCI slot, but I don't think that's the reason, because when I set the array back up as a spanned array the write speeds were excellent.
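
(Back-of-envelope on the bus question, since it keeps coming up: a 32-bit/33 MHz PCI slot tops out around 133 MB/s theoretical, maybe 90-100 MB/s usable, shared by all four drives. A spanned or striped write pushes each byte across the bus once, but a RAID-5 write also has to move parity, and small writes additionally read the old data and old parity back first. The figures below are assumptions for illustration, not measurements; even the worst case sits far above 1 MB/s, which fits the conclusion that the bus isn't what's killing it:)

# Back-of-envelope PCI arithmetic; all numbers are assumptions, not measurements.
pci_theoretical = 32 / 8 * 33.3      # 32-bit bus at 33 MHz: ~133 MB/s
pci_realistic = 100.0                # usable shared bandwidth is usually lower

disks = 4

# Full-stripe RAID-5 write: 3 data chunks + 1 parity chunk cross the bus,
# so only 3/4 of the traffic is "useful" data.
full_stripe_ceiling = pci_realistic * (disks - 1) / disks

# Worst-case small write (read-modify-write): read old data + old parity,
# then write new data + new parity, i.e. 4 bus transfers per logical write.
rmw_ceiling = pci_realistic / 4

print(f"theoretical 32-bit/33 MHz PCI : {pci_theoretical:5.1f} MB/s")
print(f"full-stripe RAID-5 write cap  : {full_stripe_ceiling:5.1f} MB/s")
print(f"read-modify-write cap         : {rmw_ceiling:5.1f} MB/s")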
 

Thor86

Diamond Member
May 3, 2001
7,888
7
81
Software RAID is slow to begin with, but 768 MB of RAM for a RAID-5 server is also kinda low.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: Thor86
Software RAID is slow to begin with, but 768 MB of RAM for a RAID-5 server is also kinda low.

Yeah, it's not a lot, but I'm leaning toward the CPU. @Tran, is file hosting the server's sole responsibility? What does its CPU usage look like?

 

Tran

Member
Jan 29, 2001
46
0
0
I did try reinstalling Win2003 on my desktop computer, which is an AMD 3500+ with 1 GB of RAM, and hit the exact same issue. That computer even has 8 built-in SATA connectors, and I tried spreading the drives across those; no difference.

Good write speeds for about 5 seconds, then everything deadlocks.

The CPU usage is pretty high for those first 5 seconds, then drops to below 10%.

It does, however, completely eat up my RAM in those first 5 seconds; free RAM goes from 750 MB to 100 MB in 5 seconds, and then everything stops. It's really weird.

I've tried increasing the cluster size from 32 KB to 8 MB, which in theory means roughly 256x fewer parity operations per write, but it didn't change a thing.
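
(For context on what the parity step amounts to: RAID-5 parity is just a byte-wise XOR across the data chunks of each stripe, so a bigger chunk size means fewer but larger XOR operations; the total bytes XORed stay the same. A toy sketch, purely illustrative and nothing like what Windows actually does internally:)

from functools import reduce

def xor_parity(chunks):
    """Byte-wise XOR of equally sized data chunks (toy RAID-5 parity)."""
    assert len({len(c) for c in chunks}) == 1, "chunks must be the same size"
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

# Example: a 4-disk array stores 3 data chunks + 1 parity chunk per stripe.
data = [b"\x01" * 8, b"\x02" * 8, b"\x04" * 8]
parity = xor_parity(data)                         # 0x07 repeated

# Any single lost chunk can be rebuilt by XOR-ing the survivors with the parity.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]

Even on an older CPU that XOR runs at hundreds of MB/s, so the raw parity math by itself shouldn't explain a 1 MB/s crawl.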

The issue definitely lies in the RAID-5 part though, as I don't experience this when I set the array up in spanned or striped mode.


I'm really at a loss here. I've read about tons of people using Win2003 RAID-5 without any issues on mid-range home servers, but no matter what I do it sucks for me :(
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
I'm not sure about tons of people, I actually don't know anyone who runs a business server on Windows S/W Raid....
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
Originally posted by: nweaver
I'm not sure about tons of people, I actually don't know anyone who runs a business server on Windows S/W Raid....
I experimented with it briefly, but decided there were too many negatives, even with RAID 1, to make it useful for my Windows servers. If I need an ultra-economy job, I just buy a $50 IDE or SATA hardware/software RAID card for RAID 1. If I need RAID 5 with Windows, I install a full-hardware RAID card.

I understand that there are a lot of Linux software-RAID aficionados... but that's Linux RAID.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: RebateMonger
Originally posted by: nweaver
I'm not sure about tons of people, I actually don't know anyone who runs a business server on Windows S/W Raid....
I experimented with it briefly, but decided there were too many negatives, even with RAID 1, to make it useful for my Windows servers. If I need an ultra-economy job, I just buy a $50 IDE or SATA hardware/software RAID card for RAID 1. If I need RAID 5 with Windows, I install a full-hardware RAID card.

I understand that there are a lot of Linux software-RAID aficionados... but that's Linux RAID.

Yep. Linux software RAID is pretty fantastic as far as these things go. Faster than most hardware RAID as far as I/O goes, and occasionally even in CPU usage.

The downside is that you saturate your PCI bus bandwidth and thus limit your scalability and usefulness to mostly pure storage capacity. Sort of like turning your computer into a very fancy hard-drive controller.

There is even an 'iSCSI Enterprise Target' (yes, that's the name of the project) for Linux now that aims for high-speed emulated I/O. It's faster than the Intel-supplied open-source target, since that one was designed more for testing than for real-world usage. I don't know how well it works; I haven't played around with it much myself.
http://iscsitarget.sourceforge.net/

Think of 'target' as 'server'.

But as you can imagine, having half a dozen or so Linux storage boxes running software RAID and hosting iSCSI target services on a nice gigabit LAN, with high-quality switches and jumbo frames and all that, connecting to various application servers that do the real work, is about as close as you're going to get to a SAN using regular commodity hardware.

(Of course, real hardware RAID for SATA devices isn't that expensive either. If you're going to throw down 5-10k on something like that, then it makes sense to upgrade to hardware RAID. But whatever.)

Even Windows would work out pretty well. Microsoft has some iSCSI client (initiator) add-on stuff for their servers that is available at no cost. And since iSCSI works at the block level, there is nothing stopping you from formatting the Linux-hosted shares as NTFS file systems.
Microsoft iSCSI software initiator:
http://www.microsoft.com/downloads/deta...-4585-b385-befd1319f825&DisplayLang=en
Supported operating systems: Windows 2000 SP4; Windows Server 2003 (including SP1 and the x64 and Itanium editions); Windows Small Business Server 2003; Windows XP (SP1/SP2, Professional, and the 64-bit editions).
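
(If you just want to sanity-check that a target box is listening before pointing an initiator at it, the iSCSI target listens on TCP port 3260 by default. A trivial reachability check; the hostname is a placeholder:)

import socket

TARGET_HOST = "storage01.example.lan"   # placeholder name for the Linux target box
ISCSI_PORT = 3260                       # default iSCSI target port

try:
    with socket.create_connection((TARGET_HOST, ISCSI_PORT), timeout=5):
        print(f"{TARGET_HOST}:{ISCSI_PORT} is accepting connections")
except OSError as exc:
    print(f"could not reach {TARGET_HOST}:{ISCSI_PORT}: {exc}")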

All modern Linux distros should have a software initiator available by default, since it's in the kernel. The iSCSI-related userspace utilities may be a different matter.

Keep in mind that I haven't played around with this much myself. I just think it's interesting.

Also keep in mind that with remote storage you can end up in race conditions when your systems begin exhausting their memory. This is an unsolved problem. Local storage is a bit safer in that regard.

This is mostly Linux-related, but it's useful for Windows folks too, because if you want to purchase real hardware RAID for SATA it can get confusing. It's a chart of 'real' hardware SATA RAID vs. 'fakeraid', device by device.
http://linuxmafia.com/faq/Hardware/sata.html

A lot of hardware manufacturers are not as honest or open as they should be about disclosing whether their product is REAL hardware RAID (where there is a dedicated co-processor) or BIOS-assisted RAID (so-called fakeraid), where hardware tricks present what looks like a RAID controller to your system but all the work is done in special drivers. Sort of like a winmodem vs. a 'real' modem.


edit:

It would be VERY interesting to see what impact the pretty-much-unlimited bandwidth of PCI Express's switched design has on software RAID scalability, CPU usage, and speed. I can see a time in the not-too-distant future where spending a bit more money on an extra dual- or quad-core CPU would be much preferable to going out and getting a 'real' RAID device.
 

DaiShan

Diamond Member
Jul 5, 2001
9,617
1
0
Originally posted by: RebateMonger
Originally posted by: nweaver
I'm not sure about tons of people, I actually don't know anyone who runs a business server on Windows S/W Raid....
I experimented with it briefly, but decided there were too many negatives, even with RAID 1, to make it useful for my Windows servers. If I need an ultra-economy job, I just buy a $50 IDE or SATA hardware/software RAID card for RAID 1. If I need RAID 5 with Windows, I install a full-hardware RAID card.

I understand that there are a lot of Linux software-RAID aficionados... but that's Linux RAID.


The major downside to software RAID in Linux is that if (when) a drive fails, you've got to reboot to rebuild the array, thus negating most of the benefit of having redundant storage on a server.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
You can have hot spares if you'd like.

Also, in the not-too-distant future, Linux libata-based SATA drivers should support hotplugging. It's part of the basic SATA specification that you should technically be able to hotplug a drive without special hardware... although I expect SATA controllers that are based on PATA equivalents won't support it very well.


edit:
Looks like hotplug support in the libata drivers is currently in development; it will be part of the 2.6.18 kernel series.
http://linux-ata.org/driver-status.html
http://linux-ata.org/software-status.html#hotplug

PCI Express devices should also technically support hotplugging, so in theory you should be able to hotplug not only drives but also drive controllers. It will be interesting to see how well this all works out in the real world.
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
Originally posted by: DaiShan
The major downside to software RAID in Linux is that if (when) a drive fails, you've got to reboot to rebuild the array, thus negating most of the benefit of having redundant storage on a server.
The issue that most concerned me was a PC BIOS issue. I found that my BIOS (most likely an Award BIOS, but I don't recall) wouldn't allow a boot if a master IDE drive failed. So even though I had a "working" array, the server wouldn't be bootable 50% of the time after a single drive failure in a software-RAID 1 array on a built-in IDE controller.

This was with an IDE-based BIOS. I don't know how the new SATA BIOSes handle this. Hopefully better. And I imagine that SCSI must allow a boot even with a single failed drive. But if you have the money for SCSI drives and a SCSI controller, then you can probably afford a RAID card.

Regarding SATA controllers:
I'm looking at hot-swap SATA drives for backups for clients who don't want to use tape. From what I read, not every SATA controller works with hot-swap. And SATA RAID controllers don't seem to mention hot-swap at all in their specs. As far as I can tell, a PCI SiI3114-based SATA controller like the OP has should support hot-swap in Windows 2003. But you'd have to test it to be sure.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Regarding SATA controllers:
I'm looking at hot-swap SATA drives for backups for clients who don't want to use tape. From what I read, not every SATA controller works with hot-swap. And SATA RAID controllers don't seem to mention hot-swap at all in their specs. As far as I can tell, a PCI SiI3114-based SATA controller like the OP has should support hot-swap in Windows 2003. But you'd have to test it to be sure.

Look at one of the links I posted.
http://linux-ata.org/driver-status.html

From there you can at least see which ones will never support hot-swap. That doesn't say anything about which devices can theoretically support hot-swap AND have Windows driver support, of course.

For the BIOS bootloader problem: if that worries you, you can use alternative boot media. A rescue floppy is pretty easy to make, as are boot CD-ROMs.

One of my favorites (I'm currently using it for root on my 'play' server at home) is flash-based storage: a USB flash key, for instance. I've also noticed companies selling 'industrial' flash devices that plug directly into an ATA port.

Like these guys:
http://www.logicsupply.com/index.php/cPath/44


edit:

Another downside to software RAID, at least in Linux, is that the MD error detection/correction is kept intentionally simple. If it senses a situation it can't handle, it will simply refuse to bring the array up, in an attempt to protect your data.

You can end up in a situation where you can't boot from a degraded RAID array. It happened to me once (due to some flaky RAM, as it turned out) when a degraded RAID setup refused to come up.

(I tested a drive and couldn't figure out why it had failed. So I got a bit pissed and tried to force a rebuild, and it locked up the machine. The OS refused to activate the array after that, and since I had root on the array it wouldn't boot. I ran memtest on the machine later, and it turned out to be bad RAM; I haven't had problems since I replaced it.)

Which is why I like having a flash-based fallback for situations like that. A Knoppix CD-ROM is what I used to rebuild the array and fsck everything, after which it booted up nicely, but it was a bit of a PITA.
 

Kniteman77

Platinum Member
Mar 15, 2004
2,917
0
76
I have the same problem with my RAID 5 array being incredibly slow on Windows Server 2003. I haven't run Sandra yet, but when I am writing to the array, pulling from the array, and torrenting, I get disk overloads all the time.

I'm running dual 3.0 GHz Xeons, 2 GB of DDR2 ECC, a 3ware 8500 12-port RAID card, and six 400 GB Hitachi SATA150 drives in a RAID 5. Waaaay slower than I think it should be, for some reason.
 

TSCrv

Senior member
Jul 11, 2005
568
0
0
I have (had) the same problem, or at least something similar. What was happening was, with 512 MB of RAM and 5 drives in a RAID-5 array, I would copy a CD image (~700 MB) to the drive over the network. It would puff along at near the speed of my gigabit network, and as soon as it hit the 500 MB mark it would slow to a halt and commonly time out. It was as if it was loading into RAM instead of to disk, which in my logic is a big no-no, and the processing for the parity data couldn't keep up. This was a dual 500 MHz P3 Dell PowerEdge running 2K Server. I have long since given up on that problem; that server has been sitting in the trunk of my car since June.
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
Originally posted by: drag
Look at one of the links I posted.
http://linux-ata.org/driver-status.html
Thanks for that link. It goes along with what I was told: that Silicon Image SATA chipsets are the way to go for hot-swap.
For the BIOS bootloader problem: if that worries you, you can use alternative boot media. A rescue floppy is pretty easy to make, as are boot CD-ROMs.
At the time, the inability to reboot when a drive fails seemed like a big deal to me. That's because all of my servers are remote.

In theory, I guess I could argue that if a drive fails, it needs to be replaced immediately, anyway. But if I'm not available, and if that server gets rebooted for some reason, they are going to be without a server until somebody replaces that drive or finds another way to boot up.

So, in my case, I moved Software-RAID to the category of "something interesting to play with, but not something I'd ever want to use in business". Especially when I can buy IDE or SATA RAID cards that will do RAID 1 for fifty bucks. I sure wouldn't want to explain to a client that their RAID server won't boot because we didn't spend fifty dollars on a PCI RAID card.

RAID 5, of course, is an entirely different issue. I can see why folks would want to avoid spending $300-$700 on a RAID 5 controller.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
At the time, the inability to reboot when a drive fails seemed like a big deal to me. That's because all of my servers are remote.

Ya. That would be a big deal. :)

I would expect that there would be a sane way around it, but I am not sure about it. I like the 40 buck flash drive though, it would be nice to have a little linux command line environment for doing stuff like imaging drives, being a bootloader, and such.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: drag
At the time, the inability to reboot when a drive fails seemed like a big deal to me. That's because all of my servers are remote.

Ya. That would be a big deal. :)

I would expect that there would be a sane way around it, but I am not sure about it. I like the 40 buck flash drive though, it would be nice to have a little linux command line environment for doing stuff like imaging drives, being a bootloader, and such.

You know, the way you phrased that makes it sound like: "why isn't there a cheap, reliable mechanism to recover from a degraded software RAID in a remote server?"

Just trying to put things in perspective. ;)
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: kobymu
Originally posted by: drag
At the time, the inability to reboot when a drive fails seemed like a big deal to me. That's because all of my servers are remote.

Ya. That would be a big deal. :)

I would expect that there would be a sane way around it, but I am not sure about it. I like the 40 buck flash drive though, it would be nice to have a little linux command line environment for doing stuff like imaging drives, being a bootloader, and such.

You know, the way you phrased that makes it sound like: "why isn't there a cheap, reliable mechanism to recover from a degraded software RAID in a remote server?"

Just trying to put things in perspective. ;)

Ya that seems about right. :)

Usually on 'real' hardware (vs. the el-crapo commodity x86 stuff we all use) there are extensive management, monitoring, and repair facilities built directly into the hardware. Of course, like everything else, x86 is slowly catching up to where mainframes were 20 years ago and where Unix servers were 10 years ago.

Sun, with their Opteron server systems (well, really Newisys Opteron systems), has an embedded Linux system built directly into the servers. That would be nice, since you can probably access that Linux environment remotely while the system is otherwise down. It's the first time I've really seen something like that used on x86 machines (although obviously I don't have much experience with higher-end stuff).

Something like that would be very cool. A sort of failsafe root system you could access remotely if your system is otherwise fubar'd.

Linux developers are kicking around a concept for increasing the capabilities of the x86 boot loader, a sort of new third bootloader that would replicate the functionality provided by default on higher-end machines.

Intel is pushing some remote management stuff on their newer motherboards. I don't know a whole lot about it, but it may be useful for that sort of situation. It's one of the things they are pushing to make Intel hardware more appealing for the enterprise compared to AMD's stuff. Reduced administration overhead and all that.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: drag
...Something like that would be very cool. A sort of failsafe root system you could access remotely if your system is otherwise fubar'd.

Linux developers are kicking around a concept for increasing the capabilities of the x86 boot loader, a sort of new third bootloader that would replicate the functionality provided by default on higher-end machines.
I know they are working on it, but there just isn't any way of telling how long it will take to reach mainstream x86 hardware (motherboards, HDD firmware, add-on cards), although they have achieved some milestones.
Their second headquarters.

[edit] It seems I jumped the gun on this one.
O.N.E. Technologies (http://www.onelabs.com/) offers LinuxBIOS on all their x86 products.

Tyan (http://www.tyan.com/) offers LinuxBIOS on all their Opteron-based products.
w00t, if there is anything I like better than new toys, it's finding new ways to play with the ones I already have :). [/edit]

Intel is pushing some remote management stuff on their newer motherboards. I don't know a whole lot about it, but it may be useful for that sort of situation. It's one of the things they are pushing to make Intel hardware more appealing for the enterprise compared to AMD's stuff. Reduced administration overhead and all that.
There is a much better project going on (which Intel is also taking part in): EFI (wiki).
Although a lot of time has passed since I first heard of it (2002), no substantial advances have been made (except for Microsoft Vista, which, coupled with an HDD with a flash chip, can boot faster), but that is probably just because even mainstream x86 hardware has to provide some kind of backward compatibility (which is kind of LOL: "our new hardware junk is backward compatible with our old hardware junk so you can still use your ancient legacy crappy software"... sh!t, now I've depressed myself).