Is my RAID killing my system throughput?

jorl

Junior Member
Apr 30, 2008
2
0
0
Here is the setup...

supermicro x7sba motherboard
2 gb ecc memory
xeon 3065 dual core
pioneer dvd+-rw (sata)
promise fasttrak tx4310 raid controller
supermicro case sc743 w/hotswap bay
dual GB lan on GB network
win server 2k3 R2 sp1 - all updates

I am running the following HDD arrays, all using SATA Seagate drives:

onboard controller (ich9r)
3 x 160gb drives in RAID 5 (boot)
2 x 750gb drives in RAID 1 (mirrored)

promise controller
3 x 250gb drives in RAID 5

This is what is happening. I use the server as a file/print server running Server 2003 R2. I just built this system and the throughput is horrible. When I plug in an external USB drive to transfer files from my old server, it chokes. It takes 30-40 minutes to transfer a few gigs, maybe ~10-15 GB. I had to quit because it was so terribly slow, and when I transfer in Windows, the estimated transfer time fluctuates greatly: it will read 24 minutes, then 60, then 40, then 75. It takes far too long.

I also automate backups from my two workstations to this system nightly. Both the server and the workstations have gigabit NICs and I use a gigabit switch. I was transferring about 8 GB and it was taking 26-30 HOURS. My old setup could transfer about 75 GB of data over a 100 Mbps network to my old server (Athlon 2600) in about 4 to 4.5 hours.
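
For perspective, here is a rough back-of-the-envelope comparison of those rates (a quick Python sketch; the sizes and times plugged in are the approximate figures above):

# Rough effective throughput comparison (figures approximate).

def mb_per_s(gigabytes, hours):
    """Effective transfer rate in MB/s for a given size and duration."""
    return (gigabytes * 1024) / (hours * 3600.0)

# New server: ~8 GB nightly backup taking 26-30 hours over gigabit
print("new server: %.2f-%.2f MB/s" % (mb_per_s(8, 30), mb_per_s(8, 26)))
# -> roughly 0.08-0.09 MB/s

# Old server: ~75 GB over 100 Mbps in 4-4.5 hours
print("old server: %.1f-%.1f MB/s" % (mb_per_s(75, 4.5), mb_per_s(75, 4)))
# -> roughly 4.7-5.3 MB/s (a 100 Mbps link tops out around 12.5 MB/s theoretical)

So the new gigabit setup is moving data at well under 1% of what the link could carry, while the old 100 Mbps setup was getting a reasonable real-world rate.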

Something is terribly wrong here. I have had the system running for only a few days. I am thinking my problems stem from choking the ICH9R chipset. It runs (from my understanding) the USB, the SATA, and possibly some of the PCI slots. Even the RAID 5 array on the Promise controller stinks; transferring to those drives is also slow, but not quite as bad. I haven't tested that array as much as the others, but I do know it is not on par.

I am open to buying a real hardware RAID card, but I don't want to spend $500+ if it isn't the solution. I am looking at something like a 3ware 9550SXU-8LP. I would like a card that supports three arrays, at least for now; at some point I might consolidate arrays as I purchase bigger drives, but not yet.

I would appreciate suggestions on the throughput issues and on possible RAID cards. I realize now I should not have tried to build my own server, or at least should have done a little more research before starting. I'm into this project for too much $$ at this point to walk away, but I don't want to keep feeding it money if it won't help. As it stands, this server is useless.

I really appreciate any help.
 

supaxi

Member
Sep 4, 2005
26
0
0
I'm no RAID expert, but you have 8 drives and are only getting about 1.5 TB of storage. You also have a strange mix of sizes, and on both controllers you are using the minimum number of drives you can get away with for RAID 5.

If it were me, I would get two more 750s and put the four of them into a RAID 10 on one controller. You would still get 1.5 TB of storage, plus the performance of striping and the ability to lose any one drive (two at most, if they fail in different mirrored pairs). Then you can RAID the remaining six drives if you like, or just sell them...
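
To make that concrete, here is a quick capacity comparison (a hypothetical Python helper; RAID levels and drive sizes as discussed above):

# Usable capacity for common RAID levels (drive sizes in GB, all equal per array).

def usable(level, count, size_gb):
    if level == "raid0":
        return count * size_gb
    if level == "raid1":
        return size_gb                    # mirrored pair: capacity of one drive
    if level == "raid5":
        return (count - 1) * size_gb      # one drive's worth goes to parity
    if level == "raid10":
        return (count // 2) * size_gb     # striped mirrors: half the raw capacity
    raise ValueError(level)

current = usable("raid5", 3, 160) + usable("raid1", 2, 750) + usable("raid5", 3, 250)
proposed = usable("raid10", 4, 750)
print(current, proposed)   # 1570 GB across three arrays now vs 1500 GB from one 4x750 RAID 10

So a single 4 x 750 RAID 10 roughly matches the ~1.5 TB you have today while using half as many drives and only one controller.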
 

mooseracing

Golden Member
Mar 9, 2006
1,711
0
0
Should have gone AMD for that extra bandwidth :) Notice how well your old one performed.

What is the bandwidth available on that mobo? And are all the lanes being shared? I don't know much about the Intel architecture. For a card to support three arrays I would spend some good money on it, as it's going to have to work hard. Check out the storage section on the 2CPU forums; there are a lot of good discussions on the different cards and bandwidth.

 

MerlinRML

Senior member
Sep 9, 2005
207
0
71
Originally posted by: jorl
Something is terribly wrong here. I have had the system running for only a few days. I am thinking my problems stem from choking the ICH9R chipset. It runs (from my understanding) the USB, the SATA, and possibly some of the PCI slots. Even the RAID 5 array on the Promise controller stinks; transferring to those drives is also slow, but not quite as bad. I haven't tested that array as much as the others, but I do know it is not on par.
It definitely sounds like you've got a problem; however, I don't think it's from overloading the southbridge. The chip itself should be capable of handling everything you've got. Also, the block diagram for your motherboard shows the northbridge handles all the expansion slots, while the southbridge handles the SATA and USB plus your printer ports.

Originally posted by: jorl
onboard controller (ich9r)
3 x 160gb drives in RAID 5 (boot)
2 x 750gb drives in RAID 1 (mirrored)

promise controller
3 x 250gb drives in RAID 5
I would start by simplifying things down. You've got a lot of complexity on a system that is new and unknown. It's possible you've got a bad or failing drive on your ICH9R that is causing signaling problems or retries for all the other devices on the chip. It's also possible you need to load some drivers or you've got some bad drivers loaded. Make sure you're running the latest BIOS (Supermicro doesn't tell you what they fix in their updates, so that's always a crapshoot).

So I would start by removing the Promise card and the RAID 1. Also remove any other expansion cards, USB devices, etc. See if things are better with just your boot RAID array. If not, maybe try running the ICH9R in Legacy or IDE mode with a different drive and a clean OS install if you can, and see if things get better. Build up the base install with new chipset, graphics, sound drivers, etc.
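
One way to tell whether each step helps is to run the same simple timed copy after every change. A minimal sketch in Python (the file path and size are placeholders; point it at whichever array you're testing):

# Minimal sequential write/read timing test (path and size are placeholders).
import os, time

TEST_FILE = r"D:\throughput_test.bin"   # change to a path on the array under test
SIZE_MB = 1024                          # 1 GB test file
CHUNK = b"\0" * (1024 * 1024)           # write 1 MB at a time

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                # force the data out of the OS cache
write_secs = time.time() - start
print("write: %.1f MB/s" % (SIZE_MB / write_secs))

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(1024 * 1024):
        pass
read_secs = time.time() - start
print("read:  %.1f MB/s" % (SIZE_MB / read_secs))

os.remove(TEST_FILE)

The read figure will be optimistic if the file is still in the Windows cache, so use a test file larger than your 2 GB of RAM for a more honest number. Comparing results across the boot array, the RAID 1, the Promise array, and a single plain drive should show which controller or array the slowdown follows.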

Good luck.
 

jorl

Junior Member
Apr 30, 2008
2
0
0
I wanted to update this post with my findings. I was never able to try any of the suggestions posted due to business travel. I left the system alone (running) for about a month, and when I returned it worked just fine. I haven't been able to figure this one out. I don't know if the arrays were still building, even though the system reported no such activity, but I left it alone and now it works.

I appreciate everyone who commented on this situation.