SCSI Raid-5 Write Performance Problems (Plz Help)

Nemmeh

Senior member
May 13, 2003
209
0
0
Hey everyone,

I've got a server at the office that was purchased and configured by a former admin, I've been attempting to troubleshoot the problems with the system and have kind of hit a deadend at this point. I'd rather not throw money at it trying to resolve the problems so any advice you can lend would be appreciated.

System Specifications:
Dell PowerEdge 1600SC, updated BIOS A10
Dual 2.8GHz Xeon
4GB PC2100 ECC Registered
(3) 36GB Fujitsu 10,000RPM SCSI HDDs (RAID 5)
LSI Logic MegaRAID 320-1 controller, 64MB cache, PCI-X, 1 internal channel, updated firmware 1L33
Windows 2003 Enterprise Server

The system is solely a Microsoft SQL Server box; that is its sole function in our datacenter.

I've done some benchmarking on this system because the drive write speeds are extremely worrying. I've recently updated the system BIOS and controller firmware, and formatted and reinstalled Windows 2k3 Enterprise Server.

The write speeds are horrible. From the benchmarks I've conducted, the read speeds average around 70MB/sec, but the write speeds average around 6-7MB/sec. Yes, you read that correctly: 6-7MB/sec. This isn't some phantom number generated by the benchmark application either; I have verified it with real-time extraction benchmarking of RAR and ZIP archives. I compared the extraction times against a desktop running a single 80GB 7200RPM ATA drive, and the desktop outperformed the server tenfold in extracting and archiving data. I did notice that write caching is not enabled on this array; could that be the cause of the slow performance? I wouldn't expect it to account for everything, as these speeds are horribly slow and quite worrying.
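For what it's worth, a sequential-write benchmark along these lines can be sketched in a few lines of Python (a minimal illustration, not the actual tool used in the thread; the file path, transfer size, and block size are arbitrary):

```python
import os
import tempfile
import time

def write_throughput_mb_s(path, total_mb=64, block_kb=64):
    """Sequentially write total_mb of data in block_kb chunks and
    return the observed throughput in MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to the disk, not just the OS cache
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

# Example: benchmark a scratch file in a temporary directory
with tempfile.TemporaryDirectory() as d:
    rate = write_throughput_mb_s(os.path.join(d, "bench.bin"), total_mb=16)
    print(f"{rate:.1f} MB/s")
```

The fsync call matters: without it, the OS buffer cache can make even a crippled array look fast for small transfers.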

Does anyone know of any issues or array configuration settings I may need to check or change to get the performance where it's supposed to be? Could a bad drive cause this type of thing? If so, is there an easy way to check the reliability of the drives? How about memory: would a bad memory module cause something like this? I've tried pulling memory from the system and moving modules from slot to slot with no luck in resolving the problem.

The SCSI connections to the drives and controller seem fine. Is it likely I could have a bad cable? Also, it seems the former admin purchased an LSI Logic MegaRAID 320-1 controller for this system, so the 3 drives are attached to the card directly instead of to the motherboard. The PowerEdge 1600SC supports SCSI onboard, and Dell recommends the MegaRAID 320-0 card, which has no internal channels; it simply layers the RAID functions on top of the SCSI interface already built into the system board.

I'm really just looking for any ideas anyone may have for running down the cause of, and a solution to, these problems.
 

Vegito

Diamond Member
Oct 16, 1999
8,329
0
0
It is probably write caching... but RAID 5 has never had good write performance..

http://www.twincom.com/raid.html



RAID 5: Striping and Parity
In RAID level 5, both parity and data are striped across a set of disks. Data chunks are much larger than the average I/O size. Disks are able to satisfy requests independently which provides high read performance in a request rate intensive environment. Since parity information is used, a RAID 5 stripe can withstand a single disk failure without losing data or access to data.

Unfortunately, the write performance of RAID 5 is poor. Each write requires four independent disk accesses to be completed. First, old data and parity are read off of separate disks. Next, the new parity is calculated. Finally, the new data and parity are written to separate disks. Many array vendors use write caching to compensate for the poor write performance of RAID 5.

Advantages:
Average data availability
Cost effective - only 1 extra disk is required
Disadvantages:
Poor write performance
No performance gain in data transfer rate intensive applications
Complexity
Requires special hardware.
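The four-access small-write sequence described above can be illustrated with XOR parity (a minimal sketch; the three-"disk" stripe and block values here are made up for the example, not tied to any real controller):

```python
def xor_blocks(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(disks, parity_idx, data_idx, new_data):
    """Update one data block in a RAID 5 stripe using the classic
    read-modify-write sequence: two reads, a parity recompute, two writes."""
    old_data = disks[data_idx]        # access 1: read old data
    old_parity = disks[parity_idx]    # access 2: read old parity
    # new parity = old parity XOR old data XOR new data
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    disks[data_idx] = new_data        # access 3: write new data
    disks[parity_idx] = new_parity    # access 4: write new parity

# Tiny 3-disk stripe: two data blocks plus their XOR parity
d0, d1 = b"\x0f\x0f", b"\xf0\x01"
stripe = [d0, d1, xor_blocks(d0, d1)]
raid5_small_write(stripe, parity_idx=2, data_idx=0, new_data=b"\xaa\xbb")
# After the update, parity still equals the XOR of the data blocks
assert stripe[2] == xor_blocks(stripe[0], stripe[1])
```

Every logical write costing four physical I/Os is exactly why controllers lean on write-back caching to hide the penalty.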
 

Arcanedeath

Platinum Member
Jan 29, 2000
2,822
1
76
Write cache is extremely important for a RAID 5 array; without it, write performance tanks. Enable write cache and your transfer rate issues should be much improved.
 

Nemmeh

Senior member
May 13, 2003
209
0
0
Thanks for the suggestions guys, I will try this out.. I've read that enabling write caching adds some risk, though: if there's any type of power failure, i.e. the power goes out and the UPS doesn't take up the slack quickly enough, anything still in the cache can be lost. Is this true? I guess the odds of something like that happening are pretty slim..
 

sharkeeper

Lifer
Jan 13, 2001
10,886
2
0
Write performance is slow on RAID5 regardless of controller used.

No getting around that.

NEVER enable write back cache on RAID5 arrays if your controller does not have a battery backup. Even if your system has a UPS, it is still risky.

WB will help if average writes are within the cache size. Once they grow past this, the cache does little to help performance.

Cheers!
 

jose

Platinum Member
Oct 11, 1999
2,078
2
81
You could try a 2-channel version of the card for the RAID you have and add a 4th drive; that way you'd have 2 drives per channel, which would help.

Also, what stripe size are you using? I went from the largest size to 64K and got terrible performance on our Dell server..

Regards,
Jose