
RAID 10

Symfornix

Junior Member
I have a Dell PowerEdge 4400 server. This server has a 2-channel internal RAID controller (Dell-branded, but it's Adaptec) which supports RAID 0, 1, 5 & 10 and has 128 MB of cache.

Channel 1 currently has 2x 15K RPM Cheetahs in a RAID 1 mirror; it's partitioned into C: (FAT, NT Server 4) and D: (NTFS, programs).

Channel 2 currently has 6x 15K RPM Cheetahs in a RAID 10 array (3x RAID 1); partitioned into E: (NTFS). It's got no files on it currently, but I bought this server to run MS SQL Server 2000.

My Dilemma: The RAID 10 was factory-setup, but I think it was done incorrectly, since my server hard-locks (KB/mouse freeze, nothing in the event log, no blue screen, etc.) under heavy load on the drives. I think the stripe sizes between the arrays may be incorrect, and I have to re-create the RAID 10 array (E:).

BTW: Dell tech support is clueless.

I ran Dell diags, mem-test, etc. for 72 hours straight without a single error; I reseated all components and connectors, etc. I've done everything I (and Dell) can think of, and this is what I conclude.

My Question: What is the correct stripe size for each RAID 1 mirror, and once that's done, what chunk size should I use for the RAID 0 across the RAID 1s? (Remember, this is only going to run SQL Server with a few large DBs.)

Any help or advice is sincerely appreciated.

Thanks!
 
Stripe size should not matter. But if I were you, I'd run 128K, or maybe (depending on how I feel that day) 256K. I'd stick with 128K on this day 🙂
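To see why a larger stripe is the usual pick for a SQL Server volume, here's a rough sketch. SQL Server 2000 does I/O in 8 KB pages and allocates 64 KB extents, so with a stripe at or above 64 KB a single extent read stays on one mirror pair instead of spanning all three. The layout math below is a simplified illustration (it assumes a plain round-robin RAID 0 mapping across the three RAID 1 pairs, which may not match the Adaptec controller's actual layout):

```python
KB = 1024
N_PAIRS = 3  # 6 drives configured as 3 RAID 1 mirrors, striped RAID 0

def pair_for_offset(offset, stripe_size):
    """Index of the mirror pair holding this byte offset
    (simple round-robin striping assumption)."""
    return (offset // stripe_size) % N_PAIRS

# SQL Server 2000: 8 KB pages, 64 KB extents
EXTENT = 64 * KB
PAGE = 8 * KB

for stripe in (8 * KB, 64 * KB, 128 * KB, 256 * KB):
    # Which pairs are touched reading one extent starting at offset 0?
    pairs = {pair_for_offset(o, stripe) for o in range(0, EXTENT, PAGE)}
    print(f"stripe {stripe // KB:>3} KB -> one extent spans {len(pairs)} pair(s)")
```

With an 8 KB stripe the extent is scattered across all three pairs; at 64 KB and above it lands on a single pair, which is the intuition behind the 128K recommendation above.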
 