
In Need of Very Fast SCSI Array, Database Server

I have been put in charge of upgrading our database server (FileMaker). Right now it's on dual 2.4GHz Xeons w/ HT (is it good to have HT on in a database server?) and 2 GB of RAM, with 2 x Seagate 10K.6 drives in RAID 1 (2 drives) for everything. I want to change that so only the OS is on that RAID 1 array, and put the database on another array. What would be the best configuration? I don't need all that much space, so 18 GB drives will work fine. Here was my thought: 5 x 15k drives in RAID 5. Is there a better way? Of course we need speed + failsafe. Let me know what you think.

Guidelines:
we have lots of SCSI controllers... 3210S, 2100S and so on..
5 drives max (hot-swap) + 2 (drive racks)
speed
failsafe

ideas? Thanks
 
Raid 5 isn't as fast as raid 1. I'd say use RAID 1 with 5 drives.

What kind of performance problems are you having?
 
Eh? RAID 1 with 5 drives? Don't you have to have an even number of drives?

We aren't having that many "problems," but we are expanding our DB a lot and adding new customers.. going from 60 to 120 or so.. maybe more
 
Originally posted by: zephyrprime
Raid 5 isn't as fast as raid 1. I'd say use RAID 1 with 5 drives.

What kind of performance problems are you having?

I think you're a little confused. RAID 0 is faster than RAID 5, and RAID 5 is faster than RAID 1 (disk mirroring). And right, you can't mirror 5 drives.
 
I'd suggest a RAID 5 array w/ 5 x 18GB 15k or 10k rpm drives. Make sure the RAID controller has 128MB+ of cache. I'd use 4 drives in the array and keep 1 as a hot spare. The capacity of a RAID 5 array is N-1, where N is the number of drives; e.g., a 4 x 18GB RAID 5 array would have 3 x 18GB worth of space (approx. 54GB). Hope this helps...
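The N-1 rule above (and the usable capacity of the other RAID levels discussed in this thread) can be sketched as a quick calculator. This is only an illustration of the arithmetic, not tied to any particular controller, and it assumes identical drive sizes with hot spares already excluded:

```python
# Rough usable-capacity calculator for common RAID levels.
# Assumption: all drives are the same size; hot spares are excluded
# from the drive count before calling this.

def usable_gb(level: str, drives: int, drive_gb: float) -> float:
    """Return approximate usable capacity in GB for a RAID array."""
    if level == "raid0":
        return drives * drive_gb            # striping, no redundancy
    if level == "raid1":
        return drive_gb                     # mirror: capacity of one drive
    if level == "raid5":
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * drive_gb      # one drive's worth of parity
    if level == "raid10":
        if drives % 2:
            raise ValueError("RAID 10 needs an even number of drives")
        return (drives // 2) * drive_gb     # mirrored stripes
    raise ValueError(f"unknown level: {level}")

# The 4 x 18GB RAID 5 example from the post:
print(usable_gb("raid5", 4, 18))  # → 54
```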
 
Yeah, that's what I was thinking. I really won't need a hot spare because we have a second DB server if this one fails, and we back up the DB every 6 hours to another server.

Anyone know where I can find some benchmarks on RAID 5 and RAID 10?

thanks to everyone for help
 
That's a pretty beefy server for an average FileMaker setup. Are you running any other services on it?

-kill other services
*including file sharing

-stick your OS on 1 controller

-stick the dbase on another controller
*preferably HW RAID with cache
*you could try a RAID 1 of 2 15k discs; I would begin there
*generally go to a RAID 5 if you need more performance, especially if most I/O is reads
*give RAID 10 a shot if you want better write performance

-tune the software
*find the FileMaker Server best practices white paper


Look for your performance gains in I/O and OS/FM tuning.
 
Okay, well I've looked around and talked to many people. I think I've decided to try RAID 5. Someone said that Adaptec cards were bad performers in RAID 5. True? What would be the best card to use? Adaptec 3210S, 3200S, 2100S, or 2000S (zero channel). I can use any of those if I want. hehe. Another question is how much cache? As much as possible? Thanks
 
why so cheap? Good drives?

1 year warranty which is pretty undesirable for a drive being used in a "real" working environment, or anywhere else really. The typical 5 year warranty has a lot to do with the higher prices of SCSI.
 
Originally posted by: Pariah
why so cheap? Good drives?

1 year warranty which is pretty undesirable for a drive being used in a "real" working environment, or anywhere else really. The typical 5 year warranty has a lot to do with the higher prices of SCSI.

14,560 hours of non-stop abuse is the 5-year warranty. SCSI drives are generally built to last, but even so I have ordered about 8 drives in the last 2 months under warranty. Another factor in the cost of a good SCSI drive is that I received these drives from across the country (I live in Canada) in <18 hours.
 
>RAID5 is faster then RAID1 (disk mirroring).

If the DB is doing mostly reads, and most do, RAID1 will destroy RAID5 in performance. A good controller doing RAID1 can have each mirror working on a different read request at the same time (and RAID 0+1 even more so).

Personally, even with such a small DB, I think I'd increase the amount of memory first anyway.

Edit: btw, if you're using win2k/win2k3 you might want to turn on power-protected write caching (different from just write caching enabled) as well, if it isn't already. Just did that recently myself on my SCSI array and it seems a noticeable improvement... have to get around to benching more soon ^^
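The point above about each mirror serving a different read request can be illustrated with a toy scheduler. This is only a sketch of the idea (round-robin dispatch across copies), not a model of any real controller's firmware:

```python
# Toy illustration: with RAID 1, a controller can send independent
# read requests to different mirror copies in parallel, so aggregate
# read throughput scales with the number of copies.
from itertools import cycle

def assign_reads(requests, mirrors=2):
    """Round-robin read requests across mirror copies."""
    queues = [[] for _ in range(mirrors)]
    for req, q in zip(requests, cycle(queues)):
        q.append(req)
    return queues

queues = assign_reads(list(range(8)), mirrors=2)
print(queues)  # → [[0, 2, 4, 6], [1, 3, 5, 7]]
# Each mirror serves half the batch, so wall-clock time for the
# read workload is roughly halved versus a single drive.
```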
 
For best access with this number of drives, a fast XOR is necessary along with 256MB+ of BBU cache.

The database needs to be on a logical disk consisting of dedicated physical drives. This means separate from your boot volume and the disk hosting your paging file. This will give you the best performance. RAID10 is generally recommended for best availability with fault tolerance. If your writes are relatively short and random, RAID5 will work well with the proper HBA. (SRCU42X /MR320-2X)

The Adaptecs are not good performers at all. They provide function/compatibility along with availability with performance in the backseat half a solar system away.

Cheers!
 
You can mirror whatever you want with scsi

My current setup

PowerEdge 6650 - Quad Xeon 2.8 w/ 2MB L3 Cache
5 SCSI 15k 36GB
RAID 0+1 - Striped & Mirrored with 1 Hot Spare for boot

EMC C400 Array for Data
10 Drive 73GB 15K data with 1 Hot Spare

This gets replicated to another hot server using
5 SCSI 10k 146GB
Same raid setup with 1 hot spare
 
Originally posted by: rdubbz420
Originally posted by: zephyrprime
Raid 5 isn't as fast as raid 1. I'd say use RAID 1 with 5 drives.

What kind of performance problems are you having?

I think you're a little confused. RAID 0 is faster than RAID 5, and RAID 5 is faster than RAID 1 (disk mirroring). And right, you can't mirror 5 drives.

Wrong and wrong.

RAID1 has better read performance than RAID5 (though RAID5 will spank RAID1 in terms of writes, since it's striped and uses distributed parity, as opposed to being mirrored). For a *lot* of tasks, RAID 5 will be faster, but you can't just say "RAID5 is faster than RAID1" as a blanket statement.

And you can mirror any number of drives that your controller will allow -- however, unless your I/O load is almost 100% reads (such as a DB that gets updated once a day in one burst and is read-only the rest of the time), a RAID1 with more than 3 or 4 disks starts to get unbearably slow in terms of write performance.

As for the OP -- if your I/O load is almost all reads, I'd go with a 2 (or more)-disk RAID1 for the DB. Otherwise, a 4-disk RAID5 with a hotspare is probably your best bet, or a 4-disk RAID0+1 (might be the best choice, since your DB is small and it will give better performance than a RAID5 in most cases).
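The read/write tradeoff described above can be put into rough numbers with the classic write-penalty rule of thumb (RAID 1 costs 2 back-end I/Os per random write; RAID 5 costs 4: read data, read parity, write data, write parity). This is a hedged back-of-the-envelope model only; it assumes 2-way mirroring, identical drives, and ignores controller cache, which real hardware uses to hide much of the write penalty:

```python
# Back-of-the-envelope random-IOPS model for the RAID 1 vs RAID 5
# read/write tradeoff. PENALTY = back-end I/Os per front-end write.
PENALTY = {"raid0": 1, "raid1": 2, "raid5": 4, "raid10": 2}

def effective_iops(level, drives, iops_per_drive, read_fraction):
    """Approximate front-end random IOPS for a given read/write mix."""
    backend = drives * iops_per_drive
    write_fraction = 1.0 - read_fraction
    # Each front-end read costs 1 back-end I/O; each write costs PENALTY.
    cost = read_fraction * 1 + write_fraction * PENALTY[level]
    return backend / cost

# 4 x 15k drives (assuming ~180 random IOPS each), at 90% and 50% reads:
for mix in (0.9, 0.5):
    print(mix,
          round(effective_iops("raid1", 4, 180, mix)),
          round(effective_iops("raid5", 4, 180, mix)))
```

Under this model the gap between the two levels widens as the write fraction grows, which matches the advice above: RAID 5 for mostly-read loads, mirroring when writes matter.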
 
And you can mirror any number of drives that your controller will allow -- however, unless your I/O load is almost 100% reads (such as a DB that gets updated once a day in one burst and is read-only the rest of the time), a RAID1 with more than 3 or 4 disks starts to get unbearably slow in terms of write performance.

As for the OP -- if your I/O load is almost all reads, I'd go with a 2 (or more)-disk RAID1 for the DB. Otherwise, a 4-disk RAID5 with a hotspare is probably your best bet, or a 4-disk RAID0+1 (might be the best choice, since your DB is small and it will give better performance than a RAID5 in most cases).


Agreed.. and don't think RAID is 100%.. I had a backplane fail and corrupt data on all 5 drives on it.. I've now switched to dual backplanes.. boot drive on one channel and one backplane, and data drives on the second channel and second backplane..

Chances of a backplane failing are very low, but it does happen.
 