Need Input on SAS RAID Solution

err

Platinum Member
Oct 11, 1999
I realize this is a networking forum; however, since AnandTech doesn't officially have a storage forum, I thought this would be the best place to post my questions, since there are lots of knowledgeable sysadmins out there :)

So, the story is that my client is looking for a super SQL box with multiple RAID arrays. We looked into Dell boxes with MD3000 direct-attached storage, but that seems to be out of reach for my client's budget. So we are looking to build a Supermicro box with 16 hot-swap bays to accommodate his needs instead.

The requirements would be something like:

Dual Quad Core Xeon 3.0GHz
16GB RAM
Supermicro 3U 16 hot swap bays
Supermicro Motherboard
Windows Server 2003 x64

C:\ RAID 1 (2 x 73GB SAS 15K) - OS Array
D:\ RAID 10 (4 x 146GB SAS 15K) - LIVE DB Array
E:\ RAID 1 (2 x 146GB SAS 15K) - TEMP DB
F:\ RAID 1 (2 x 146GB SAS 15K) - TLOG
G:\ RAID 5 (3 x 146GB SAS 15K) - LIVE DB BACKUP
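
For reference, here's a quick sketch of the usable capacity each proposed array yields, using the standard RAID overhead rules (drive sizes are the decimal GB figures from the spec above):

```python
# Usable capacity per RAID level (standard rules):
# RAID 1 mirrors (one drive's worth), RAID 10 stripes mirrored pairs
# (half the drives), RAID 5 loses one drive's worth to parity.
def usable_gb(level, n_drives, size_gb):
    if level == "RAID 1":
        return size_gb
    if level == "RAID 10":
        return (n_drives // 2) * size_gb
    if level == "RAID 5":
        return (n_drives - 1) * size_gb
    raise ValueError(f"unknown level: {level}")

arrays = [
    ("C: OS",        "RAID 1",  2,  73),
    ("D: LIVE DB",   "RAID 10", 4, 146),
    ("E: TEMP DB",   "RAID 1",  2, 146),
    ("F: TLOG",      "RAID 1",  2, 146),
    ("G: DB BACKUP", "RAID 5",  3, 146),
]
for name, level, n, size in arrays:
    print(f"{name:13} {level:7} {n} x {size}GB -> {usable_gb(level, n, size)}GB usable")
```

Worth noting: the RAID 5 backup array (292GB usable) comfortably holds a copy of the 292GB RAID 10 live array.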

My client reiterated that he needs multiple SAS RAID controllers to accommodate his I/O needs. He would like to know whether it would be possible to have each array running on its own SAS RAID controller. The objective would be to have no disk I/O bottleneck on this very busy SQL box.

I have researched device bandwidth at http://en.wikipedia.org/wiki/List_of_device_bandwidths and I don't think multiple SAS controllers are needed to accommodate 13 SAS drives; it would be overkill.
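
For what it's worth, here's the back-of-the-envelope math behind that conclusion. The ~125 MB/s sustained rate per 15K SAS drive is my assumption, and a busy SQL box does random I/O, which lands far below this sequential worst case:

```python
# Worst case: all 13 drives streaming sequentially at full speed at once.
drives = 13
per_drive_mb_s = 125                     # assumed sustained rate of a 15K SAS drive
drive_demand = drives * per_drive_mb_s   # 1625 MB/s aggregate

# A single x8 PCIe 1.x RAID card has roughly 250 MB/s per lane to the host.
host_bw = 8 * 250                        # 2000 MB/s
print(drive_demand, host_bw)             # even the worst case fits on one card
```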

So I guess my questions are:

1. Is this even doable, having multiple SAS RAID cards in one box (up to 4 cards)?
2. Is it recommended? Or is it overkill?
3. Is there any documentation on performance studies that I can point my client to?

Any help appreciated. I tried calling several storage support people, but it must still be the holiday season... I only got people's VM :(

Thanks!

Err

spikespiegal

Golden Member
Oct 10, 2005
Some ideas:

While it is possible to use multiple RAID controllers, you'll still find write transactions limited to the speed of the drives themselves. Even 15k SCSI has limits in this respect.

Avoid RAID 5 for *anything* requiring fast write I/O because RAID 1 generally beats it handily in this respect. Read speed is another matter.
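
To put numbers on that write penalty: the standard accounting is 4 back-end I/Os per random write for RAID 5 (read data, read parity, write data, write parity) versus 2 for RAID 1/10; the 175-IOPS figure per 15K drive is an assumption:

```python
def random_write_iops(n_drives, per_drive_iops, write_penalty):
    # Effective random-write IOPS of an array: total back-end IOPS
    # divided by the number of physical I/Os each logical write costs.
    return n_drives * per_drive_iops / write_penalty

PER_DRIVE = 175  # assumed random IOPS of one 15K SAS drive
print(random_write_iops(2, PER_DRIVE, 2))   # RAID 1,  2 drives -> 175.0
print(random_write_iops(4, PER_DRIVE, 2))   # RAID 10, 4 drives -> 350.0
print(random_write_iops(3, PER_DRIVE, 4))   # RAID 5,  3 drives -> 131.25
```

So a 2-drive RAID 1 out-writes a 3-drive RAID 5 despite having fewer spindles, which is fine here since the RAID 5 array is only taking backups.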

Keep in mind that you can gang up drive arrays on high-end multi-channel RAID cards, but RAID cards *can* and *will* fail. In my experience over the past couple of years with high-end controllers and SANs, the failure rates of the controllers themselves might actually exceed the failure rates implied by the drives' MTBF. 3Ware, Adaptec, IBM, doesn't matter. Keep them clean - keep them cool.

Use generous cooling for the box, because you want good airflow over the controller cards and 15K drives. I haven't seen this particular 3U box, but with that many drives you want an enclosure that's capable of hovering off the floor with the fans running full out.

I can't see the additional quad-core doing much other than twiddling its thumbs, but it might come into play with VMs. If the host OS is Windows, you might consider using virtual machines to keep the DBs split up.

Also, see if the client is willing to look at housing smaller, nightly-purgeable temp/logs on RAM drives or solid-state drives. While this falls under advanced database skills, it's one way to get a SQL box that relies heavily on tempdb transactions to get up and boogie.

http://sqlblogcasts.com/blogs/...ve/2006/08/24/958.aspx
 

err

Thanks for the input. There is really no set budget per se; however, it would be nice to keep it under $9-10K. The Dell config that we're speccing is around $17K.

I guess the question is whether it is wise to do multiple SAS RAID cards... I have specced this out a little more and am thinking it would be best to use 2 high-end LSI cards... Surely it will scream...

I am still not so sure about running solid-state drives, since they are such a new technology, and I am not sure if I can deal with the political implications of this decision (if you know what I am talking about) :) I would rather do RAID 0 or something for the tempdb...

BTW, the second quad-core is going to get pushed hard. We are currently running the DB server on dual quad-core 1.8GHz CPUs, and SQL is definitely milking the processors. I am planning to go with dual quad-core 3.0GHz on this box.

Err
 

Czar

Lifer
Oct 9, 1999
Might want to look into HP as well. First with SAS :)

Go for something simpler and safer: buy something from HP, Dell, IBM, or some other big vendor. Buy it premade, because your client should be more concerned about reliability and support than about price and performance. If you really want performance like he seems to, go straight to Fibre Channel SANs; it might cost more to start with, but it evens out in the long run when it comes to upgrading and the flexibility of connecting more servers to it.

For solid state, check this out http://www.superssd.com/ , CCP runs a few databases for Eve Online on those things :)

BTW, what is he going to be running off this? How many clients, and so on?
 

Czar

And for storage you want spindles, as in bigger arrays and more disks. And you want a disk controller with a lot of throughput.
 

err

Czar

Thanks for the input. I am afraid Fibre Channel SANs are out of the question at this point. The more realistic solution would be to use direct-attached storage such as the Dell MD3000.

Anyway, SQL is running a busy Internet Web 2.0 community database.

The RAID card that I am looking at for them right now is the LSI MegaRAID SAS 8888ELP, which has a 500MHz processor and 3Gbps-per-port throughput, so it should run circles around the competition. I am looking to get 2 of these to power a total of 5 arrays.

I've submitted the request to them. Let's see what they choose :D
 

spikespiegal

A Fibre Channel SAN might actually *decrease* performance in this instance. SANs typically have great performance when it comes to connecting several servers/LUNs, but 1:1 performance is typically slower than one machine talking to a couple of high-performance RAID cards running locally off the board. Just ask an EMC engineer where the optimum location for the Windows swap file is.


Excluding, of course, those wicked SSD RAM-SANs, which I think were the same widget I prescribed for a theater owner who needed absurd throughput for all his DLPs. I just want to stand in the same room with that thing and breathe in the ozone :D