(6) x 15K SAS 146GB vs (6) x 500MB/s SSD in RAID 10

TechBoyJK

Lifer
Oct 17, 2002
I have an older IBM server (buying new is not an option) that has eight 2.5" slots. It supports SAS/SATA for RAID, but SATA is only SATA II at 3 Gbps.

I'm looking for a cheap RAID 10 solution. I can get 146GB 15K SAS drives, new, for about $80 via eBay, but I can also get 120GB SSDs for about $70 from a retailer like Microcenter.

I'm thinking I would run 6 in RAID 10 with 2 hot spares (especially if using SSDs).
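
For rough comparison, here's the usable space each layout would give (a quick sketch using the capacities quoted above; formatted capacity will come in a bit lower):

```python
# Rough usable-capacity comparison for the two 6-drive RAID 10 layouts.
# Drive sizes are the ones quoted in the post (decimal GB, before
# filesystem overhead); real formatted capacity will be a bit lower.

def raid10_usable(drive_gb: int, drives_in_array: int) -> float:
    # RAID 10 mirrors pairs, so usable space is half the raw capacity.
    return drive_gb * drives_in_array / 2

for label, size_gb in [("146GB 15K SAS", 146), ("120GB SATA SSD", 120)]:
    usable = raid10_usable(size_gb, 6)
    print(f"{label}: 6-drive RAID 10 = {usable:.0f} GB usable (+2 hot spares)")
```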

I'm almost positive my RAID controller is going to be the bottleneck and that my drive choice might not matter much. Because of that, I'm leaning towards SSDs for the lower power usage and warranty concerns.

Question: am I going to lose throughput (create a bottleneck) by using SATA instead of SAS? It's a SAS RAID controller. Even if the SSDs outperform the SAS drives on paper, will their performance be gimped because they're on SATA? (See the quick math after the controller specs below.)

ServeRAID 8k SAS Controller

Eight internal 6 Gbps SAS/SATA ports
Supports SAS and SATA drives (but not in the same RAID volume)
Two Mini-SAS internal connectors (SFF-8087)
6 Gbps throughput per port
LSI SAS2008 6 Gbps RAID on Chip (ROC) controller
x8 PCI Express 2.0 host interface
Connects to up to 32 SAS or SATA drives
Supports up to 16 logical volumes
Supports LUN sizes up to 64 TB
Configurable stripe size up to 64 KB
Compliant with Disk Data Format (DDF) configuration on disk (COD)
S.M.A.R.T. support
Supports the optional M1000 Advanced Feature Key which enables the following features:
RAID levels 5 and 50

Features:
256 MB DDR2 533 MHz
Battery backup provides up to 72 hours of cache data retention (battery rated for 3 years at 45°C)
RAID-0, 1, 1E, 10, 5, 6
Copyback
FlashCopy
Stripe-unit sizes: 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, and 1024 KB
Write-back cache memory is 256 MB, 533 MHz DDR2 unbuffered memory
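
On the SATA-vs-SAS throughput question, some quick back-of-the-envelope math (the per-drive numbers below are ballpark assumptions, not specs for any particular drive):

```python
# Link ceilings vs. ballpark drive speeds; all drive figures are assumptions.

SATA2_PORT_MBPS = 300    # 3 Gbps link minus 8b/10b encoding overhead
SATA3_PORT_MBPS = 600    # 6 Gbps link, if the card/backplane actually run SATA III
PCIE2_X8_MBPS = 4000     # rough ceiling of the x8 PCIe 2.0 host interface

drives = {
    "15K SAS HDD": 200,   # assumed ~200 MB/s sequential
    "SATA SSD": 480,      # assumed ~480 MB/s sequential
}

for name, drive_mbps in drives.items():
    capped = min(drive_mbps, SATA2_PORT_MBPS)
    print(f"{name}: ~{drive_mbps} MB/s drive, ~{capped} MB/s through a SATA II port")

# Even six SATA II ports running flat out stay under the host interface:
print(f"6 ports x {SATA2_PORT_MBPS} MB/s = {6 * SATA2_PORT_MBPS} MB/s vs PCIe 2.0 x8 ~{PCIE2_X8_MBPS} MB/s")
```

In other words, a SATA II port would shave some sequential throughput off an SSD but still leaves it ahead of a 15K spindle, and the PCIe link is nowhere near being the limit.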
 

thecoolnessrune

Diamond Member
Jun 8, 2005
Your largest issue is the controller itself. The ServeRAID 8K SAS controller is incredibly fickle with drives. Many people who have tried will tell you that 4K-sector drives simply don't work, and most drives won't work with it unless they are at least NL-SAS or another enterprise-class drive. For instance, most WD Blue and Black drives will not work on this controller, but WD RE SATA drives will.

Of all the LSI 2008 based cards you could have gotten a hold of, you definitely have one of the worst to work with. Additionally, the LSI 2008 controller actually supports SATA 3, but you might have to try flashing the IBM card to stock LSI firmware (there are articles available online). And if the drives connect through a backplane, it too will need to support SATA 3 (chances are it doesn't, and that is where IBM is getting its limitation).

SATA vs. SAS will not matter if these are point-to-point connections and not going through an expander; on a point-to-point link, neither an HDD nor an SSD doing typical server I/O is likely to saturate a SATA II connection. You haven't noted your workload, but most general server workloads are not constant sequential reads and writes, which means in most cases the SSDs are going to win out.

You also have to contend with the fact that TRIM is not supported on SATA SSDs through RAID on LSI controllers, so you will need high over-provisioning and SSDs with good garbage collection to keep up. I'd only provision your RAID array to use 96GB of each SSD (25% OP).

Overall, today's modern SSDs should be superior in every way to the antiquated 15K SAS drives you're talking about on eBay. The real question will be whether your controller will even accept them.
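
To put rough numbers on the over-provisioning suggestion and the random-I/O gap (the IOPS figures below are generic assumptions for illustration, not specs for any particular drive):

```python
# Over-provisioning math plus a rough random-IOPS comparison.
# All per-drive IOPS figures are assumed, for illustration only.

SSD_CAPACITY_GB = 120
USED_GB = 96                          # provision only 96 GB per SSD, as suggested
spare_gb = SSD_CAPACITY_GB - USED_GB
op_pct = spare_gb / USED_GB * 100     # OP expressed against the provisioned area
print(f"{spare_gb} GB spare per SSD = {op_pct:.0f}% over-provisioning")

HDD_15K_IOPS = 180      # assumed: a 15K SAS drive manages ~175-210 random IOPS
SATA_SSD_IOPS = 40000   # assumed: even a budget SATA SSD, steady state, is in this range

# 6-drive RAID 10: reads can hit all members, writes cost two I/Os (mirror pair).
for name, per_drive in [("15K SAS", HDD_15K_IOPS), ("SATA SSD", SATA_SSD_IOPS)]:
    reads = 6 * per_drive
    writes = reads // 2
    print(f"{name}: ~{reads} random read IOPS, ~{writes} random write IOPS")
```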
 

thecoolnessrune

Diamond Member
Jun 8, 2005
I would assume that since he's comparing $80 hard drives to bare-budget SSDs, and doesn't have the money for a new server, SAS SSDs are out of the question. Second-gen MLC enterprise SSDs are still very pricey.
 

TechBoyJK

Lifer
Oct 17, 2002
(quoting thecoolnessrune's reply above)

I wonder how easy it would be to replace the controller? I don't want to throw much money at these servers. Otherwise, I'm really leaning towards the 15K SAS drives, simply because of compatibility and the card being primarily a SAS controller. I can't afford SAS SSDs, and I doubt I'd saturate either in a RAID 10 array. But it seems, as you pointed out, that simply using SATA could introduce some latency or poor performance, so the benefits of SSDs might be lost, and if I use SATA SSDs I might actually take a performance hit.

Another option I'm considering is using the NetApp SAN space the DC offers. I have a dedicated 1 Gbps NIC for storage, and the SAN they can provide is really fast (and highly redundant).
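
Worth keeping in mind what that dedicated 1 Gbps storage NIC can actually carry; simple wire-speed math (the overhead factor for iSCSI/NFS framing is an assumption):

```python
# Throughput ceiling of a single 1 GbE storage link.
GIGABIT_BPS = 1_000_000_000
wire_mb_s = GIGABIT_BPS / 8 / 1_000_000        # ~125 MB/s raw
practical_mb_s = wire_mb_s * 0.9               # assume ~10% framing/TCP/iSCSI overhead
print(f"1 GbE: ~{wire_mb_s:.0f} MB/s wire speed, roughly {practical_mb_s:.0f} MB/s in practice")
```

So sequential throughput over the SAN is capped well below what a local array could do, but the random IOPS a big NetApp aggregate can serve may still dwarf six local drives.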
 

thecoolnessrune

Diamond Member
Jun 8, 2005
You can easily replace the RAID controller with anything you want. That being said, if they have a fairly priced NetApp you can get a piece of, I would definitely go for that. It depends on their SLA and whether the IOPS you're guaranteed meet your needs.

Any modern clustered NetApp FAS system is going to deliver far more IOPS, have way more redundancy, and have far more availability than what you can configure with a single RAID card in a server.

If you go that route, make sure you understand the DC's redundancy path and note whether you can make redundant connections back to storage. Ideally you want at least two connections back to the storage (a NetApp serving a DC is no doubt in an HA configuration with multiple layers of redundancy).

Unless your workload is very latency sensitive, I'd go with the NetApp. I'm assuming it's not, because if it were, you wouldn't be looking at old 15K SAS drives :D
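
If it helps frame the latency point, here's a rough per-I/O sketch; every number is an assumption picked only to show the shape of the trade-off:

```python
# Rough per-I/O latency: local drives vs. storage reached over the 1 GbE link.
# Every figure is an assumption, chosen only to illustrate the trade-off.

LOCAL_15K_SAS_MS = 5.5    # assumed: seek + rotational latency on a 15K drive
LOCAL_SATA_SSD_MS = 0.1   # assumed: typical SATA SSD small-read latency
NETWORK_RTT_MS = 0.5      # assumed: switched 1 GbE round trip inside the DC
SAN_SERVICE_MS = 1.0      # assumed: NetApp service time from cache/disk

san_total_ms = NETWORK_RTT_MS + SAN_SERVICE_MS
options = [
    ("Local 15K SAS", LOCAL_15K_SAS_MS),
    ("Local SATA SSD", LOCAL_SATA_SSD_MS),
    ("NetApp over 1 GbE", san_total_ms),
]
for name, ms in options:
    print(f"{name}: ~{ms:.1f} ms per small read (~{1000 / ms:.0f} IOPS at queue depth 1)")
```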