RAID10 vs SSD

Homerboy

Lifer
Mar 1, 2000
30,890
5,001
126
I'm trying to spec out a new server for work and I'm flipping and flopping between running RAID10 and SSDs in RAID1, trying to find a happy middle ground between price and performance.

The RAID10 would likely be 8 HDDs, though I'm not sure whether 10K or 7.2K - again, $ versus performance.

I need to get around 2TB of space (more always welcome) so the SSDs from Dell are like $1,400 a pop. HDDs (7.2K) are $250 a pop.

This will be housing our in-house database (not SQL) that 100+ people are accessing all day long.

I just really keep saying to myself "ya but..." over and over and I need some input from others to help steer my brain.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
Like the saying goes, Hard Drives are cents per gigabyte but dollars per IOPS. SSDs are dollars per gigabyte but cents per IOPS. How many IOPS do you need? Is 2TB worth of redundant flash going to be way more IOPS than you can ever use? Then don't use it. On that same note, you can only get about 70-90 IOPS from a 7.2K Hard Drive. In the worst-case write scenario of the 8-drive RAID 10 set you'd be looking at, that's roughly 280 IOPS, which is but a shell of what you can do with a single SSD.

Is there more middle ground available that would suit better? For instance, moving to 10K drives would roughly double your IOPS, to around 570.
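The back-of-envelope math above can be sketched out quickly. The per-drive IOPS figures are rough rules of thumb, not measurements, and the exact numbers will vary by drive model and queue depth:

```python
# Rough worst-case random-write IOPS for a RAID 10 array.
# Per-drive IOPS values here are ballpark figures for 7.2K and 10K
# spinners, not vendor specs.

def raid10_write_iops(n_drives, per_drive_iops):
    """Every write lands on both halves of a mirror pair, so the
    array delivers about half the aggregate per-drive write IOPS."""
    return n_drives * per_drive_iops // 2

print(raid10_write_iops(8, 70))   # 7.2K drives: 280
print(raid10_write_iops(8, 140))  # 10K drives: 560, in line with the ~570 above
```

Reads are a different story: RAID 10 can service reads from either side of a mirror, so read IOPS scale closer to the full drive count.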

Is there caching available? For instance, is there a de-duplication or indexing database that could go on a RAID 1 SSD tier while the bulk of the database sits on 10K drives?

Can you cache at the Filesystem level?

There are lots of options, but only you are going to know the nature of the workload. If you don't, I highly recommend performance profiling it, or you're just pouring money down a hole for an upgrade that might not even solve your problems :)
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,019
3,490
126
Like the saying goes, Hard Drives are cents per gigabyte but dollars per IOPS. SSDs are dollars per gigabyte but cents per IOPS.

This is a very accurate statement...

Also note that the more drives you have in the R0 stripe making up the R10, the more your latency will increase.
 

PliotronX

Diamond Member
Oct 17, 1999
8,883
107
106
Tiered Storage Spaces could be an option for you if you're using Windows Server.
Darn straight! I have configured four production servers with TSS so far and the results are brilliant. Depending on the actual database size, it could even be pinned to the SSD tier, but the block-level usage-mapping optimization works great on its own. On the latest server I used two empty PCI-E slots for 960 Evos, and Server 2016 allows for differing resiliency per tier. I have not and don't plan to ever use parity storage pools, though. The tiering with mirroring is on point!
 
Feb 25, 2011
16,983
1,616
126
100+ people all day long?

SSD.

If you want a less pithy answer: it depends on your workload. Have you run any kind of performance monitoring or I/O monitoring on your current system/database to see what kind of performance you actually need? If you've got 100 people sitting at a login screen and only doing a couple dozen transactions a minute, then HDDs are probably just fine. Flipside: one intensive user can easily flog a database to death, especially if they suck at writing efficient queries.* SSDs hide that for most users.

*I suck at writing efficient queries.
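To put a number on what the current box actually does, here's a rough sketch that samples block-device IOPS from /proc/diskstats (Linux only; the device name "sda" and the 10-second window are assumptions, adjust for your system):

```python
# Sketch: estimate real IOPS on the existing server by sampling
# /proc/diskstats twice and diffing the completed-I/O counters.
# Linux only; run during a busy period for a meaningful number.
import time

def read_io_ops(device, stats_text):
    """Return (reads_completed, writes_completed) for `device`
    from the text contents of /proc/diskstats."""
    for line in stats_text.splitlines():
        fields = line.split()
        # fields: major minor name reads_completed ... writes_completed ...
        if len(fields) >= 8 and fields[2] == device:
            return int(fields[3]), int(fields[7])
    raise ValueError(f"device {device!r} not found in diskstats")

def sample_iops(device="sda", interval=10.0):
    """Average read and write IOPS for `device` over `interval` seconds."""
    with open("/proc/diskstats") as f:
        r0, w0 = read_io_ops(device, f.read())
    time.sleep(interval)
    with open("/proc/diskstats") as f:
        r1, w1 = read_io_ops(device, f.read())
    return (r1 - r0) / interval, (w1 - w0) / interval
```

Something like `sample_iops("sda", 60)` during peak hours gives you a baseline to compare against the RAID10 and SSD numbers upthread. A dedicated benchmark tool will give you a much fuller picture, but even this crude sample beats guessing.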