Local RAID5 Array of Samsung Pro SSDs vs. NetApp NAS

TechBoyJK

Lifer
Oct 17, 2002
I'm deploying a few servers for a project, and I'm torn between loading each server up with eight 128GB Samsung Pro SSDs, two of them as hot spares (a six-drive RAID5 gives roughly 640GB usable: five data drives x 128GB), or using the datacenter's high-speed NAS. The NAS setup is pretty nice: redundant (mirrored) NetApps with multiple shelves, each shelf holding 15 300GB 15K SAS drives in a RAID6 array. It can be presented to my server as either iSCSI or NFS. Customers (I work at the datacenter) have said our NAS outperforms a single SSD. I'm not sure how it compares to an SSD array, though.

The servers will be running VMware ESXi hosting a set of small CentOS virtual machines: some running Apache, some running MySQL, some doing file storage. The servers' RAID controller is SAS/SATA II (3Gbps), not SATA III.

If I stay with local storage, I make a one-time investment (about $2,400 for 16 drives) and keep my monthly hosting costs where they are now, which is an upside since I have to be very economical. The downside is that if/when I outgrow the local storage, I either replace those SSDs with larger ones, another costly investment, or start using the NAS. So even if I start local, I might well end up on the NAS anyway.

If I start with the NAS, I can begin with a smaller amount of storage and simply grow it as I need it. It's about $30 per month per 100GB. If I need 600GB per server, that's 1,200GB total: 12 x $30 = $360 per month. But I can start smaller, say 150-200GB per server, and grow as needed. Either way, one year of NAS would probably cost about the same as buying the SSDs outright (at the full 1,200GB the monthly fees pass the $2,400 SSD outlay in about seven months; starting small stretches that to roughly a year).

1) SSD - One-time investment, great performance, difficult to grow without disrupting service
2) NAS - Recurring fee, can start small, great performance and reliability, easy to grow
 
A rack of 8 small drives for a single server seems inefficient, unless you're trying to get total IOPS numbers that would break the NAS anyway.

For most applications that are okay with consumer SSDs, I'd rather have a RAID-1 of 500GB/750GB drives than a double-handful of 120s. And a couple large drives for file storage. (Hand them to a VM and do your own NAS internally for bulk file storage.)
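If you go the internal-NAS route, a minimal sketch: pass the bulk drives through to one CentOS VM and export them over NFS. In /etc/exports on that VM, something like

    # export the bulk-storage mount to the other VMs' subnet
    /srv/bulk 10.0.0.0/24(rw,sync,no_subtree_check)

then exportfs -ra to apply it, and the other VMs mount it. The path and subnet are placeholders for whatever your setup actually uses.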

For MySQL, the trick is to throw RAM at it until it stops hitting disk.
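Concretely, assuming InnoDB tables, the main knob is innodb_buffer_pool_size. A minimal my.cnf sketch for a VM with, say, 16GB handed to MySQL (the sizes are just illustrative):

    [mysqld]
    # Size the buffer pool to hold the working set; ~75% of the RAM
    # dedicated to MySQL is a common starting point.
    innodb_buffer_pool_size = 12G
    # Optional: flush the redo log once per second instead of at every
    # commit, trading up to ~1s of transactions on a crash for fewer writes.
    innodb_flush_log_at_trx_commit = 2

Once the working set fits in the buffer pool, reads mostly stop touching disk at all.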

IMHO of course.
 

TechBoyJK

Lifer
Oct 17, 2002

It's a dual-channel RAID card, so I was heavily considering running two RAID5 arrays, each with a single hot spare. That way I could spread the load around.

It won't be a single OS; it'll be about eight small CentOS VMs running on it. I actually think SSDs in a RAID5 would be overkill, and that the RAID controller itself would be the bottleneck.
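Back-of-envelope on that (assuming ~500MB/s sequential per drive, which is the right ballpark for 128GB Pro-class SSDs, and one 3Gbps port per drive): a 3Gbps link carries roughly 300MB/s after 8b/10b encoding, so each SSD is already capped at about 60% of its potential, and the six drives' combined ceiling of ~1.8GB/s still has to fit through whatever the controller's own processor and PCIe slot can sustain. The per-drive figures are assumptions, but the conclusion holds: on SATA II, the links and controller set the limit, not the SSDs.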

I also considered adding a RAID1 of larger SSDs, like 6x128GB plus 2x750GB.
 

daxzy

Senior member
Dec 22, 2013
What's your expected lifespan for the drives? Are you doing a lot of writes?

I quickly googled the Samsung Pro series of SSDs, and they don't even bother to list write endurance. For production use, I'd just get the Intel DC S3500 (maybe $30 more expensive) or the S3700, depending on how much you think you're going to write.
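For a rough sense of scale (assuming the consumer MLC of that era is good for ~3,000 P/E cycles, a figure Samsung doesn't confirm): 128GB x 3,000 cycles is ~384TB of raw NAND writes, and with a write amplification around 3x that's ~128TB of host writes, or roughly 70GB/day over five years. For comparison, the S3500 is rated around 0.3 drive writes per day and the S3700 around 10, and that endurance gap is what the price difference buys.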