
SSDs for MS SQL Database Server!?

skyfighter84

Junior Member
I'm going to set up a new database server (MS SQL 2008 R2) for a small-to-medium business ERP system (80-120 users). Rack servers (e.g. HP ProLiant DL180 or DL380) are typically still equipped with a bunch of 146GB 2.5" 15,000 RPM SAS HDDs, but that got me thinking about our database size:

Our database is currently around 7-8GB and it grows by roughly 4GB every 2-3 years - so why shouldn't I use small SSDs?

What about a config like this:
2x HP 146GB 2.5" 15,000 RPM SAS HDDs -> RAID 1 -> OS & apps
2x 32GB Intel X25-E 2.5" SATA -> RAID 1 -> database file only
2x HP 146GB 2.5" 15,000 RPM SAS HDDs -> RAID 1 -> database log file

(Further server specs: 1x Intel quad-core, 12-16GB RAM, standard HP RAID controller - e.g. HP Smart Array P410i/512MB with BBWC)

Would that be a good choice, or should I stick with common SAS drives?
 
Is your database write-heavy or read-heavy? If the database is small and RAM is plentiful, the disk workload is usually write-heavy, since over time the reads end up being served from cache.

Aside from SLC, you can also opt for the Intel X25-M MLC SSDs, which are bigger and thus have more flash pages to spread writes across. Even if you only use 10GB, an 80GB SSD can absorb more total writes than a 40GB one. So consider the Intel X25-M 80GB or 160GB MLC SSDs as well.

A RAID 1+0 configuration could be useful as well: with 4 disks you also double the write endurance. A less reliable alternative would be RAID 0, relying on backups to keep your files safe; the downside is that an SSD failure would then cause downtime. Intel SSDs are generally very reliable, though, and because RAID 0 splits writes across both drives instead of mirroring them, it also doubles your effective write endurance.

It may not be necessary to put the log and data on different arrays, either. I would still recommend separate filesystems for them - for example, make two partitions so you have two separate NTFS filesystems, one for the data and one for the log.

With 2x160GB MLC = 320GB = 298GiB, you're way beyond the storage space you need. That makes it ideal to under-provision the drives, which also remedies the lack of TRIM under hardware RAID: create your partitions, then never write to the unused space, which should be more than 50% of the drive. The SSD will use that untouched area as spare space, which keeps write amplification down - especially important with 4K random-write workloads like databases. By reserving more than 50% of the visible capacity, you get a write amplification factor near 1.0, meaning 20GB of random writes really is 20GB of physical writes. Without the extra spare space, write amplification can be 5.0 or more, meaning you physically write 100GB for every 20GB of writes issued by the operating system -> filesystem -> database app.
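To make the arithmetic above concrete, here's a quick sketch - the 1.0 and 5.0 amplification factors are the example figures from this post, not measured values:

```python
# Spare-space and write-amplification arithmetic from the post above.
# The amplification factors are the poster's example figures, not measurements.

def physical_writes(logical_gb, write_amplification):
    """NAND bytes actually written for a given amount of host writes."""
    return logical_gb * write_amplification

# Near-ideal case, with more than 50% of the drive left unwritten:
assert physical_writes(20, 1.0) == 20.0    # 20GB host writes -> 20GB on NAND

# 4K random writes with no spare space (example factor of 5.0):
assert physical_writes(20, 5.0) == 100.0   # 20GB host writes -> 100GB on NAND

# The capacity conversion used above: 2 x 160GB = 320GB decimal
print(round(320 * 10**9 / 2**30))  # -> 298 (GiB)
```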

Whatever you do, SSDs are too attractive from a performance standpoint to ignore.
 
I think your specification is perfectly satisfactory. The question really is how much performance benefit you get - and that is a tricky one.

SSDs are fantastic in terms of their read access time - but not as good as RAM. If your database is small, and you have plenty of RAM - the entire database may be cached, and therefore there is relatively little requirement for the very low access times of an SSD. The fact that you are intending to get 16 GB of RAM suggests that you'll be able to cache most of the relevant parts of the db.

SSDs are also fantastic in terms of random write performance. However, you already have a BBWC - which can take the random writes and deliver them to the drives in an optimised manner. As a result, the BBWC goes a long way to hiding slow write performance. The only exception would be under extreme transactional load, where the drives are unable to keep up with a continued extreme write load. I wouldn't have thought 100 users would be anywhere near enough to generate that sort of load, but I'm not very familiar with ERP.

SSDs would help in those marginal cases (where the DB isn't in RAM, or under continued heavy load).

I think it's right to go for the X25-E over the X25-M. The very high write speed should ensure that writes are never the bottleneck, and the very high endurance of the X25-E should ensure a long lifetime. I'd expect a 32GB X25-E to have a longer life than 2x 160GB X25-Ms in RAID - as well as being simpler and having higher reliability, lower power consumption, etc. Even an X25-M should have plenty of life; but if you don't really know how long the server is going to be deployed for, it's probably best to go for a drive that will last the life of the server, and that means the X25-E.
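To put rough numbers on the endurance argument - note the P/E-cycle figures below are generic ballpark values for SLC and MLC NAND of that era, not Intel datasheet numbers, and write amplification is assumed to be 1.0:

```python
# Back-of-envelope lifetime host writes: capacity x program/erase cycles.
# Cycle counts are ASSUMED generic values (SLC ~100k, MLC ~10k), not
# Intel specifications; write amplification is assumed to be 1.0.

def total_write_capacity_tb(capacity_gb, pe_cycles):
    """Approximate lifetime host writes in TB."""
    return capacity_gb * pe_cycles / 1000

x25e_32gb  = total_write_capacity_tb(32, 100_000)   # SLC, assumed ~100k cycles
x25m_160gb = total_write_capacity_tb(160, 10_000)   # MLC, assumed ~10k cycles

print(f"X25-E 32GB : ~{x25e_32gb:,.0f} TB")    # ~3,200 TB
print(f"X25-M 160GB: ~{x25m_160gb:,.0f} TB")   # ~1,600 TB
```

Under those assumptions, even the small SLC drive out-endures the much larger MLC drive, which is the point being made above.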

Logs don't really need to go on the SSD - they only need sequential writes, so standard HDDs are fine. You also reduce contention for the SSD and reduce wear (although neither of these should really be significant). I'd stick them on the cheaper drives (which are the SAS drives).
 
I'd suggest the DL380 G7 SFF with 16 drive slots. Get the dual Westmere 6-core (X5650) and 48GB of 4GB RDIMMs. Also switch to the 1GB FBWC (never needs a new battery, and more cache is better).

The 300GB 15K 2.5" drives will be out shortly; the X25-E MLC will be out late this year. Also, HP resells the Fusion-io cards if you really want mad speed.

Honestly, with enough RAM: RAID 5 for the core storage, RAID 1 for log/tmp - and make sure you have enough bandwidth to dispatch I/O threads. If I had the 16-bay SFF cage for the 380, I'd run the on-board P410i/1GB FBWC plus an internal P410/1GB FBWC, so you have twice the I/O processing power.

I feel you. Right now I'm moving to iSCSI for SQL - while you might think it's slower, the SAN/iQ software with 8GB of RAM (per unit) accelerates reads and writes greatly.

The G6/G7 are about 15-25% faster using the same drives as the old G5 (P400), but you have some soft-NUMA issues with the G6/G7 once you go multi-socket.

I've found 80% of your SQL speed is tuning/coding/profiling/disk layout - most ERP systems assume you have a DBA who knows SQL Server and how to set it up, tune it, and keep it healthy. The stripe size (thank god for online RAID restriping) has a big effect as well.

Hold tight - there's a lot of business-class MLC coming out late this year. The new X25-E will kick the piss out of the old ones - hopefully they'll have tweaks for the CCISS Smart Array driver, because a pair of current-gen X25-Es can overload the controller right now and you end up with a bottleneck.

6 sticks of RAM is optimal for 1 socket at 1333MHz; 12 for 2 sockets. You'll drop to 800MHz if you use 9/18 sticks, but that cost is sometimes worth it.
 