MS SQL 2005 and SSD Wear

jpnye

Junior Member
Mar 15, 2012
3
0
0
I have a Dell PowerEdge 2950 with 64GB of RAM running MS SQL Server 2005.
RAID1 (OS) - 146GB
RAID10 (SQL data, TempDB, logs - basically everything SQL) - 4x 146GB, ~300GB usable

All of our SQL data is about 140GB. We are looking to add some space to our SQL Server, and instead of upgrading the RAID10 array to 600GB (4x 300GB) we are considering an OCZ VeloDrive 600GB PCIe SSD.

I have read that intensive I/O can prematurely burn out an SSD; others have said that's not much of a worry with today's SSDs.

I pulled some I/O stats from our MS SQL Server (see the query sketch below).

The sample covers 89 hours.

WRITES:
460,951 total writes across all DBs, including TempDB
35,193MB written (35,193MB / 89hrs x 24hrs/day x 365 days) = 3,463,940MB = an estimated ~3.4TB written per year

READS:
934,871 total reads
89,567MB read (89,567MB / 89hrs x 24hrs/day x 365 days) = 8,815,808MB = an estimated ~8.8TB read per year

TempDB alone:
READ - 9,067MB read (9,067MB / 89hrs x 24hrs/day x 365 days) = 892,437MB = an estimated ~892GB read per year
WRITE - 12,562MB written (12,562MB / 89hrs x 24hrs/day x 365 days) = 1,236,439MB = an estimated ~1.2TB written per year
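
(For anyone curious, these per-database numbers come from the cumulative I/O DMV that ships with SQL Server 2005. The sketch below is along the lines of what I ran - the exact query may have differed, and the byte-to-MB conversion is mine:)

-- Cumulative read/write I/O per database since the last SQL Server restart.
-- sys.dm_io_virtual_file_stats(NULL, NULL) returns one row per file of
-- every database; dividing bytes by 1048576 converts to MB.
SELECT  DB_NAME(vfs.database_id)                AS database_name,
        SUM(vfs.num_of_reads)                   AS total_reads,
        SUM(vfs.num_of_bytes_read) / 1048576    AS mb_read,
        SUM(vfs.num_of_writes)                  AS total_writes,
        SUM(vfs.num_of_bytes_written) / 1048576 AS mb_written
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id
ORDER BY mb_written DESC;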

So given these stats, would it make sense to add an SSD and:

A.) Transfer everything to the SSD?
B.) Leave TempDB and the log files on the RAID10 and move the databases to the SSD?
C.) Leave the data on the RAID10 and move TempDB and the transaction logs to the SSD?
D.) Steer clear of SSDs altogether?
E.) Other suggestions?

(A complete backup of all SQL databases is done nightly.)
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
5TB of data written per year is nothing. A simple 2.5" 240-256GB SSD with 32nm NAND would last 512 years at 5TB a year (256GB x 10,000 P/E cycles = 2,560TB that can be written before total NAND exhaustion; 2,560TB / 5TB per year = 512 years).

Do it.

You keep backups. Put it ALL on the SSD.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
We have a couple of shelves of SSDs in our SAN and we're not seeing anything out of the ordinary at that volume.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Wear will be much higher than advertised, and higher than seen in desktop wear testing, but I'm with the others: it's few enough writes to not be a big deal.

Whether or not to RAID in case of an odd drive failure is an open question, IMO, as SSDs seem to cause stranger problems than just dropping out: when something goes wrong it's typically a complicated firmware bug, not a routine failure. I'm not sure anyone has done sufficient testing, at least with public results, on whether RAID is useful for redundancy in real use, nor how common such failures actually are (in part because they tend to be at least as rare as mechanical HDD failures).

Also, given only 5TB/yr, is the PCIe drive necessary? Even assuming you're worried about peak times, I would think that at a 13GB/day average, any good 6Gbps drive(s) would still find ample time for idle GC. Or is the PCIe drive similar in cost to a speedy RAID controller (a new LSI at minimum) plus a drive?
 

razel

Platinum Member
May 14, 2002
2,337
93
101
The PowerEdge 2950 doesn't support more than 32GB of RAM. Also, from my novice SQL admin eyes, backed by our SQL expert here: with all that RAM, if that machine's sole purpose is to be a DB server, then your DB admin needs to learn how to tune MS SQL (especially 2005) so that it better utilizes RAM, rather than throwing hardware at it. Modern DB caching is FANTASTIC and far easier to configure than before. RAM latency is far lower than disk I/O latency, and that includes SSDs.
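
For example, capping the buffer pool so SQL Server uses the RAM without starving the OS is a one-line change (a sketch; the 57344MB value is purely illustrative, not a recommendation for this box):

-- Illustrative sketch: let SQL Server use most of the box's RAM while
-- leaving headroom for the OS. Requires sysadmin rights.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 57344;
RECONFIGURE;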
 

jpnye

Junior Member
Mar 15, 2012
3
0
0
razel said: The PowerEdge 2950 doesn't support more than 32GB of RAM. ... your DB admin needs to learn how to tune MS SQL (especially 2005) so that it better utilizes RAM, rather than throwing hardware at it.

Thank you all for the feedback.

The PE 2950 actually does support 64GB with the latest BIOS update and a move to Windows Server Enterprise Edition. We do have the majority of our T-SQL, SPROCs, and queries optimized, and disk I/O isn't really a concern - this server runs quite smoothly.

We are just adding some storage space. Instead of spending money to upgrade the RAID10 to larger drives, we figured it would be more advantageous to add an SSD: it would yield more space and better I/O performance, and give us three volumes to spread data across instead of just a larger RAID10.

The flooding in Thailand has caused a hard drive shortage and driven prices way up, if you can even locate drives. 10K or 15K SAS drives aren't cheap, and for a bit more a PCIe SSD looks really attractive.

I wasn't sure whether it's the amount written (in MB) or the number of writes I should be concerned with - roughly 45.4 million writes a year (individual write sizes unknown).

I'm just going to pull the trigger: add the SSD, move all the DBs onto it, run nightly backups to the RAID10 and then to tape, and write the transaction logs to the RAID10 array.
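
(For reference, the file moves themselves are straightforward in SQL 2005. A sketch with hypothetical database/file names and paths, where E: stands in for the SSD volume:)

-- Hypothetical names and paths: relocate a user database's data file
-- to the SSD (E:). The database is unavailable while offline.
ALTER DATABASE MyAppDB SET OFFLINE;
ALTER DATABASE MyAppDB MODIFY FILE (NAME = MyAppDB_Data, FILENAME = 'E:\SQLData\MyAppDB.mdf');
-- ...copy MyAppDB.mdf to E:\SQLData\ at the OS level while offline...
ALTER DATABASE MyAppDB SET ONLINE;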
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
jpnye said: I'm just going to pull the trigger: add the SSD, move all the DBs onto it, run nightly backups to the RAID10 and then to tape, and write the transaction logs to the RAID10 array.

Please tell me you are putting the transaction logs and tempdb in another location. Are you doing any type of RAID?
 

jpnye

Junior Member
Mar 15, 2012
3
0
0
KentState said: Please tell me you are putting the transaction logs and tempdb in another location. Are you doing any type of RAID?

This is what I'm looking for advice on.

Currently:
RAID1 (10K SAS, 146GB total) - OS
RAID10 (10K SAS, ~300GB total) - everything SQL, including TempDB and transaction logs

We're looking to add a PCIe SSD and either:

Move the data onto the SSD and leave the transaction logs and TempDB on the RAID10, or
Leave the data on the RAID10 and put TempDB and the transaction logs on the SSD.

I have heard arguments both ways: some say putting TempDB on the SSD improves performance; others say don't do it, because all the small reads and writes will burn out the SSD quickly.

I think at this point I will put all the DBs and TempDB on the SSD, write the backups and transaction logs to the RAID10 array, and monitor the lifespan of the SSD to see what happens. If need be, I can then shift TempDB back onto the RAID10 array.
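
(Shifting TempDB later is just a metadata change plus a service restart. A sketch using the default logical file names, with D: standing in hypothetically for the RAID10 volume:)

-- tempdb is recreated at startup, so only its file metadata needs to change.
-- 'tempdev' and 'templog' are the default logical names; the paths are hypothetical.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLTemp\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLTemp\templog.ldf');
-- The new paths take effect after the next SQL Server service restart.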

Given the estimated usage in the first post, are we even generating enough writes/transactions that burning out the SSD is a concern?
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
If you don't have redundancy, then putting anything on the SSD is a risk. You will still have problems in a failure with TempDB there, albeit with less risk of data loss than with the data or log files.

Consider the fact that if and when the SSD fails, you need to procure another. At that point you can put the temp files back on the RAID10 volume, but do you want to move backwards in performance? Can you afford to run slower for however long it takes to get another SSD?