Why don't all SSD makers use compression?

CakeMonster

Senior member
Nov 22, 2012
871
21
81
From what I've read about SandForce SSDs, there doesn't seem to be any performance penalty to using compression. And for the typical content stored on SSDs (OS, applications, user profiles) there seems to be great potential for reduced wear, since (much) less is written. Considering the time that has passed since SandForce compression was introduced, you'd think the controllers would be even more powerful by now and could compress even more easily (though, as I said, that never seemed to be an issue in the first place).

So... why don't all SSD makers do this now?
 

Blain

Lifer
Oct 9, 1999
23,643
2
81
One of the prime directives of a data storage device is maintaining the integrity of the data it holds.
Adding more layers of complexity to the write process creates more opportunities for data corruption.
Cell write longevity is secondary to the main purpose of a storage device.
 

_Rick_

Diamond Member
Apr 20, 2012
3,748
28
91
If you want to use compression, just do it at the file system level. Plenty of file systems support it.
Yes, it might be slower, but it's not magic.
Magic should be attempted only by Apple; everywhere else, people need to be able to reverse engineer what went wrong. Debugging a compression/decompression IC sounds like a fun time.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,763
0
76
Sandforce controllers have quite poor TRIM/garbage collection behaviour. I would hazard a guess that this is one of the trade-offs that comes from devoting so much silicon to compression. None of the other manufacturers want to make drives that only do well in particular benchmarks; they would rather produce a product that lasts and works well for its entire lifetime.
 

CakeMonster

Senior member
Nov 22, 2012
871
21
81
One of the prime directives of a data storage device is maintaining the integrity of the data it holds.
Adding more layers of complexity to the write process creates more opportunities for data corruption.
But is data corruption a real problem with this kind of hardware compression? In all the discussion and analysis of Sandforce, I can't even remember the risk of corruption being mentioned.

Cell write longevity is secondary to the main purpose of a storage device.
Fair enough; I suppose many think SSDs are "reliable" enough. But if the cost of compression is practically free, I'd rather have the reduced writes and better longevity.
 

CakeMonster

Senior member
Nov 22, 2012
871
21
81
If you want to use compression, just do it at the file system level. Plenty of file systems support it.
Yes, it might be slower, but it's not magic.
Sandforce controllers have quite poor TRIM/garbage collection behaviour. I would hazard a guess that this is one of the trade-offs that comes from devoting so much silicon to compression.
I might have missed the part about Sandforce's slow garbage collection. I just remember that it works in real time, which is practical for some workloads but not all. If that is true, it could be that the controller has limited CPU power. However, that is something that could be solved with the next generation of controllers and a node shrink.

I'd love to have my cake and eat it too, so to speak. If a controller CPU becomes cheap enough (and it probably will, sooner rather than later) it could even be put in mechanical drives. I have a feeling that this is more likely than any widely adopted file system that compresses by default in the foreseeable future.
 

Blain

Lifer
Oct 9, 1999
23,643
2
81
From what I've read about SandForce SSDs, there doesn't seem to be any performance penalty to using compression.

So... why don't all SSD makers do this now?
One of the prime directives of a data storage device is maintaining the integrity of the data it holds.
Adding more layers of complexity to the write process creates more opportunities for data corruption.
Cell write longevity is secondary to the main purpose of a storage device.
But is data corruption a real problem with this kind of hardware compression?
In all the discussion and analysis of Sandforce, I can't even remember the risk of corruption being mentioned.
I was simply addressing your question, and offering a hypothesis, as to why all SSD makers don't use compression the way Sandforce does.
I'm an end user, and can't speak for a whole industry.
 

bradley

Diamond Member
Jan 9, 2000
3,664
1
0
I would imagine that if the larger manufacturers wanted to create unstable and kludgy SSDs, they wouldn't need to follow Sandforce's lead. To their engineers, on-the-fly data compression was likely a flashy solution (on top of an already immature technology) looking for a problem.
 

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
It is funny to hear Sandforce compression being praised when Sandforce SSDs have had stability and performance problems for years now...basically since Sandforce released their first SSD controller.

Perhaps there is a way to implement compression in an SSD controller that does not cause a lot of problems, but we certainly have not seen it yet. Strangely enough, an AnandTech article just came out on the subject of problems arising from Sandforce's compression.

The latest generation of SSDs can write (without compression) at about 500MB/s. So any SSD compression algorithm would need to have that sort of throughput, and to maintain low latency, it probably could not use a block size any larger than 4KiB. And the compression algorithm would need to run on a relatively low power chip. Even with custom silicon, it would be extremely difficult to achieve good compression ratios while processing at 500MB/s with only 4KiB blocks. Given that SSDs are already writing at 500MB/s without using compression, the only benefit of such low-ratio compression would be to give the SSD a little bit of extra spare area to work with, thus decreasing the write amplification a bit. But since the same thing can be achieved (without compression) by reserving a little extra spare area, compression hardly seems worth the potential performance hits with TRIM and likely firmware bugs and possible stability issues.
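The block-size point above can be made concrete with a rough sketch (using Python's zlib purely for illustration; Sandforce's actual algorithm is proprietary and unknown): compressing data as independent 4 KiB blocks always does at least as badly as compressing it as one long stream, because every block starts over with an empty dictionary and pays its own header overhead.

```python
import zlib

# Illustrative sample: repetitive text, loosely standing in for
# typical OS/application data (not a real drive workload).
data = b"The quick brown fox jumps over the lazy dog. " * 2000  # ~90 KB

def compressed_size(buf, block_size=None):
    """Compressed size of buf: either one zlib stream, or
    independently compressed fixed-size blocks (fresh dictionary
    per block, as a controller mapping 4 KiB sectors would need)."""
    if block_size is None:
        return len(zlib.compress(buf))
    return sum(
        len(zlib.compress(buf[off:off + block_size]))
        for off in range(0, len(buf), block_size)
    )

whole = compressed_size(data)            # one long stream
blocks_4k = compressed_size(data, 4096)  # independent 4 KiB blocks

print(len(data), whole, blocks_4k)
```

On this sample both still compress well, but the per-block total is noticeably larger than the single stream, and the gap only widens on less redundant data.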
 

Coup27

Platinum Member
Jul 17, 2010
2,130
2
76
The latest generation of SSDs can write (without compression) at about 500MB/s. So any SSD compression algorithm would need to have that sort of throughput, and to maintain low latency, it probably could not use a block size any larger than 4KiB. And the compression algorithm would need to run on a relatively low power chip. Even with custom silicon, it would be extremely difficult to achieve good compression ratios while processing at 500MB/s with only 4KiB blocks. Given that SSDs are already writing at 500MB/s, the only benefit of such low-ratio compression would be to give the SSD a little bit of extra spare area to work with, thus decreasing the write amplification a bit. But since the same thing can be achieved by adding a little extra spare area, it hardly seems worth the potential performance hits with TRIM and likely firmware bugs and possible stability issues.
+1. The competition has eaten away at Sandforce's speed advantage so much that it's gone. Sandforce is now left with its USP as its Achilles' heel.
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
It is funny to hear Sandforce compression being praised when Sandforce SSDs have had stability and performance problems for years now...basically since Sandforce released their first SSD controller.

Perhaps there is a way to implement compression with an SSD controller that does not cause a lot of problems, but we certainly have not seen it yet. Strangely enough, an anandtech article just came out on the subject of problems arising from Sandforce's compression.

The latest generation of SSDs can write (without compression) at about 500MB/s. So any SSD compression algorithm would need to have that sort of throughput, and to maintain low latency, it probably could not use a block size any larger than 4KiB. And the compression algorithm would need to run on a relatively low power chip. Even with custom silicon, it would be extremely difficult to achieve good compression ratios while processing at 500MB/s with only 4KiB blocks. Given that SSDs are already writing at 500MB/s without using compression, the only benefit of such low-ratio compression would be to give the SSD a little bit of extra spare area to work with, thus decreasing the write amplification a bit. But since the same thing can be achieved by adding a little extra spare area, it hardly seems worth the potential performance hits with TRIM and likely firmware bugs and possible stability issues.
I agree. While lower WA and fewer NAND writes may sound like a killer deal, I've never been a fan of using tricks to achieve that. Compression is way too limited to be a solution for NAND endurance, and I'm sure all the major manufacturers know that. Eventually we will need something other than NAND anyway, so why concentrate on beating a dead horse when the money could be invested in researching the successor to NAND?
 

lamedude

Golden Member
Jan 14, 2011
1,200
4
81
I never understood why they did it in the first place. By clicking a single checkbox I can compress an entire drive in Windows, and I actually get to use the space saved.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,763
0
76
I never understood why they did it in the first place. By clicking a single checkbox I can compress an entire drive in Windows, and I actually get to use the space saved.
They did it to get more data written to the same flash, to make their drives faster. Of course, it's only going to do that if:
a) the data is compressible.
b) the drive speed is limited by flash performance.

(a) is definitely true for lots of the files that Windows writes out. (b) stopped being true when everyone hit the limitations of the SATA interface. So while it may give benefits on smaller transfers, it's also adding latency, which is often a killer for random IO performance. It wasn't really about space saving, although that did benefit wear levelling somewhat.
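The interaction of (a) and (b) can be sketched with a toy model (my own illustrative numbers, not measurements of any drive): compression multiplies the flash's effective write rate, but the host interface still caps what the user sees.

```python
def effective_write_mbs(flash_mbs, interface_mbs, compression_ratio):
    """Toy model: writing compressed data lets the flash absorb
    logical bytes faster by the compression ratio, but throughput
    can never exceed the host interface limit."""
    return min(interface_mbs, flash_mbs * compression_ratio)

# (b) holds: flash at 150 MB/s is the bottleneck, so 2:1
# compressible data doubles the visible write speed.
print(effective_write_mbs(150, 550, 2.0))   # 300.0

# (b) fails: flash already writes at ~500 MB/s, so the ~550 MB/s
# interface caps the gain from the same 2:1 data.
print(effective_write_mbs(500, 550, 2.0))   # 550
```

In the second case the compressible-data advantage all but disappears, which matches the observation that the benefit faded once drives saturated SATA.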
 

taltamir

Lifer
Mar 21, 2004
13,586
6
76
Why don't all SSD makers use compression?
It's costly and time-consuming to develop and QA (and there is much more that can go wrong, so much more needs to be QA'd).
Sandforce has had a variety of issues with their firmware and even hardware.
Sandforce also licenses their technology, so if someone decides that it is a good idea, they are tempted to just license it. For example, Intel.
 

benwood

Member
Feb 15, 2004
107
0
0
...

I'd love to have the cake and eat it too so to speak. If a controller cpu becomes cheap enough (and it probably will sooner rather than later) it can even be put on mechanical drives. I have a feeling that this is more likely than any widely adopted file system that compresses as a default setting in the foreseeable future.
Data compression technology has been used on mechanical drives, in a way. Remember STAC Technologies and their Stacker software from the early 1990s?
 

lamedude

Golden Member
Jan 14, 2011
1,200
4
81
Even the Win95 DriveSpace help file said compressing the drive would speed up the HDD. Now, in the era of "why do I need this many GHz/cores?", we're using dedicated controllers to do it, because right-clicking on the drive and checking "compress this drive to save space" is hard?
 

Hulk

Platinum Member
Oct 9, 1999
2,570
20
81
I see a lot of people writing about Sandforce issues, but I don't remember seeing much about those issues currently. It seems they got ironed out a year or more ago. It's turned into a "scare fiasco" that doesn't seem true today.

How about the Intel 330/335 series with Sandforce? I haven't read about them having problems. Or am I missing something? Samsung is having some issues with the new 840s; do we put the "kiss of death" on them as well?

There is no doubt that compression reduces write amplification and makes the drive faster with compressible data. If you are going to use the drive filled to capacity with incompressible video files, then performance will of course suffer, but that will happen with any SSD, albeit to a lesser extent.

I love my Intel 330 Sandforce and bought one big enough that I only have it about half filled. I bet it will last 10 years or more as I move it through my various systems over the years.
 

Cerb

Elite Member
Aug 26, 2000
17,485
33
86
Even the Win95 DriveSpace help file said compressing the drive would speed up the HDD. Now, in the era of "why do I need this many GHz/cores?", we're using dedicated controllers to do it, because right-clicking on the drive and checking "compress this drive to save space" is hard?
Pretty much, but the CPU and file system features are removed from the equation.

The problems are that performance is no longer uniform, it's easy to get benchmark scores that won't reflect real usage, and compression is generally not Earth-shatteringly high for client usage (20-30% seems to be typical, from what I've seen of actual users reporting WA for their drives vs. the competition). Other controller tweaks and better-quality flash will make more of a difference in the long run.

That said, for storing DBs, cookie directories, cached web pages, etc., drive-level compression, to improve performance and longevity, makes a lot of sense (also, these are going on EXT3 and EXT4 volumes, much of the time, with no integrated dynamic compression support).
 

bradley

Diamond Member
Jan 9, 2000
3,664
1
0
The Marvell 88SS9174 barely needed anything "ironed out." Why would anyone bother being a Sandforce guinea pig, based on their record of failed promises?

Sandforce has already proven they are better at marketing now-irrelevant *data compression* than at designing and engineering stable drives. Sandforce had buyers believing they needed data compression, and OCZ had buyers believing they needed weekly firmware updates. It's no coincidence that they are the only companies providing such products. lol

I would be pissed as a paying consumer, waiting a year or two for a stable product when data integrity is at a premium. Consumer NAND products are already fragile stopgaps coming up on a short usable life.
 

Hulk

Platinum Member
Oct 9, 1999
2,570
20
81
Well, speaking as a Sandforce "guinea pig", I'm pretty happy with my Intel 330 240GB drive that I picked up for $126 including taxes and shipping ;) No ironing out needed!

And despite what some people may think, ANY drive can fail at ANY time. If your data is important, be religious about backups.
 

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
You seem to be poorly informed on Sandforce problems. TRIM was completely broken on many Sandforce SSDs for months, and a few of them still don't have it fixed. I'm not talking about the poor implementation of TRIM on Sandforce SSDs -- that has always been there and still is (even on Intel Sandforce SSDs). I am talking about TRIM being completely broken with some of the early v5 firmwares.
 

Hulk

Platinum Member
Oct 9, 1999
2,570
20
81
Like I wrote, no problems with my Sandforce Intel 330. I run the Intel SSD app (whatever it's called) now and then, and check my stats, TRIMs, etc.

I have nothing but love for this drive :)

I guess I made a huge mistake buying a Sandforce-based Intel SSD, but so far I must have been lucky to avoid all the problems :eek:
 

jwilliams4200

Senior member
Apr 10, 2009
532
0
0
I guess I made a huge mistake buying a Sandforce based Intel SSD...
No, just a tiny mistake. You could have done better, but since the performance differences between SSDs, even those with poor TRIM, are relatively small, even mediocre-performing SSDs are good enough for most people.

P.S. You might want to think about whether SSD love is healthy -- if it dies, will you feel bereaved? ;)
 
