OCZ Vertex 2 with 25nm NAND flash reported slower


czesiu

Member
Feb 25, 2009
Uh, having about 35 MB/s sequential write speed (or about a fourth of a modern 5,400 RPM drive) sure as hell can limit usage. And not just in "copy large stuff to the drive" benchmarks, but also when using the drive for Photoshop and similar things.

It seems it is still ~270 MB/s (ATTO).

http://www.ocztechnologyforum.com/forum/showthread.php?84087-Poor-Vertex-2-60GB-Performance
(both drives at 55GB, which makes this comparison weird)

http://www.ocztechnologyforum.com/f...and-load-times&p=601778&viewfull=1#post601778
 

czesiu

Member
Feb 25, 2009
Oh, I see, it's just the normal, everyday, incompressible data that makes it look bad.

I wouldn't let that minor fact dissuade me from purchasing that drive. :rolleyes:

This ATTO/compressible/incompressible issue has been discussed many times in the past.

I've been through 9 SSDs (Intel, OCZ, GSkill, & Crucial), and besides the first-generation GSkill, the OCZ Vertex 2s were the worst of the lot.

Incompressible would mean zip/rar/video/audio? Someone should benchmark normal Windows file copying... :)

Why was the Vertex the worst?
 

jwilliams4200

Senior member
Apr 10, 2009
Incompressible would mean zip/rar/video/audio?

A lot of data is incompressible to the Sandforce controller. It is not very good at compressing data. Give it a stream of zeros like ATTO and it can compress like crazy. But add a 1 here and there into the stream, and suddenly it cannot compress very much.
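For illustration, here is a rough sketch in Python (zlib just stands in for whatever algorithm the controller actually runs; the point is only the contrast between ATTO-style zeros and genuinely incompressible data):

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

buf = 1024 * 1024  # 1 MiB test buffer
print("zeros :", ratio(b"\x00" * buf))    # ~0.001: compresses almost completely
print("random:", ratio(os.urandom(buf)))  # ~1.0: no compression possible at all
```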
 

Nebor

Lifer
Jun 24, 2003
Incompressible would mean zip/rar/video/audio? Someone should benchmark normal Windows file copying... :)

Why was the Vertex the worst?

I installed Windows 7 from a USB flash drive, expecting it to be fast. It was not. My regular ol' 7200 RPM hard drive can write sequential data 4x faster than my Vertex 2 drive. :(
 

frostedflakes

Diamond Member
Mar 1, 2005
A lot of data is incompressible to the Sandforce controller. It is not very good at compressing data. Give it a stream of zeros like ATTO and it can compress like crazy. But add a 1 here and there into the stream, and suddenly it cannot compress very much.
Sandforce claims a typical write amplification of 0.5 for their controllers, which means the controllers are actually pretty good at compressing data. Keep in mind that write amplification below 1.0 isn't possible without compression; anything under that is achieved by compressing what you're writing to the drive. So the typical WA of 0.5 suggests that, on average, what you write to the drive can be compressed by about 50%.

I mean if you think about it, most people aren't writing video/music/other compressed stuff to an SSD. It contains the OS and program files, a lot of which is probably pretty compressible.

SandForce states that a full install of Windows 7 + Office 2007 results in 25GB of writes to the host, yet only 11GB of writes are passed on to the drive. In other words, 25GB of files are written and available on the SSD, but only 11GB of flash is actually occupied. Clearly it’s not bit-for-bit data storage.
This is the kind of stuff you generally put on an SSD and it benefits a ton from DuraClass.
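A quick sanity check of those numbers (the 25GB/11GB figures are from the quote above; write amplification is just NAND writes divided by host writes):

```python
# Back-of-the-envelope check of the write-amplification claim.
host_writes = 25.0  # GB sent by the host (Win7 + Office 2007 install, per the quote)
nand_writes = 11.0  # GB actually written to flash, per the same quote
print("write amplification: %.2f" % (nand_writes / host_writes))  # 0.44
```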
 

jwilliams4200

Senior member
Apr 10, 2009
Sandforce claims a typical write amplification of 0.5 for their controllers, which means the controllers are actually pretty good at compressing data.

LOL. Sandforce claims a lot of things that are inflated or simply not true.

I mean, if you actually think about it, the Sandforce controller does not have a lot of processing power, and it is not going to be able to compress much of anything at a throughput of 270MB/s with its puny processor.
 

frostedflakes

Diamond Member
Mar 1, 2005
Do you have anything to discredit their claim? Unless you do, I'd tend to trust them over some random guy on an internet forum who obviously has an axe to grind with the company. The "up to" read/write specs they claim, for example, may have some caveats (only attainable with highly compressible data), but Sandforce has always been upfront about these limitations and never tried to hide them.

edit: The Sandforce controller likely has dedicated logic for compression.
 

jimhsu

Senior member
Mar 22, 2009
Unless SandForce is using some sort of EXE/DLL packing (highly, HIGHLY unlikely, not least because it is incredibly time-consuming to do), the best that compression can do is somewhere around 50% for typical program files likely to be stored on an SSD:

http://www.maximumcompression.com/data/exe.php

This does not explain how 270MB/s is valid on a drive that does 35MB/s on incompressible data. I'd say ATTO data is completely uncharacteristic of any normal data set aside from extremely sparse scientific data (which takes up terabytes and thus isn't stored on SSDs).
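A rough way to see the gap (figures are the ones from this thread, and this assumes both speeds are limited by the same NAND bandwidth):

```python
# If ~35 MB/s on incompressible data is the real NAND limit, sustaining
# ATTO's ~270 MB/s would require shrinking the data to ~13% of its size,
# i.e. an ~87% reduction -- far beyond the ~50% EXE-packing ceiling above.
atto_speed = 270.0           # MB/s reported by ATTO (compressible data)
incompressible_speed = 35.0  # MB/s on incompressible data
reduction = 1 - incompressible_speed / atto_speed
print("implied size reduction: %.0f%%" % (100 * reduction))  # ~87%
```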
 

jwilliams4200

Senior member
Apr 10, 2009
Unless SandForce is using some sort of EXE/DLL packing (highly, HIGHLY unlikely, not least because it is incredibly time-consuming to do), the best that compression can do is somewhere around 50% for typical program files likely to be stored on an SSD:

http://www.maximumcompression.com/data/exe.php

And those compression ratios are obtained on powerful CPUs. And for most of them, the compression algorithm is slow -- throughput much lower than 270MB/s.

When you consider that the Sandforce processor cannot be using much more than 1W or so, it becomes completely obvious that they are totally incapable of achieving an average of 50% compression ratio on general data. Anyone who believes that Sandforce is averaging anywhere close to 50% should contact me, I've got a lot of stuff to sell you!

It is really crazy that anyone with any sense would believe such claims from Sandforce. Especially since it is easy to test and see for oneself.
 

czesiu

Member
Feb 25, 2009
My Vertex 2 can only write data at a maximum of 37 MB/s, no matter what kind of workload it is (sequential, 4K, etc.). People with the older 34nm drives can write at around 190 MB/s from what I've seen.

For sequential data, my old 7200 RPM hard drive can do around 120 MB/s. Disappointing.

And ATTO shows 270?
Could you also benchmark file-transfer writes in Windows?
 

frostedflakes

Diamond Member
Mar 1, 2005
And those compression ratios are obtained on powerful CPUs. And for most of them, the compression algorithm is slow -- throughput much lower than 270MB/s.

When you consider that the Sandforce processor cannot be using much more than 1W or so, it becomes completely obvious that they are totally incapable of achieving an average of 50% compression ratio on general data. Anyone who believes that Sandforce is averaging anywhere close to 50% should contact me, I've got a lot of stuff to sell you!

It is really crazy that anyone with any sense would believe such claims from Sandforce. Especially since it is easy to test and see for oneself.
I think you seriously underestimate what can be done with ASICs. If you have dedicated logic for compression, deduplication, etc. you can achieve much higher throughput than on a CPU while using far, far less power. That's always been the trade-off, dedicated logic is far more efficient, but obviously it can't do anything other than what it was designed for. CPUs are less efficient but have the advantage of being general purpose processors.

Also, at a 50% compression ratio, the hardware would only be processing data at 135MB/s; 270MB/s would be the effective throughput, not actual. edit: Actually I don't think that's right, my bad. But the point still stands, throughput is much higher with an ASIC. Just because your CPU can only do 5MB/s or something like that with LZMA compression doesn't mean dedicated logic is that slow.

http://www.heliontech.com/comp_lzrw.htm
 

Emulex

Diamond Member
Jan 28, 2001
Everyone knows the SandForce isn't much better than the Intel X25-M when using CDM/AS SSD with randomized data.

Use the compact command to make your DLLs smaller. EXEs are compressed already, IIRC; DLLs not so much. Compact both of them and it will tell you per file. Then you can use your big quad core to decompress instead of a tiny ASIC. Just sayin'.
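For example, something like this (a sketch only; the directory path is a made-up example, and Python is used here just for consistency with the other snippets -- compact.exe's /C and /S flags are the standard compress/recurse options):

```python
import subprocess

# Hypothetical target directory; /C compresses, /S:<dir> recurses into
# subdirectories, and the *.dll pattern limits the run to DLL files.
# compact prints the per-file before/after sizes as it goes.
target = r"C:\Program Files\SomeApp"
subprocess.run(["compact", "/C", "/S:" + target, "*.dll"], check=True)
```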
 

Voo

Golden Member
Feb 27, 2009
I think you seriously underestimate what can be done with ASICs. If you have dedicated logic for compression, deduplication, etc. you can achieve much higher throughput than on a CPU while using far, far less power
True, the problem is not necessarily processing power (you usually get orders of magnitude better performance from dedicated hardware than from general-purpose circuits), but the fact that SF must compress data in pretty small chunks, which limits the achievable compression. And if you use NTFS compression or something similar on your user profile, you'll see that you get maybe a 30% reduction (I tested that a long time ago, your mileage may vary -- would be interesting to get new numbers), which would be the upper bound.
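A minimal sketch of that chunking effect (zlib as a stand-in for the controller's unknown algorithm, this script's own source repeated as stand-in data; the exact numbers don't matter, only that per-chunk compression loses all the cross-chunk redundancy):

```python
import zlib

# Any semi-compressible byte stream will do as test data.
data = open(__file__, "rb").read() * 200

# Compressing the whole stream at once can exploit every repetition.
whole = len(zlib.compress(data)) / len(data)

# Compressing each 4 KiB block independently, as a drive controller must,
# cannot see redundancy that spans block boundaries.
chunk = 4096
chunked = sum(len(zlib.compress(data[i:i + chunk]))
              for i in range(0, len(data), chunk)) / len(data)

print("whole-stream ratio:", round(whole, 3))    # small: sees all redundancy
print("4 KiB-chunk ratio :", round(chunked, 3))  # noticeably worse
```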

Also, the fact that every real-life benchmark shows we're far away from their optimum throughput makes it pretty clear.
 

Nebor

Lifer
Jun 24, 2003
And ATTO shows 270?
Could you also benchmark file-transfer writes in Windows?

Yes, my ATTO scores are "normal."

How do you benchmark file transfer writes in Windows? Just transfer a large file and see what the transfer speed is?
 

czesiu

Member
Feb 25, 2009
Yes, my ATTO scores are "normal."

How do you benchmark file transfer writes in Windows? Just transfer a large file and see what the transfer speed is?
Yes... just try to copy some files (I guess min size ~100MB, to see if it is really capped at 37 MB/s) from a fast source and measure the time (TeraCopy might also display transfer speeds correctly).
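Something like this would do as a stand-in for a timed copy (a rough sketch; the target path is a made-up example, and random bytes show a SandForce drive's worst case since the controller can't compress them):

```python
import os
import time

path = r"E:\copytest.bin"  # hypothetical file on the drive under test
data = os.urandom(256 * 1024 * 1024)  # 256 MiB of incompressible data

start = time.perf_counter()
with open(path, "wb") as f:
    f.write(data)
    f.flush()
    os.fsync(f.fileno())  # force the data to the drive, not the OS cache
elapsed = time.perf_counter() - start
print("write speed: %.1f MB/s" % (256 / elapsed))
```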
 

jwilliams4200

Senior member
Apr 10, 2009
I think you seriously underestimate what can be done with ASICs. If you have dedicated logic for compression, deduplication, etc. you can achieve much higher throughput than on a CPU while using far, far less power. That's always been the trade-off, dedicated logic is far more efficient, but obviously it can't do anything other than what it was designed for. CPUs are less efficient but have the advantage of being general purpose processors.

http://www.heliontech.com/comp_lzrw.htm

You clearly overestimate what can be done with a Lempel-Ziv FPGA with a power budget of about 1W. First of all, Lempel-Ziv is old compression technology, and even when run under ideal conditions (able to scan an entire file before generating the symbol table), it will not achieve a compression factor anywhere near 50% on the data most users have on their HDDs.

Second, the top throughput on the chart on the page you linked to is only 175 MB/s (1.4Gbps). So, we have the LZ algorithm with much worse than 50% compression, a throughput of only 175 MB/s, and some unspecified power consumption to achieve this thoroughly underwhelming performance.

I repeat, I find it amazing that anyone actually believes Sandforce's claim that an average compression factor of 50% will be achieved for the typical data on most user's drives.

The funny thing is, anyone foolish enough to purchase a Sandforce drive could easily test the average compression ratio. Say you have a 60GB Sandforce SSD that has 40GB in use. Take another SSD (I suggest a 120GB Intel X25-M or 128GB Crucial C300), and create an uncompressed tar archive (or equivalent) of the 40GB onto the 2nd SSD. Also create a 40GB random file on the 2nd SSD. Secure erase the Sandforce, and use dd or equivalent to write the random file to the Sandforce. Say that the speed is 100 MB/s. Then secure erase the Sandforce, and dd the tar archive over to the Sandforce. Say the speed is 111 MB/s. Then the average compression factor is about 90% (100/111).
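The arithmetic of that last step, spelled out (using the example speeds above):

```python
# The speed ratio of incompressible vs. real data approximates the
# average compression factor the controller actually achieved.
random_speed = 100.0  # MB/s writing the 40GB random (incompressible) file
tar_speed = 111.0     # MB/s writing the 40GB tar of real user data
print("average compression factor: %.2f" % (random_speed / tar_speed))  # 0.90
```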
 

Voo

Golden Member
Feb 27, 2009
The funny thing is, anyone foolish enough to purchase a Sandforce drive could easily test the average compression ratio. Say you have a 60GB Sandforce SSD that has 40GB in use. Take another SSD (I suggest a 120GB Intel X25-M or 128GB Crucial C300), and create an uncompressed tar archive (or equivalent) of the 40GB onto the 2nd SSD. Also create a 40GB random file on the 2nd SSD. Secure erase the Sandforce, and use dd or equivalent to write the random file to the Sandforce. Say that the speed is 100 MB/s. Then secure erase the Sandforce, and dd the tar archive over to the Sandforce. Say the speed is 111 MB/s. Then the average compression factor is about 90% (100/111).
The funny thing is you can just write some random stuff to the drive without the need for all the complicated rest xX

Which, incidentally, is what AS SSD and co. are more or less doing, but then anyone can hack together a small exe that writes a few GB of data to a drive in like 5 minutes if they want to see it for themselves.
 

frostedflakes

Diamond Member
Mar 1, 2005
You clearly overestimate what can be done with a Lempel-Ziv FPGA with a power budget of about 1W. First of all, Lempel-Ziv is old compression technology, and even when run under ideal conditions (able to scan an entire file before generating the symbol table), it will not achieve a compression factor anywhere near 50% on the data most users have on their HDDs.

Second, the top throughput on the chart on the page you linked to is only 175 MB/s (1.4Gbps). So, we have the LZ algorithm with much worse than 50% compression, a throughput of only 175 MB/s, and some unspecified power consumption to achieve this thoroughly underwhelming performance.

I repeat, I find it amazing that anyone actually believes Sandforce's claim that an average compression factor of 50% will be achieved for the typical data on most user's drives.

The funny thing is, anyone foolish enough to purchase a Sandforce drive could easily test the average compression ratio. Say you have a 60GB Sandforce SSD that has 40GB in use. Take another SSD (I suggest a 120GB Intel X25-M or 128GB Crucial C300), and create an uncompressed tar archive (or equivalent) of the 40GB onto the 2nd SSD. Also create a 40GB random file on the 2nd SSD. Secure erase the Sandforce, and use dd or equivalent to write the random file to the Sandforce. Say that the speed is 100 MB/s. Then secure erase the Sandforce, and dd the tar archive over to the Sandforce. Say the speed is 111 MB/s. Then the average compression factor is about 90% (100/111).
It was just one example; other implementations may be faster (the example was on 65nm, and at 40nm or 45nm the hardware should be able to operate at higher frequencies and achieve higher throughput). I was just showing that an ASIC can be far, far faster than even a top-of-the-line CPU at tasks like compression and decompression. And <15k gates should give an idea of the power consumption (hint: it's very small, 15,000 gates is nothing at 65nm and smaller nodes). And it's been mentioned before that Sandforce might not use compression but other techniques like deduplication. I'm not sure how this compares in terms of throughput, compression ratio, etc.
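For what block-level deduplication would look like in principle, here is a rough sketch (the 4 KiB block size and SHA-1 hashing are my assumptions for illustration, not anything Sandforce has disclosed):

```python
import hashlib
import os

def unique_fraction(data: bytes, block: int = 4096) -> float:
    """Fraction of fixed-size blocks that are unique and must be stored."""
    hashes = {hashlib.sha1(data[i:i + block]).digest()
              for i in range(0, len(data), block)}
    return len(hashes) / max(1, len(data) // block)

# Example: data with repeated 4 KiB blocks dedupes well, random data doesn't.
src = open(__file__, "rb").read()
src += b"\x00" * (-len(src) % 4096)  # pad to a 4 KiB block boundary
print("repeated data:", unique_fraction(src * 50))                   # ~1/50
print("random data  :", unique_fraction(os.urandom(len(src) * 50)))  # ~1.0
```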

I don't know what else to say, though. Obviously I tend to trust the info Sandforce releases to the public and you don't. Obviously I'm not going to change your mind. I'd love to see it tested if this is possible.
 

jwilliams4200

Senior member
Apr 10, 2009
The funny thing is you can just write some random stuff to the drive without the need for all the complicated rest xX

Which, incidentally, is what AS SSD and co. are more or less doing, but then anyone can hack together a small exe that writes a few GB of data to a drive in like 5 minutes if they want to see it for themselves.

You missed the context. Some people believe Sandforce's absurd claims that for the data most people are writing to the drives, the compression will be much higher than that achieved with random data like AS-SSD writes. People who believe such things will not be convinced by writing random data. They would need to write actual data that they store on their drive.
 

frostedflakes

Diamond Member
Mar 1, 2005
And if I had wings, I could fly. Too bad I don't have wings. And Sandforce does not have a magical compression chip, just a lot of hot air.
To be fair, it's not like you've provided anything factual to refute Sandforce's claims. You're basically asking me to believe your hot air instead of theirs. :hmm: