An idea to make SSDs even more ridiculously fast

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
When I saw the price of the Fusion IO (80GB for $2,400), the first thing that struck me is that the cost per GB is over 3x higher than good-quality, low-latency DDR3-16000!!

For high-end SSDs, why not just use RAM as a write buffer? For the extreme enterprise stuff like Fusion IO, you could make the write buffer as big as the drive and just throw 80GB of DDR on the board, flushing to flash in the background. For normal consumer-grade drives, 1-2GB would be plenty. The only time you would ever write more than this in a burst would be when ripping a DVD (which the drive can easily keep up with even w/o the huge buffer).

To handle the case where power is lost before all data is backed up to flash, include a small battery that can be trickle-charged while the drive is in use. The battery doesn't need very high capacity - it takes under 2 minutes for HDDErase to go through my entire 80GB Intel X25-M, and the time to flush the write buffer should not be much more.
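The battery only needs to last as long as the flush, and the flush time scales linearly with buffer size and write rate. A quick back-of-the-envelope sketch (the 70 MB/s sustained write rate is an assumed figure, not a measured spec for any particular drive):

```python
# Worst-case time to drain a full DRAM write buffer to flash on battery.
# The buffer sizes and write rate below are assumed illustrative figures.

def flush_time_seconds(buffer_gb: float, write_mb_per_s: float) -> float:
    """Time to drain a full buffer at a sustained sequential write rate."""
    return buffer_gb * 1024 / write_mb_per_s

# Enterprise case: 80 GB buffer at an assumed 70 MB/s sustained
print(f"80 GB buffer: {flush_time_seconds(80, 70) / 60:.1f} minutes")

# Consumer case: 2 GB buffer at the same assumed rate
print(f"2 GB buffer: {flush_time_seconds(2, 70):.0f} seconds")
```

The consumer-sized buffer drains in well under a minute, so the battery really can be small; the full-drive buffer is a much taller order at the same write rate.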
 
soccerballtux

Dec 30, 2004
12,553
2
76
Taking it a step further, why not do that with hard drives? I always wanted a slot for an old RAM stick: buy a 256MB DDR stick of RAM and plug it into a slot on the top/bottom of the hard drive. The controller would have so much more write buffer to work with when using NCQ.
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
Originally posted by: soccerballtux
Taking it a step further, why not do that with hard drives? I always wanted a slot for an old RAM stick: buy a 256MB DDR stick of RAM and plug it into a slot on the top/bottom of the hard drive. The controller would have so much more write buffer to work with when using NCQ.

The penalty for cache flushes would be too high. For a lot of spindles at 10k or 15k RPM it can keep up fairly well, and this is why we see multi-gigabyte caches on intelligent hosts.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
What do you think the 64KB of cache is for? That 64KB of cache is SIGNIFICANTLY faster than DDR3 RAM, and under normal use it is not really at risk of running out.
Now, you could do 64KB cache -> 1GB of DDR3 -> SSD.
But the cost of designing the chipset goes up significantly, the speed benefits are not that great, and the risk of data loss goes up significantly (unless they add a large and expensive capacitor array to feed the drive as it writes back).
The limitation currently is really the amount of time a drive can remain powered by its capacitors after a power failure to write out unwritten data.
By definition the drive HAS to report the data as "written" to the OS even though it's only been put in the cache, which means that if it cannot make good on that, it has lied to and misled the OS and caused trouble. Some companies actually disable the cache on drives because they do not trust it for their database tasks.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
You might consider the fact that you already use the RAM in your PC as a write-back cache.

DDR3 is a lot faster than SRAM, and it requires a bit more power to maintain.
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126

This does not even address the risk of uncommitted data; controllers have tricks for handling that as well, and operating systems would need to support them too.
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
Related question: at what point does increasing cache sizes on an SSD not bring any more benefits? N*size of a block? 32MB? 1GB? Never?
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Intel doesn't use their cache for write-back.

Only the other guys do.

Guess what happens when power is lost.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Emulex
Guess what happens when power is lost.

I lose whatever updates I had made to my files prior to saving them?

Think it thru... what's the time difference between saving my file to cache (at 3Gb/s) and going on about my business while the controller flushes the cache contents to the underlying nonvolatile memory as fast as it can be written (say ~70MB/s), versus saving my file directly to the underlying media (at the same 70MB/s) while I stare at a rotating hourglass?

In either case my updated file isn't going to be "safe" from a power outage or system upset until the data are written to the nonvolatile media, whether it's coming from the on-controller cache of my hard drive or my system's cache in DRAM.
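The trade-off above can be put in numbers. A sketch with assumed rates (375 MB/s for the 3Gb/s link, ignoring encoding overhead, and 70 MB/s for the media):

```python
# Perceived save latency: write-back cache vs. writing straight to the media.
# All rates below are assumed illustrative figures.
file_mb = 100
cache_rate_mb_s = 375.0   # ~3 Gb/s interface to the cache (assumed)
media_rate_mb_s = 70.0    # sustained write rate to the media (assumed)

wait_with_cache = file_mb / cache_rate_mb_s      # you wait only for the cache
wait_without_cache = file_mb / media_rate_mb_s   # you wait for the media

# Either way, the data is only power-safe once the media write finishes,
# which takes file_mb / media_rate_mb_s in both cases.
print(f"wait with cache: {wait_with_cache:.2f} s")
print(f"wait without:    {wait_without_cache:.2f} s")
```

The exposure window to power loss is identical; only the time spent staring at the hourglass changes.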
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: jimhsu
Related question: at what point does increasing cache sizes on an SSD not bring any more benefits? N*size of a block? 32MB? 1GB? Never?

Depends on the usage pattern of the user. Seriously. The cache merely needs to be large enough that the user never saturates it with more read/write requests than the underlying physical media can execute and retire during the idle periods.

If I routinely write 500MB files then I'd want more than 500MB of cache, but at that point I'd probably never saturate a 1GB cache, so it would be irrelevant whether I had 1GB or 10GB of cache.

On spindle drives and SSDs, the biggest day-to-day impact on the end user's computing experience is small-file (4-64KB) writes. You don't need gigabytes of cache to effectively cache 500 writes of a 4KB file; a few MB is plenty, and you deliver sizable improvements in apparent system responsiveness.

The guys who want/need GB/s of bandwidth for manipulating GBs' worth of files are usually working with file sizes on the order of MBs to GBs already, so latency and system response aren't the issue (bandwidth is), and bandwidth for large files is easily purchased with a dedicated RAID card and the right number of spindles rolled into a RAID array.
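That sizing rule (the cache is big enough as long as idle-time draining keeps up with the bursts) can be sketched as a toy model. The burst sizes, idle gaps, and 70 MB/s drain rate below are all assumed illustrative figures:

```python
# Toy model of cache saturation: write bursts arrive, and the media drains
# the backlog during the idle gaps between them.

def saturates(cache_mb, bursts, drain_mb_per_s):
    """bursts: list of (burst_mb, idle_seconds_after) tuples.
    Returns True if the cache ever overflows."""
    backlog = 0.0
    for burst_mb, idle_s in bursts:
        backlog += burst_mb
        if backlog > cache_mb:
            return True  # burst arrived faster than the media could drain
        backlog = max(0.0, backlog - drain_mb_per_s * idle_s)
    return False

# Five 500 MB bursts with 10 s of idle after each, 70 MB/s drain (assumed)
workload = [(500, 10)] * 5
print(saturates(1024, workload, 70))  # 1 GB cache never overflows: False
print(saturates(256, workload, 70))   # 256 MB cache overflows at once: True
```

With 10 seconds of idle the media drains 700 MB between bursts, so the 1GB cache never fills; past that point, extra cache buys nothing, which is the point being made above.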
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Idontcare
Originally posted by: Emulex
Guess what happens when power is lost.

I lose whatever updates I had made to my files prior to saving them?

Think it thru... what's the time difference between saving my file to cache (at 3Gb/s) and going on about my business while the controller flushes the cache contents to the underlying nonvolatile memory as fast as it can be written (say ~70MB/s), versus saving my file directly to the underlying media (at the same 70MB/s) while I stare at a rotating hourglass?

In either case my updated file isn't going to be "safe" from a power outage or system upset until the data are written to the nonvolatile media, whether it's coming from the on-controller cache of my hard drive or my system's cache in DRAM.

Current tech, Scenario A:
I don't save my work, power goes out... my work is gone!

Current tech, Scenario B:
I save my work, power goes out some time later. My files are saved and safe.

80GB of RAM waiting to be flushed to SSD, Scenario A:
I don't save my work, power goes out... my work is gone!

80GB of RAM waiting to be flushed to SSD, Scenario B:
I save my work, power goes out some time later. My saved files are gone!
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: taltamir
PS... why 80GB of DDR3 on an SSD? you can have it as a standalone drive with a battery...
http://benchmarkreviews.com/in...=view&id=308&Itemid=60 - newer device

http://www.anandtech.com/storage/showdoc.aspx?i=2480 - first device on market

We had our very own resident ACARD reviewer here in the forums who wrote up quite a thorough end-user review as well, including CrystalDiskMark results (poster's name was davecason)... IIRC he had i-RAM tests too.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Originally posted by: Emulex
intel doesn't use their cache for write back.

only the other guys do.

guess what happens when power los.

This is why I said "include a small battery which can be trickle charged while the drive is in use" -- after power is lost, any remaining data can be flushed to the disk using battery power. And flushing to SSD is extremely fast, even in the worst case.
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
The concern is probably that writes to cache are reported as successful. This can leave the data in an inconsistent state: say you're writing 10 files and the cache reports all the writes as successful, but the power goes out after flushing the 5th file. Compare that to the power going out while the 5th file is actually being written straight to disk, where the changes can be rolled back. With a cache, the system doesn't know which file the power failure occurred at, because of course every write was reported as successful. This is why most mission-critical apps disable write caching.

PS: Yes, you could issue a write flush and wait for that to complete before committing the transaction, but most programs (as in internally developed source code) don't seem to bother with that.
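The flush-before-commit pattern mentioned in the PS looks roughly like this (a sketch; note that even fsync only pushes data toward the drive, and a drive with a volatile write cache can still lose it unless the drive honors the flush):

```python
# Forcing data to stable storage before treating a write as committed -
# the step "most programs don't seem to bother with".
import os

def durable_write(path: str, data: bytes) -> None:
    with open(path, "wb") as f:
        f.write(data)          # lands in the user-space buffer
        f.flush()              # push to the OS page cache
        os.fsync(f.fileno())   # ask the OS (and drive) to commit to media

# Write the record durably, then atomically rename to complete the commit,
# so a crash leaves either the old file or the new one, never half of each.
durable_write("journal.tmp", b"transaction record")
os.replace("journal.tmp", "journal.dat")
```

The write-then-rename at the end is the standard way to make the commit itself atomic on top of the durable write.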
 

mozartrules

Member
Jun 13, 2009
53
0
0
Originally posted by: jimhsu
say you're writing 10 files and the cache reports all writes to be successful, but the power goes out after flushing the 5th file

It can be much worse than that. One of the files might be only half-written, which may leave it corrupt, or worse, leave you not noticing that something is wrong with it.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: glugglug
Originally posted by: Emulex
Intel doesn't use their cache for write-back.

Only the other guys do.

Guess what happens when power is lost.

This is why I said "include a small battery which can be trickle charged while the drive is in use" -- after power is lost, any remaining data can be flushed to the disk using battery power. And flushing to SSD is extremely fast, even in the worst case.

This requires perfect accuracy in predicting how long it takes to write each file, and write times vary GREATLY from file to file.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Originally posted by: taltamir
Originally posted by: glugglug
Originally posted by: Emulex
Intel doesn't use their cache for write-back.

Only the other guys do.

Guess what happens when power is lost.

This is why I said "include a small battery which can be trickle charged while the drive is in use" -- after power is lost, any remaining data can be flushed to the disk using battery power. And flushing to SSD is extremely fast, even in the worst case.

This requires perfect accuracy in predicting how long it takes to write each file, and write times vary GREATLY from file to file.

No, it doesn't. The drive isn't aware of any "files"; just flush from the lowest LBA to the highest. Rewriting the entire SSD in the worst case takes only a couple of minutes, and for the 1-2GB write-back buffer, the worst case is probably less than 10 seconds.
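A flush at the drive level really is just a walk over dirty logical blocks in address order, with no notion of files. A minimal sketch (the dict-based flash stand-in and one-byte blocks are illustrative assumptions):

```python
# Flushing a write-back buffer without any notion of "files": walk the
# dirty-LBA map from lowest to highest address and commit each block.

def flush_buffer(dirty, write_block):
    """dirty: {lba: block_bytes}; write_block(lba, data) commits one block."""
    for lba in sorted(dirty):       # lowest LBA to highest
        write_block(lba, dirty[lba])
    dirty.clear()                   # buffer is empty once all blocks land

# Toy backing store standing in for the flash array
flash = {}
cache = {7: b"G", 2: b"B", 5: b"E"}
flush_buffer(cache, lambda lba, data: flash.__setitem__(lba, data))
print(sorted(flash))  # LBAs were written lowest to highest: [2, 5, 7]
```

The counterpoint in the reply below still applies: how long this loop takes depends entirely on how fast each block can actually be written.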
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: glugglug
Rewriting the entire SSD in the worst case takes only a couple minutes

No, it takes much, much longer than a few minutes.
It is true that it is not a matter of "files" so much as "data to be written", but the data to be written varies greatly in write speed.