
How does SF in place update work?

IanWorthington

Senior member
IIRC the new SandForce controllers apply compression before storing the data.

Anyone know how they cope with an in-place update which no longer fits in the same space because the new data compresses less well?

 
The controller doesn't report how much data it has actually written to the flash, and it won't let you write more than the advertised capacity even if everything you write is highly compressible (supporting that would be quite challenging technically, since it would require dynamic LBA sizes).

It uses the free space internally for wear leveling and other stuff.
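The capacity behaviour described above can be sketched in a few lines (illustrative names and numbers only, not SandForce's actual firmware): the advertised LBA count stays fixed, and compression savings only grow the internal free pool, never the host-visible capacity.

```python
import zlib

# Hypothetical numbers for illustration: a drive that advertises 1000 LBAs.
ADVERTISED_LBAS = 1_000          # what the OS sees; fixed regardless of compression
internal_pages_used = 0

def host_write(lba, data):
    global internal_pages_used
    if not 0 <= lba < ADVERTISED_LBAS:
        # Writes beyond the advertised capacity are simply refused.
        raise ValueError("write beyond advertised capacity")
    # Highly compressible data uses fewer internal pages; the extra room
    # goes to wear leveling and other internal bookkeeping, not to the host.
    internal_pages_used += max(1, len(zlib.compress(data)) // 4096)

host_write(500, b"A" * 8192)     # fine: within the advertised range
try:
    host_write(1_000, b"A")      # one past the end: refused
except ValueError:
    pass
```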
 
Aye, I understand that. What I don't understand, though, is how they cope with an update to a sector that will no longer fit in the space it previously occupied. How much data needs to be rewritten in that case?
 
Since the SF controller uses wear leveling, each LBA is mapped by the controller to some physical location. Just because you write data to the same LBA doesn't mean the controller will write it to the same physical position.

Therefore it'll write the data to a new position and then update the mapping to reflect the change.
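A toy sketch of that remapping idea (a hypothetical flash translation layer, not SandForce's real design): rewriting an LBA always lands on a fresh physical page, and the old page becomes garbage for the controller to collect later.

```python
# Minimal illustrative FTL (flash translation layer) mapping table.
# All names and the naive allocator are assumptions for the sketch.
class SimpleFTL:
    def __init__(self):
        self.mapping = {}     # LBA -> physical page number
        self.next_free = 0    # naive bump allocator for free pages

    def write(self, lba, data):
        # A rewrite of the same LBA never reuses the old physical page:
        # the data goes to a fresh page and the mapping is updated.
        phys = self.next_free
        self.next_free += 1
        old = self.mapping.get(lba)   # old page is now garbage to collect
        self.mapping[lba] = phys
        return phys, old

ftl = SimpleFTL()
first, _ = ftl.write(0x1000, b"v1")
second, stale = ftl.write(0x1000, b"v2")   # same LBA, new physical page
assert second != first and stale == first
```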
 
OK... But what if the data I'm rewriting no longer fits in the space available? How does SF cope with that?

There's always space available to write to any of the sectors on the drive. The controller never tells the operating system that the drive is bigger than the total uncompressed capacity of the flash memory chips in the drive, so the OS never tries to use more.
 
I'm clearly not explaining myself very well.

Say I have a very large file and I update in place a section of that file with data that does not compress as well as the data that was originally in that section.

This *could*, in a poor implementation, cause rolling rewrites for the entire file. I doubt that SF does that. But what *does* it do?
 
You've got some basic misconceptions it seems.

The controller has NO notion of files at all. Basically it just tells the OS how large it is (i.e. how many LBAs it has). The filesystem is responsible for managing files; the controller couldn't even tell you which file a given block belongs to (and considering deduplication in some modern filesystems, that's not even a simple function but a relation).

If the FS tells the controller to overwrite the LBA at position 0x1000 with new contents, the controller will look for some new space, write the data there, and then update the mapping. If the new data is larger, too bad: the controller just has that much less space for internal purposes. There's no guarantee that LBA X and LBA X+1 will be anywhere near the same place.
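That write path can be sketched like this (pure illustration with made-up page sizes, using zlib to stand in for whatever compression SandForce actually uses): data that compresses worse simply occupies more physical pages in the newly allocated extent, and the mapping is updated to point at it.

```python
import random
import zlib
from math import ceil

PAGE = 4096           # assumed page size for the sketch
mapping = {}          # LBA -> list of physical page numbers
next_free = 0         # naive bump allocator

def write(lba, data):
    global next_free
    compressed = zlib.compress(data)
    # Allocate however many pages the compressed data needs this time.
    pages_needed = ceil(len(compressed) / PAGE)
    extent = list(range(next_free, next_free + pages_needed))
    next_free += pages_needed
    mapping[lba] = extent          # old extent (if any) becomes garbage
    return pages_needed

random.seed(0)
small = write(0x1000, b"A" * 16384)             # highly compressible: few pages
large = write(0x1000, random.randbytes(16384))  # incompressible: more pages
assert large > small   # worse compression just costs more internal pages
```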
 

No, it could not cause rolling rewrites, because files are not and never have been stored as single contiguous blobs. Not ever, since the first day of computing.
Files are stored in sectors, so reading a file looks like "7 sectors: 4, 1002, 1003, 1004, 302, 303, 304". You can just change the pointers if needed. Jumping between scattered spots on the drive is time-consuming on a spindle drive, which is why we defragment (move the sectors so they are sequential for a single file). But there is no reason to make them sequential on an SSD.
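The pointer idea above in miniature (toy sector numbers reused from the example): replacing a middle segment means swapping pointers, not rewriting the rest of the file.

```python
# A file is just an ordered list of sector numbers (filesystem-level view).
file_sectors = [4, 1002, 1003, 1004, 302, 303, 304]

# Replace the middle segment (1003) with two new sectors elsewhere on disk;
# 9001 and 9002 are hypothetical freshly allocated sectors.
i = file_sectors.index(1003)
file_sectors[i:i + 1] = [9001, 9002]

# Only the pointer list changed; no other sector of the file was rewritten.
assert file_sectors == [4, 1002, 9001, 9002, 1004, 302, 303, 304]
```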

If you do an in-place update of a segment of a file (e.g. via defragmenting software, database software, some torrent clients, or other specialized software; selecting overwrite in a Windows file copy will not do that) and you replace a segment in the middle of the file with one that is larger than it was before, all that happens is that you use up extra pages somewhere else on the drive. Space is not wasted, but you get some fragmentation (which is irrelevant on an SSD; note that growing a portion in the middle of a file is not a normal operation, doesn't occur on spindle drives either, and is not what causes fragmentation).

Despite looking for it, I can't find exact specifics of how the SF compression scheme works. But based on basic storage knowledge that applies to all drives, even spindle disks, you will not get a rolling rewrite from altering a portion in the middle of a file; the controller just uses another page and updates the pointers to include it.

Theoretically, in a filesystem-level implementation that does it the plain wrong way (whole-file compression), making a point change in the file would require a full recompress of the entire file and then a full rewrite of the new file (which isn't rolling from the point of change: even if you change the end of the file, the entire file still needs replacing)... but I am not aware of any FS/drive controller ever actually doing that. Maybe if you are manually storing files in zip/rar, which I can see happening with a video game that stores its data in a zip file.
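For contrast, a quick sketch of that whole-file-compression worst case (using zlib as a stand-in for any whole-file scheme): a one-byte change forces a full decompress, patch, and recompress of the entire stream.

```python
import zlib

# A file stored as a single compressed stream (the "plain wrong" FS-level way).
original = b"header" + b"x" * 10000 + b"trailer"
stored = zlib.compress(original)

# Patch one byte in the middle: there is no way to splice into the middle of
# the compressed stream, so the whole file must be recompressed and rewritten.
plain = bytearray(zlib.decompress(stored))
plain[5000] = ord("y")
stored = zlib.compress(bytes(plain))

assert zlib.decompress(stored)[5000] == ord("y")
```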

You might find this interesting: http://www.anandtech.com/show/2829/3
 