Can TRIM work on a drive that never rests?

Page 2 - AnandTech Forums

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
Mark I am not sure why you have spoken at great length about overprovisioning and write amp. This thread has been centred on the correct/incorrect use of the terms for TRIM and GC and trying to define exactly what their respective processes are. Having spare area/overprovisioned NAND does not actually relate directly to the processes of TRIM or GC.
 

cytg111

Lifer
Mar 17, 2008
25,508
15,029
136
I've noticed that my hard drive light blinks a steady 1x/second even when I would expect the system to be idle. I booted (Win7 HP 64-bit, if it matters) into Safe Mode and in Task Manager there are only 22 processes and none of them show any activity under I/O Reads/Writes/Other but the light still blinks at a steady rate. Even logging off and letting the screen show the log-on screen: blink, blink, blink, even overnight.

Now, I don't know if it's my SSD or one of my HDDs, but being a pessimist, I will assume it's the SSD. If so, will TRIM *ever* get a chance to do its work? What about the drive doing its own garbage collection? (Intel 520)

Of course, I would like to know why my drive light pulses like that, if for no other reason than curiosity, but I'm foremost concerned about TRIM/GC.

If TRIM/GC can't work during constant drive activity in Windows, would at least GC work if I pause at the BIOS boot process before Windows loads? If so, how long should I pause? Overnight? A few hours?

sysinternals - procmon

All you need to know :)
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
There are rumors that there is FS-specific GC, such as the anandtech article (which doesn't actually perform the experiment that would show whether it's true or not). However, I've seen nothing concrete from a first-hand source.

First you need to learn something about how file systems work
http://en.wikipedia.org/wiki/NTFS#Internals
In NTFS, all file data—file name, creation date, access permissions (by the use of access control lists), and contents—are stored as metadata in the Master File Table.
http://www.ntfs.com/ntfs-mft.htm
The MFT equivalent in FAT is the File Allocation Table. Every FS has something similar.

GC by definition is impossible without being FS specific.
There is absolutely no way for the drive to determine that a file which has been internally marked as trash by NTFS/FAT/etc is trash without FS Specific detection algorithm.
None, whatsoever.

The only space the drive can unequivocally know to be free is unallocated space (which only requires the controller to be able to read the MBR or GPT) and spare area (which the controller never makes available in the first place).

If there was a way to know that wasn't FS specific then we would never have needed TRIM in the first place. TRIM exists solely because of this limitation.
Since HDDs, unlike SSDs, do not need to be erased before programming, overwriting existing data costs the same as writing to a drive that is entirely zeroed, so there is absolutely no reason to zero out the sectors used by a file upon deletion. Instead the FS marks them internally as deleted so that the space may be reused by the OS to write data over. This is why you can recover data from a spindle drive: it's still there.
In an SSD you must erase (to no charge, equivalent to all 1s) a 512 KB block of 128 sectors, each 4 KB in size, before you can write to it, and the HDD methodology does not work here. The drive must be informed of what is and isn't deleted.
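The erase-block arithmetic above can be sketched numerically. The sizes are the ones quoted in this thread (4 KB sectors, 128 per 512 KB erase block); the naive in-place update is a hypothetical worst case used for illustration, not any specific controller's behaviour:

```python
SECTOR_SIZE = 4 * 1024                        # smallest writable unit, 4 KB
SECTORS_PER_BLOCK = 128
BLOCK_SIZE = SECTOR_SIZE * SECTORS_PER_BLOCK  # 512 KB erase unit

# Worst case: updating a single 4 KB sector in place forces the drive to
# read the whole block, erase it, and reprogram all 128 sectors.
host_bytes = SECTOR_SIZE
nand_bytes = BLOCK_SIZE

print(BLOCK_SIZE // 1024)        # 512 (KB per erase block)
print(nand_bytes // host_bytes)  # 128: the maximum write amplification
```

This 128x figure is where the "maximum possible write amplification" quoted later in the thread comes from.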
Gen 1 SSDs simply had nothing, their performance suffered.
Gen 2 SSDs added GC, they will scan compatible FS and locate that info themselves.
Gen 3 SSDs added TRIM which lets the OS instantly tell the drive when sectors are to be considered empty since their data is deleted.

GC is now useful only for legacy OS where TRIM is not supported.
GC is not FS agnostic because there is no magical way for the drive to know what sectors contain valid data and what sectors contain junk without reading the relevant info from the FS itself (FS specific) or being told it by the OS (via TRIM).
The MBR may be used for one or more of the following:
Holding a partition table, which describes the partitions of a storage device. In this context the boot sector may also be called a partition sector.
Bootstrapping an operating system. The BIOS built into a PC-compatible computer loads the MBR from the storage device and passes execution to machine code instructions at the beginning of the MBR.
Uniquely identifying individual disk media, with a 32-bit disk signature, even though it may never be used by the operating system.[2][3][4][5]
As you can see, the MBR only contains the data on PARTITIONS
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
GC by definition is impossible without being FS specific.
There is absolutely no way for the drive to determine that a file which has been internally marked as trash by NTFS/FAT/etc is trash without FS Specific detection algorithm.
None, whatsoever.
TRIM doesn't operate on files. GC does not work on files. The SSD doesn't operate on files. Your hard drive does not operate on files.

Most typically, today, what you are worried about are 4K blocks. Your OS works with 4K as the smallest logical memory size, your filesystem defaults to a 4K cluster size, and the SSD works on 4K logical blocks* (there is a historical causal correlation, there). Those 4K blocks are arbitrary data.

The controllers can be tweaked for FS structures and access patterns of certain FSes (SD cards are probably the most infamous for it), but that is in no way necessary for garbage collection. Garbage collection requires: (1) externally-visible memory locations that are mapped to internal locations by an active translation layer, (2) more internal locations available than are allowed to be mapped (if it reaches 1:1, there will be no garbage to work with), (3) a method to determine what mapped locations are dead (garbage), and (4) a method to reclaim dead locations (collection). Any such implementation is generally referred to as garbage collecting.
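The four requirements above can be made concrete with a toy flash translation layer. Everything here (sizes, method names, the greedy victim choice) is an illustrative sketch, not any real controller's implementation; real firmware buffers relocated pages in controller RAM, which the in-memory relocation below stands in for:

```python
class ToyFTL:
    """Toy flash translation layer: logical sectors map to (block, slot)."""

    def __init__(self, num_blocks=4, pages_per_block=4):
        self.ppb = pages_per_block
        self.nb = num_blocks
        self.map = {}                                   # logical -> (block, slot): requirement 1
        self.live = [set() for _ in range(num_blocks)]  # live slots per block
        self.erased = list(range(1, num_blocks))        # fully-erased blocks
        self.open_block, self.cursor = 0, 0
        self.host_writes = self.nand_writes = 0

    def _slot(self):
        # Next programmable slot; open a fresh block (collecting garbage
        # first if none is erased) when the current one fills up.
        while self.cursor == self.ppb:
            if self.erased:
                self.open_block = self.erased.pop()
                self.cursor = 0
            else:
                self._collect_garbage()
        s = (self.open_block, self.cursor)
        self.cursor += 1
        return s

    def _program(self, lsn):
        old = self.map.get(lsn)
        loc = self._slot()
        self.nand_writes += 1
        if old is not None:
            self.live[old[0]].discard(old)   # old copy becomes garbage: requirement 3
        self.map[lsn] = loc
        self.live[loc[0]].add(loc)

    def write(self, lsn):                    # host-visible write
        self.host_writes += 1
        self._program(lsn)

    def trim(self, lsn):                     # host declares the sector dead
        old = self.map.pop(lsn, None)
        if old is not None:
            self.live[old[0]].discard(old)

    def _collect_garbage(self):              # requirement 4
        # Greedy victim: the closed block with the fewest live sectors.
        closed = [b for b in range(self.nb) if b != self.open_block]
        victim = min(closed, key=lambda b: len(self.live[b]))
        survivors = [l for l, loc in self.map.items() if loc[0] == victim]
        self.live[victim].clear()            # erase the block...
        self.erased.append(victim)           # ...and recycle it
        for lsn in survivors:                # rewriting live data is the WA cost
            del self.map[lsn]
            self._program(lsn)

# 16 physical slots, but only 8 logical sectors exported: the surplus is
# the over-provisioning GC needs (requirement 2).
ftl = ToyFTL()
for _ in range(3):
    for lsn in range(8):
        ftl.write(lsn)                       # fill once, then overwrite twice
print(ftl.nand_writes / ftl.host_writes)     # 1.0 for this pure-overwrite workload
```

With this pure-overwrite workload, every reclaimed block happens to be fully dead, so write amplification stays at 1.0; mixing hot and cold data, or shrinking the spare area, drives it up.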

GC is now useful only for legacy OS where TRIM is not supported.
No, GC goes on and does its thing with any OS. TRIM allows space known free by the OS to be considered recyclable by the SSD, adding to the available blocks to use as it sees fit.

GC is not FS agnostic because there is no magical way for the drive to know what sectors contain valid data and what sectors contain junk without reading the relevant info from the FS itself (FS specific) or being told it by the OS (via TRIM).
Yes, there is. All OS-visible space is assumed to be not junk. When an OS-visible chunk of data is overwritten, it is written to a new physical location, the old location(s) is/are now considered garbage. It's not ideal, but it works, and works pretty well. Consumer drives typically have just over 7% of their total capacity, or more, that only the drive knows about, to enable this.
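The "just over 7%" is not arbitrary; it typically falls out of decimal-versus-binary capacity accounting. A hypothetical "128 GB" drive carries 128 GiB of raw NAND but exports only 128 × 10^9 bytes to the host:

```python
raw = 128 * 2**30        # 128 GiB of physical NAND on the drive
exported = 128 * 10**9   # "128 GB" advertised and visible to the host

# The gap is space only the drive knows about.
spare_fraction = (raw - exported) / raw
print(f"{spare_fraction:.2%}")   # 6.87%
```

Drives with more aggressive over-provisioning simply export fewer logical sectors on top of this baseline gap.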

If there was a way to know that wasn't FS specific then we would never have needed TRIM in the first place. TRIM exists solely because of this limitation.
No, TRIM exists for the same reason that file caching exists: free space should be put to good use, not wasted. Instead of the drive only being able to use its over-provisioned space, it may use that space and the space given by TRIM, allowing for the benefits of higher over-provisioning, while there is free space, without restricting the maximum space available to the user, as over-provisioning does.

This is why you can recover data from a spindle drive, since its still there.
With TRIM disabled, you can do the same on an SSD, and for the same reason. Without TRIM, the data stays there, just like on a HDD.

* not necessarily the physical size; many use 8k
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
TRIM doesn't operate on files. GC does not work on files. The SSD doesn't operate on files. Your hard drive does not operate on files.
True, it works on sectors. The OS, however, sends sector info based on which files were deleted.
And the FS works on files. So when a file is deleted, all its sectors are marked as junk in the FS, and the SSD can then read that and find it with GC.
Or, when a file is deleted, the OS immediately sends the drive a TRIM command for all the sectors it used (it always sends the command; HDDs ignore it as an unrecognized command, SSDs use it if capable).
So to clarify: when you delete a file, the FS marks the sectors that file occupied as junk in its master file table. I phrased it poorly there.

Most typically, today, what you are worried about are 4K blocks. Your OS works with 4K as the smallest logical memory size, your filesystem defaults to a 4K cluster size, and the SSD works on 4K logical blocks* (there is a historical causal correlation, there). Those 4K blocks are arbitrary data.
4K sectors, not blocks, since we are all about terminology here.

The controllers can be tweaked for FS structures and access patterns of certain FSes (SD cards are probably the most infamous for it), but that is in no way necessary for garbage collection. Garbage collection requires: (1) externally-visible memory locations that are mapped to internal locations by an active translation layer, (2) more internal locations available than are allowed to be mapped (if it reaches 1:1, there will be no garbage to work with), (3) a method to determine what mapped locations are dead (garbage), and (4) a method to reclaim dead locations (collection). Any such implementation is generally referred to as garbage collecting.
The problem is the requirement for "a method to determine what mapped locations are dead (garbage)".

As I said, THIS IS IMPOSSIBLE without the drive controller being aware of the OS. Garbage sectors are not marked in the MBR, are not marked individually, are not marked in any way. The only place that keeps track of it is the FS' master file table. Something unique to each FS.

GC is now useful only for legacy OS where TRIM is not supported.
No, GC goes on and does its thing with any OS.
If the OS has TRIM then GC does nothing useful at all.
Oh, it (probably) runs... but it doesn't get to ever mark a single sector as garbage, since the OS sends the TRIM command for all sectors used by a file the moment said file is deleted.

TRIM allows space known free by the OS to be considered recyclable by the SSD, adding to the available blocks to use as it sees fit.
That is what I said, yes.

Yes, there is. All OS-visible space is assumed to be not junk. When an OS-visible chunk of data is overwritten, it is written to a new physical location, the old location(s) is/are now considered garbage. It's not ideal, but it works, and works pretty well. Consumer drives typically have just over 7% of their total capacity, or more, that only the drive knows about, to enable this.
You just described the absolute worst-case scenario: the one where the drive only finds out data is garbage when it is told to overwrite it. Until that moment it has been unknowingly preserving the data despite it being garbage, causing write amplification.
This doesn't work "well"; it works very VERY poorly. This is explicitly what TRIM & GC were created to combat. When I said there is no way, I clearly meant "there is no way except the worst-case scenario which TRIM & GC were created to avoid, and which, btw, requires the OS to notify the drive of that status".

The 3 methods which could notify a drive of data being garbage:
1. The OS tells it to overwrite this data (absolute worst case scenario, the thing TRIM and GC were made to avoid)
2. The OS tells it the data is junk (TRIM command) as soon as it becomes junk.
3. The SSD scans the FS contained within (requires FS specific support) and marks sectors as junk based on what sectors are marked as junk in the FS itself (which occurs on deletion).
There is no magical 4th way for it to know. It is not in the MBR, it is not in a magic toggle for each sector. It just doesn't exist.
If it existed then TRIM would have never been created and never been needed and never would have been so useful.

No, TRIM exists for the same reason that file caching exists: free space should be put to good use, not wasted. Instead of the drive only being able to use its over-provisioned space, it may use that space and the space given by TRIM, allowing for the benefits of higher over-provisioning, while there is free space, without restricting the maximum space available to the user, as over-provisioning does.

With TRIM disabled, you can do the same on an SSD, and for the same reason. Without TRIM, the data stays there, just like on a HDD.

* not necessarily the physical size; many use 8k
Actually, with neither TRIM nor GC, all space except over-provisioned space is considered "not junk" by the controller. This leads to atrocious write amplification that causes your performance to tank AND eats up your drive's lifespan rapidly. Over-provisioning helps a lot, but we are talking write amplification of a few dozen, compared to the maximum possible write amplification of 128x.
With TRIM you can have write amplification around 1x, since junk data is never preserved.
With GC you can have something in between TRIM and nothing, based on how long it had to search the drive since the last write (did it have time to find all the new junk and mark it?).
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
The SSD works on pages of 4KB; that is the minimum write size, and a file consists of one or more of these pages, which the operating system often calls sectors. Whether it's a file table or a file itself makes no difference to the SSD; it just sees 4K sectors. The SSD then groups these into 512KB blocks. This matters because the 512KB block is the minimum unit the SSD can clear.

Before trim, the only time an SSD knew that a page had become eligible for GC was when a sector was overwritten by the OS. When a file is deleted, the file system only removes the entry in the file table and doesn't zero out the contents of the file. The problem with this is that you don't know a sector is empty until it's overwritten. In response to an overwrite, the SSD would write the new page somewhere else (if there is free space) and mark the old location as eligible for garbage collection. It may or may not collect it immediately; that depends on the algorithm and the state of the drive at the time.

Then they added trim. The OS deletes a file and now it does two things. First it removes the file from the file table as before, and secondly it sends a trim command to the SSD for all the pages that the file was in. The SSD responds by making those sectors eligible for GC.

That is all clear and it's the stuff we know for certain. What the SSDs then do with that eligibility information isn't well known. Some of them clean up earlier, with higher write amplification, and some wait until their free space is nearly exhausted before doing it, which all things considered should reduce write amplification. But regardless, GC is asynchronous unless it's forced to be synchronous because the drive has run out of free pages to write to, at which point write speed drops dramatically until there are enough free blocks to sustain the write rate.

Trim makes an SSD perform better; GC is a workaround for the technology's limitations and is always present. Trim is simply a performance hint to the garbage collector of the SSD. There is no need to worry about it if you don't understand it, however, because it's surprisingly hard with normal usage to even notice GC happening. By all means read the Anandtech SSD Anthology if you want to know how it all works, but never worry about whether all this stuff is working or not; it's 100% under the covers.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
First you need to learn something about how file systems work
I know exactly how file systems work. I've written file system code for microcontrollers.
I know precisely what assumptions a file system can make about the underlying storage device.

GC by definition is impossible without being FS specific.
There is absolutely no way for the drive to determine that a file which has been internally marked as trash by NTFS/FAT/etc is trash without FS Specific detection algorithm.
None, whatsoever.
The whole point about GC is that it cleans up any flash pages that it knows are trash. There are 3 ways the drive can know this: the page has never been written to, the page is hidden from the OS by means of sector virtualisation (i.e. data has been overwritten), or the OS has sent a discard (TRIM) command. None of this needs to be FS specific.

It is not the job of the drive to presume what the FS wants to do. The job of the drive is to do exactly what the FS tells it to do, when it is told. A drive which does otherwise is likely to cause catastrophic stability problems.

However, even attempting to use FS-specific detection is so fraught with problems that it is insanity for a drive to even attempt it. Yes, it is possible to make a drive do FS detection and act in an FS-specific manner. However, it is impossible to do this safely, so that it will not randomly corrupt data in certain circumstances. FS optimisations are common for certain performance enhancements, but they don't change the data-integrity behaviour of the drive (e.g. USB sticks and flash cards often expect a FAT format, assume where the FAT block will be, and treat it differently from normal data for performance reasons. The result is that formatting such a stick as NTFS leads to cripplingly bad performance).

For example, suppose I take 2 SSDs and RAID 0 them. If I use a motherboard controller, there will be absolutely nothing stored on the drives that states that they are in RAID. Drive 1 will look like a normally formatted volume; drive 2 will have an invalid partition table. If the stripe size is large (e.g. 2 MB), then the main root directory and FS metadata may well be in the expected position on drive 1. Maybe even the bitmap file will be in the expected position - however, if the drive takes any internal action based on the assumption that the data is a valid NTFS partition, it will irrecoverably corrupt the RAID volume. It is *impossible* for a drive to avoid this type of problem if it assumes anything about the data format.

Similarly, what if I wanted to make a drive image for forensic or data recovery purposes? I'd just ghost the original drive to another drive for analysis. An SSD is often a good idea for this, as its speed permits the analysis to run much faster. However, imaging a drive onto a purportedly self-cleaning SSD would be catastrophic, as it would randomly clear out unwanted data, destroying any chance of data recovery.

What about journalling file systems (like NTFS), or copy-on-write file systems? They often write the data to disk, and *then* change the metadata to show that the space is occupied. If a drive really did internally scan for a FS specific bitmap, it could easily end up erasing fresh data where the bitmap update was pending.

The only space the drive can unequivocally know to be free is unallocated space (which only requires the controller to be able to read the MBR or GPT) and spare area (which the controller never makes available in the first place).
Not true.

The drive does not need to understand MBR or GPT. It only needs to keep a record of which sectors the OS has ever written to - which its virtualisation tables do automatically. If the OS has never written to a sector since new (or since a secure erase), then the drive can be certain that the sector contains nothing of interest. MBR and GPT may be common, but there are other methods of partitioning - and some embedded systems may not use partitioning at all, and simply drop a file system starting at sector 0, or may keep proprietary data in unpartitioned space (for example the "hidden" recovery partitions on some laptops).
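The point above - that the drive needs no MBR/GPT knowledge, only a record of which sectors were ever written - can be sketched as a set of touched LBAs. This is a deliberate simplification of the real mapping tables, for illustration only:

```python
written = set()   # LBAs the host has ever written since secure erase

def host_write(lba):
    written.add(lba)

def certainly_free(lba):
    # A never-written sector is known-free regardless of partition
    # format or filesystem; a written one must be assumed live.
    return lba not in written

host_write(0)        # e.g. the boot sector
host_write(2048)     # e.g. the first data sector of some partition
print(certainly_free(4096))  # True: never touched, safe to treat as spare
print(certainly_free(2048))  # False: must be preserved
```

Note this matches taltamir's later objection too: once every exported LBA has been written at least once, this set covers the whole drive and stops yielding free space.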
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
Yes, there is. All OS-visible space is assumed to be not junk. When an OS-visible chunk of data is overwritten, it is written to a new physical location, the old location(s) is/are now considered garbage. It's not ideal, but it works, and works pretty well. Consumer drives typically have just over 7% of their total capacity, or more, that only the drive knows about, to enable this.

I wouldn't say it works "pretty well", I'd say it works "extremely well".

The absolute worst case for a correctly implemented 7% spare area is a WA of about 15 [ignoring wear levelling]. (Inferior garbage collection, such as on Indilinx Barefoot drives, may achieve much worse levels.) When large amounts of the drive consist of static data that is never updated, the WA will necessarily be worse, as entire valid blocks have to be moved around for wear levelling.

In practice, most writes are not truly random - most are sequential, or semi-random (e.g. random within a short range, such as a pagefile). In this case, WA is easily reduced dramatically. In fact, 3% overprovisioning is capable of getting WA below 2 for most practical workloads (without the need for TRIM or any kind of FS-detecting algorithm).
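A back-of-the-envelope model (my gloss, not Mark R's derivation) roughly reproduces his worst-case figure: with spare fraction s and uniform random writes, a greedy GC victim block is about (1 - s) full of live data, so each reclaimed block yields s·P free slots at the cost of reprogramming all P pages, giving WA ≈ 1/s. Real greedy GC does somewhat better, since victims are emptier than average:

```python
def worst_case_wa(spare_fraction):
    # Crude upper-bound: every victim block is (1 - spare_fraction) live,
    # so P total page writes serve only spare_fraction * P host writes.
    return 1 / spare_fraction

print(round(worst_case_wa(0.07), 1))  # ~14.3 for 7% spare, near the ~15 quoted
print(round(worst_case_wa(0.03), 1))  # ~33.3 worst case for 3% spare
```

The gap between the ~33 worst case at 3% spare and the "below 2" quoted for practical workloads shows how much sequential and semi-random access patterns help.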
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
The whole point about GC is that it cleans up any flash pages that it knows are trash.

1. That is the block recycling engine and is not part of GC.
2. You clearly indicated in your argument that you accepted my definition of GC, and thus this was not in any way your claim until right this very moment. Now you are trying to pretend your previous arguments didn't exist to save face, by claiming that you used a different (and proven to be incorrect, with plenty of linked evidence) definition of GC.

I really don't understand it, there is nothing wrong about being wrong. If I am wrong I say thank you for educating me and adopt the position of the person who was right, and I clearly indicate that it was something taught to me by another.

However, even attempting to use FS specific detection is so fraught with problems, such that it is insanity for a drive to even attempt it. Yes, it is possible to make a drive do FS detection and act in an FS specific manner. However, it is impossible to do this safely, so that it will not randomly corrupt data in certain circumstances.
Yet we have GC, and it reads info from the FS with no issues whatsoever. It's a miracle! (of engineering).
Also, if what you said were correct, it would be impossible to run GC on a drive that has not been over-provisioned (and it would be extremely costly in write amplification), since your claim is that GC is basically a process where over-provisioned space is consolidated and prepared ahead of time by costly clearing of 128-sector blocks, to ensure enough ready-to-write clear blocks.

The drive does not need to understand MBR or GPT. It only needs to keep a record of which sectors the OS has ever written to
This is ALL OF THEM. After filling a secure-erased drive once with data equal to its capacity, it reaches a used state, where ALL sectors are considered not junk, with the exception of the over-provisioned area (which is shifted around as per wear leveling).

Also, I said it's the only way, not that they do it. Drives do NOT read MBR or GPT for partition info and do NOT consider unpartitioned space to be free automatically.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
1. That is the block recycling engine and is not part of GC.
2. You clearly indicated in your argument that you accepted my definition of GC, and thus this was not in any way your claim until right this very moment. Now you are trying to pretend your previous arguments didn't exist to save face, by claiming that you used a different (and proven to be incorrect, with plenty of linked evidence) definition of GC.

I really don't understand it, there is nothing wrong about being wrong. If I am wrong I say thank you for educating me and adopt the position of the person who was right, and I clearly indicate that it was something taught to me by another.

You call it a "block recycling engine"; the SSD industry and the industry literature, even Wikipedia, call it garbage collection.

I don't believe I ever argued that your definition of GC was correct.

I agree, there's nothing wrong with being wrong. However, in this case, there is nothing to suggest that I am wrong. The technical literature supports my view.
Yet we have GC, and it reads info from the FS with no issues whatsoever. It's a miracle! (of engineering).

Can you prove it? Can you link a technical article where this is discussed? I don't mean unsubstantiated rumors like the anandtech article that you linked. I've just checked the semi-technical literature from a half-dozen SSD vendors, not one mentions file-systems or deleted files, but most mention flash block erasing and recycling. I'd like to check the true technical documentation, but the vendors won't give it out.

Also, I said its the only way not that they do it. Drives do NOT read MBR or GPT for partition info and do NOT consider unpartitioned space to be free automatically.

A new drive will treat a sector as hidden overprovisioned space until that sector gets touched by the OS. Unpartitioned space will never be touched by the OS, and so should remain permanently as hidden overprovisioned space, which does not need to be preserved by the drive.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Let's for a moment put aside our argument about which of the components we all agree exist in a drive make up GC. (To make a car analogy: we all agree a car has wheels and a transmission, but you and I disagree on whether the wheels are, by completely arbitrary convention, a separate component, or are by definition of the word transmission included in it; while some others insist that the engine block is actually the transmission, because they are REALLY clueless.)

This is despite the fact that I have provided multiple links and evidence backing up my claim as to what the definition means. Honestly, I am sick of arguing it; I provided evidence, nobody else has, and we keep repeating the same things.

So, now that we are ignoring which of the components we all agree are in a drive make up that nebulous thing called "GC":

Do you in any way refute my list of the 3 ways it is POSSIBLE for a drive to find out what sectors are garbage?
That being:
1. Receive a command to overwrite them from OS.
2. Receive a TRIM for them from OS.
3. Read the FS MFT

Not that they are probable, actual, or likely... just that those are the POSSIBLE ways for it to know.

Also, I said its the only way not that they do it. Drives do NOT read MBR or GPT for partition info and do NOT consider unpartitioned space to be free automatically.
A new drive will treat a sector as hidden overprovisioned space until that sector gets touched by the OS. Unpartitioned space will never be touched by the OS, and so should remain permanently as hidden overprovisioned space, which does not need to be preserved by the drive.

Only if it was NEVER partitioned since the last time the drive was secure erased.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
You just described the absolute worst case scenario. The one where the drive only finds out data is garbage when it is told to overwrite it. Until that moment it has been preserving the data despite it being garbage, unknowingly, causing write amplification.
Worst or not, it's too common of a case not to optimize for. Only Windows 7 users with fairly new chipsets will actually have TRIM. It is possible to enable it otherwise, but it's not common. Linux's TRIM support still seems to need some work, and is FS-dependent, so is not on by default except for swap. OS X's isn't there. Proprietary embedded FSes may never use it. The controller designers, firmware writers, and drive manufacturers must assume the case that overwriting is the only way to know dead data. As such, some controllers have needed over-provisioning space in excess of 14%, to keep performance and WA at acceptable levels (though, they've gotten good enough today to only need the MB v. MiB ~7%).

This doesn't work "well", it works very VERY poorly.
No, it worked poorly on a couple older controllers, and PC power users wanted the speed so badly they accepted it, and bought into this marketed need for TRIM.

The 3 methods which could notify a drive of data being garbage:
1. The OS tells it to overwrite this data (absolute worst case scenario, the thing TRIM and GC were made to avoid)
2. The OS tells it the data is junk (TRIM command) as soon as it becomes junk.
3. The SSD scans the FS contained within (requires FS specific support) and marks sectors as junk based on what sectors are marked as junk in the FS itself (which occurs on deletion).
There is no magical 4th way for it to know. It is not in the MBR, it is not in a magic toggle for each sector. It just doesn't exist.
But there isn't one needed - even #3 could be disastrous in the long run. MS, for instance, has added features to NTFS. Go try to mount your Windows 7 drive in Windows 2000, and watch it run chkdsk and ruin all your metadata. If MS makes a change that causes your SSD firmware to brick, or worse, corrupt your data, then what? It's just not worth the risk. Hardware and software have long lives.

If you have 9GB of space to play with for 118GB of data, and tens of MBs of RAM to use for write caching, you shouldn't need any magic (and, make no mistake, FS-specific features are magic, and should generally be avoided...have I ever mentioned I don't like the SDA?). You should need competent engineers, a handful of customers sufficiently interested in the technology that they are willing to be guinea pigs, money to pay for it all, and time.

Actually, with neither TRIM nor GC, all space except over-provisioned space is considered "not junk" by the controller. This leads to atrocious write amplification that causes your performance to tank AND eats up your drive's lifespan rapidly. Over-provisioning helps a lot, but we are talking write amplification of a few dozen, compared to the maximum possible write amplification of 128x.
No, we're talking about write amplification of <2, often <1.5, on desktop workloads over time, without TRIM, using only the over-provisioned space. It's not rocket science, it's just a matter of real implementations being complicated, and thus needing manhours and multiple HW/SW generations to refine the technology.

And again, TRIM is not separate from GC. The very process of freeing/recycling memory transparently to the users of the data, by updating pointers, so that the memory can be managed separately from the users of the data residing in it, is called garbage collection. It has been called that since at least 1958 (page 27, footnote 7). It is referred to as such, because the original implementation(s) waited until it had to run, to collect the accumulated garbage, thus it operated in occasional cycles, analogous to waste management. Today, preemptive and incremental implementations still keep the name, even if they do when they can, not merely after they must.
 

KingFatty

Diamond Member
Dec 29, 2010
3,034
1
81
The one-flash-per-second culprit was the DVD drive. Since I really don't use the DVD drive very much, I just disabled it in Device Manager. If/when I need it, I'll just enable it.

There may be a chance that you could speed up your boot procedure by physically disconnecting the optical drive, instead of merely disabling it in windows. Could you try that, and see if your boot-up time improves?

I'm very close to doing this as I have 2 optical drives, but I've been too lazy to test how much faster boot-up becomes. Now that I have an SSD boot drive, and since I remotely boot my computer to serve files to my TV in another room, I will likely do this too.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I find it fascinating that I am the only person here to have ever sourced a "fact" in this discussion.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
I find it fascinating that I am the only person here to have ever sourced a "fact" in this discussion.

And yet none of your resources support your claims.

Hard drives are simple block devices; they don't work with files. That shouldn't be hard to prove, but we can't seem to explain it. You are just wrong, and a lot of people are telling you this; you should reflect on why so many people have written so many words telling you the same fact that you reject.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Hard drives are simple block devices they don't work with files.

Strawman argument.

see
True, it works on sectors. The OS however sends sector info based on files deleted.
And the FS works on files. So when a file is deleted all its sectors are marked as junk in the FS and then the SSD can read it and find that with GC.
Or, when a file is deleted, the OS immediately sends the drive a TRIM command for all the sectors it used (it always sends the command; HDDs ignore it as an unrecognized command, SSDs utilize it if capable)
So to clarify, when you delete a file, the FS marks the sectors that file occupied as junk in its master file table. I phrased it poorly there.
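A minimal sketch of the delete-then-TRIM path described above, with invented classes (this is a model of the bookkeeping, not a real driver or filesystem): the filesystem tracks which sectors each file occupies; on delete it frees them in its own table and sends a TRIM down to the device. An HDD ignores the command; a TRIM-capable SSD marks those LBAs as garbage for its GC.

```python
class HDD:
    """Drive without TRIM support: the command is simply ignored."""
    def trim(self, lbas):
        pass  # unrecognized command; stale data stays until overwritten

class SSD:
    """TRIM-capable drive: trimmed LBAs become garbage its GC may erase."""
    def __init__(self):
        self.garbage = set()
    def trim(self, lbas):
        self.garbage |= set(lbas)

class Filesystem:
    """Toy file table: name -> list of LBAs the file occupies."""
    def __init__(self, device):
        self.device = device
        self.files = {}
    def create(self, name, lbas):
        self.files[name] = list(lbas)
    def delete(self, name):
        lbas = self.files.pop(name)  # mark the sectors free in the FS...
        self.device.trim(lbas)       # ...and tell the drive they are junk

ssd = SSD()
fs = Filesystem(ssd)
fs.create("a.txt", range(100, 108))
fs.delete("a.txt")
print(sorted(ssd.garbage))  # [100, 101, 102, 103, 104, 105, 106, 107]
```

Swapping in an `HDD` instance shows the other half of the claim: the same delete path runs, but nothing on the drive changes until those sectors are overwritten.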
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
I find it fascinating that I am the only person here to have ever sourced a "fact" in this discussion.

You quoted a speculative article, which didn't reference any reliable primary source.

There is as good a reference for what "garbage collection" means as any on Wikipedia:
http://en.wikipedia.org/wiki/Garbage_collection_(SSD)#Garbage_collection

This tallies with the description of what I term garbage collection.

Now I must apologise, as I have found a reliable source. Someone has actually done an experiment to look at what Samsung calls "garbage collection" (which is a misuse of the already established term): http://www.jdfsl.org/subscriptions/JDFSL-V5N3-Bell.pdf. I'm sure there are a lot of people here who will be as astonished as I was that such bad practice could go into production (I wonder how well it's been tested against "corner cases" such as I described - unfortunately, the manufacturers won't even acknowledge its existence, let alone how well [or not] it works!)

Incredibly, the drive does appear to do what you say: it looks for an NTFS free-space bitmap and will wipe data automatically if it doesn't appear to correspond to a legit file. I simply cannot believe that this is stable, and I now know which drives I'll be avoiding in the future.

The experiment was done by a data forensics firm. They filled up a drive with known data, then ran an NTFS "quick format". They then immediately transferred the drive to a forensic data recovery workstation. To their surprise, after only a minute or so of being powered up, the SSD was nearly completely zeroed out - indeed, they found that the drive started self-wiping within seconds of being powered on.

OK. I can see these being great if you've got something to hide and want to destroy your data sharpish. However, I'd like to have the option of data recovery should I accidentally screw something up (yes, I've used VirtualBox to accidentally format a mounted HDD - and I was able to get the data back with recovery software. With this type of self-wiping SSD, I'd be going back to my backups).

Given that this firmware is "secret" and is obviously making significant assumptions about my data, I'd personally keep my data as far away from it as possible.

I believe that this matter is now settled.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Now I must apologise, as I have found a reliable source. Someone has actually done an experiment to look at what Samsung calls "garbage collection" (which is a misuse of the already established term): http://www.jdfsl.org/subscriptions/JDFSL-V5N3-Bell.pdf. I'm sure there are a lot of people here who will be as astonished as I was that such bad practice could go into production (I wonder how well it's been tested against "corner cases" such as I described - unfortunately, the manufacturers won't even acknowledge its existence, let alone how well [or not] it works!)
I recall Samsung claiming they would be doing that, a few years ago, but hadn't come across evidence of them actually having done it, yet. Having occasionally wiped the wrong files, sometimes not even realizing how, that is a bit concerning. It's probably great for write performance, though (an area where Samsung's 830 is quite good).
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,205
126
There was another thread on here about the GPT partition-table backup sectors, at the end of the drive, being overwritten/randomized by the drive. It was a Samsung 830 drive, IIRC. It figures Samsung would be stupid enough to write FS/partition-recognition code into their drive. No wonder they are unstable.

I would say that the vast majority of SSD vendors do NOT do that, it just isn't safe in all cases.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Now I must apologise, as I have found a reliable source. Someone has actually done an experiment to look at what Samsung calls "garbage collection" (which is a misuse of the already established term): http://www.jdfsl.org/subscriptions/JDFSL-V5N3-Bell.pdf. I'm sure there are a lot of people here who will be as astonished as I was that such bad practice could go into production (I wonder how well it's been tested against "corner cases" such as I described - unfortunately, the manufacturers won't even acknowledge its existence, let alone how well [or not] it works!)

This is a fascinating read, thank you for posting it. I just finished reading the whole thing. They repeatedly stress that the SSD "destroys evidence" of its own volition, without receiving a command, but really it ONLY deletes files that have already been deleted by the user. Unlike an HDD, which does not erase them until they are overwritten, deleting a file on an SSD means it actually gets erased within minutes (if GC works right).
It also reaffirms my "3 possible ways for a drive to know" statement.

Interestingly, while it shows I was not wrong and that such GC exists, it also refers to the block recycling mechanism (which I insist is not a part of GC but a separate thing) as GC as well.
So it is quite possible that on one of my arguments (that GC never referred to block recycling engine) I was wrong (well, or rather, the reviews from which I draw my terminology were wrong; or maybe they used the terminology incorrectly in said article).
Either way, this article indicates that GC can refer to multiple things interchangeably, depending on what different manufacturers decide it refers to.

There was another thread on here about the GPT partition-table backup sectors, at the end of the drive, being overwritten/randomized by the drive. It was a Samsung 830 drive, IIRC. It figures Samsung would be stupid enough to write FS/partition-recognition code into their drive. No wonder they are unstable.

I would say that the vast majority of SSD vendors do NOT do that, it just isn't safe in all cases.

Thank you for reminding me of that thread.

Interestingly, this one does not show a fault in their NTFS scanning algorithm.
In order to even read NTFS you must first be able to read the partition table in the MBR. GPT is backwards compatible with MBR for reading purposes. If you are already reading the MBR, you can note which areas of the drive are partitioned and mark all unpartitioned space as garbage. The recovery sectors of GPT are at the end of the drive, in unpartitioned space.

The controller on those Samsung drives clearly supports MBR but not GPT: it reads the MBR, finds that there is unpartitioned space at the end of the drive (the GPT recovery record), and deletes it.
This should then occur regardless of what file system you use on said controller. ZFS, ext3, FAT... they should all exhibit this problem.
This also means that deleting a partition will cause all of the data it contained to be marked as garbage, even though the NTFS records on the MFT are untouched.

So basically, the issue with those Samsung controllers is that in addition to an NTFS clearing algorithm they have an MBR clearing algorithm, and it is the latter that has the problem: it is inherently incompatible with GPT.

Since MBR/GPT are FS-agnostic, it is entirely possible that other drives do the same but do not have this incorrect behavior. However, I rather doubt that they do, since normal operation does not leave users with large unpartitioned areas.
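For reference, here is roughly what "reading the MBR" involves, as a hypothetical sketch (the offsets come from the standard MBR layout; the controller behaviour itself is speculation from the posts above): the four 16-byte partition entries sit at byte offset 446 of LBA 0, and a GPT disk carries a protective MBR there with a single type-0xEE entry nominally spanning the whole disk. A controller that parsed the MBR correctly should therefore see a GPT disk as fully partitioned, which makes the observed wiping of the GPT backup sectors all the stranger.

```python
import struct

def mbr_partitions(sector0):
    """Parse the 4 MBR partition entries from the first 512-byte sector.
    Returns (type, start_lba, num_sectors) for each non-empty entry."""
    parts = []
    for i in range(4):
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]                                   # partition type byte
        start_lba, num_sectors = struct.unpack_from("<II", entry, 8)
        if ptype != 0:                                     # 0x00 = unused slot
            parts.append((ptype, start_lba, num_sectors))
    return parts

# Build the protective MBR of a (toy) 0x100000-sector GPT disk.
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"                       # boot signature
entry = bytearray(16)
entry[4] = 0xEE                                     # GPT protective partition type
struct.pack_into("<II", entry, 8, 1, 0x100000 - 1)  # starts at LBA 1, covers the rest
sector[446:462] = entry
print(mbr_partitions(bytes(sector)))                # [(238, 1, 1048575)]
```

As the parse shows, the protective entry claims everything from LBA 1 to the end, including the GPT backup header, so a firmware that really honoured the MBR it read should have left those sectors alone.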
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
Just to chip in on that last point there, it was user techvslife who reported he was getting GPT backup partition header corruption on his 830 under UEFI/GPT.

I have been doing some of my own testing with SSDs in UEFI/GPT (although for imaging purposes more than anything else) and I did not see this corruption on my system. I installed Win 7 in UEFI/GPT on my 830 for a good 2 days and ran tools such as gdisk (GPT fdisk), which verified my GPT structure, and not once did I get an error.

What can be said is Samsung's SSD Magician has a "performance optimization" feature which, according to the user guide, force-executes "TRIM & GC". While this works on MBR, it does not work on GPT.

There are a few support replies in that original techvslife thread from Samsung where they said the unit is not recommended for use with GPT as it was designed for MBR, and they will "analyse support for GPT" in the next release of their SSD Magician.


After that experience with Samsung, I emailed Intel to ask if their drives and optimisation features work with UEFI/GPT and got this reply:

"as long as the system mainboard and its BIOS support GPT then any SSD will work with GPT as well. GPT as such is not a specific requirement for SSDs nor conventional hard drives. GPT only provides the option to create a boot volume with a higher capacity than 2 terabytes. Any drive (SSD or HDD) will be able to host this boot volume as long as the board and BIOS support GPT.
This is most likely the reason why there is no information available for SSDs with regards to GPT. You should however be able to find more information on GPT regarding system boards and operating systems"

I also repeated the same tests on my 320 in my main rig, which passed all of gdisk's validation tests, and their SSD Optimiser worked fine under GPT.

Thread in question - http://forums.anandtech.com/showthread.php?t=2228064
 
Last edited:

DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
The one-flash-per-second culprit was the DVD drive. Since I really don't use the DVD drive very much, I just disabled it in Device Manager. If/when I need it, I'll just enable it.

No need to disable the drive. If it's the same as the XP problem (likely) all you have to do is disable auto-insert notification for the drive.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Incredibly, the drive does appear to do what you say: it looks for an NTFS free-space bitmap and will wipe data automatically if it doesn't appear to correspond to a legit file. I simply cannot believe that this is stable, and I now know which drives I'll be avoiding in the future.
That’s pretty shocking actually. No drive should function based on knowledge of file systems. That’s a brittle and dangerous way of doing things.
 

Dufus

Senior member
Sep 20, 2010
675
119
101
If you are already reading the MBR, you can note which areas of the drive are partitioned and mark all unpartitioned space as garbage.

You must NEVER assume unpartitioned space is free. For example, I have an MBR boot manager that uses over 30 primary partitions on one disk. Only 4 partition entries can be used by the MBR at one time, so depending on the selected configuration up to 4 partitions are visible; all other partitions are seen as unpartitioned space.

When the MBR code is loaded it looks briefly to see if a keyboard code was pressed. If not, it keeps the current partition table and loads the active partition. If the proper key-code is pressed then a configuration menu is presented where a different selection can be made, comprising different partitions, and the MBR partition table is updated. Now imagine what would happen with such a presumptuous drive that tried to read the MBR to determine what was garbage!

BTW if the Samsung drive does do that, then the GPT first sector should be seen as having an MBR partition table with full allocation of the drive extents, so in effect it should see a GPT drive as fully used - so something is amiss there.

The only times I can think of when the drive firmware should consider an LBA mapping free are...

1. The LBA has never been written to.
2. The LBA has been overwritten or replaced creating a new mapping.
3. The drive has been secure erased (SE'd).
4. The TRIM command has been sent indicating the LBA is no longer used.
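That list can be sketched as a toy LBA-to-page mapping table (names and structure invented for illustration, not any real FTL): an old NAND location becomes reclaimable only when its LBA is overwritten, TRIMmed, or the drive is secure erased - never because of anything the firmware guesses from partition tables.

```python
class FTL:
    """Toy flash translation layer tracking which NAND pages are reclaimable.
    Rule 1 (never written) is the initial state: an LBA absent from `map`."""
    def __init__(self):
        self.map = {}            # LBA -> physical page currently holding it
        self.free_pages = set()  # physical pages whose contents may be erased
        self.next_page = 0

    def write(self, lba):
        old = self.map.get(lba)
        if old is not None:
            self.free_pages.add(old)   # rule 2: overwrite frees the old mapping
        self.map[lba] = self.next_page
        self.next_page += 1

    def trim(self, lba):
        page = self.map.pop(lba, None)
        if page is not None:
            self.free_pages.add(page)  # rule 4: TRIM marks the LBA unused

    def secure_erase(self):
        self.free_pages |= set(self.map.values())  # rule 3: everything freed
        self.map.clear()

ftl = FTL()
ftl.write(10); ftl.write(10)   # overwrite: the first copy (page 0) is freed
ftl.write(11); ftl.trim(11)    # TRIM: page 2 is freed
print(sorted(ftl.free_pages))  # [0, 2]
```

What the firmware then does with those freed pages (and when) is exactly the drive-specific part noted in the reply to the OP below.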


To the OP, IMO once the drive receives a TRIM command it is very much drive firmware specific as to what happens after the LBA(s) have been marked as no longer used.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
You must NEVER assume unpartitioned space is free.
I know and agree. I meant that this must have been their reasoning, and must be what causes the GPT corruption issue.

"as long as the system mainboard and its BIOS support GPT then any SSD will work with GPT as well. GPT as such is not a specific requirement for SSDs nor conventional hard drives. GPT only provides the option to create a boot volume with a higher capacity than 2 terabytes. Any drive (SSD or HDD) will be able to host this boot volume as long as the board and BIOS support GPT.

They clearly did not understand your question and were telling you about booting an OS using GPT (something the drive does not need to be aware of), not that they tested with GPT to ensure a dangerous GC algorithm does not corrupt it.

I have been doing some of my own testing with SSDs in UEFI/GPT (although for imaging purposes more than anything else) and I did not see this corruption on my system. I installed Win 7 in UEFI/GPT on my 830 for a good 2 days and ran tools such as gdisk (GPT fdisk), which verified my GPT structure, and not once did I get an error.

What can be said is Samsung's SSD Magician has a "performance optimization" feature which, according to the user guide, force-executes "TRIM & GC". While this works on MBR, it does not work on GPT.

It is possible, then, that the issue was with their SSD Magician software manually trimming "unpartitioned" space and not recognizing GPT, and that they silently fixed it (to avoid losing face and the lawsuits that eventually ensue from any public admission of fault).
 
Last edited: