SSD Defragging?

BonzaiDuck

Lifer
Jun 30, 2004
16,376
1,911
126
I had accepted the consensus that there was no need for defragging an SSD, and that it simply adds more wear.

I ran a search on the topic here at the forums, and couldn't find an answer to the simple question. But I thought I'd read recently -- maybe here -- that it was "OK."

I have a lot of s*** on my plate with these systems today. I have some quirks with hibernation I'm trying to work out; I needed to turn it off (eliminating the file), "defrag," and then turn it back on again, per an MS tech page for resolving the problem I just experienced ("failed to come out of hibernation," etc.).

I finally just decided to defrag the boot SSD -- now in progress. What exactly is the wisdom about doing an occasional defrag on an SSD?
 

UsandThem

Elite Member
May 4, 2000
16,068
7,383
146
It wears it faster, and is really just unnecessary.

Of course, most SSDs can write a lot of data before they die, so it's not the end of the world doing an occasional defrag. If someone set an aggressive schedule like defragging them once a week, then after a year I would imagine the SSD would show a very large amount of data written.

There are a lot more technical articles out there, but I think this one is short and to the point:

http://forum.crucial.com/t5/Crucial...use-any-long-term-performance-loss/ta-p/71051

The short answer is this: you don't have to defrag your SSD.

To understand why, we need to look at the purpose of defragmenting. Defragging ensures that large files are stored in one continuous area of a hard disc drive so that they can be read in one go. Mechanical drives have a relatively long seek time of approximately 15ms, so every time a file is fragmented you lose 15ms finding the next piece, and this really adds up when reading lots of different files split into lots of different fragments.

However, this isn't an issue with SSDs, because the seek time is about 0.1ms. You aren't really going to notice the benefit of defragged files--which means defragging has no performance advantages with an SSD.
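
To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python -- the 15ms and 0.1ms figures are from the article above, the fragment counts are made up:

# Rough illustration of why fragmentation hurts an HDD far more than an SSD.
# Seek figures come from the quoted article; fragment counts are invented.

HDD_SEEK_S = 0.015   # ~15 ms to locate the next fragment on a mechanical drive
SSD_SEEK_S = 0.0001  # ~0.1 ms on an SSD

def extra_seek_time(fragments, seek_s):
    """Added latency from locating each fragment separately."""
    return fragments * seek_s

for fragments in (1, 100, 2000):
    hdd_ms = extra_seek_time(fragments, HDD_SEEK_S) * 1000
    ssd_ms = extra_seek_time(fragments, SSD_SEEK_S) * 1000
    print(f"{fragments:>4} fragments: HDD ~{hdd_ms:.0f} ms extra, SSD ~{ssd_ms:.1f} ms extra")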

Defragging moves data that's already happily on your disk to other places on your disk, often sticking it in a temporary position first. That's what gives defragmenting a disadvantage for SSD users: you're writing data you already have, which uses up some of the NAND's limited rewrite capability -- with no performance advantage to be gained from it.

So basically, don't defrag your drive: at best it does nothing for your performance, and at worst you use up write cycles for no gain. Having done it a few times isn't going to cause you much trouble, but you don't want this to be a scheduled, weekly type thing.

What many people agree is actually helpful for SSDs is 'optimizing' or 'retrimming' them. I have Windows 10 set to do this once a month on all my SSDs (3 desktops and 3 laptops):

http://arstechnica.com/gadgets/2015...garbage-collection-so-i-dont-need-trim-right/
 

john3850

Golden Member
Oct 19, 2002
1,436
21
81
I believe the firmware has much to do with where the files end up on an SSD.
My Samsung 830, 840, and 850 keep the files close together, or normal, even after a year's use.
My Seagate 600 and a Crucial SSD keep the files all over the place.
I tried defragging my Seagate 600 three times, for over 30 minutes at a time, and never noticed much difference.
What I did was copy the SSD to an HDD spinner and defrag it there.
Then I secure erased the SSD and copied the info back from the defragged spinner to get good results.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,376
1,911
126
I believe the firmware has much to do with where the files end up on an SSD.
My Samsung 830, 840, and 850 keep the files close together, or normal, even after a year's use.
My Seagate 600 and a Crucial SSD keep the files all over the place.
I tried defragging my Seagate 600 three times, for over 30 minutes at a time, and never noticed much difference.
What I did was copy the SSD to an HDD spinner and defrag it there.
Then I secure erased the SSD and copied the info back from the defragged spinner to get good results.

What happens with Piriform(?) Defraggler -- it eventually shows the entire disk "full" with only a few GB to spare as it "does its thing." I suspect that I actually wrote more than ~450GB to the disk for that little education.

The ADATA Toolbox . . . uh, wait-a-minute . . . shows about 1.6 TBW. It's a cheap SSD, but could probably last beyond 250 TBW, and the drive is about three months old. And stupid ass that I am, I already use the comprehensive TRIM feature in addition to what Windows does on its own.

The end result showed a drive map with the same scattering of blocks, in which most of the used 20% had been consolidated -- or so I would think. But not the result you'd imagine for an HDD. Not really a lot was moved around, although a lot of writes were made to the disk -- just writing 0's or 1's to the unused space.

And since I really don't know as much as I need to about how SSDs work, I'm just guessing what all those Piriform indications mean.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
I finally just decided to defrag the boot SSD -- now in progress. What exactly is the wisdom about doing an occasional defrag on an SSD?

That it is completely pointless. The LBA the OS sees has no coupling to what is happening on the drive. The drive's controller handles all that, and just hands the OS a "fictional" LBA table. Add the fact that every single cluster has the exact same access time, so you don't need the clusters to be in order like on an HDD.
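
If it helps to picture that "fictional" mapping, here's a toy sketch in Python -- purely illustrative, no real controller works this simply. The OS always addresses the same LBA, while the controller puts the data on whatever physical page it likes:

# Toy flash-translation-layer (FTL) model: the OS only ever sees logical block
# addresses; the controller maps each one to whatever physical page it chooses.

class ToyFTL:
    def __init__(self):
        self.l2p = {}        # logical block address -> (physical page, data)
        self.next_free = 0   # pretend fresh pages are handed out in order

    def write(self, lba, data):
        # Flash can't overwrite in place, so every write lands on a new page
        # and the mapping is updated; the stale page is garbage-collected later.
        self.l2p[lba] = (self.next_free, data)
        self.next_free += 1

    def read(self, lba):
        page, data = self.l2p[lba]
        return data

ftl = ToyFTL()
ftl.write(100, b"cluster contents")     # the OS thinks this lives at LBA 100
ftl.write(100, b"cluster contents v2")  # rewrite it: physically it moves...
print(ftl.read(100), ftl.l2p[100][0])   # ...but the OS still just asks for LBA 100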

The only thing you're accomplishing is wearing out the drive a lot quicker.

As a side note, don't defrag the new SMR HDDs either. They have on-drive management similar to SSDs.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,376
1,911
126
That it is completely pointless. The LBA the OS sees has no coupling to what is happening on the drive. The drive's controller handles all that, and just hands the OS a "fictional" LBA table. Add the fact that every single cluster has the exact same access time, so you don't need the clusters to be in order like on an HDD.

The only thing you're accomplishing is wearing out the drive a lot quicker.

As a side note, don't defrag the new SMR HDDs either. They have on-drive management similar to SSDs.

I thought so. I should've simply avoided it.

I also need to check some drives I'm using in the new build and in an existing build to see if they are SMR instead of PMR.

I had thought perpendicular drives were the cat's meow when they were released, and now I see another development to enhance density of HDDs.

I can't keep up with all this s***.

I'm not going to cry myself to sleep about this one mistake, though. I've watched the TBWs grow on several SSDs I use: Samsung, Crucial, Adata and some Patriots. This one just aged by a few months out of something a lot longer than ten years, unless usage patterns change.

It also dawned on me that if I plan to try my tiered caching later with an NVMe and I have it organized on SATA at the moment, there's little need for any large pagefile even if I'd allocated 2GB to each of an SSD and HDD. That's essentially due to the fact that the 60 to 100GB caching SSD/partition also eliminates a need for a pagefile anyway.

This entire detour arose when I realized I needed to eliminate and re-create hiberfil.sys's on two systems. The "Defrag" step was a point of confusion and error on my part.

And I could tell ya about sorting that out, or how it was giving me boot-time problems.

If you're going to hibernate, make sure you eliminate any possibility that POST will throw a "Hit F1 and enter BIOS" message. In my case, it was an error for the CPU_FAN and OPT_FAN connections I had reversed the day before. Got that right now -- resolved and put aside.
 

john3850

Golden Member
Oct 19, 2002
1,436
21
81
I did use Piriform(?) Defraggler.
It took me 3 to 4 passes and way over 30 minutes for the operating system drive to look like an 850 which was never defragged.
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
That article says not only that it's a myth that SSDs aren't slowed by fragmentation, but also that a well-designed algorithm tailored for SSD fragmentation management will reduce drive wear rather than increase it.

So, rather than manufacturers/brands telling you that there is nothing to be done about SSDs and fragmentation, they should be saying that either internal controller algorithms or software they provide for installation will optimally manage fragmentation.
 

Elixer

Lifer
May 7, 2002
10,371
762
126
The problem with any defrag/optimization software for SSDs is that it just won't do what it says it will do, since it can't.
The most any of them can do is just trim the drive, and the OS does that anyway, so it is useless.

There is an abstraction layer between what the SSD sees, and what the OS sees, and it is impossible for the OS to know the direct mapping that the SSD does, since all that information isn't given to the OS.

This is 100% under the control of the firmware and how its garbage collection routines work.

If people want to read on how exactly SSDs work, check out http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/ and also read http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2012/20120821_TB11_Sykes.pdf to understand why "defragging" a SSD is snake oil.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
I can't keep up with all this s***.

You and me both. It's a pain, really.

I'm not going to cry myself to sleep about this one mistake, though. I've watched the TBWs grow on several SSDs I use: Samsung, Crucial, Adata and some Patriots. This one just aged by a few months out of something a lot longer than ten years, unless usage patterns change.

Don't worry about it. As the TechReport experiment showed, even consumer drives are good for a petabyte or two before failing. It'll only be a problem if you do it regularly.

The problem with any defrag/optimization software for SSDs is that it just won't do what it says it will do, since it can't.
The most any of them can do is just trim the drive, and the OS does that anyway, so it is useless.

There is an abstraction layer between what the SSD sees, and what the OS sees, and it is impossible for the OS to know the direct mapping that the SSD does, since all that information isn't given to the OS.

This is 100% under the control of the firmware and how its garbage collection routines work.

If people want to read on how exactly SSDs work, check out http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/ and also read http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2012/20120821_TB11_Sykes.pdf to understand why "defragging" a SSD is snake oil.

Yup. The coupling between what the OS sees and what actually happens on the drive is entirely within the drive's own firmware. The controller firmware also manages garbage collection and wear levelling. You simply can't affect that from within the OS.

And to repeat myself, data does not have to be in any particular order for optimal access, since every last cluster has the exact same access time -- unlike a traditional HDD, with its rotating mechanical platters that slow down as you get further in from the edge, and R/W heads which have to physically move to retrieve specific data.

Anandtech also has an excellent primer:
http://www.anandtech.com/show/2738
 

bononos

Diamond Member
Aug 21, 2011
3,928
186
106
Don't worry about it. As the techreport experiment showed even consumer drives are good for a petabyte or two before failing. It'll only be a problem if you do it regularly.
.......
The TechReport experiment tested MLC drives and the V-NAND 840. Today's consumer drives use TLC, which wears out faster and gets slower with lots of writes.
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
data does not have to be in any particular order for optimal access, since every last cluster has the exact same access time.
So why do random tests have the lowest scores in benchmarks versus sequential?

And, if there is no way for Windows/software to know how files are actually located/organized how can they determine what is random and what is sequential?

The other issue is OS overhead from these "virtual" fragments. Doesn't it take more work for an OS to deal with a 20 MB file that is virtually fragmented into 2000 parts versus one seemingly contiguous part?
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
That's the so-called "steady state". There was a review at HardOCP of the 128 GB Samsung 840. It was the first review to really call into question TLC drive performance, well before the TLC degradation problems surfaced. The drive was, according to them, a very poor performer in steady state in particular -- because of its small capacity and weak NAND.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
So why do random tests have the lowest scores in benchmarks versus sequential?

"Sequential" in this case means the controller already knows which clusters are needed for a particular request, so it can fetch them sequentially. Say the OS request file1.txt. The file is split between cluster A, B, C and D. Since the controller already knows this from the LBA table it keeps internally, so it can serve up A, B, C and D in order.

"Random" access can't be predicted because it is by nature random. There is a properly a pun in there somewhere :confused:. The controller then does not know which clusters will be requested. That means it'll first have to look up the request in the internal LBA table for every cluster requested, then actually power up the required cluster, before it can serve the request. This takes longer of course.

I hope that makes some sense.
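
A toy way to picture it, in Python, with completely made-up costs -- just to show the per-cluster lookup overhead described above:

# Invented numbers, only to illustrate the extra per-cluster lookup work
# the controller does for random requests versus one known sequential run.

LOOKUP_US = 5    # pretend cost to consult the internal LBA table once
READ_US   = 20   # pretend cost to read one cluster from NAND

def sequential_cost(clusters):
    # One lookup resolves the whole run (A, B, C, D...), then stream the reads.
    return LOOKUP_US + clusters * READ_US

def random_cost(clusters):
    # Every cluster needs its own lookup before it can be read.
    return clusters * (LOOKUP_US + READ_US)

print(sequential_cost(1000), "vs", random_cost(1000), "microseconds")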

And, if there is no way for Windows/software to know how files are actually located/organized how can they determine what is random and what is sequential?

The other issue is OS overhead from these "virtual" fragments. Doesn't it take more work for an OS to deal with a 20 MB file that is virtually fragmented into 2000 parts versus one seemingly contiguous part?

Storage is (mostly) managed through LBA (Logical Block Addressing) today. All the OS sees is a large number of blocks which can be filled with data. The OS doesn't really care where the files actually end up; it just knows which blocks contain what data.
 

Elixer

Lifer
May 7, 2002
10,371
762
126
Storage is (mostly) managed through LBA (Logical Block Addressing) today. All the OS sees is a large number of blocks which can be filled with data. The OS doesn't really care where the files actually end up; it just knows which blocks contain what data.
And those LBAs on the SSD are in a virtual table, but can be located on any NAND chip depending on how the firmware handles it.
Which basically means the SSD is moving the data around because of wear leveling, garbage collection, and other routines, so the data at starting location #543 (made-up address on the SSD) can be moved to #2938 at one point and to #20 the next, and so on. This is all firmware controlled. As far as the OS is concerned, it just knows the LBA address (which is recorded in the file allocation table) and hands that to the SSD; the SSD uses its lookup table to find where it actually stored that data and sends it back to the OS.
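
If it helps, here's a toy Python sketch of that last paragraph, using the same made-up addresses -- real firmware is obviously far more involved. The OS keeps asking for LBA #543 while wear leveling quietly relocates the data underneath it:

# The OS side only ever knows the LBA; the firmware side can relocate the data
# whenever wear leveling or garbage collection decides to, without telling the OS.

data_at_lba = {543: b"file contents"}   # what the OS thinks is at LBA #543
physical_of = {543: 543}                # where the firmware currently keeps it

def wear_level_pass(new_location):
    # Firmware-initiated move; the OS never sees this happen.
    physical_of[543] = new_location

def os_read(lba):
    # The OS hands over the LBA; the controller looks up the real location.
    return data_at_lba[lba], physical_of[lba]

print(os_read(543))      # (b'file contents', 543)
wear_level_pass(2938)
print(os_read(543))      # same data, now physically at #2938
wear_level_pass(20)
print(os_read(543))      # and now at #20 -- the OS is none the wiser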