WD 8TB gets slower the more full it is


BFG10K

Lifer
Aug 14, 2000
Hmm?
How do you figure that?
The OS (or any utility) doesn't know how the SSD actually stores the data.
What looks random or sequential to the OS (or a defragmenter) can in fact be sequential or random on the SSD itself. There is no 1:1 mapping here.
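The indirection layer (the flash translation layer, or FTL) is why. Here's a toy Python model of the idea; it's purely illustrative, and real controllers also do wear leveling, garbage collection, and page/block-granularity mapping that this skips:

import random

class ToyFTL:
    # Toy flash translation layer: maps logical pages to physical pages.
    def __init__(self, num_pages):
        self.mapping = {}                    # logical page -> physical page
        self.free = list(range(num_pages))   # physical pages still unwritten
        random.shuffle(self.free)            # the controller picks pages for its own reasons

    def write(self, logical_page):
        # NAND can't be overwritten in place, so every write goes to a fresh
        # physical page and the map is updated (reclaiming the old page is
        # garbage collection, omitted here).
        physical = self.free.pop()
        self.mapping[logical_page] = physical
        return physical

ftl = ToyFTL(num_pages=16)
for lp in range(4):                          # a "sequential" write from the OS's view
    print(f"logical page {lp} -> physical page {ftl.write(lp)}")

Logical pages 0-3 land on scattered physical pages: what the OS sees as sequential need not be sequential on the flash, and vice versa.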

NTFS metadata is cached in memory, and unless you are dealing with hundreds of TB of data, 99.9999% of users wouldn't notice any difference at all.
This is a blog post written by a Microsoft employee: https://www.hanselman.com/blog/TheRealAndCompleteStoryDoesWindowsDefragmentYourSSD.aspx

The relevant snippet is from the Windows storage team (they develop the file system and storage drivers):

Actually Scott and Vadim are both wrong. Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.
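To make that last failure mode concrete, here's a toy Python sketch of an extent list hitting a metadata cap. The cap and the names are made up for illustration; in real NTFS the limit comes from how many mapping pairs fit in the file's MFT record(s), not a fixed count:

MAX_EXTENTS = 8    # hypothetical cap on extents the metadata can represent

class ToyFile:
    def __init__(self):
        self.extents = []    # list of (start_cluster, length) runs

    def extend(self, start, length):
        # A run that continues the last extent just grows it for free;
        # a non-contiguous run costs a new metadata entry.
        if self.extents and start == sum(self.extents[-1]):
            s, l = self.extents[-1]
            self.extents[-1] = (s, l + length)
        elif len(self.extents) < MAX_EXTENTS:
            self.extents.append((start, length))
        else:
            raise OSError("too many fragments: metadata can't represent another extent")

f = ToyFile()
try:
    for i in range(20):
        f.extend(start=i * 10, length=1)   # every run lands somewhere new
except OSError as e:
    print(f"write failed after {len(f.extents)} fragments: {e}")

The volume still has free space when this fails; it's the per-file bookkeeping that runs out, which is exactly the write/extend error the storage team describes.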
 

Elixer

Lifer
May 7, 2002
That is more about bookkeeping than anything else.

The NTFS metadata files are $Mft, $LogFile, $Volume, $AttrDef, $Bitmap, $Boot, $BadClus, $Secure, $UpCase, $Extend, and possibly a few more.
So the info contained in those files can become fragmented; however, only the SSD's controller knows how it is actually stored on the flash.

The OS has no clue how the SSD stores the data internally.
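For what it's worth, on recent versions of Windows you can see the half of the picture the OS does have: running fsutil file queryextents <path> from an elevated prompt lists a file's logical extents (virtual-cluster-to-logical-cluster runs), which is the same view a defragmenter works from. Where those clusters physically sit in the NAND never leaves the controller.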
 