Bill Brasky
Diamond Member
I tested after a defrag on my Intel 510 yesterday, and even though I did not see any higher scores, I did win Safety Bingo at work and am buying pizza for the plant tomorrow.
Excellent! :thumbsup:
This thread got me thinking, "I wonder if win7 recognizes the fact that I have an SSD and doesn't schedule regular defrags?"
The answer: No, it does not. My computer has been set, by default, to automatically defrag on a schedule.
"Which is what some of the others have rightly tried to get across. "
Yet they have failed to show any sort of positive results. Let TRIM/garbage collection and the firmware do their work. Trying to defrag an SSD with something designed for HDD hardware is silly.
Did you read at all what I wrote?
As long as random reads are slower than sequential reads (look at the benchmarks!), defragmentation (i.e. rendering "sequential data" in the form of big files "actually sequential" by putting it into a contiguous LBA space) has a place. If the SSD can 'predict' the next LBA being read, which it can for sequential reads, then it can read almost ten times faster than when it can't. The reason you don't see this a lot is that fragmentation has become relatively rare. But if you use an SSD for a while, with a lot of writes and a relatively full disk, you will get fragmentation, your random-to-sequential read ratio will increase, and performance will decrease.
There's really no myth there. The only reason defragmentation is not as important now is that in the age of HDDs, random reads were another order of magnitude or two slower. We just don't really notice that our big file read slowed to 50MB/s from a supposed 500MB/s; it's not nearly as noticeable as when it slows to below 1MB/s, as it is prone to with HDDs.
This doesn't change the principle of fragmentation, or the fact that sequential reads will always be faster.
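If you want to see the sequential-vs-random gap on your own drive, here's a rough Python sketch. Everything in it is a placeholder test setup, not a proper benchmark: for meaningful numbers you'd need a file much larger than RAM and cold caches between passes, otherwise you're mostly timing the OS page cache. It just demonstrates the two access patterns over the same data.

```python
import os, random, time

PATH = "testfile.bin"      # placeholder name; point a real test at the drive in question
CHUNK = 64 * 1024          # 64KiB per read (bump this way up for a real benchmark)
N_CHUNKS = 128             # total file size = CHUNK * N_CHUNKS

# Create a scratch file to read back.
with open(PATH, "wb") as f:
    for _ in range(N_CHUNKS):
        f.write(os.urandom(CHUNK))

def timed_read(offsets):
    """Read CHUNK bytes at each offset; return (seconds, bytes_read)."""
    total = 0
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(CHUNK))
    return time.perf_counter() - start, total

seq = [i * CHUNK for i in range(N_CHUNKS)]   # contiguous order: the "defragmented" case
rnd = seq[:]
random.shuffle(rnd)                          # same data, scattered access order

t_seq, n_seq = timed_read(seq)
t_rnd, n_rnd = timed_read(rnd)
print(f"sequential: {n_seq / t_seq / 1e6:.1f} MB/s")
print(f"random:     {n_rnd / t_rnd / 1e6:.1f} MB/s")
os.remove(PATH)
```

Note this shuffles the read *order* over one contiguous file; an actually fragmented file forces the scattered pattern on you even when you read it front to back.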
"As long as random reads are slower than sequential reads (look at the benchmarks!), defragmentation (i.e. rendering "sequential data" in the form of big files "actually sequential" by putting it into a contiguous LBA space) has a place."
Except that doesn't even happen with HDDs, except on rare super-fragmented files, synthetic benchmarks, and Exdeath's Java project backups 🙂. Windows 7 w/ AHCI (read: NCQ) pretty well takes care of it on HDDs. With SSDs, the only real concern is the extra logical IOs added for a given amount of data read and written, and it will take severe fragmentation for that to be noticeable.
Though it sometimes happens on NTFS, IME. FSes like EXT3/4, JFS, XFS, and so on will only ever get bad enough to worry about if they get too full and then medium-sized files get small edits. NTFS seems to be just enough of a throwback to be able to get such fragmentation with some files, by whatever pathological editing pattern allows it, even with enough free space. Even on NTFS, though, it will tend to be a rare issue, and a manual copy+delete+replace should fix it for months to come, when/if it occurs (an SSD-tuned defrag service could do that only with files averaging < x MB per fragment and more than y fragments, and keep NTFS good for many more years, with a negligible increase in host writes over time).

"2. Said single file is in fragments smaller than 0.5MiB (unnatural fragmentation level)"
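That selective copy+delete+replace idea is easy to script. Here's a rough Python sketch of it; to be clear, `fragment_count` is a stand-in (a real Windows implementation would query FSCTL_GET_RETRIEVAL_POINTERS, which is what defraggers use), and the thresholds are just the "x" and "y" placeholders from above, not recommended values:

```python
import os, shutil

# Placeholder thresholds -- the "x" and "y" from the post above.
MAX_AVG_FRAGMENT_MB = 2    # only bother if fragments average under this size
MIN_FRAGMENTS = 50         # ...and the file is in at least this many pieces

def fragment_count(path):
    """Stand-in: a real version would query the filesystem (on Windows,
    FSCTL_GET_RETRIEVAL_POINTERS; on Linux, filefrag/FIEMAP). Returning 1
    here means 'assume contiguous' for files we can't inspect."""
    return 1

def maybe_rewrite(path):
    """Copy+delete+replace a file, but only if it's pathologically fragmented.
    Rewriting the whole file lets the filesystem allocate it fresh (hopefully
    contiguously), at the cost of one full file's worth of host writes."""
    frags = fragment_count(path)
    size_mb = os.path.getsize(path) / (1024 * 1024)
    if frags < MIN_FRAGMENTS or (size_mb / frags) > MAX_AVG_FRAGMENT_MB:
        return False  # healthy enough; leave it alone and save the write cycles
    tmp = path + ".defrag_tmp"
    shutil.copy2(path, tmp)   # fresh copy gets a new allocation
    os.replace(tmp, path)     # atomic swap; the original's blocks are freed
    return True
```

The skip condition is the whole point: by only touching files that are both in many pieces *and* averaging tiny fragments, the extra host writes stay negligible over time.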
Or, to have a ton of fragments to read. If stuck at a QD of 1, a faster drive will still be a faster drive, and no one would go without NCQ unless they had a reason to (such as adding a new drive to existing hardware/drivers that will not support it).

Ah, but even if conditions 1 & 2 occur, you need condition 3 to also occur.
We are at a point where a 10MB file in 30 evenly-sized fragments should be considered moderate fragmentation for an HDD, unless copying a bunch of them is all you do, and hardly worth mentioning for an SSD. A 10MB file with >100 little 4-16K edits scattered across the drive's address space over its lifetime...now, that's a problem. Rarer than in the NT 4 days, certainly, but I've still seen it a few times on Windows 7. There's no way that level of fragmentation is not going to cause lower performance.
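The useful metric here is average fragment size, not raw fragment count. A small sketch of that distinction in Python; the cutoffs are just my reading of the examples above, not any standard:

```python
def fragmentation_severity(file_size_bytes, fragments):
    """Classify by average fragment size. Cutoffs are illustrative:
    a 10MB file in 30 pieces (~340KB/fragment) is merely 'moderate',
    while the same file shredded into 4-16K edits is the pathological
    case actually worth fixing."""
    if fragments <= 1:
        return "contiguous"
    avg = file_size_bytes / fragments
    if avg < 64 * 1024:          # tiny scattered edits: tons of extra IOs
        return "pathological"
    if avg < 1024 * 1024:        # sub-1MB fragments: noticeable on HDDs
        return "moderate"
    return "mild"                # big contiguous runs: near-sequential anyway

# the two cases from the post:
print(fragmentation_severity(10 * 1024 * 1024, 30))    # → moderate
print(fragmentation_severity(10 * 1024 * 1024, 640))   # 16K average → pathological
```

This is why "30 fragments" alone says little: 30 multi-hundred-KB runs still read near-sequentially, while hundreds of 4-16K pieces turn a big "sequential" read into mostly random IO.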