
Huge Files Highly Fragmented: Will Defrag Help SSD Speed?

scaryfast

Member
I have huge files with extreme fragmentation.

Benchmarks on SSDs show that sequential reads are much faster than random reads; am I wrong?

Would defragmentation get me much faster reads of files of about a gig with extreme fragmentation on an SSD?
 
I am not an expert, but I am pretty sure SSDs by design are not supposed to be defragged. They don't need to be. They are never fragmented.
 
Don't defrag your SSD. You'll wear the drive down by writing over and over. Use TRIM if the drive supports it.
 
File fragmentation has very little effect on files on normal drives, and SSDs have virtually zero seek time, so any effect on speed will be that much smaller.
 
No point. Defragmentation improves mechanical-drive performance because seek times are brutal. Since SSDs have no effective seek time, defragmentation is just an exercise in futility and in making your SSD wear out faster. Try garbage collection or TRIM. They should help.
 
I have huge files with extreme fragmentation.

Benchmarks on SSDs show that sequential reads are much faster than random reads; am I wrong?

Would defragmentation get me much faster reads of files of about a gig with extreme fragmentation on an SSD?

What do you mean by extreme fragmentation? If a 1TB file is split into 12 fragments, that's not terrible at all.

That having been said, do not defrag an SSD. Use TRIM. If you can't, then just consolidate free space; free-space fragmentation matters more on SSDs.
 
It seems, then, that random access is not the same thing as a fragmented file.

Please tell me why a gig file in thousands of fragments is not the same thing as random access. What am I missing?

Will I still get virtually sequential speeds even though I have a highly fragmented file?
 
What happens when a mechanical hard drive tries to read a file in thousands of little bits scattered all over the platter is that the read head has to move to each of those thousands of places in turn. And each move takes, on average, about 10ms. In this instance, the time it takes the drive to actually read the data is insignificant compared to the time it takes getting to each fragment. When you defragment a drive, it moves all the fragments of a file into one piece, so that the read head has a minimal distance to move and the file can be read a lot faster. This is why defragmenting a mechanical hard drive improves performance: you are turning lots of what are, for all intents and purposes, random reads into one big sequential read. And if you look at any benchmark of a hard drive, you will see that sequential read speeds (usually on the order of 60-120MB/s) are much higher than random read speeds (maybe 1-2MB/s at most).

If you look at a benchmark of an SSD, however, you will notice that the random access time of an SSD is not 10ms. It is not even 1ms; it's on the order of tens of microseconds. This means that no matter where on the drive your data happens to be, the SSD will most likely get to it within a single millisecond. Random reads are therefore much faster on SSDs than on mechanical hard drives, 10-100 times faster. Which is why there is no real point in defragmenting SSDs: random read performance is already so good that it can't get that much better. And in return you're thrashing your SSD, using up its precious write and erase cycles and shortening its lifespan.
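The arithmetic in those two paragraphs can be sketched as a toy model. All the constants below are illustrative assumptions (10ms seek, ~0.1ms SSD access, round-number throughputs), not measurements of any real drive:

```python
# Toy model: time to read a file = transfer time + one access penalty
# per fragment. All constants are illustrative assumptions.

HDD_SEEK_S = 0.010      # ~10 ms average seek per fragment (mechanical)
HDD_MBPS = 100.0        # assumed sequential throughput
SSD_ACCESS_S = 0.0001   # ~0.1 ms per request (SSD)
SSD_MBPS = 250.0

def read_time_s(size_mb, fragments, access_s, mbps):
    """Seconds to read size_mb split into `fragments` pieces."""
    return size_mb / mbps + fragments * access_s

for frags in (1, 1000, 10000):
    hdd = read_time_s(1024, frags, HDD_SEEK_S, HDD_MBPS)
    ssd = read_time_s(1024, frags, SSD_ACCESS_S, SSD_MBPS)
    print(f"{frags:>5} fragments: HDD {hdd:7.2f}s   SSD {ssd:5.2f}s")
```

With 10,000 fragments the modeled HDD spends around 100 seconds just seeking, while the SSD's access penalty adds only about one second; that is the whole argument against defragging an SSD, in two lines of arithmetic.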
 
Yuriman seems to be saying that there is no true sequential read on an SSD.

Mr. Pedantic, you are saying that a hard drive often gets 60 times the performance on a sequential read.

It seems, from my poor memory, that some benchmarks show 2 to 3 times the performance on an SSD.

And on my fragmented file, it seems most would say I would only get an extra 5 to 25% or so performance gain by defragmenting, and that would come at the cost of extra wear.

Is that right? Or could I expect a 100% or greater performance increase?
 
http://www.anandtech.com/storage/showdoc.aspx?i=3702&p=9
http://www.anandtech.com/storage/showdoc.aspx?i=3702&p=10

Look at those two pages. Anandtech doesn't compare the SSDs with newer hard drives, which I think is a shame, but the points these pages support are still valid.

Velociraptor:

2MB Seq. Read: 120.7MB/s
2MB Seq. Write: 119.7MB/s
4KB Random Read: ~1.5MB/s
4KB Random Write: 0.7MB/s

Fair enough? The random bars are a bit hard to read, but you can sort of tell by how big the bars are relative to each other. For an Intel X25-M G2 160GB:

2MB Seq. Read: 256.7MB/s
2MB Seq. Write: 101.7MB/s
4KB Random Read: 37.4MB/s
4KB Random Write: 64.3MB/s

You don't need to be a genius, or indeed, to even make out the numbers, to know that in random reads and writes, mechanical hard drives are utterly outclassed by even the slowest SSDs.
 
You don't need to be a genius, or indeed, to even make out the numbers, to know that in random reads and writes, mechanical hard drives are utterly outclassed by even the slowest SSDs.

Actually, the slowest SSDs are the first-generation JMicron drives, which are MUCH slower than the 0.7MB/s the VelociRaptor gets. You should evaluate each product on its own merits.
The fastest SSDs are indeed two orders of magnitude faster than the fastest spindle drive in random writes.
 
This is concerning an Intel G2 SSD.
Hard drive performance is irrelevant to my questions.

taltamir, your last post shows random reads can be about 7 times slower.

Would defragmenting a gig file that was fragmented into 4KB pieces result in a 7-times-faster sequential read?
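As a sanity check on that factor of 7, here is the worst case worked through with the X25-M G2 numbers quoted earlier in the thread (37.4MB/s at 4KB random, 256.7MB/s sequential). This assumes the whole file is read as pure 4KB random I/O, which real fragmentation almost never causes, so treat it as an upper bound:

```python
# Upper-bound estimate using the X25-M G2 figures quoted in this thread.
size_mib = 1024          # a "gig file"
random_mbps = 37.4       # 4KB random read speed (from the benchmark above)
seq_mbps = 256.7         # 2MB sequential read speed (from the benchmark above)

t_random = size_mib / random_mbps   # every block read as random 4KB I/O
t_seq = size_mib / seq_mbps         # one ideal sequential pass
print(f"random: {t_random:.1f}s  sequential: {t_seq:.1f}s  "
      f"ratio: {t_random / t_seq:.2f}x")
```

So roughly 6.9x is the ceiling, and only in the degenerate case where every read really is a scattered 4KB request; with fragments of even a few hundred KB each, the gap mostly disappears.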
 
OP, don't defrag your SSD. TRIM it.

What is the difference? Defrag is built for a drive that lets the OS control the physical placement of data; it then combines and condenses that data to improve performance.
SSDs do NOT give the OS true access to the data layout. The drive keeps a table that converts each "OS address" to an "actual physical address", and it updates that table as it dynamically remaps writes (to avoid wearing out specific portions faster). As a result, defrag just makes a mess of an SSD with no benefit whatsoever; in fact it makes performance WORSE.
TRIM tells the drive to treat certain entries in that table as "expired data", which means the SSD controller can then get rid of them to maximize performance. (Think of TRIM as the SSD equivalent of defrag.)

It is a lot more complicated than that, but this is the basic premise. Read this:
http://en.wikipedia.org/wiki/TRIM
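The remapping-table idea above fits in a few lines of code. This is a deliberately tiny toy with invented names, showing how a controller maps logical blocks to physical cells and what TRIM does to that table; real flash translation layers are far more complex:

```python
class ToySSD:
    """Toy flash translation layer: logical block -> physical cell."""

    def __init__(self, cells):
        self.mapping = {}                    # the remapping table
        self.free_cells = list(range(cells))

    def write(self, block):
        # Wear leveling: every write lands in a fresh cell; the old
        # cell (if any) is left as stale garbage for later collection.
        self.mapping[block] = self.free_cells.pop(0)

    def trim(self, block):
        # TRIM: the OS marks this entry as expired data, so the
        # controller can erase the cell and reuse it.
        cell = self.mapping.pop(block, None)
        if cell is not None:
            self.free_cells.append(cell)

ssd = ToySSD(cells=8)
ssd.write(0)    # data lands in cell 0
ssd.write(0)    # "overwrite" is really a remap to cell 1; cell 0 is stale
ssd.trim(0)     # delete + TRIM: cell 1 goes back into the free pool
```

This is also why defrag backfires: the OS shuffles logical blocks around, but every move is just another write churning through this table, and the tidy physical layout it was aiming for never exists on the flash itself.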
 
Actually, the slowest SSDs are the first-generation JMicron drives, which are MUCH slower than the 0.7MB/s the VelociRaptor gets. You should evaluate each product on its own merits.
The fastest SSDs are indeed two orders of magnitude faster than the fastest spindle drive in random writes.
Sorry, I forgot about those. I'd sort of, you know, blotted them out of my mind...

This is concerning an Intel G2 SSD.
Hard drive performance is irrelevant to my questions.
No, it isn't, because defragmentation was invented for hard drives, to work around a physical deficiency of the hard drive that just isn't there in an SSD.
 
Sorry, I forgot about those. I'd sort of, you know, blotted them out of my mind...

understandable...

Let's do a quick Q&A...
Q: Can I defrag my SSD?
A: You CAN, but you shouldn't, ever!

Q: What will happen if I defrag my SSD?
A: It will get slower, and it will also lose some of its lifespan.

Q: Doesn't that mean that spindle drives are superior?
A: No; even without defrag your SSD is much, much faster than a spindle drive.

Q: Is there ANYTHING I can do to bring the speed of my used SSD back to like-new status?
A: Yes, TRIM it. TRIM allows SSDs to maintain like-new speeds.

Q: I defragged my SSD! Can I undo the damage?
A: The "it will get slower" part can be undone by TRIMing it (causing it to recover to "brand new" speeds); or simply using it normally (without defragging) will let it recover (to "used" speeds) over time. The "lose lifespan" part cannot be undone. Don't worry, though; it should still outlast every other component in your computer.

Q: What is TRIM?
A: http://en.wikipedia.org/wiki/TRIM

Q: My Intel G1 doesn't support TRIM and never will! Should I go back to a spindle drive?
A: No, your Intel G1 is still a lot faster than a spindle drive even without TRIM or defrag.

Q: How slow is the "used" speed?
A: About 60% of the "brand new" speed. Still faster than a spindle drive, but not as good as it can be.

Q: Does the used speed continually deteriorate?
A: No, it levels off. It might spike up and down, but it will return to a certain value over time.

Q: What can I do to get back to new speed?
A: TRIM will bring you back to new speed and will keep you there as long as it is functioning.
 
Thank You.

I pretty much understand the basics: I will not defrag, and I will always use TRIM (as done with the Intel Toolbox on Vista) when more than just a few things have been deleted.

But indulge me on this sequential idea. Why are SSDs benchmarked on sequential reads if their data is, basically, never laid out sequentially?

And what makes an SSD's sequential reads faster, by a factor of 7, than its random reads?
 
If you ever defragment a modern SSD, you will have done the exact opposite and made a mess of the SSD's internal flash-cell remapping table; while Windows sees all data tidily together, in reality the SSD was forced to fragment the data a lot. That's not so bad by itself, but the mapping tables that have to be checked for EVERY I/O operation will become fuller and slow down all I/O done to the SSD. After you defragment your SSD, you would have to completely wipe or secure HPA erase the surface to reset the mapping table. A format won't do this, by the way; you need a zero-write or secure HPA erase utility. Simple SSDs that do not remap writes won't be affected by this. NEVER defragment your SSD; doing so will cause lower performance.
 
You make it sound like it is hopeless after a defrag.
Why can't you just run the Intel optimizer and have it up to snuff again?

But indulge me on this sequential idea. Why are SSDs benchmarked on sequential reads if their data is, basically, never laid out sequentially?

And what makes an SSD's sequential reads faster, by a factor of 7, than its random reads?
 
You make it sound like it is hopeless after a defrag.
Why can't you just run the Intel optimizer and have it up to snuff again?
Well, for one, because I run Linux, and I figure it only works on Windows. ;-)
But yes, you're right, such a utility is nice.

But indulge me on this sequential idea. Why are SSDs benchmarked on sequential reads if their data is, basically, never laid out sequentially?
Because the layout of the actual data is irrelevant; you want to know how fast the thing can go regardless of HOW it internally manages to do that. In other words, the end results count.

The operating system sees the SSD as a flat block-level storage device, accessible from sector 1 to sector 999-whatever, each sector being 512 bytes. And it stores files sequentially because that is traditionally the fastest for mechanical media. Not necessarily for flash, by the way.

And what makes an SSD's sequential reads faster, by a factor of 7, than its random reads?
By the simple fact that sequential I/O is predictable and random I/O is not. If the drive gets two contiguous I/O requests (i.e. sectors 7 and 8), it may assume the next request will be for sector 9, so it can retrieve that data and cache it even before the command arrives. This is called read-ahead, and it's the most basic (yet effective) optimization known to modern storage.
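Read-ahead as described there fits in a few lines. A minimal sketch (invented class, single-sector prefetch cache) of a device that starts prefetching once it sees two contiguous requests:

```python
class ReadAhead:
    """Minimal read-ahead: prefetch sector n+1 after seeing n-1, n."""

    def __init__(self):
        self.last = None      # previous requested sector
        self.cached = None    # sector we prefetched speculatively
        self.hits = 0

    def read(self, sector):
        if sector == self.cached:
            self.hits += 1            # served from the prefetch cache
        if self.last is not None and sector == self.last + 1:
            self.cached = sector + 1  # predict and fetch the next one
        self.last = sector

dev = ReadAhead()
for s in (7, 8, 9, 10):   # sequential pattern: the drive can predict it
    dev.read(s)
print(dev.hits)           # prints 2: sectors 9 and 10 were prefetched

dev2 = ReadAhead()
for s in (7, 42, 3, 19):  # random pattern: nothing to predict
    dev2.read(s)
print(dev2.hits)          # prints 0
```

The asymmetry in the benchmark falls straight out of this: a sequential workload is served mostly from data fetched in advance, while a random workload pays full price on every request.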

Because HDDs are serial in operation, the interface doesn't buffer I/O commands effectively enough to allow the SSD to saturate its multiple flash channels. In essence, the SSD doesn't get utilized enough, much like a single-threaded application running on an 8-core CPU.

That should improve with SATA 6Gbps, and Intel uses NCQ in SATA 3Gbps to allow for command buffering/queueing. This helps increase random IOPS, if the controller can keep up, of course. The OS can then send more I/Os to the SSD without waiting for confirmation, so the SSD can use parallel I/O on its 8 flash channels.

SSDs theoretically have almost infinite performance; in theory they could access all cells at once. That will stay theory for a long time, I figure. 😀
 
... indulge me on this sequential idea. Why are SSDs benchmarked on sequential reads if their data is, basically, never laid out sequentially?
You're confusing two things here. The sequential vs. random reads/writes used in drive tests are at the OS level. All drives (mechanical and SSD) split files up; they *never* store a file all in one "location" on disk (unless the file is smaller than a sector, in which case it only needs one "location", period). On a mechanical hard drive, if the multiple sectors needed to store a single file are all lined up, then when you request that file the drive has no seek time while piecing together the different parts of the file. But make no mistake: it is piecing together the file.

As others have said, SSDs don't have to seek; they can just call up all the memory blocks that store the different parts of any given file and piece them together (because remember, [almost] all files have to be pieced together).

Hope that helps.
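To make the "piecing together" concrete: a filesystem records a file as a list of block runs (extents), and reading the file just means walking that list, whether or not the runs happen to be adjacent. A small sketch with hypothetical structures:

```python
def read_file(extents, disk):
    """Read a file stored as (start_block, length) extents."""
    data = []
    for start, length in extents:
        data.extend(disk[start:start + length])  # one contiguous run
    return data

disk = list(range(100))                        # pretend block device
contiguous = [(10, 4)]                         # "defragmented": 1 extent
fragmented = [(10, 1), (50, 1), (23, 1), (77, 1)]  # 4 scattered extents

# Same amount of data is assembled either way. On a spinning disk each
# extra extent costs a head seek; on an SSD it's just another lookup.
assert len(read_file(contiguous, disk)) == len(read_file(fragmented, disk))
```

Defragmentation only shortens the extent list; it never changes how much data gets read, which is why it matters on mechanical drives (fewer seeks) and barely at all on SSDs.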
 
It is not possible to truly "defrag" an SSD, regardless of what your defrag software tells you.

The controller on an SSD remaps everything to random locations, so even if you defrag to get that pretty drive map, it won't change anything about the performance of your drive, and in truth will just wear it out (flash memory can only be written so many times before it fails).

Besides, there is no point; the entire idea of an SSD is that you don't have to defrag, because everything is accessible almost instantly.

BTW, the differences between random reads and sequential reads in benchmarks have nothing to do with the physical location of files on the drive. Accessing one 1GB file is a simple request to the controller; accessing 1,000 1MB files quickly is not. You have no control over this.

Bottom line, sit back and enjoy your drive. If you really want to learn, then read all the SSD articles that AnandTech has published, they have done a very good job with this topic.
 
it won't change anything about the performance of your drive
It will: all the small writes get remapped to empty flash cells, meaning the mapping table becomes full and slows down all I/O done by the SSD, and heavy internal fragmentation may cause sequential speeds to fluctuate, because you 'tore holes' in the SSD; it's no longer contiguous.

In that state, you would need to secure HPA erase or zero-write the entire disk to reset the tables and make the SSD a normal disk again, where sector 1 = flash cell 1, etc.
 