I have huge files with extreme fragmentation.
Benchmarks of SSDs claim that sequential reads are much faster than random reads; am I wrong?
Would defragmentation help me get much faster reads of files of about a gig with extreme fragmentation on an SSD?
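(For anyone who'd rather measure than argue: here's a rough sketch of the kind of comparison those benchmarks make, sequential vs. random 4KB reads of one file. The path is a placeholder, and on Linux you'd need to drop the page cache between the two passes, e.g. echo 3 > /proc/sys/vm/drop_caches as root, or the second pass mostly measures RAM.)

# Sketch: read the same file once sequentially and once at random 4KB
# offsets, and compare throughput. Path is a placeholder; drop the page
# cache between runs for meaningful numbers.
import os, random, time

PATH = "/path/to/fragmented-1GB-file"   # placeholder
BLOCK = 4096                            # 4KB, like the AnandTech random test

def read_blocks(fd, offsets):
    total = 0
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        total += len(os.read(fd, BLOCK))
    return total

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
blocks = list(range(0, size - BLOCK, BLOCK))

t0 = time.time()
n = read_blocks(fd, blocks)             # sequential: offsets in order
seq = n / (time.time() - t0) / 1e6

random.shuffle(blocks)                  # same offsets, random order
t0 = time.time()
n = read_blocks(fd, blocks)
rnd = n / (time.time() - t0) / 1e6

os.close(fd)
print("sequential: %.1f MB/s, random: %.1f MB/s" % (seq, rnd))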
Wear levelling attempts to work around ... by arranging data so that erasures and re-writes are distributed evenly across the medium.
http://www.anandtech.com/storage/showdoc.aspx?i=3702&p=9
http://www.anandtech.com/storage/showdoc.aspx?i=3702&p=10
Look at those two pages. Anandtech doesn't compare the SSDs with newer hard drives, which I think is a shame, but the points these pages support are still valid.
Velociraptor:
2MB Seq. Read: 120.7MB/s
2MB Seq. Write: 119.7MB/s
4KB Random Read: 1.5?MB/s
4KB Random Write: 0.7MB/s
Fair enough? The random bars are a bit hard to read, but you can sort of tell by how big the bars are relative to each other. For an Intel X25-M G2 160GB:
2MB Seq. Read: 256.7MB/s
2MB Seq. Write: 101.7MB/s
4KB Random Read: 37.4MB/s
4KB Random Write: 64.3MB/s
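Incidentally, the "factor of 7" that comes up later in the thread falls straight out of those Intel numbers: 256.7 / 37.4 ≈ 6.9 for sequential vs. random 4KB reads.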
You don't need to be a genius, or even to make out the exact numbers, to see that in random reads and writes, mechanical hard drives are massively outclassed by even the slowest SSDs.
Sorry, I forgot about those. I'd sort of, you know, blotted them out of my mind... Actually, the slowest SSDs are the first-generation JMicron drives, which are MUCH slower than the 0.7MB/s the Velociraptor gets. You should evaluate each product on its own merits.
The fastest SSDs are indeed two orders of magnitude faster than the fastest spindle drive in random writes.
No, it isn't, because defragmentation was invented for hard drives: it works around a physical deficiency of the hard drive that just isn't there in an SSD.

This is concerning an Intel G2 SSD.
Hard drive performance is irrelevant to my questions.
Well, for one, because I run Linux, and I figure it only works on Windows. ;-)

You make it sound like it is hopeless after a defrag. Why can't you just run the Intel optimizer and have it back up to snuff again?
Because the layout of the actual data is irrelevant; you want to know how fast the thing can go regardless of HOW it internally manages to do that. In other words, the end results are what count.

But indulge me on this sequential idea. Why are SSDs benchmarked on sequential reads if their data is, basically, never laid out sequentially?
By the simple fact that sequential I/O is predictable and random I/O is not. If the drive gets two contiguous I/O requests (i.e. sectors 7 and 8), it may assume the next request will be for sector 9, so it can retrieve that data and cache it even before the command arrives. This is called read-ahead, and it's the most basic (yet effective) optimization known to modern storage.

And what makes an SSD's sequential reads faster, by a factor of 7, than its random reads?
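To make the read-ahead point concrete, here's a toy sketch (purely illustrative, not any real drive's firmware): a "drive" that notices two consecutive sector requests and speculatively fetches the next sector, so a sequential stream is mostly served from data that was already in flight before the command arrived, while random requests almost never get that benefit.

# Toy read-ahead model: after two consecutive sector requests, fetch the
# next sector early; count how many requests were already satisfied.
import random

class ReadAheadDrive:
    def __init__(self, media):
        self.media = media          # dict: sector number -> data
        self.cache = {}             # sectors fetched ahead of time
        self.last = None            # last sector requested
        self.hits = 0               # requests already satisfied by read-ahead

    def read(self, sector):
        if sector in self.cache:
            data = self.cache.pop(sector)   # ready before the command arrived
            self.hits += 1
        else:
            data = self.media[sector]       # have to go to the medium now
        if self.last is not None and sector == self.last + 1:
            # two contiguous requests: guess that sector + 1 comes next
            self.cache[sector + 1] = self.media[sector + 1]
        self.last = sector
        return data

media = {n: "data for sector %d" % n for n in range(1000)}

seq = ReadAheadDrive(media)
for n in range(100):                        # sequential workload
    seq.read(n)

rnd = ReadAheadDrive(media)
for n in random.sample(range(900), 100):    # random workload
    rnd.read(n)

print("read-ahead hits, sequential: %d/100, random: %d/100" % (seq.hits, rnd.hits))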
You're confusing two things here. The sequential vs. random reads/writes used in hard drive tests are at the OS level. All hard drives (mechanical and SSD) split files up; they *never* store a file all in the same "location" on disk (unless the file is smaller than a sector, in which case it only needs one "location", period). On mechanical hard drives, if the multiple sectors needed to store a single file are all lined up, then when you request that file the drive spends no seek time piecing together the different parts of the file. But make no mistake, it is piecing together the file.

... indulge me on this sequential idea. Why are SSDs benchmarked on sequential reads if their data is, basically, never laid out sequentially?
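Since the question is about files the filesystem reports as heavily fragmented, and the OP is on Linux: one way to look at that OS-level extent map (the thing a defragmenter actually rearranges) is filefrag from e2fsprogs. The sketch below just shells out to it and parses the summary line; the path is a placeholder, and it says nothing about where the SSD physically put the data.

# Count how many extents (separate logical runs) a file occupies,
# according to the filesystem. Requires the filefrag tool (e2fsprogs).
import subprocess

def extent_count(path):
    out = subprocess.run(["filefrag", path],
                         capture_output=True, text=True, check=True).stdout
    # typical output: "/path/file: 137 extents found"
    return int(out.rsplit(":", 1)[1].split()[0])

print(extent_count("/path/to/fragmented-1GB-file"))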
It won't change anything about the performance of your drive.

It will: all the small writes will be remapped to empty flash cells. That means the HPA table fills up, slowing down all I/O done by the SSD, and the heavy internal fragmentation may cause sequential speeds to fluctuate, because you've 'torn holes' in the SSD; it's no longer contiguous.
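To make that remapping point concrete, here's a purely illustrative toy model (not how any particular controller works) of why the logical layout a defragmenter sees and the physical layout on flash are two different things: the translation layer keeps a logical-to-physical map, and every write of a logical block lands in whatever erased flash page is convenient, no matter how contiguous the file looks to the filesystem.

# Toy flash-translation-layer model, purely illustrative: logical block
# addresses (what the OS and a defragmenter see) map to whatever flash
# page the drive picked at write time, so logically contiguous data can
# be physically scattered and vice versa.
class ToyFTL:
    def __init__(self, pages):
        self.free = list(range(pages))   # erased flash pages, in allocation order
        self.map = {}                    # logical block -> physical page

    def write(self, lba):
        # every write (including an overwrite) goes to a fresh erased page;
        # the old page would be reclaimed later by garbage collection
        self.map[lba] = self.free.pop(0)

ftl = ToyFTL(pages=32)

for lba in range(8):        # file written "sequentially": LBAs 0..7
    ftl.write(lba)

for lba in (1, 3, 5):       # a few small in-place updates by the OS
    ftl.write(lba)

# The file is still logically contiguous (LBAs 0..7), but physically it
# now occupies pages {0, 8, 2, 9, 4, 10, 6, 7}: the updates got scattered.
print({lba: ftl.map[lba] for lba in range(8)})

Which is also why a filesystem defragmenter only changes which LBAs a file occupies; the drive remains free to scatter those LBAs across the flash however it likes.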