new intel ssd user: stupid question regarding defrag and memory management

endervalentine

Senior member
Jan 30, 2009
700
0
0
maybe some of the gurus like Idontcare or Mr Fox can chime in. At first I thought it was ridiculous to 'defrag' an SSD, but I thought I read somewhere that there is something you can run to help improve the drive's performance. Is this true?

Also, what is all the hoopla about the TRIM functionality? I have the G1 version and will move to Win7, but it seems like I should have waited for the G2 version? In very basic layman's terms, what does TRIM offer?

And lastly, with an SSD, is there a direct effect on the usage of RAM or virtual memory in the OS? My thinking is that there isn't any effect, since the HD sits behind the RAM and virtual memory ... the SSD just does things faster and doesn't deal with memory management?

sorry for the noob questions, but I figure I should understand how the SSD works, not just that it works! :)
 

jdjbuffalo

Senior member
Oct 26, 2000
433
0
0
My G2 is on the way and I've researched a lot on the current state of SSDs.

You can run something to improve the performance of the drive; this is what TRIM does automatically, and it's why everyone wants it. TRIM basically sends a command from the OS (Windows) telling the drive that a file has been deleted. That way, when it comes time to write to those sectors again, the speed will be almost the same as if it were a clean drive with nothing on it (see the link for a more detailed explanation). Intel has stated they will release a manual tool that does something very similar to TRIM. Other SSD makers have already come out with tools like this, but there isn't one specifically for Intel yet. I believe the tentative date was before the end of the year.
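To make that concrete, here's a toy model (purely illustrative Python, nothing like real firmware) of the LBA mapping the drive keeps internally, and what a TRIM command lets it do:

# Toy flash translation layer -- illustrative only, not real firmware.
# The drive maps logical block addresses (LBAs) to physical pages and
# never overwrites NAND in place; old copies pile up as stale pages.
class ToySSD:
    def __init__(self, num_pages):
        self.mapping = {}                     # LBA -> physical page
        self.free_pages = list(range(num_pages))
        self.stale_pages = []                 # old copies awaiting erase

    def write(self, lba):                     # payload omitted in this toy
        if lba in self.mapping:
            self.stale_pages.append(self.mapping[lba])
        self.mapping[lba] = self.free_pages.pop()

    def trim(self, lba):
        # The OS says "this LBA no longer holds a valid file", so the
        # drive can stop preserving it and reclaim the page at leisure.
        if lba in self.mapping:
            self.stale_pages.append(self.mapping.pop(lba))

ssd = ToySSD(num_pages=8)
ssd.write(0)
ssd.write(0)      # rewrite: LBA 0 silently moves to a new physical page
ssd.trim(0)       # without this, a deleted file's pages stay "valid"
print(ssd.mapping, ssd.stale_pages)   # -> {} [7, 6]

Without the trim() call, the drive would keep copying that dead data around during its internal cleanup forever, which is exactly why write speeds degrade on a well-used drive with no TRIM support.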

Since SSDs are small and expensive, it's best to keep your page file (virtual memory) to a minimum, provided you have plenty of RAM and don't max it out all the time.

More information on TRIM:
http://www.anandtech.com/stora...owdoc.aspx?i=3531&p=10
 

latch

Member
Jul 23, 2007
66
0
0
Win7 should detect the SSD and automatically configure a few settings for it, including not defragging the drive. Some people have said that defrag still looks enabled, but it never actually runs. Depending on the performance of your SSD (and your G1 will definitely be fast enough, no worries there), Win7 will also disable ReadyBoost, Superfetch and any prefetching.
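If you want to check what Win7 actually decided about TRIM, the OS exposes the setting through fsutil; here's a quick sketch of querying it from Python (Windows only, and note it only tells you whether the OS issues TRIM commands, not whether the drive honors them -- a G1 won't):

# Ask Windows whether delete notifications (TRIM) are enabled.
# DisableDeleteNotify = 0 means the OS sends TRIM to the drive.
import subprocess

out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True,
).stdout
print(out.strip())
print("OS-level TRIM is", "on" if "= 0" in out else "off (or query failed)")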

Others have frequently commented on disabling indexing, using a fixed page file size, moving the page file to a different disk, and moving your hibernate file to a different disk. The reason you'd move those two files to a different disk has more to do with size: with an 80GB drive and 8GB of RAM, your hibernate file alone will represent 10% of your total disk space, which sucks.
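The math is easy to check (hiberfil.sys is roughly the size of your installed RAM):

# hiberfil.sys is roughly the size of installed RAM; see what share
# of the SSD it eats (numbers from the post above).
ram_gb, ssd_gb = 8, 80
print(f"hibernate file: {ram_gb / ssd_gb:.0%} of the drive")   # -> 10%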
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
The typical answer is that no, you don't want to defrag an SSD, because the LBA allocation (corresponding to the physical location of the data) doesn't match the file system allocation. That is, if you defrag, you will most likely fragment the data already on the drive even further. Some solutions like HyperFast [1] have been proposed, but they haven't been shown to work with drives that use this form of LBA indirection (e.g. Intel drives).

[1] http://www.diskeeper.com/
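To see why a defrag pass buys you nothing, here's a small illustration (toy Python model again, not firmware) of the indirection: making a file's LBAs sequential says nothing about where the data lands physically.

# Toy model: every "defrag" copy is just another write, and the drive
# remaps each write to whatever physical page it likes.
import random

mapping = {}                        # LBA -> physical page
free = list(range(1000))
random.shuffle(free)                # the drive picks physical placement

for lba in (10, 11, 12, 13):        # a file "defragged" onto
    mapping[lba] = free.pop()       # consecutive LBAs 10..13
print([mapping[lba] for lba in (10, 11, 12, 13)])
# Consecutive LBAs, scattered physical pages: the defrag bought nothing
# and cost four extra writes' worth of flash wear.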

I've always found this quote (from Intel) to be helpful for understanding:

SSDs all have what is known as an "Indirection System", aka an LBA allocation table (similar to an OS file allocation table). LBAs are not typically stored in the same physical location each time they are written. If you write LBA 0, it may go to physical location 0, but if you write it again later, it may go to physical location 50, or 8.567 million, or wherever. Because of this, all SSDs' performance will vary over time and settle to some steady state value. Our SSD dynamically adjusts to the incoming workload to get the optimum performance for the workload. This takes time. Other lower performing SSDs take less time as they have less complicated systems. HDDs take no time at all because their systems are fixed logical to physical systems, so their performance is immediately deterministic for any workload IOMeter throws at them.

The Intel® Performance MLC SSD is architected to provide the optimal user experience for client PC applications; however, the performance SSD will adapt and optimize the SSD's data location tables to obtain the best performance for any specific workload. This is done to provide the ultimate user experience; however, it provides occasional challenges in obtaining consistent benchmark testing results when changing from one specific benchmark to another, or in benchmark tests not running with sufficient time to allow stabilization. If any benchmark is run for sufficient time, the benchmark scores will eventually approach a steady state value; however, the time to reach such a steady state is heavily dependent on the previous usage case. Specifically, highly random heavy write workloads or periodic hot spot heavy write workloads (which appear random to the SSD) will condition the SSD into a state which is uncharacteristic of client PC usage, and require longer usage in characteristic workloads before adapting to provide the expected performance.

When following a benchmark test or IOMeter workload that has put the drive into this state which is uncharacteristic of client usage, it will take significant usage time under the new workload conditions for the drive to adapt to the new workload, and it will therefore provide inconsistent (and likely low) benchmark results for that and possibly subsequent tests, and can occasionally cause extremely long latencies. The old HDD concept of defragmentation applies, but in new ways. Standard Windows defragmentation tools will not work.

SSD devices are not aware of the files written within, but are rather only aware of the Logical Block Addresses (LBAs) which contain valid data. Once data is written to a Logical Block Address (LBA), the SSD must now treat that data as valid user content and never throw it away, even after the host "deletes" the associated file. Today, there is no ATA protocol available to tell the SSDs that the LBAs from deleted files are no longer valid data. This fact, coupled with highly random write testing, leaves the drive in an extremely fragmented state which is optimized to provide the best performance possible for that random workload. Unfortunately, this state will not immediately result in characteristic user performance in client benchmarks such as PCMark Vantage, etc. without significant usage (writing) in typical client applications allowing the drive to adapt (defragment) back to a typical client usage condition.

In order to reset the state of the drive to a known state that will quickly adapt to new workloads for best performance, the SSD's unused content needs to be defragmented. There are two methods which can accomplish this task.

One method is to use IOMeter to sequentially write content to the entire drive. This can be done by configuring IOMeter to perform a 1 second long sequential read test on the SSD drive with a blank NTFS partition installed on it. In this case, IOMeter will "Prepare" the drive for the read test by first filling all of the available space sequentially with an IOBW.tst file, before running the 1 second long read test. This is the most "user-like" method to accomplish the defragmentation process, as it fills all SSD LBAs with "valid user data" and causes the drive to quickly adapt for a typical client user workload.

An alternative method (faster) is to use a tool to perform a SECURE ERASE command on the drive. This command will release all of the user LBA locations internally in the drive and result in all of the NAND locations being reset to an erased state. This is equivalent to resetting the drive to the factory shipped condition, and will provide the optimum performance.
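For what it's worth, you don't strictly need IOMeter for the first method; anything that sequentially fills the free space will do. A rough Python sketch of the same idea (the path is an assumption -- point it at a blank, freshly formatted partition, never your system drive; it writes one big throwaway file until the disk is full, then deletes it):

# Rough stand-in for IOMeter's "Prepare" pass: sequentially fill a blank
# partition with one big file, then delete it. Path and chunk size are
# assumptions; use an empty partition, NOT your system drive.
import os

path = "E:\\fill.tst"                    # hypothetical blank partition
chunk = b"\x00" * (4 * 1024 * 1024)      # 4 MiB sequential writes

f = open(path, "wb", buffering=0)        # unbuffered, so writes hit the OS
try:
    while True:
        f.write(chunk)
except OSError:                          # disk full -- we're done
    pass
finally:
    f.close()
os.remove(path)

The secure erase route is faster and cleaner, but it wipes everything, so it's really only an option right before a reinstall.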