Optimization, esp. for SSDs, via PerfectDisk!!

Virgorising

Diamond Member
Apr 9, 2013
4,470
0
0
Well!

I just installed my first SSD, a Crucial M500 240GB.

I had naturally set PerfectDisk not to go near it, only my backup mechanical drive.

I just opened PD, saw this below, went to Raxco and saw this amazing video on exactly how PD for SSDs can AUGMENT TRIM if you tell it to!

Here is the link to the amazing video:
http://www.raxco.com/resources/articles/ssd-optimization

[attached screenshot]
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,693
136
Sounds an awful lot like snake oil to me. The LBA the OS sees has absolutely no coupling to what's happening internally in the SSD.

At best this kind of "optimization" is harmless; at worst it'll cost you PE cycles. Besides, it's completely unnecessary, since every block has the exact same access time and any newer SSD has its own very effective garbage collection.

In short, leave the SSD controller to do its thing. No need for user intervention.
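To illustrate the decoupling, here's a minimal toy sketch (not any vendor's actual firmware; every structure here is invented for illustration):

```python
# Toy flash translation layer (FTL): the OS only ever addresses LBAs,
# while the controller remaps every write to whatever physical page it
# likes. "Optimizing" LBA layout says nothing about physical placement.

class ToyFTL:
    def __init__(self, pages_per_block=256):
        self.pages_per_block = pages_per_block
        self.mapping = {}        # LBA -> (physical block, page)
        self.next_free_page = 0  # simplified: always append to fresh pages

    def write(self, lba):
        # NAND pages cannot be overwritten in place, so rewriting an LBA
        # lands on a new page and the old one simply becomes stale.
        phys = self.next_free_page
        self.next_free_page += 1
        self.mapping[lba] = divmod(phys, self.pages_per_block)

ftl = ToyFTL()
ftl.write(100)
print(ftl.mapping[100])  # e.g. (0, 100)
ftl.write(100)           # "moving" LBA 100 changes its physical home anyway
print(ftl.mapping[100])  # now (0, 101)
```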
 

code65536

Golden Member
Mar 7, 2006
1,006
0
76
Besides, it's completely unnecessary, since every block has the exact same access time

To be fair, they claim that it's about better TRIM (by getting rid of partially-full pages, since the pages are larger than the sector size) and not about access time.

But you're right: it's snake oil.

I'm extremely skeptical of their claim that their software "understands" the firmware well enough to somehow tease out an SSD's page boundaries. And even if it could, there's no way to guarantee that when they move a sector around it gets put into the page they intend, because the firmware ultimately decides that; the firmware could toss that sector into a completely different page, and this software would have no idea it happened. Oh, and let's not forget about compression on SandForce drives, which makes this mapping even more of a joke.
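For a sense of scale (hypothetical sizes; real NAND page sizes vary and aren't visible to host software):

```python
# A 16 KiB flash page (assumed) holds thirty-two 512-byte sectors.
# A defragmenter could only *guess* page membership from a fixed layout:
PAGE_SIZE = 16 * 1024
SECTOR_SIZE = 512

def guessed_page(lba):
    return (lba * SECTOR_SIZE) // PAGE_SIZE

print(guessed_page(0), guessed_page(31))  # 0 0 -> "same page", supposedly
# ...but the firmware is free to scatter those sectors across different
# pages, and nothing reports back where they actually ended up.
```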

There's also the matter of cost-vs-benefit. Firmwares/controllers already do a very good job of managing this sort of thing, and they have complex algorithms at work. Even if this software works as intended (which I highly doubt), the benefit would be marginal. And for what cost? You're spending time running this thing, and it's incurring writes as it does so--are those writes really balanced out by the savings of more efficient page-filling? That's highly unlikely.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Sounds an awful lot like snake oil to me. The LBA the OS sees has absolutely no coupling to what's happening internally in the SSD.
Yes, it does. A contiguous set of LBAs takes much less overhead to request and transfer. Also, most SSDs very clearly try to write contiguous transfers sequentially on the flash, as evidenced by sequential performance being high. It's not a tight coupling, but it is a coupling. NVMe should make the overhead basically nil, but we're still on SATA/AHCI, for now, for which multiple non-sequential requests do incur substantial overhead.
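A rough sense of that overhead (sizes assumed, just to illustrate the command-count difference under AHCI):

```python
# Reading 1 MiB as one contiguous extent takes a single read command;
# the same data split into scattered 4 KiB extents takes 256 commands,
# each with its own per-command overhead on SATA/AHCI.
read_size = 1 * 1024 * 1024
worst_case_fragment = 4 * 1024

print(1, "command if contiguous vs",
      read_size // worst_case_fragment, "commands if fully scattered")
```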

Besides, it's completely unnecessary, since every block has the exact same access time
Only in low-QD scenarios with SSDs that use a RAM cache for a flat address list or balanced tree, and when accessing blocks on 'free' flash. It's right that the access time is very low, but if a channel is already being used, another access within that channel will be slower than one to a different channel. Likewise for dies (and now planes).
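A toy model of that contention (the striping scheme is an assumption; real controllers differ):

```python
# Sequential LBAs striped across channels: two accesses on the same
# channel serialize, while accesses on different channels can overlap.
CHANNELS = 8
STRIPE = 8  # sectors per channel slice (assumed)

def channel_of(lba):
    return (lba // STRIPE) % CHANNELS

print(channel_of(0), channel_of(8))   # 0 1 -> different channels, overlap
print(channel_of(0), channel_of(64))  # 0 0 -> same channel, serialized
```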

Even so, I'd want recorded stats before going about defragging an HDD, much less an SSD ("optimization" here generally means defragging to make larger contiguous free space blocks, but not defragging fragmented files). SSDs do quite a good job without help, nowadays, even SF-2281 drives.
 

Virgorising

Diamond Member
Apr 9, 2013
4,470
0
0
Sounds an awful lot like snake oil to me. The LBA the OS sees has absolutely no coupling to what's happening internally in the SSD.

At best this kind of "optimization" is harmless; at worst it'll cost you PE cycles. Besides, it's completely unnecessary, since every block has the exact same access time and any newer SSD has its own very effective garbage collection.

In short, leave the SSD controller to do its thing. No need for user intervention.


I don't think it's snake oil, which is why I posted. And consider the source: Raxco.
 

Virgorising

Diamond Member
Apr 9, 2013
4,470
0
0
Even so, I'd want recorded stats before going about defragging an HDD, much less an SSD ("optimization" here generally means defragging to make larger contiguous free space blocks, but not defragging fragmented files)

Yes. I remain impressed by what the guy in the video says. Not clear yet on the viability of this, but they say it has no downside, only upside.
 

quikah

Diamond Member
Apr 7, 2003
4,135
701
126
Oh, I am sure it does something. Just like the AT article, it does something. But whether you will actually notice it in your everyday usage is VERY debatable.

You will probably be able to tease out a few more points in your benchmarks, though.
 

Virgorising

Diamond Member
Apr 9, 2013
4,470
0
0
Oh, I am sure it does something. Just like the AT article, it does something. But whether you will actually notice it in your everyday usage is VERY debatable.

You will probably be able to tease out a few more points in your benchmarks, though.

Interesting. But if it extends the life of the drive, wouldn't that be significant?
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,693
136
Yes, it does. A contiguous set of LBAs takes much less overhead to request and transfer. Also, most SSDs very clearly try to write contiguous transfers sequentially on the flash, as evidenced by sequential performance being high. It's not a tight coupling, but it is a coupling. NVMe should make the overhead basically nil, but we're still on SATA/AHCI, for now, for which multiple non-sequential requests do incur substantial overhead.

But still, the OS has zero control over where that data is actually written in the NAND. The controller handles that part without OS intervention. Of course, if you could alter read/write requests to be exclusively sequential and minimize random reads/writes, then I suppose it would have an effect. I just don't think it's worth the bother; the controller is already designed to handle these things without user/OS intervention.

Only in low-QD scenarios with SSDs that use a RAM cache for a flat address list or balanced tree, and when accessing blocks on 'free' flash. It's right that the access time is very low, but if a channel is already being used, another access within that channel will be slower than one to a different channel. Likewise for dies (and now planes).

Of course. That's where over-provisioning comes in, to ensure there is a supply of clean blocks to write to. But again, built-in garbage collection is likely more effective for ensuring there are no "bad" blocks with garbage data floating around. After all, the guys who designed the controller/firmware probably know what they're doing. (Honourable mention to OCZ here; sometimes it didn't quite seem they knew what they were doing... :D)

Anyway, even in a worst-case scenario, access time is still going to be way better than any HDD. (Assuming you're using a reasonably modern SSD.)

SSDs do quite a good job without help, nowadays, even SF-2281 drives.

My point exactly... :)
 

Virgorising

Diamond Member
Apr 9, 2013
4,470
0
0
Extend? I'd guess reduce? Rewriting data to be contiguous wouldn't extend it, I don't think.

Not sure about that. Listen carefully to what the guy says. Hold on, I found and parked a white paper from Raxco yesterday... it's a PDF... let me see if I can convert it.
 

quikah

Diamond Member
Apr 7, 2003
4,135
701
126
Interesting. But if it extends the life of the drive, wouldn't that be significant?

Again, what is your usage? If it extends write cycles by a few percent, that could be significant or not. If you were never going to exceed the write-cycle limit in the first place, then it doesn't matter. And really, just not running IO benchmarks will probably extend the drive's life as much as, if not more than, any hocus-pocus software.

Speaking of usage, that AT article is kind of pointless. Who is doing 10k+ 4K iops on a regular basis? Where is the data for 32k, 64k, etc? Doing 10k iops with 4k size is just burning CPU for no good reason (unless you have A LOT of really tiny files).
 

Virgorising

Diamond Member
Apr 9, 2013
4,470
0
0
OK... converted it, will now paste. It's BIG, but, I think, important to read: the white paper from Raxco:

Improving SSD Performance: Combating Write Amplification to Extend SSD Lifespan


Overview

Storage technology is constantly evolving. One of the latest technologies is the Solid State Drive (SSD), which replaces traditional electro-mechanical parts (i.e. rotating disk platters and read/write heads) with flash memory.

Benefits of SSD storage include:

· Very fast random access times due to elimination of slow electro-mechanical components

· Low read latency times due to elimination of disk seek times

· Consistent read performance because physical location of data doesn't matter (there is no "fastest" part of the drive as in traditional disk drives)

· File fragmentation has a negligible effect due to the elimination of electro-mechanical components (seeking)


Due to the nature of the flash memory and how data is currently written, SSD write performance degrades over time.

The Inherent Flaw with Flash Memory

Every write to a solid state drive results in the undesirable action known as “write amplification,” which shortens the life of the drive.

The flash memory of a solid state drive must be erased before it is rewritten, so when you try to write to the SSD, user data and metadata must be rewritten more than once to accomplish the intended write.

The multiplied number of writes and the bandwidth consumed by write amplification have two negative consequences:

1. Decrease the lifespan of the SSD

2. Reduce random write performance

But there is something you can do to slow down, if not reverse, the effects of write amplification.





To improve SSD performance and extend the life of your SSD you must limit writes to the drive. There is an easy way to accomplish this but let's dive into the issue of “why” first.

SSD Performance Factors

SSD performance depends on the following factors:

· Write Endurance [1]: The number of write cycles to any block of flash is limited. The maximum number of write cycles (endurance) is dependent on type of flash memory (MLC vs. SLC) and varies from 10,000 write cycles in older SSD drives to 1,000,000 write cycles or more with today's modern SSD drives.

· Write Amplification [2]: Write amplification is native to all NAND flash memory. Just as with traditional disk drives, data in NAND flash memory is laid down in blocks. However, block sizes on an SSD are fixed - meaning even a small 4k write can take up a 512k block of space, depending on the NAND flash memory being used. When any portion of the data on the drive is changed, a block must first be marked for deletion in preparation for accommodating the new data (read/modify/write). The amount of space required for each new write can vary. The write amplification factor on many consumer SSDs is anywhere from 15 to 20.

That means for every 1MB of data written to the drive, 15MB to 20MB of space is actually needed [3]. For example, a read/modify/write algorithm in an SSD controller will take a block about to be written to, retrieve any data already in it, mark the block for deletion, redistribute the old data, then lay down the new data in the old block.

[1] http://en.wikipedia.org/wiki/Flash_memory#Write_endurance

[2] http://en.wikipedia.org/wiki/Write_amplification

[3] Knut Grimsrud, a director of storage architecture in Intel's research and development laboratory.
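(Not part of the whitepaper: a rough back-of-the-envelope check of what figures like these imply, with all numbers assumed purely for illustration.)

```python
# Illustrative endurance arithmetic; real endurance and write amplification
# vary widely by drive and workload.
capacity_gb = 240     # e.g. the Crucial M500 from the opening post
pe_cycles   = 3000    # assumed MLC program/erase endurance
waf         = 15      # the paper's claimed 15x write amplification
host_gb_day = 20      # assumed fairly heavy desktop writes per day

nand_gb_day = host_gb_day * waf
lifetime_years = capacity_gb * pe_cycles / nand_gb_day / 365
print(f"~{lifetime_years:.1f} years")  # ~6.6 years even with WAF 15
```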



To maintain SSD write performance, SSD manufacturers implement one or more of the following techniques:

· Wear Leveling [4]: The SSD controller keeps track of how many erase cycles have been performed on each flash block and dynamically remaps logical to physical blocks to spread out the wear over all the cells in the drive. This means that no one portion wears out faster than another - prolonging the life of the SSD.


· Over Provisioning: Over provisioning provides extra memory capacity (which the user can't access). The SSD controller uses these "extra" cells to more easily create pre-erased blocks - ready to be used in the virtual pool.

· TRIM [5]: TRIM allows the SSD controller to remove data from deleted cells so that the next write won't have to move, erase and then write. This allows an SSD to maintain write performance for a longer period of time. In order for TRIM to be effective, it has to be implemented in the SSD itself as well as in the Windows operating system. TRIM has been included with all new operating system releases since Windows 7 and Windows Server 2008 R2.

[4] http://en.wikipedia.org/wiki/Wear_leveling

[5] http://en.wikipedia.org/wiki/TRIM_(SSD_command)
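(Not part of the whitepaper: one quick way to verify the OS half of the TRIM requirement on Windows 7 or later is sketched below; querying `fsutil` may need an elevated prompt.)

```python
# Ask Windows whether delete notifications (TRIM) are enabled.
# "DisableDeleteNotify = 0" means TRIM commands are being sent to the SSD.
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```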



How to Combat Write Amplification

Because SSDs read data from flash memory, file access and “read” times are not a problem with SSDs...but writing to the disk is another issue.

SSD users should be concerned about the fragmentation of free space. The more free space is scattered across the SSD between full blocks of data and trapped within partially full blocks, the more places the SSD must look in order to write to the disk, and the less efficient write operations become.

You can combat the effects of write amplification by keeping free space consolidated on the SSD. Write amplification actually decreases when running TRIM operations to free up disk space, so you want to use the TRIM command to wipe clean unused disk space trapped in partially full blocks of data.

But how do you keep TRIM operations running as efficiently as possible?

Is It OK to Defrag an SSD?

While there has been much discussion around whether or not to defrag an SSD - the consensus is in:

Do not defragment your SSD.

Since there is no mechanical seek time on an SSD, traditional file-based defragmentation really doesn't provide any performance benefit.

In fact, you can do more harm than good by performing a defrag on an SSD, as it would actually create additional writes to the drive -- unlike a hard disk drive, any write operation to SSD storage requires not one step, but two: an erase followed by the actual write -- so SSD defragmentation should be avoided.


With SSD storage, the whole idea is to decrease the number of writes/updates to the SSD, so you want to be sure that any sort of optimization pass performed on the SSD does as little "shuffling" of files and data as possible.


But Yu Hsuan Lee of Apacer Technology, a company that produces industrial SSD solutions, wrote an article in RTC Magazine [6] that discusses and provides some benchmarks showing that optimizing an SSD improves performance and extends its life span:

"Since the erase/write speed is slow compared to read, a write multiplication due to free space fragmentation can slow down I/O time severely."

"This means a well-designed defrag algorithm can extend an SSD's life span."

Intelligent SSD Optimization

High free space fragmentation is a strong indicator that a high instance of untrimmed -- or partially full -- blocks exists on an SSD. Free space consolidation eliminates free space fragmentation and consolidates partially full blocks of data. This results in more efficient TRIM operations and faster write performance, reducing write amplification.

The new requirement for managing SSDs is a disk optimizer that identifies which drive is an SSD and which is a traditional hard drive and then performs the appropriate actions for each drive.

[6] http://rtcmagazine.com/articles/view/101053



PerfectDisk® and Solid State Drives

PerfectDisk's SSD Optimize feature, specifically designed for SSDs, automatically eliminates free space fragmentation and consolidates fragmented free space wherever the largest section of contiguous free space exists, whether at the beginning, middle or end of the drive.

While PerfectDisk is known for its efficient defragmentation and fragmentation prevention on traditional hard drives, its SSD Optimize feature entirely avoids file defragmentation on SSDs, focusing solely on the consolidation of free space. As mentioned above, file fragmentation does not inhibit SSD read performance, so running a traditional defrag would provide no benefit to the SSD.

Raxco Software draws upon its pioneering work with free space consolidation in PerfectDisk to provide SSD Optimize, which focuses exclusively on what matters for performance in SSDs:

· Consolidates free space on the drive without performing a traditional file defrag

· Identifies where the largest section of free space is located and consolidates free space in that location -- regardless of whether it is at the beginning, middle, or end of the disk

PerfectDisk detects SSD hardware and defaults to the SSD Optimize setting for SSDs. Running SSD Optimize on your solid state drive automatically results in more efficient SSD TRIM operations, preventing the multiple writes caused by write amplification before they occur, leaving you with faster, more efficient writes to the disk and a longer-lasting drive life.

By working with the beneficial properties of solid state drive technology, PerfectDisk's SSD Optimize is able to maintain SSD performance over the long term without causing additional wear on the disk.

The automatic SSD optimization method SSD Optimize is included as a standard feature in PerfectDisk Professional and PerfectDisk Server.



About Raxco

Raxco Software is a Microsoft Gold ISV and a VMware Technical Alliance Partner member. For over 35 years Raxco has developed award-winning software solutions and performance utilities that improve performance on Windows servers and workstations, simplify administration and help our customers work more efficiently.
 

Virgorising

Diamond Member
Apr 9, 2013
4,470
0
0
Again, what is your usage? If it extends write cycles by a few percent, that could be significant or not. If you were never going to exceed the write-cycle limit in the first place, then it doesn't matter. And really, just not running IO benchmarks will probably extend the drive's life as much as, if not more than, any hocus-pocus software.

Speaking of usage, that AT article is kind of pointless. Who is doing 10k+ 4K iops on a regular basis? Where is the data for 32k, 64k, etc? Doing 10k iops with 4k size is just burning CPU for no good reason (unless you have A LOT of really tiny files).


Please see above: the PDF white paper I got yesterday and just converted and pasted.
 
Feb 25, 2011
16,964
1,597
126
Why do you trust Raxco more than us? We don't stand to benefit by selling you anything.

That paper is basically full of FUD and BS. Let your SSD firmware do its job.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
My main issues with this paper:
High free space fragmentation does indicate less spare area, but most drives with at least 7% spare area (nearly all consumer ones) will generally have that area open and cleared, barring some reason garbage collection failed to run. This issue only matters if you intend to write 7% of the drive at full controller/NAND speed. A 256GB SSD has about 17.5GB of spare floating around, and most consumer desktops don't have a prayer of filling that at a rate GC couldn't keep up with during the copy.

Consolidating the free space may reduce write amplification, but doing the consolidation is going to generate a lot of write amplification on its own. Usage is going to tell you whether you lost or gained life. If you don't write much, like just surfing the web and caching, the free-space defrag is going to burn far more write cycles moving the data around. I.e., if you move 50GB of data around to consolidate, you lose a lot more than if you had just dealt with the 50MB of cache writes.
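A quick illustration of that trade-off (made-up workload numbers):

```python
# NAND writes burned by one free-space consolidation pass vs. a day of
# light desktop writes, both multiplied by an assumed steady-state WAF.
consolidation_moved_gb = 50      # data shuffled by the "optimization" pass
daily_host_writes_gb   = 0.05    # ~50 MB of cache writes
waf                    = 2       # assumed write amplification factor

print("consolidation pass:", consolidation_moved_gb * waf, "GB of NAND writes")
print("a normal day:      ", daily_host_writes_gb * waf, "GB of NAND writes")
```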

The paper assumes the drive actually did anything with the TRIM commands, or was even able to process them. There is no requirement for the drive to report a status for a TRIM request; the drive is free to completely ignore it. It might do that because heavy garbage collection means additional writes that induce wear. Consolidating the drive's free space means nothing if the SSD has stopped servicing TRIM requests for reasons programmed into the firmware.

Seems very circumstantial.
 

Virgorising

Diamond Member
Apr 9, 2013
4,470
0
0
Why do you trust raxco more than us? We don't stand to benefit by selling you anything.

I trust my differential assessment, Dave; it's not either/or. Raxco is among the most formidable companies in its genre, and has been for a long time.

I doubt they would put on some dog-and-pony show to sell a few more apps, especially considering the tiny percentage of people who choose to use SSDs, big picture.

Plus, they cite an expert outside of Raxco.

That paper is basically full of FUD and BS.

How do you know this, Dave?
 

Virgorising

Diamond Member
Apr 9, 2013
4,470
0
0
My main issues with this paper:
High free space fragmentation does indicate less spare area, but most drives with at least 7% spare area (nearly all consumer ones) will generally have that area open and cleared, barring some reason garbage collection failed to run. This issue only matters if you intend to write 7% of the drive at full controller/NAND speed. A 256GB SSD has about 17.5GB of spare floating around, and most consumer desktops don't have a prayer of filling that at a rate GC couldn't keep up with during the copy.

Consolidating the free space may reduce write amplification, but doing the consolidation is going to generate a lot of write amplification on its own. Usage is going to tell you whether you lost or gained life. If you don't write much, like just surfing the web and caching, the free-space defrag is going to burn far more write cycles moving the data around. I.e., if you move 50GB of data around to consolidate, you lose a lot more than if you had just dealt with the 50MB of cache writes.

The paper assumes the drive actually did anything with the TRIM commands, or was even able to process them. There is no requirement for the drive to report a status for a TRIM request; the drive is free to completely ignore it. It might do that because heavy garbage collection means additional writes that induce wear. Consolidating the drive's free space means nothing if the SSD has stopped servicing TRIM requests for reasons programmed into the firmware.

Seems very circumstantial.

Well....a response worth pondering.

I did tell my PD to do it. If I get evidence that it was not a good decision, I will tell it to stop.
 

Revolution 11

Senior member
Jun 2, 2011
952
79
91
What I want to know is how Raxco knows what the SSD algorithms are for each company's individual controller and firmware version. Also, even if Raxco did know all this, which they don't, how would they do a better job of optimizing the SSD than the company that actually made the product?

PS: Could you in the future consolidate multiple consecutive posts into a larger one? It is annoying to see a whole list of posts containing single sentences cluttering the thread when you could simply have edited your first post.
 

Virgorising

Diamond Member
Apr 9, 2013
4,470
0
0
Then why am I just hearing about them for the first time in this thread?

That's surprising. I think you are an exception, among those besotted by computers and software, in not knowing them. I can't remember when I didn't know about them.

Your premise appears to be that you are some kind of uber-pundit, and if you don't know about a given thing, it must be Less Than. You may want to rethink that.

Boy, have I got a bridge to sell you!

You said yourself you know nothing about Raxco. That you not only dismiss them out of hand and cavalierly, without offering any technical specifics, but also impugn them... for me, that loses credibility.