Originally posted by: fuzzybabybunny
Just read this recent article:
Long-term performance analysis of Intel Mainstream SSDs
http://www.pcper.com/article.php?aid=669
It shows very significant performance decreases as SSDs are used for longer periods of time.
Intel replies to solid-state drive 'slowness' critique
After a technology review site claimed Intel solid-state drives slow considerably after extended use, Intel said it has not been able to duplicate the results.
http://news.cnet.com/8301-13924_3-10168084-64.html
Originally posted by: taltamir
I never said NCQ does wear leveling; I said NCQ-aware wear leveling can reduce the impact of steady state and your write amplification. And according to one of the articles on the subject, the Intel controller does just that... but I have only read it in one location and it might be bull. However, it makes sense.
And taltamir, I'm pretty sure it's not NCQ that does the wear levelling part of the Intel SSD, because what if NCQ was disabled?
Wear leveling at its simplest will, by itself, try to write each incoming block to the least-written-to block on the drive. If you combine it with NCQ it will still try to do the same, but it now has the added benefit of doing the read (of the whole 512K block), the modify with SEVERAL pending 4K writes at once, the erase of the entire 512K, and the write of the modified data as a single cycle.
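To make that read-modify-erase-write point concrete, here is a toy model in Python (an illustration only, not Intel's documented firmware behaviour; block and page sizes are the ones used in the post) of how coalescing several queued 4 KB writes that land in the same 512 KB erase block cuts write amplification compared to servicing each write on its own:

```python
# Toy model of combining queued 4 KB writes into one 512 KB erase-block
# rewrite. All names and numbers are illustrative, not Intel's actual design.

BLOCK_SIZE = 512 * 1024   # one NAND erase block
PAGE_SIZE = 4 * 1024      # one host write
PAGES_PER_BLOCK = BLOCK_SIZE // PAGE_SIZE

def nand_bytes_written(queued_writes, coalesce):
    """Return (erase cycles, bytes programmed to NAND) for a burst of
    4 KB host writes that all fall inside the same erase block."""
    if coalesce:
        # One read-modify-erase-program cycle serves the whole burst.
        return 1, BLOCK_SIZE
    # One full block rewrite per individual 4 KB host write.
    return queued_writes, queued_writes * BLOCK_SIZE

for depth in (1, 4, 32):
    _, naive = nand_bytes_written(depth, coalesce=False)
    _, merged = nand_bytes_written(depth, coalesce=True)
    host = depth * PAGE_SIZE
    print(f"queue depth {depth:2d}: write amplification "
          f"{naive / host:.0f}x naive vs {merged / host:.1f}x coalesced")
```

The numbers only show the trend: the deeper the visible queue, the more 4 KB writes can share a single erase cycle, which is why a controller that can see several pending commands at once has an easier time in steady state.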
Originally posted by: Cookie Monster
So basically Intel is saying that in the "real world" situation, average PC users won't be affected so much by the degradation of SSD performance, which is probably true. However, there is no denying the fact that when SSDs are used heavily they can really take a hit when it comes to performance, as shown by PcPer.
Originally posted by: Idontcare
Originally posted by: Cookie Monster
So basically Intel is saying that in the "real world" situation, average PC users won't be affected so much by the degradation of SSD performance, which is probably true. However, there is no denying the fact that when SSDs are used heavily they can really take a hit when it comes to performance, as shown by PcPer.
I see Intel saying two things - first they appear to be claiming that even if a bench were to induce some form of performance degradation in the drive, a bench doesn't reflect real-world usage patterns and as such they would not expect customers to experience the same degradation issues (unless they run the bench, then their system is fubar'ed).
The second thing Intel appears to be saying is that regardless of the above, they can't get the benches to cause the same problems that pcper accomplished. This is a big discrepancy. If pcper's results cannot be duplicated then that is very bad for pcper's credibility.
Originally posted by: taltamir
Then you obviously don't understand what steady state is.
Originally posted by: coolVariable
Link?
Samsung and Mtron ship their SSDs after a full write cycle, that is steady-state!
Don't really see where you could get another performance drop from.
I believe taltamir is correct overall in this debate, but he has not explained himself fully, and is getting sidetracked into semantics and debating the definition of the term "steady state," instead of the main issue, which is the performance degradation.
Originally posted by: Viper GTS
There is more to it than just a lack of unwritten cells available. Intel documents that it varies by workload/access pattern, and switching workloads requires a new cycle of conditioning the drive to obtain steady-state results.
Originally posted by: coolVariable
Link?
Samsung and Mtron ship their SSDs after a full write cycle, that is steady-state!
Don't really see where you could get another performance drop from.
Originally posted by: Idontcare
I see Intel saying two things - first they appear to be claiming that even if a bench were to induce some form of performance degradation in the drive, a bench doesn't reflect real-world usage patterns and as such they would not expect customers to experience the same degradation issues (unless they run the bench, then their system is fubar'ed).
Originally posted by: Idontcare
The second thing Intel appears to be saying is that regardless of the above, they can't get the benches to cause the same problems that pcper accomplished. This is a big discrepancy. If pcper's results cannot be duplicated then that is very bad for pcper's credibility.
SSDs all have what is known as an "Indirection System" - aka an LBA allocation table (similar to an OS file allocation table). LBAs are not typically stored in the same physical location each time they are written. If you write LBA 0, it may go to physical location 0, but if you write it again later, it may go to physical location 50, or 8.567 million, or wherever. Because of this, all SSDs' performance will vary over time and settle to some steady state value. Our SSD dynamically adjusts to the incoming workload to get the optimum performance for the workload. This takes time. Other lower performing SSDs take less time as they have less complicated systems. HDDs take no time at all because their systems are fixed logical to physical systems, so their performance is immediately deterministic for any workload IOMeter throws at them.
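As a rough picture of the indirection idea described above (a toy sketch only, not Intel's actual allocation scheme), the logical-to-physical table can be imagined as a map in which every rewrite of the same LBA is steered to a fresh erased page:

```python
# Toy logical-to-physical map: rewriting the same LBA lands on a different
# physical page each time. Illustrative only; real controllers are far
# more complicated (per-block state, garbage collection, wear leveling).

class ToyIndirectionTable:
    def __init__(self, physical_pages):
        self.l2p = {}                            # LBA -> current physical page
        self.free = list(range(physical_pages))  # erased pages still available

    def write(self, lba):
        page = self.free.pop(0)   # take the next erased page, wherever it is
        # Any previous copy of this LBA becomes stale data that still sits
        # in NAND until garbage collection erases its block.
        self.l2p[lba] = page
        return page

table = ToyIndirectionTable(physical_pages=16)
print("LBA 0, first write ->", table.write(0))   # e.g. page 0
print("LBA 5, first write ->", table.write(5))   # e.g. page 1
print("LBA 0, rewritten   ->", table.write(0))   # lands on a different page
```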
The Intel® Performance MLC SSD is architected to provide the optimal user experience for client PC applications; however, the performance SSD will adapt and optimize the SSD's data location tables to obtain the best performance for any specific workload. This is done to provide the ultimate in user experience, however it provides occasional challenges in obtaining consistent benchmark testing results when changing from one specific benchmark to another, or in benchmark tests not running with sufficient time to allow stabilization. If any benchmark is run for sufficient time, the benchmark scores will eventually approach a steady state value; however, the time to reach such a steady state is heavily dependent on the previous usage case. Specifically, highly random heavy write workloads or periodic hot spot heavy write workloads (which appear random to the SSD) will condition the SSD into a state which is uncharacteristic of client PC usage, and require longer usage in characteristic workloads before adapting to provide the expected performance.
When a benchmark test or IOMeter workload has put the drive into this state which is uncharacteristic of client usage, it will take significant usage time under the new workload conditions for the drive to adapt; until it does, the drive will provide inconsistent (and likely low) benchmark results for that and possibly subsequent tests, and can occasionally cause extremely long latencies. The old HDD concept of defragmentation applies, but in new ways. Standard Windows defragmentation tools will not work.
SSD devices are not aware of the files written within, but are rather only aware of the Logical Block Addresses (LBAs) which contain valid data. Once data is written to a Logical Block Address (LBA), the SSD must now treat that data as valid user content and never throw it away, even after the host "deletes" the associated file. Today, there is no ATA protocol available to tell the SSDs that the LBAs from deleted files are no longer valid data. This fact, coupled with highly random write testing, leaves the drive in an extremely fragmented state which is optimized to provide the best performance possible for that random workload. Unfortunately, this state will not immediately result in characteristic user performance in client benchmarks such as PCMark Vantage, etc. without significant usage (writing) in typical client applications allowing the drive to adapt (defragment) back to a typical client usage condition.
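The "no ATA protocol for deletes" point is the crux, and a tiny sketch (my own illustration, not Intel's) shows why the drive's view can only grow: the SSD sees every write, but a file deletion is just a filesystem metadata change it never hears about.

```python
# Illustrative only: without a TRIM-style command (which did not exist at
# the time of this thread), the SSD's set of "must preserve" LBAs never
# shrinks when the host deletes files.

drive_valid_lbas = set()

def host_writes_file(lbas):
    drive_valid_lbas.update(lbas)   # the ATA writes are visible to the SSD

def host_deletes_file(lbas):
    pass                            # nothing at all is sent to the SSD

host_writes_file(range(1000))
host_deletes_file(range(1000))
print("LBAs the SSD still treats as valid:", len(drive_valid_lbas))  # 1000
```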
In order to reset the state of the drive to a known state that will quickly adapt to new workloads for best performance, the SSD's unused content needs to be defragmented. There are two methods which can accomplish this task.
One method is to use IOMeter to sequentially write content to the entire drive. This can be done by configuring IOMeter to perform a 1 second long sequential read test on the SSD drive with a blank NTFS partition installed on it. In this case, IOMeter will "Prepare" the drive for the read test by first filling all of the available space sequentially with an IOBW.tst file, before running the 1 second long read test. This is the most "user-like" method to accomplish the defragmentation process, as it fills all SSD LBAs with "valid user data" and causes the drive to quickly adapt for a typical client user workload.
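A rough stand-in for that prepare step, if you don't want to set up IOMeter, is simply to write one big file sequentially until the partition is full. The sketch below assumes a scratch partition at a hypothetical path (E:/ here) and is not IOMeter itself, just a plain sequential fill:

```python
# Fill the free space of a freshly formatted scratch partition with one
# sequentially written file, so every LBA ends up holding "valid user data".
# Path and chunk size are assumptions for illustration.

import os

def sequential_fill(path="E:/fill.tst", chunk_mb=64):
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    written_mb = 0
    with open(path, "wb") as f:
        try:
            while True:
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())    # push the data through the OS cache
                written_mb += chunk_mb
        except OSError:                 # disk full: every free LBA is written
            pass
    return written_mb

# print(sequential_fill(), "MB written")  # run only against a scratch drive
```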
An alternative method (faster) is to use a tool to perform a SECURE ERASE command on the drive. This command will release all of the user LBA locations internally in the drive and result in all of the NAND locations being reset to an erased state. This is equivalent to resetting the drive to the factory shipped condition, and will provide the optimum performance.
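For the secure erase route, one common tool on Linux is hdparm's ATA Security support. The sketch below is only a hedged outline of that sequence: the device path and password are placeholders, the drive must not be in a "frozen" security state, and the command wipes everything, so this is strictly for a disposable test drive.

```python
# Hedged outline of issuing an ATA SECURE ERASE via hdparm on Linux.
# /dev/sdX and the password are placeholders; this destroys all data.

import subprocess

def ata_secure_erase(device="/dev/sdX", password="p"):
    # Set a temporary user password, then issue the erase with it.
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-set-pass", password, device], check=True)
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-erase", password, device], check=True)

# ata_secure_erase()  # intentionally left commented out
```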
Originally posted by: magreen
I do not know whose definition of "steady state" is correct, but on the issue of performance degradation, I believe taltamir is correct.
Originally posted by: IntelUser2000
So here's the impression I got reading lots of SSD reviews.
DON'T BENCHMARK NEEDLESSLY!!
Especially if you run things like IOMeter, which is an extremely stressful server-oriented benchmark program.