Tech-Report: SSD Endurance Experiment

Me too. Is this the only such test ever performed, btw? I can't remember seeing anything like this done before.
 
IIRC the main issue with these endurance tests is that the drive will keep running long past the point it should, and a power cycle is what finally finishes it off. Meaning, the drive was effectively dead long before it hit the X TB written.

So, I searched the article (didn't have time to read all of it) for words such as power cycle, reboot, shutdown, restart, etc... Nothing in the article. So my hope is that they shut down their machines once a day so that the drives actually power off.

I believe some guys at XtremeSystems or somewhere like that were able to get an 830 to 600TB of writes before it died, and as expected, when the system powered off, the drive died with it.

That said, this does look like a comprehensive review, and I welcome it. I just hope they work a power cycle into their testing methodology at least once per day.
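For what it's worth, the verify step around each power cycle wouldn't need to be fancy: hash the test data before shutdown, power off, then recompute and compare after power-on. A rough sketch of that idea in Python (file name and size are made up for illustration, not anything from the article's methodology):

```python
import hashlib
import os
import tempfile

def md5_of(path, chunk=1 << 20):
    """Stream the file through MD5 so a large test file doesn't need to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def write_test_file(path, size=4 << 20):
    """Write pseudo-random data, force it to the drive, and return its MD5."""
    data = os.urandom(size)
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the device
    return hashlib.md5(data).hexdigest()

# Before shutdown: record the hash. After power-on: recompute and compare.
probe = os.path.join(tempfile.gettempdir(), "endurance_probe.bin")
expected = write_test_file(probe)
assert md5_of(probe) == expected
```

If the hashes ever stop matching after a power-off, the drive is lying about the data it accepted, even if it still enumerates fine.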
 
This is so 2012 guys.

Over at XS the 830 endured over 6PB (that's 6000TB) before having issues that would alert the user that something is wrong...

Fact is for the desktop / laptop user, the rest of the machine will literally be about as good as ashes in the fireplace before the SSD gives up!

Unless you have a crappy OCZ drive, which can just go tits up for no reason, and trying to figure out when and why is like trying to predict the weather more than a few days out. 🙄

I've been dabbling with SSDs since 2007 when they were crazy expensive. What I've learned is with desktop systems you don't want to skimp on power supplies and mobile racks if you use 'em.
 
At face value, I thought the same as you did: "Wow, 6000TB! This will last forever..." The flaw with the XtremeSystems test (as pointed out by a member here a while ago) was that they never powered off the system. Once the system was powered off, the Samsung 830 was toast, and it was likely toast long before that, giving a false sense of security.
 
That kind of use typically points to a server or multi-user environment where the system is rarely turned off. We have a Supermicro box that is 14 years old and still running, and yes, it's never been turned off! It still has the DPT RAID array with 9.1GB 10K Cheetahs that screech at something like 90dB! I bet if that box was turned off for the weekend those drives would be toast too! 😉
 
True, but how likely is it that a server is never rebooted? I suppose some specialized Unix and Linux boxes... Perhaps virtualized servers too, since their reboot is superficial. But any Windows (lol, yeah) server is going to be rebooted occasionally... I think it all depends on what type of servers we are talking about. 'Enterprise' servers are a bit ambiguous these days...
 
That server was originally running NT 4.0 from launch day forward. It was then relegated to Linux RADIUS authentication duties, probably around 2004 or 2005. It was restarted plenty of times for sure, but the power was never disconnected to the point where the fans and drives stopped spinning.
 
This got me curious, and my 240GB Agility 2 is still showing 100% remaining life in the OCZ Toolbox after several years of use, including 24TB of writes.

Is that legit or is the SMART data bugged?
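If anyone wants to cross-check the toolbox number, smartmontools can dump the raw attribute table as JSON (`smartctl -j -A /dev/sdX`), so you can compare the normalized "life left" value against the raw host-writes counter yourself. A rough sketch of pulling out the wear-related attributes; note that attribute names and IDs vary by vendor, and the sample below is made-up illustrative data, not output from a real drive:

```python
def wear_summary(smart_json):
    """Pull wear-related attributes out of smartctl's JSON (-j) output.
    Attribute names differ per vendor, so match loosely on the name."""
    table = smart_json.get("ata_smart_attributes", {}).get("table", [])
    out = {}
    for attr in table:
        name = attr.get("name", "").lower()
        if "life" in name or "wear" in name or "lbas_written" in name:
            out[attr["name"]] = {"normalized": attr["value"],
                                 "raw": attr["raw"]["value"]}
    return out

# Made-up sample resembling smartctl -j output, for illustration only.
sample = {"ata_smart_attributes": {"table": [
    {"id": 231, "name": "SSD_Life_Left", "value": 100, "raw": {"value": 100}},
    {"id": 241, "name": "Total_LBAs_Written", "value": 100,
     "raw": {"value": 50331648000}},
]}}
print(wear_summary(sample))
```

If the raw writes counter keeps climbing while the normalized life value never moves, that would suggest the controller simply isn't updating the wear attribute.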
 
Is this the only such test ever performed, btw? I can't remember seeing anything like this done before.
Not in an official capacity, AFAIK.

The other side of the coin is data retention, as in: is the data still correct even if the drive's accepting writes over its limit?

Or as the point was made earlier, simply switching it off for an hour or so to see if the data is still intact, even if the drive isn't necessarily bricked.
 
It says the test includes an MD5 check to verify the data. So unless the data being written fits in the OS filesystem cache, it should be valid.
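One way around the cache worry, without a full power cycle, is to ask the kernel to evict a file's cached pages before re-reading it, so the read-back actually comes from the device rather than RAM. A sketch of that (Linux-only, since it relies on `posix_fadvise`; function name is mine, not anything from the article):

```python
import hashlib
import os

def md5_bypassing_cache(path):
    """Hash a file after asking the kernel to drop its cached pages,
    so the re-read is served from the device, not RAM (Linux-only)."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # Hint the kernel to evict this file's pages from the page cache.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        h = hashlib.md5()
        while True:
            block = os.read(fd, 1 << 20)
            if not block:
                break
            h.update(block)
        return h.hexdigest()
    finally:
        os.close(fd)
```

It's a hint rather than a guarantee (a power-off retention test is still the stronger check), but it at least rules out the trivial case where the MD5 pass never left the filesystem cache.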
 