Code development / file creation and SSD endurance

May 11, 2008
When programming, a lot of little files are written and read back. I know from using FPGA development software (ISE from Xilinx) that the IDE creates a lot of files during simulation, compilation and optimization.

From another user of ISE, I know this caused the SSD in his laptop to die prematurely.
Have you guys and gals noticed that it may be handier to use an old-fashioned HDD for such cases?
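For a sense of scale, here's a rough sketch of how many physical writes those little build files can turn into. All the numbers here (page size, file counts, build frequency) are made-up assumptions for illustration, not measurements from ISE or any datasheet:

```python
# Rough sketch: sub-page files cost a full NAND page each, so lots of tiny
# build artifacts amplify the physical writes. All numbers are assumptions.
NAND_PAGE_BYTES = 16 * 1024   # assumed flash page size
FILES_PER_BUILD = 5_000       # assumed intermediate files per FPGA build
AVG_FILE_BYTES = 4 * 1024     # assumed average artifact size
BUILDS_PER_DAY = 20           # assumed builds per working day

# Each small file still costs at least one full page write, so the
# amplification is roughly page_size / file_size for sub-page files.
pages_per_build = FILES_PER_BUILD * max(1, -(-AVG_FILE_BYTES // NAND_PAGE_BYTES))
physical_bytes_per_day = pages_per_build * NAND_PAGE_BYTES * BUILDS_PER_DAY

print(f"logical writes/day : {FILES_PER_BUILD * AVG_FILE_BYTES * BUILDS_PER_DAY / 1e9:.2f} GB")
print(f"physical writes/day: {physical_bytes_per_day / 1e9:.2f} GB")
```

With these (invented) numbers the drive writes about 4x more bytes than the files contain, which is the effect people worry about. Whether that matters depends on the drive's rated endurance, discussed below.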


On a side note, I do hope the next Xbox iteration has two bays for two drives. That would make me a very happy person if Microsoft really has plans to put Windows 10+ on it.
SSD + HDD. :)
 

Fallen Kell

Diamond Member
Oct 9, 1999
I use a combo of SSD + HDD RAID. As you stated, SSDs will degrade over time with lots of small I/O writes.
 

MarkLuvsCS

Senior member
Jun 13, 2004
Are you certain the cause of the SSD failure was truly wearing out the flash cells? Most of the time SSDs seem to fail prematurely due to controller issues. Even consumer SSDs are rated for 50-100+ TB of writes under warranty. If endurance were a concern, I would purchase a larger-than-necessary drive like the 850 EVO, or a slightly older-generation SSD with 25nm MLC and a good controller like Marvell. You could also eBay some enterprise drives that will be slightly slower but have quite a bit higher endurance and some form of power-loss protection.
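The 50-100 TB rated-endurance figures translate into years pretty directly. A back-of-envelope sketch, where the 20 GB/day of writes is an assumed heavy build workload, not a measured one:

```python
# Lifetime estimate from a drive's rated TBW (terabytes written).
# TBW figures are from the post above; the daily-write rate is an assumption.
def years_until_tbw(rated_tbw_tb: float, daily_writes_gb: float) -> float:
    """Years until the rated terabytes-written figure is exhausted."""
    return (rated_tbw_tb * 1024) / (daily_writes_gb * 365)

for tbw in (50, 100):
    print(f"{tbw} TBW at 20 GB/day -> {years_until_tbw(tbw, 20):.1f} years")
```

Even at a sustained 20 GB/day, a 50 TBW drive lasts about 7 years before hitting its rating, which is why controller failures usually get there first.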
 

DaveSimmons

Elite Member
Aug 12, 2001
Right: the larger the SSD, the more unused space, and the more room the controller has to do wear leveling.

My work desktop has a 120 GB Samsung 840 or 850 EVO (I forget), and it's been fine for years of daily Visual Studio development. That's even with just under 20 GB of free space (I really need to swap it for a 250 GB model).
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
I still haven't worn out my 5+ year old Intel SSDs... and those get thrashed by lots of small disk writes daily.
 

Essence_of_War

Platinum Member
Feb 21, 2013
...
From another user of ISE, I know this caused the SSD in his laptop to die prematurely.
...

How do you KNOW that? Did the SMART data indicate that it had reached its endurance limit, or are you noting that the two are correlated and speculating that the small writes were the cause? My suspicion is the latter. Unless you created a truly pathological workload (like a database server doing 24/7 random I/O), my guess is that controller issues are more likely than NAND endurance issues.
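For anyone wanting to check this on their own drive: tools like smartctl expose wear-related SMART attributes, and interpreting them is straightforward. The attribute names below are common on Samsung/Intel drives, but the raw values are made-up examples, not readings from a real disk:

```python
# Sketch of interpreting the wear-related SMART attributes that tools like
# smartctl report. Attribute names vary by vendor; the values here are
# invented examples, not real readings.
sample_smart = {
    # Normalized value counts down from ~100 as rated P/E cycles are consumed.
    "Wear_Leveling_Count": {"normalized": 97},
    # Raw value is typically a count of 512-byte sectors written.
    "Total_LBAs_Written": {"raw": 48_000_000_000},
}

life_left_pct = sample_smart["Wear_Leveling_Count"]["normalized"]
tb_written = sample_smart["Total_LBAs_Written"]["raw"] * 512 / 1e12

print(f"~{life_left_pct}% rated life remaining, ~{tb_written:.1f} TB written")
```

If a dead drive's last SMART readout showed plenty of life remaining, that points at the controller rather than NAND wear.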
 

Gryz

Golden Member
Aug 28, 2010
Suppose you wear out a 240 GB SSD 2-3 years quicker than you otherwise would. That's an extra 100 euros wasted, so maybe 30-50 euros per year.

I think that would be a very good use of your money!

I recently started working as a programmer. Our full software tree takes ~30 minutes to compile and link. I have a desktop on my desk, and we use NXC to connect to a "lab machine" that has an HDD with our source code. Pre-processing happens on our own lab machine, then the pre-processed source files are copied to a build farm, where the real compiles happen, heavily parallelized. Object files are copied back, and linking is done on our own lab machine again.

One of our tools people told me that the biggest bottleneck in the whole compile is still HDD access, not the compile itself (no surprise). And not the copying over the network! Now that was surprising to me.

We could do local compiles, but they would be slower than compiling on the build farm. We could use SSDs, but they would be too expensive. Every programmer has maybe a few dozen trees checked out at any given time: one tree per (old) software release, and one tree per feature or bug fix they're working on. This adds up fast and would make SSDs too expensive, I guess. Hmm, maybe I should ask our tools guys if they have ever benchmarked SSDs for compilation.

Anyway, my point was: if you can speed up your compiles by a factor of 2, then the extra few dozen euros/dollars per year are probably worth it.
 

Crusty

Lifer
Sep 30, 2001
Use a write-back cache: http://www.romexsoftware.com/en-us/primo-cache/index.html It lets you configure the flush delay and it trims a lot of blocks; as a precaution, use it in conjunction with a UPS, and longevity and data integrity are secured.

That software reminds me of all of those memory compression tools way back in the day. D:

In the first paragraph it says

It transparently stores disk data into fast cache devices such as physical memory, so that future read requests for those data will be served directly from the cache and be faster. Thus access time will be reduced, showing a great improvement in overall system performance.

This feature has quite literally been built into all modern operating systems for quite some time now.
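To illustrate that point: ordinary writes already land in the OS page cache first, and the application has to ask explicitly (via fsync) to force them to the device. A minimal sketch using Python's standard library, no third-party caching tool involved:

```python
# Normal writes are buffered by the OS page cache; os.fsync() is what
# forces them to the physical device. Standard POSIX/Windows behaviour.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("hello")      # buffered in Python's own buffer
    f.flush()             # pushed to the OS page cache
    os.fsync(f.fileno())  # forced from the page cache to the device

with open(path) as f:
    print(f.read())       # prints "hello"
```

Without the fsync call the data would still be readable (served from the cache) but might sit in RAM for seconds before hitting the disk, which is exactly the write-coalescing the quoted product advertises.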
 

beginner99

Diamond Member
Jun 2, 2009
I still haven't worn out my 5+ year old Intel SSDs... and those get thrashed by lots of small disk writes daily.

Yeah, but those have 100x the endurance of current TLC drives.

For such scenarios I would for sure choose a higher-capacity MLC drive and give it additional spare area. But if you use FPGA software, it's commercial usage, and replacing the SSD every 2 years shouldn't affect costs much compared to making the process much smoother and faster. Tons of small file writes will have terrible performance on an HDD.
 

TheRyuu

Diamond Member
Dec 3, 2005
Yeah, but those have 100x the endurance of current TLC drives.

For such scenarios I would for sure choose a higher-capacity MLC drive and give it additional spare area. But if you use FPGA software, it's commercial usage, and replacing the SSD every 2 years shouldn't affect costs much compared to making the process much smoother and faster. Tons of small file writes will have terrible performance on an HDD.

Yeah, but modern TLC drives are also a lot cheaper. The Samsung drives also have small amounts of MLC storage to speed things up and (maybe) to help TLC endurance. Didn't the Samsung TLC drives do well in that TechReport SSD endurance experiment?

SSDs are kind of perfect for stuff like code development. I doubt endurance actually becomes a major issue within the lifespan of an SSD (the warranty period, at least).

Use a write-back cache: http://www.romexsoftware.com/en-us/primo-cache/index.html It lets you configure the flush delay and it trims a lot of blocks; as a precaution, use it in conjunction with a UPS, and longevity and data integrity are secured.

All modern OSes already have this. Windows caches something like 1.5 GB of writes on my system.
 
May 11, 2008
How do you KNOW that? Did the SMART data indicate that it had reached its endurance limit, or are you noting that the two are correlated and speculating that the small writes were the cause? My suspicion is the latter. Unless you created a truly pathological workload (like a database server doing 24/7 random I/O), my guess is that controller issues are more likely than NAND endurance issues.

As far as I know, the specific user is tech-savvy and did some checking and investigating. I will try to get more details such as the SSD model, its capacity, and the specific failure mode. My guess is that he was using a small-capacity SSD that was almost full.
 
May 11, 2008
Are you certain the cause of the SSD failure was truly wearing all the flash cells? Most of the time SSDs seem to fail prematurely due to controller issues. Even consumer SSDs are able to take 50-100TB+ rated endurance under warranty. If endurance was a concern I would purchase a larger than necessary drive like the 850 evo or slightly older gen SSDs with like 25nm MLC using a good controller like Marvell. You could also ebay some enterprise drives that will be slightly slower but with quite a bit higher endurance and some form of power loss protection.

I have a Crucial M550 (CT256M550SSD1), an MLC drive. Very good model.
But personally I do not worry about it breaking down. It was normally priced for its capacity (256 GB), and it is good quality.
Because I only have relatively small projects, I just use my HDD as the main storage medium for all the files. My SSD is my OS drive, and I have my virtual machine on it. Works like a charm.