Do you know of any information that states what the criteria actually are? My understanding was that the one-year retention requirement was specific to how P/E cycles were rated, but I'm not sure we can even assume that applies to TBW.
There are, for example, JESD218B-01 and JESD219A, which have some information on that - you can download them from JEDEC (registration required, though).
Now, TBW is, according to JEDEC, the amount of data that can be written to the drive (under specific conditions); upon reaching that point the drive is considered worn out but should still be able to retain data for at least one year (also under specific conditions).
With manufacturers, though, it seems to be more about what they're willing to cover under warranty than about the TBW actually representing the point where the drive is worn out - I think the ones that set TBW a bit low might defend it on the basis that retention will still be at least one year beyond that point, giving them a fair margin.
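To put that in more concrete terms, a TBW figure really just works out to a permitted write rate over the warranty period. Here's a rough back-of-envelope conversion to drive writes per day (the 250GB/75TB/5-year numbers are just an example, not taken from any particular datasheet):

```python
def tbw_to_dwpd(tbw_tb: float, capacity_gb: float, warranty_years: float) -> float:
    """Convert a TBW rating into drive writes per day (DWPD) over the warranty period."""
    total_writes_gb = tbw_tb * 1000            # TB -> GB (decimal units, as vendors use)
    gb_per_day = total_writes_gb / (warranty_years * 365)
    return gb_per_day / capacity_gb            # fraction of the drive written per day

# Example: a 250GB drive rated for 75TBW with a 5-year warranty
print(f"{tbw_to_dwpd(75, 250, 5):.2f} DWPD")   # ~0.16 drive writes per day
```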
My other point, as mentioned, is that I'm not sure how many manufacturers actually state explicitly that they follow such standards for their TBW ratings.
Well, you'd have to find some small footnote in a datasheet somewhere, but looking around a bit I can see Intel, Kingston, Toshiba and WD/SanDisk mentioning it (citing either JESD218, JESD218A, JESD219 or JESD219A).
What I was considering was a way to hypothetically accelerate degradation, since I realize that for reviewers and tech publications especially it probably isn't practical to test over such a long period of time.
While the argument could be made that a "hot box" and massive write conditioning at the start aren't a real-world usage scenario, the point is to try to emulate how the drive would actually hold up after 2-3 years of use, which I expect is a reasonable planned lifetime for consumers.
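For what it's worth, the retention acceleration in the JEDEC documents is based on an Arrhenius model, so a "hot box" run can at least in principle be translated into an equivalent time at room temperature. A minimal sketch of that calculation, assuming the commonly quoted ~1.1 eV activation energy for charge loss (check JESD218 for the parameters it actually prescribes):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c: float, t_stress_c: float, ea_ev: float = 1.1) -> float:
    """Acceleration factor: how much faster retention loss proceeds at t_stress vs t_use."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Example: baking at 55 C versus storage at 25 C
af = arrhenius_af(25, 55)
print(f"AF ~ {af:.0f}, so 1 week in the hot box ~ {af * 7 / 365:.1f} years at 25 C")
```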
I get that that was why; I was just mentioning how my tests differed from that.
The 850 EVO line used multiple controllers. The smaller-capacity drives (up to 500GB) used the then-newer MGX controller you reference, which was also later used on the 750 EVO line. The largest, 1TB, model however used the older MEX controller from the 840 EVO line; I don't believe that controller is capable of LDPC, only BCH, for ECC.
https://www.anandtech.com/show/8747/samsung-ssd-850-evo-review
When the 850 EVO line was updated to V2, the controller on the higher capacities was switched to the MHX, along with a move from 32-layer to 48-layer NAND.
I'm aware that it used two different controllers, I just didn't check which one of them the 840 EVO also used.
I was referring to the 850 EVO 1TB and 500GB having the same TBW rating at 150TB. The 250GB and 120GB models also had the same TBW rating.
I was pointing out that had TBW doubled with capacity for the 850 EVO, as it did for the 860 EVO, then the 850 EVO would have matched it.
It starts out at 75TB for the 120GB model but keeps that rating for the 250GB; otherwise it would have matched the 150TB of the 860 EVO at 250GB and, well, at every capacity after that.
As a further aside, the 850 Pro line all had the same TBW rating regardless of capacity. Stuff like this suggests to me that TBW ratings given by manufacturers aren't really as standardized/tested as we might think.
I believe they doubled it from 150TB to 300TB for the 500GB-1TB capacities (or possibly just the 1TB capacity) shortly after launch though.
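Just to illustrate how unevenly those ratings scale, here are the full-drive writes implied by the launch figures quoted above (treat these as the numbers mentioned in this thread, not as current official specs):

```python
# Launch TBW ratings as quoted above: capacity in GB, rating in TB written
ratings = {"850 EVO 120GB": (120, 75), "850 EVO 250GB": (250, 75),
           "850 EVO 500GB": (500, 150), "850 EVO 1TB": (1000, 150)}

for model, (capacity_gb, tbw_tb) in ratings.items():
    drive_writes = tbw_tb * 1000 / capacity_gb   # full-drive writes implied by the rating
    print(f"{model}: {drive_writes:.0f} full-drive writes")

# 625, 300, 300 and 150 full-drive writes respectively - clearly not a constant
# per-cell endurance figure, which fits the "warranty number" interpretation.
```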
But, like I've said, sometimes the connection between endurance and TBW is a bit weak.
For the 850 series specifically, TBW was likely kept low so they would not compete with enterprise-class drives that have higher profit margins.
That looks really bad; I haven't seen any of the 850 EVOs I've tested be affected that badly (with similar amounts of wear).
Looks especially odd to see it happening so fast, even for files less than a week old - that would be faster than I've seen with any drive.
And the 850 EVO is maybe not as proactive with rewrites as I would like, but the function is there, can be quite useful and should not allow things to get this bad.
In comparison, here are some results from an 840 EVO (old firmware) and two 850 EVOs:
First the 840 EVO:
Now, the 840 EVO has been unpowered between tests; the older folders were added when the drive was hot/pretty warm, the next-to-last when it was lukewarm, and the last folder was added when it was at just a few degrees Celsius.
As you can see, read speeds still haven't dropped that much - except for the last folder, but considering the conditions that's not that bad.
Here we have an 850 EVO:
It's been tested in the same way as the 840 EVO but has been left unpowered for longer between tests.
As you can see, even the four-month-old folder, which was added when the drive was only a few degrees above freezing, still has an average read speed more than twice that of the 850 EVO in the link.
And here is another 850 EVO:
This 850 EVO has not been tested in the same way as the others (it's my game drive).
Also worth noting that it's the same capacity and uses the same firmware as the 850 EVO in the link.
It doesn't really have those issues with low read speeds either.
So I don't know what's going on with that 850 EVO but it seems like there might be something very wrong with it.
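For reference, the per-folder read-speed comparison behind numbers like these can be approximated with a simple script; this is just a sketch of the idea, not the tool actually used, and on a real run you'd have to bypass or flush the OS file cache (e.g. unbuffered reads) or the results are meaningless:

```python
import os, time

def folder_read_speed(path: str, block_size: int = 1 << 20) -> float:
    """Read every file under a folder sequentially and return the average speed in MB/s."""
    total_bytes, start = 0, time.perf_counter()
    for root, _, files in os.walk(path):
        for name in files:
            with open(os.path.join(root, name), "rb") as f:
                while chunk := f.read(block_size):
                    total_bytes += len(chunk)
    return total_bytes / (time.perf_counter() - start) / 1e6

# Hypothetical usage: compare folders that were written at different times
for folder in ("D:/2019-07", "D:/2020-03", "D:/2020-06"):
    print(folder, f"{folder_read_speed(folder):.1f} MB/s")
```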
My speculation is that overcoming TLC's issues required a big jump in ECC, from BCH to LDPC. The general thought that manufacturing alone (with the shift to 3D NAND etc.) solved it may not be accurate. Similarly, there was commentary (I believe actually from yourself) suggesting that 2D TLC drives newer than the 840 EVO line had better-than-expected retention when paired with newer controllers (as they likely have better ECC).
It was more that they were able to maintain high read speeds better than expected, rather than that they had better data retention.
Not saying that they have poor data retention despite having fairly high read speeds, just pointing out the difference.
But, yes, there are some drives with 2D TLC NAND that have had pretty good read speeds and slowed down less than some drives using 3D TLC NAND or 2D MLC NAND.
These drives, however, all use BCH ECC, while those with 2D TLC NAND that have had issues with dropping read speeds use LDPC ECC.
That may change after a bit more wear, because right now the drives using LDPC ECC aren't as worn as those with BCH ECC, so if they're less sensitive to wear the gap may decrease.
In fact, not that long ago I tested a drive using 2D TLC NAND and BCH ECC, after a lot of wear and after it had been unpowered for a year, and it had better read speeds than a drive using 3D TLC NAND.
Which brings up further questions with respect to QLC. We have neither the jump in manufacturing (in fact, manufacturers may be going down in feature size again post-128L) nor the next step beyond LDPC.
Toshiba used to advertise QSBC as being significantly more effective than (one version of) LDPC ECC.
Still taking those claims with a grain of salt, however.