Question: Are DRAM-less TLC and QLC SSDs best avoided?


VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
I'm pushing 8.7TB TBW on this Adata SP550 240GB (2D TLC, DRAM cache, I believe). Seems fine to me, although it could be a bit higher-performing. (On a rig that's maybe two years old? I was a Ryzen early adopter, but I think this is my second OS SSD on this box; the first one ran out of room. Or maybe I started this rig with this one, I don't remember.)

I've got a pair of 512GB Intel 545s SATA6G 2.5" SSDs to toss into these rigs when I re-build them.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I'm pushing 8.7TB TBW on this Adata SP550 240GB (2D TLC, DRAM cache, I believe). Seems fine to me, although it could be a bit higher-performing. (On a rig that's maybe two years old? I was a Ryzen early adopter, but I think this is my second OS SSD on this box; the first one ran out of room. Or maybe I started this rig with this one, I don't remember.)

FWIW, Here are the TBW ratings for that drive:

https://www.adata.com/upload/downloadfile/Datasheet_SP550_EN_20151223.pdf

120GB: 90 TBW
240GB: 90 TBW
480GB: 180 TBW
960GB: 360 TBW
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
90TBW? So I've used 8.7TB in 2+ years? Yeah, a 20-year lifespan is OK with me, for a cheap 2D TLC 240GB SATA 2.5" SSD. By then, we'll have moved on to like 20TB SSDs.
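
Sanity-checking that lifespan math (a rough sketch; the roughly-two-year service time and write rate are just the figures from this post):

Code:
# Rough lifespan estimate from the figures above: 8.7TB written in ~2 years
# against a 90TBW rating. The 2-year service time is an approximation.
tbw_rating_tb = 90          # Adata SP550 240GB rated endurance
written_tb = 8.7            # host writes so far
years_in_service = 2        # approximate

tb_per_year = written_tb / years_in_service                    # ~4.35 TB/year
remaining_years = (tbw_rating_tb - written_tb) / tb_per_year

print(f"~{tb_per_year:.1f} TB/year -> ~{remaining_years:.0f} more years to 90TBW")
# ~4.4 TB/year -> ~19 more years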
 

bononos

Diamond Member
Aug 21, 2011
3,923
181
106
Looked up the old AT review of the SP550; it's in the same ballpark as the Samsung 860, except that the 860 has faster random writes.

FWIW, Here are the TBW ratings for that drive:
https://www.adata.com/upload/downloadfile/Datasheet_SP550_EN_20151223.pdf
120GB: 90 TBW
240GB: 90 TBW
480GB: 180 TBW
960GB: 360 TBW

Will the performance of an SSD be maintained throughout the TBW rating, or will it drop precipitously towards the end of its rated TBW? If it drops a lot, then it might matter even to home users who edit videos for YouTube.
 
Last edited:
  • Like
Reactions: cbn

cbn

Lifer
Mar 27, 2009
12,968
221
106
Will the performance of an SSD be maintained throughout the TBW rating, or will it drop precipitously towards the end of its rated TBW? If it drops a lot, then it might matter even to home users who edit videos for YouTube.

That is a really good question and I asked the very same in the latter half of this post.
 
  • Like
Reactions: bononos

arandomguy

Senior member
Sep 3, 2013
556
183
116
I don't understand why people put so much stock in those TBW ratings. None of them actually state what the test criteria are or what counts as failure. The only vague criterion ever cited is that the flash should still retain data after 1 year powered off within some temperature bounds, as per the JEDEC criteria, even though those criteria are actually for how P/E cycles are certified, not actual drives. It's also worth noting that I don't believe any manufacturer explicitly claims that is the target either. And even granting that, the criteria make no mention of things like performance (sure, the data is readable, but only after multiple read passes and ECC).

Also, no one has ever done any independent testing and verification. Well, I guess TechReport kind of did with that write-to-failure test, which is misleading.

What I'd like to see done is -

1) Condition the drive with a large amount of writes.

2) Write data for read testing.

3) Stash the SSD in a "hot box" for something like a week, unpowered.

4) Then perform tests.

I have some doubts about what the benchmark results, especially for a QLC drive, would actually look like in that scenario.
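
To make that concrete, here is a minimal file-level sketch of steps 1-4 (paths, sizes and file counts are made-up assumptions, and a real test would condition with far more writes and use a block-level tool, but the shape of the procedure is the same):

Code:
#!/usr/bin/env python3
# Sketch of the proposed retention/read-speed test. Illustrative only:
# TEST_DIR, sizes and file counts are assumptions, not anyone's actual method.
import os
import sys
import time

TEST_DIR = "/mnt/testssd/retention_test"   # assumed mount point of the drive under test
FILE_SIZE = 1 << 30                        # 1 GiB per read-test file
NUM_FILES = 8
CHUNK = 1 << 20                            # 1 MiB I/O chunks


def condition(passes=2):
    """Step 1: crude conditioning - rewrite a large scratch file a few times."""
    scratch = os.path.join(TEST_DIR, "conditioning.bin")
    for _ in range(passes):
        with open(scratch, "wb") as f:
            for _ in range(NUM_FILES * (FILE_SIZE // CHUNK)):
                f.write(os.urandom(CHUNK))          # incompressible data
            os.fsync(f.fileno())
    os.remove(scratch)


def prepare():
    """Steps 1-2: condition the drive, then lay down the data to be read later."""
    os.makedirs(TEST_DIR, exist_ok=True)
    condition()
    for i in range(NUM_FILES):
        with open(os.path.join(TEST_DIR, f"testfile_{i}.bin"), "wb") as f:
            for _ in range(FILE_SIZE // CHUNK):
                f.write(os.urandom(CHUNK))
            os.fsync(f.fileno())
    print("Done. Power the drive off and store it (step 3), then run 'measure' (step 4).")


def measure():
    """Step 4: read the aged files back sequentially and report throughput."""
    for i in range(NUM_FILES):
        path = os.path.join(TEST_DIR, f"testfile_{i}.bin")
        size = os.path.getsize(path)
        start = time.monotonic()
        with open(path, "rb") as f:
            while f.read(CHUNK):
                pass
        print(f"{path}: {size / (time.monotonic() - start) / 1e6:.1f} MB/s")


if __name__ == "__main__":
    prepare() if sys.argv[1:] == ["prepare"] else measure()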

More general on TBW:
The 860 EVO has (especially for the larger capacities) significantly higher TBW than the 850 EVO did despite still being specced for 2000 P/E.
Though if you doubled the TBW for the 850 EVO, you'd get the same TBW as for the 860 EVO (the 850 EVO may start at 75TB, but that is at 120GB).
Even more interesting, though, is that while the TBW for the 860 Pro has gone up, it looks to be specced at about 2000 P/E as well (in comparison, the 850 Pro was specced for around 6000 P/E).

A difference with respect to the 850 EVO and 860 EVO is ECC. The 860 EVO uses a newer controller; they don't go into details, but it's supposed to be a further iterative improvement.

Actually, the 850 EVO 1TB specifically uses an even older controller that was also used on the 840 EVO, which does not even use LDPC for ECC but BCH. It's also rated for the same TBW as the 500 GB despite 2x the capacity. I've also seen reports of it exhibiting some retention related performance drops similar to the 840 EVO.
 

Glaring_Mistake

Senior member
Mar 2, 2015
310
117
126
I don't understand why people put so much stock in those TBW ratings. None of them actually state what the test criteria are or what counts as failure.

Not a fan of them either.
Like I said in an earlier post, it doesn't seem like TBW is as closely related to endurance as people seem to believe.
Some drives have a TBW that is a bit conservative, while some drives can barely go over a WAF of 1 if they're to avoid getting worn out before hitting their TBW.

The only vague criterion ever cited is that the flash should still retain data after 1 year powered off within some temperature bounds, as per the JEDEC criteria, even though those criteria are actually for how P/E cycles are certified, not actual drives. It's also worth noting that I don't believe any manufacturer explicitly claims that is the target either.

Don't think it's that vague - they have some interesting documents on how SSD endurance is determined, and those tests can - as they note in JESD218B.01 - be done at both a component and a drive level.

And even granting that, the criteria make no mention of things like performance (sure, the data is readable, but only after multiple read passes and ECC).

That they don't.
Have seen some drives that have had read speeds suffer quite a bit after they've seen some wear.

Also, no one has ever done any independent testing and verification. Well, I guess TechReport kind of did with that write-to-failure test, which is misleading.

What I'd like to see done is -

1) Condition the drive with a large amount of writes.

2) Write data for read testing.

3) Stash the SSD in a "hot box" for something like a week, unpowered.

4) Then perform tests.

I have some doubts about what the benchmark results, especially for a QLC drive, would actually look like in that scenario.

Have done something similar to this with a number of drives actually, with the exception that they were not stored in a "hot box", though they were unpowered for significantly longer than a week.

A difference with respect to the 850 EVO and 860 EVO is ECC. The 860 EVO uses a newer controller; they don't go into details, but it's supposed to be a further iterative improvement.

Actually, the 850 EVO 1TB specifically uses an even older controller that was also used on the 840 EVO, which does not even use LDPC for ECC but BCH.

Yeah, Samsung can be a bit tightlipped.

But I've heard that the 750 EVO uses LDPC ECC and it uses the same controller as the 850 EVO.
Like Tomshardware: "The new SSD 750 EVO uses the 850 EVO's controller equipped with the more advanced LDPC ECC engine." and Techadvisor: "In order to help in error correction, the 750 Evo has a more advanced low-density parity-checker (LDPC) which is a feature of the Samsung MGX Controller. "
Have heard claims that the 850 EVO uses LDPC ECC too.

It's also rated for the same TBW as the 500 GB despite 2x the capacity.

Are you talking about TBW for the 860 EVO and Pro?

I've also seen reports of it exhibiting some retention related performance drops similar to the 840 EVO.

Could you try to find those?

Would be interested in seeing them.
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
Don't think it's that vague - they have some interesting documents on how SSD endurance is determined and those tests can - as they note in JESD218B.01 - be done on both a component and a drive-level.

Do you know of any information that states what the criteria are? My understanding was that the 1-year retention spec was specific to how P/E cycles were rated. But I'm not sure if we can even assume that applies to TBW.

My other point, as mentioned, is that I'm not sure how many manufacturers actually state explicitly that they are following such standards for their TBW ratings.

That they don't.
Have seen some drives that have had read speeds suffer quite a bit after they've seen some wear.

My other question regarding this, with respect to SSD testing, is whether read testing in general needs to be looked at. Typically people are reading data that will have been written at least some time ago, while I assume all current SSD testing is based on essentially reading freshly written data.

The other thing is that if modern drives are more and more using caching (pseudo-SLC) schemes, then if you read the data soon enough I'd assume it hasn't even migrated to TLC/QLC yet; as such, aren't you only testing reads from the pSLC cache?

We have some awareness of this with respect to writes, so writes are sometimes tested with the drive near capacity or after other stress factors. But there doesn't seem to be an equivalent for reads.

Have done something similar to this with a number of drives actually, with the exception that they were not stored in a "hot box", though they were unpowered for significantly longer than a week.

My consideration was for a way to hypothetically accelerate degradation as I realize that for reviewers and tech publications especially it probably isn't practical to test for such a long period of time.

While the argument could be made that a "hot box" and massive write conditioning at the start isn't a real-world usage scenario, it's an attempt to emulate how the drive would actually hold up after 2-3 years of use, which I expect is a reasonable planned time period for consumers.

But I've heard that the 750 EVO uses LDPC ECC and it uses the same controller as the 850 EVO.
Like Tomshardware: "The new SSD 750 EVO uses the 850 EVO's controller equipped with the more advanced LDPC ECC engine." and Techadvisor: "In order to help in error correction, the 750 Evo has a more advanced low-density parity-checker (LDPC) which is a feature of the Samsung MGX Controller. "
Have heard claims that the 850 EVO uses LDPC ECC too.

The 850 EVO line used multiple controllers. The smaller-capacity drives (up to 500GB) used the then-newer MGX controller you reference, which was also used later on the 750 EVO line as well. The largest 1TB model, however, used the older MEX controller that was used on the 840 EVO line; I don't believe this controller is capable of LDPC, only BCH, for ECC.

https://www.anandtech.com/show/8747/samsung-ssd-850-evo-review

When the 850 EVO line was updated to V2, the controller on the higher capacities was switched to the MHX, along with a move from 32L to 48L NAND.

Are you talking about TBW for the 860 EVO and Pro?

I was referring to the 850 EVO 1TB and 500GB having the same TBW rating of 150TB. The 250GB and 120GB also had the same TBW rating.

As a further aside, the 850 Pro line all had the same TBW rating regardless of capacity. Stuff like this suggests to me that TBW ratings given by manufacturers aren't really as standardized/tested as we might think.

Could you try to find those?

Would be interested in seeing them.

https://goughlui.com/2016/11/08/note-samsung-850-evo-data-retention-performance-degradation/

My speculation is that overcoming TLC issues required a big jump in ECC, from BCH to LDPC. The general thought that manufacturing alone (the shift to 3D NAND, etc.) solved it may not be accurate. Similarly, there was commentary (I believe actually from yourself) suggesting that newer 2D TLC drives (newer than the 840 EVO line) had better than expected retention when paired with newer controllers (as they likely have better ECC).

Which brings up further questions with respect to QLC. We have neither a jump in manufacturing (in fact manufacturers may be going down in feature size again post-128L) nor the next step beyond LDPC.
 

bononos

Diamond Member
Aug 21, 2011
3,923
181
106
There should be more investigative reviews from AT and other sites to push manufacturers to adopt a more meaningful performance spec instead of TBW. There is really nothing for the end consumer to go on except for the not so useful TBW.
 
  • Like
Reactions: UsandThem

leexgx

Member
Nov 4, 2004
57
1
71
The DRAM on an SSD is used for the page table (mapping) and wear-leveling management; at no point is it used for write caching. On SSDs, all writes go to the NAND directly, with no caching on the SSD side.

When they are DRAM-less, all page table lookups and changes go directly to NAND, which is why they get far slower as more data is stored: the controller has to use the slower NAND to read and write the page table (wear leveling is still done the same as before, but the table is updated immediately, same as the page table).
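
A toy model of why that hurts, with entirely made-up latencies and class names (real FTLs are far more involved): with DRAM the whole logical-to-physical map sits in fast memory, while a DRAM-less controller can only cache a small slice of it in SRAM and has to hit the NAND for the rest, missing more often as more data is stored.

Code:
# Toy flash-translation-layer (FTL) lookup model; numbers are illustrative only.
NAND_READ_US = 50.0    # assumed cost of fetching a map page from NAND
FAST_READ_US = 0.1     # assumed cost of a DRAM/SRAM map lookup

class DramFTL:
    """Whole logical-to-physical map held in DRAM."""
    def __init__(self):
        self.l2p = {}                        # logical page -> physical page
    def lookup_us(self, lpn):
        _ = self.l2p.get(lpn)                # entry comes straight from DRAM
        return FAST_READ_US

class DramlessFTL:
    """Only a small slice of the map fits in controller SRAM."""
    def __init__(self, sram_entries=8192):
        self.cache = {}                      # tiny cache of hot map entries
        self.max_entries = sram_entries
    def lookup_us(self, lpn):
        if lpn in self.cache:
            return FAST_READ_US              # lucky: entry was in SRAM
        if len(self.cache) >= self.max_entries:
            self.cache.pop(next(iter(self.cache)))   # crude eviction
        self.cache[lpn] = None
        return NAND_READ_US + FAST_READ_US   # had to fetch the map page from NAND

# The more unique data the drive holds, the lower the SRAM hit rate,
# so the average lookup cost drifts toward NAND latency - hence the slowdown.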

Generally I would avoid DRAM-less SSDs even if they support HMB; as more data is stored, the slower they go (some support HMB, where host RAM stands in for the SSD's DRAM, but there's a higher risk of data loss if the page table is corrupted in RAM).

QLC SSDs: no point at the moment, as TLC-based SSDs are typically the same price and faster.
 
  • Like
Reactions: whm1974

cbn

Lifer
Mar 27, 2009
12,968
221
106
Looking at the 3D TLC Crucial BX500 SATA SSD (DRAM-less via the SM2258XT controller), it comes in three capacities: 120GB, 240GB and 480GB, with TBW of 40TB, 80TB, and 120TB respectively.

I reckon the 120GB will work for some people even with a TBW of 40TB, and for people that demand more TBW Crucial would recommend either the higher-capacity 240GB BX500 or the 480GB BX500.

But what about people that don't need the extra capacity, but want more TBW?

Does 512Gb 3D MLC derived from 1024Gb 3D QLC fit the bill? I reckon it would get 80TB to 120TB TBW if used at the 128GB capacity (ie, 2 x 512Gb 3D MLC dies) and be priced in between the 120GB BX500 and the 240GB BX500.

P.S. Current IMFT 64L NAND die capacities are 256Gb TLC, 512Gb TLC and 1024Gb QLC. The IMFT 32L NAND die was configured either as 384Gb TLC or 256Gb MLC (ie, 3D TLC and 3D MLC used the same die).
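
As a point of comparison, those BX500 figures work out to roughly the same number of full drive writes at the two smaller capacities (a quick check, using nominal capacities):

Code:
# Full drive writes implied by the BX500 TBW figures quoted above
# (nominal capacities used for simplicity).
bx500 = {120: 40, 240: 80, 480: 120}    # capacity (GB) -> rated TBW (TB)

for cap_gb, tbw_tb in bx500.items():
    drive_writes = tbw_tb * 1000 / cap_gb
    print(f"{cap_gb}GB: {tbw_tb}TB TBW ~= {drive_writes:.0f} full drive writes")
# 120GB: ~333, 240GB: ~333, 480GB: ~250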
 
Last edited:
  • Like
Reactions: UsandThem

cbn

Lifer
Mar 27, 2009
12,968
221
106
Here is a pretty sweet DRAM-less SATA SSD:

https://www.princeton.co.jp/product/img/hpsata25ssd/M700 2.5inch Simplified Datasheet_EN.pdf

Uses planar MLC NAND with (likely) the SM2258XT controller (I say likely because HP claims it is their own dual-core controller... but at the same time they advertise that it has NAND Xtend technology, which comes with Silicon Motion controllers).

Warranty: 5 years or 70TBW (120GB model) 145TBW (240GB model).

https://techplayboy.com/47822/hp-ssd-m700-solid-state-drive-review/6/

[Image: CrystalDiskMark results for the HP SSD M700 240GB]


(Like the 4K QD1 read. Would like to see some more comparisons.)
 
  • Like
Reactions: UsandThem

Glaring_Mistake

Senior member
Mar 2, 2015
310
117
126
Do you know of any information that states what the criteria are? My understanding was that the 1-year retention spec was specific to how P/E cycles were rated. But I'm not sure if we can even assume that applies to TBW.

There are, for example, JESD218B.01 and JESD219A; they have some information on that - you can download them from JEDEC (registration required, though).

Now, TBW is according to JEDEC the amount of data that can be written to the drive (under specific conditions) before it is to be considered worn out, at which point it should still be able to retain data for at least one year (also under specific conditions).
With manufacturers, though, it seems more like what they're willing to cover under their warranty rather than a point where the drive is actually going to be worn out - I think the ones with TBW set a bit low might defend it on the basis that retention will still be at least one year afterwards, giving them fair margins.
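
For what it's worth, the usual back-of-the-envelope relation between the numbers being thrown around here is TBW ≈ capacity × P/E cycles ÷ write amplification; a rough check against the 850 EVO figures mentioned earlier (the WAF value is purely an assumption):

Code:
# Rough endurance estimate: TBW ~= capacity * P/E cycles / write amplification.
# The P/E figure comes from the discussion above; the WAF here is an assumed value.
def estimated_tbw_tb(capacity_gb, pe_cycles, waf):
    return capacity_gb * pe_cycles / waf / 1000    # GB written -> TB

# e.g. a 250GB drive with NAND specced for 2000 P/E and an assumed WAF of 3
print(estimated_tbw_tb(250, 2000, 3))    # ~167 TB, vs. the 75-150TB ratings discussed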

My other point, as mentioned, is that I'm not sure how many manufacturers actually state explicitly that they are following such standards for their TBW ratings.

Well, you'd have to find some small footnote in a data sheet somewhere but looking around a bit I can see Intel/Kingston/Toshiba and WD/SanDisk mentioning it (either using JESD218, JESD218A, JESD219 or JESD219A).

My consideration was for a way to hypothetically accelerate degradation as I realize that for reviewers and tech publications especially it probably isn't practical to test for such a long period of time.

While the argument could be made that a "hot box" and massive write conditioning at the start isn't a real-world usage scenario, it's an attempt to emulate how the drive would actually hold up after 2-3 years of use, which I expect is a reasonable planned time period for consumers.

I get that that was why; just mentioning how my tests differed from it.

The 850 EVO line used multiple controllers. The smaller-capacity drives (up to 500GB) used the then-newer MGX controller you reference, which was also used later on the 750 EVO line as well. The largest 1TB model, however, used the older MEX controller that was used on the 840 EVO line; I don't believe this controller is capable of LDPC, only BCH, for ECC.

https://www.anandtech.com/show/8747/samsung-ssd-850-evo-review

When the 850 EVO line was updated to V2, the controller on the higher capacities was switched to the MHX, along with a move from 32L to 48L NAND.

I'm aware that it used two different controllers; I just didn't check which one of them the 840 EVO used.

I was referring to the 850 EVO 1TB and 500GB having the same TBW rating of 150TB. The 250GB and 120GB also had the same TBW rating.

I was pointing out that if the TBW had doubled with capacity for the 850 EVO as it did for the 860 EVO, then the 850 EVO would match it.
It starts out at 75TB at 120GB but retains that TBW rating at 250GB; otherwise it would have matched the 150TB of the 860 EVO at 250GB and, well, at every capacity after that.

As a further aside, the 850 Pro line all had the same TBW rating regardless of capacity. Stuff like this suggests to me that TBW ratings given by manufacturers aren't really as standardized/tested as we might think.

I believe they doubled it from 150TB to 300TB for the 500GB-1TB capacities (or possibly just the 1TB capacity) shortly after launch, though.
But, like I've said, sometimes the connection between endurance and TBW is a bit weak.
For the 850 series specifically, TBW was likely kept low so they would not compete with enterprise-class drives that have higher profit margins.


That looks really bad; I haven't seen any of the 850 EVOs I've tested be so affected (with similar amounts of wear).
Looks especially odd to see it happening so fast, even for files less than a week old - that would be faster than I've seen with any drive.
And the 850 EVO is maybe not as proactive with rewrites as I would like, but the function is there, can be quite useful, and should not allow things to get this bad.

In comparison, here are some results from both 840 EVO (old firmware), and two 850 EVOs:

First the 840 EVO:

[Image: read-speed test results for the 840 EVO]


Now, the 840 EVO has been unpowered between tests, and the older folders were added when the drive was hot/pretty warm, the next-to-last when it was lukewarm, and the last folder was added when it was at just a few degrees Celsius.
As you can see, read speeds still haven't dropped that much - that is, except for the last folder, but considering the conditions it's not that bad.

Here we have an 850 EVO:

[Image: read-speed test results for the first 850 EVO]


It's been tested in the same way as the 840 EVO, but it has been left unpowered longer between tests than the 840 EVO.
As you can see, even the four-month-old folder, which was added when the drive was only a few degrees above freezing, still has an average read speed that is more than twice as fast as the 850 EVO in the link.

And here is another 850 EVO:

[Image: read-speed test results for the second 850 EVO]


This 850 EVO has not been tested in the same way as the others (it's my game drive).
Also worth noting that it's the same capacity and uses the same firmware as the 850 EVO in the link.
It doesn't really have those issues with low read speeds either.


So I don't know what's going on with that 850 EVO but it seems like there might be something very wrong with it.

My speculation is that overcoming TLC issues required a big jump in ECC, from BCH to LDPC. The general thought that manufacturing alone (the shift to 3D NAND, etc.) solved it may not be accurate. Similarly, there was commentary (I believe actually from yourself) suggesting that newer 2D TLC drives (newer than the 840 EVO line) had better than expected retention when paired with newer controllers (as they likely have better ECC).

More like that they were able to maintain high read speeds better than expected than that they had better data retention.
Not saying that they have poor data retention despite having fairly high read speeds, just pointing out the difference.

But, yes, there are some drives with 2D TLC NAND that have had pretty good read speeds and slowed down less than some drives using 3D TLC NAND or 2D MLC NAND.
These drives, however, all use BCH ECC, while those with 2D TLC NAND that have had issues with dropping read speeds use LDPC ECC.
That may change after a bit more wear, because right now the drives using LDPC ECC aren't as worn as those with BCH ECC, so if they're less sensitive to wear then the gap may decrease.

In fact, I tested a drive using 2D TLC NAND and BCH ECC not that long ago, after a lot of wear and after it was unpowered for a year, and it had better read speeds than a drive using 3D TLC NAND.

Which brings up further questions with respect to QLC. We have neither a jump in manufacturing (in fact manufacturers may be going down in feature size again post-128L) nor the next step beyond LDPC.

Toshiba used to advertise QSBC as being significantly more effective than (one version of) LDPC ECC.
Still taking those claims with a grain of salt however.
 
Last edited:
  • Like
Reactions: bononos and cbn

cbn

Lifer
Mar 27, 2009
12,968
221
106
FWIW, Looking back on these old Anandtech reviews:

https://www.anandtech.com/show/2808/2

https://www.anandtech.com/show/4244/intel-ssd-320-review

I find it interesting that the Intel X25-M only had 16MB of DRAM for the 160GB capacity SSD, and the Intel X25-M G2 only had 32MB of DRAM for the 160GB model.

The Intel 320 series SSD only had 64MB of DRAM for the 300GB model.

In contrast, the Phison S11 DRAM-less controller has 32MB of SRAM... so not so far off the old standard SSDs when used at the lower NAND capacities.

P.S. Checking the Phison website on this page, I noticed they have some new controllers, including a new PCIe DRAM-less controller (with 4th Gen ECC*). I can only wish there would be a follow-up to the Phison S11 with 64MB SRAM and 4th Gen ECC. However, I am a bit concerned this will never happen because SATA is getting so old. But who knows, maybe they will, because there is always an application for such a controller in SSHDs, especially with multi-actuator hard drives coming out.

SIDE NOTE: Would like to have a "high SRAM" DRAM-less SATA SSD using the IMFT 1024Gb (128GB) 3D QLC dies in locked 512Gb (64GB) 3D MLC mode.

*For reference the most current Phison NVMe controller (the E12) uses 3rd Gen ECC.
 
Last edited:

killster1

Banned
Mar 15, 2007
6,205
475
126
Just to add a bit of perspective to the endurance of SSDs, my most abused SSD (Intel 600p 512GB/288TBW/32L 1st-gen Intel/Micron NAND) has "only" clocked 6.6TB worth of writes in 3 years. It certainly hasn't been spared, so I guess this is worse than what your average consumer will ever do to it.

It'll be long obsolete before it hits 288TBW. At this rate, it'll last 127 more years before hitting the write limit...

In other words, I don't think endurance will ever be a problem for consumer drives.


Some people move more than 1TB a month in data... I know because my ISP is always telling me I'm at 66% of my 1000GB limit (need to try and go unlimited this month); streaming and 4K movies, just 10 of them, is over 1TB. I guess you don't "abuse" your drives as much as you think you do. I wouldn't mind a TLC or QLC drive to install games to. You install the game once; it doesn't really get modified, just read when loaded, same with your movie collection (though not so important to have on an SSD). Just as others say, the price is too close to even care (unless I was purchasing LOTS of drives). I'm very excited for large-capacity SSDs though; imagine a QLC with 8TB for $400. Sure, it's over double the spinner's price, but at that price I'd bite many times.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
Some people move more than 1TB a month in data... I know because my ISP is always telling me I'm at 66% of my 1000GB limit (need to try and go unlimited this month); streaming and 4K movies, just 10 of them, is over 1TB. I guess you don't "abuse" your drives as much as you think you do. I wouldn't mind a TLC or QLC drive to install games to. You install the game once; it doesn't really get modified, just read when loaded, same with your movie collection (though not so important to have on an SSD). Just as others say, the price is too close to even care (unless I was purchasing LOTS of drives). I'm very excited for large-capacity SSDs though; imagine a QLC with 8TB for $400. Sure, it's over double the spinner's price, but at that price I'd bite many times.

That's just what's been passed through your internet connection. Streaming by itself isn't going to affect your drives at all. Video data just gets loaded into RAM, and then deleted when viewed. Unless you're actually downloading video before watching.

As a side note, do you by chance know how much data uncompressed 4K video requires per second? 700MB/s. An unedited 4K video project can easily take half a terabyte per hour, so 600-odd GB isn't that much actually.
 

killster1

Banned
Mar 15, 2007
6,205
475
126
That's just what's been passed through your internet connection. Streaming by itself isn't going to affect your drives at all. Video data just gets loaded into RAM, and then deleted when viewed. Unless you're actually downloading video before watching.

As a side note, do you by chance know how much data uncompressed 4K video requires per second? 700MB/s. An unedited 4K video project can easily take half a terabyte per hour, so 600-odd GB isn't that much actually.

Yes, everything is downloaded first. Yes, 600GB is nothing. On average, 2 hours of video I view is 50-ish GB. I have no idea how much Netflix might use or not use; I guess I will check the next time.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
Yes, everything is downloaded first. Yes, 600GB is nothing. On average, 2 hours of video I view is 50-ish GB. I have no idea how much Netflix might use or not use; I guess I will check the next time.

4K is ~25Mbit/s (~3.2MB/s, ~11.5GB/h). FHD is 10-12Mbit/s (~1.5MB/s, ~5.4GB/h). If you roughly know how much you watch, it's easy to make a rough estimate. If you watch a couple of hours of FHD Netflix every day for a month, it should be ~324GB/month (5.4GB x 2(h) x 30(days)).
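
The same arithmetic as a small script, in case anyone wants to plug in their own numbers (the bitrates are just the estimates above):

Code:
# Monthly streaming volume from an average bitrate (figures from the post above).
def gb_per_month(bitrate_mbit_s, hours_per_day, days=30):
    gb_per_hour = bitrate_mbit_s / 8 * 3600 / 1000    # Mbit/s -> GB per hour
    return gb_per_hour * hours_per_day * days

print(gb_per_month(12, 2))    # FHD, 2h/day -> ~324 GB/month
print(gb_per_month(25, 2))    # 4K,  2h/day -> ~675 GB/month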

I haven't managed, yet, to break the monthly 3TB "fair use" cap on my ISP contract... ;) But then, I don't watch that much anymore.
 

Glaring_Mistake

Senior member
Mar 2, 2015
310
117
126
Turns out I made a mistake on those P/E calculations. EDIT and Correction added to post #19.

But you are right in that TBW and Endurance are related.

Well, I said loosely related because I think it's often decided by what the manufacturer thinks is an appropriate amount for the drive.
For example, the Adata SU900 and the Crucial BX300 use the same controller and NAND, yet the former has approximately twice the TBW of the latter.
And that is likely because for Adata it is their flagship SATA drive, but for Crucial it's their budget drive.
 
  • Like
Reactions: cbn

cbn

Lifer
Mar 27, 2009
12,968
221
106
For example, the Adata SU900 and the Crucial BX300 use the same controller and NAND, yet the former has approximately twice the TBW of the latter.
And that is likely because for Adata it is their flagship SATA drive, but for Crucial it's their budget drive.

ADATA also rates the SU800 (with 3D TLC) at the same TBW as the SU900 (with 3D MLC).

https://www.adata.com/upload/downloadfile/Datasheet_SU900_EN_20170710.pdf

https://www.adata.com/upload/downloadfile/Datasheet_SU800_EN_20180905.pdf

So that is a bit strange, and on top of that it would actually put the SU800 (with its 3D TLC) at twice the endurance of the BX300 with its 3D MLC.

With that mentioned, all this must come down to binning and validation.

P.S. According to the Anandtech article on the BX300, it came out about a year after the MX300, which sort of makes me wonder if the BX300 used less desirable 32L dies that were not able to make the cut as 384Gb TLC with X level of endurance.
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
Here is a quote from Tom's hardware I originally posted in the "What SSDs have 3D QLC NAND?" thread:

https://www.tomshardware.com/reviews/dramless-ssd-roundup,4833.html

"Unfortunately, DRAMless SSDs also have a sinister side. Updating the map directly on the flash requires small random writes, which takes a bite out of the SSD's endurance. This is a particularly vexing issue with low endurance planar 2D TLC NAND flash. At Computex last June, one SSD vendor told us about an OEM 2D TLC SSD that will burn through the rated endurance in a little over a year. The SSD has to last a year because of the notebook's one-year warranty, but anything beyond a year's worth of use is up to the user to fix. Tactics like that are the driving forces behind putting cheap DRAMless SSDs in $500 notebooks."

I reckon the situation described above is one of low-bin planar TLC being used in a low-capacity SSD, with not enough ECC and not enough SRAM on the controller.

So with this noted, assuming average-bin 3D QLC and current ECC, how much SRAM or DRAM does a controller need to have for basic office work? For 3D TLC or 3D MLC?

Is it an absolute number or a ratio of SRAM or DRAM to NAND that matters the most?

P.S. For the record I am thinking about SATA, not NVMe which can use Host memory buffer.
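
For a rough sense of scale (this is the common rule of thumb, not a vendor spec): a flat logical-to-physical map with a 4-byte entry per 4KB page needs about 1MB of map per 1GB of NAND, which is why full-map designs pair roughly 1GB of DRAM with every 1TB of flash, and why 32MB of controller SRAM can only ever hold a small slice of the map.

Code:
# Rule-of-thumb size of a full logical-to-physical map:
# one 4-byte entry per 4KB logical page, i.e. ~0.1% of drive capacity.
def l2p_map_size_mb(capacity_gb, page_kb=4, entry_bytes=4):
    pages = capacity_gb * 1024 * 1024 / page_kb      # number of 4KB pages
    return pages * entry_bytes / (1024 * 1024)       # map size in MB

for cap in (120, 240, 480):
    print(f"{cap}GB drive -> ~{l2p_map_size_mb(cap):.0f}MB of mapping table")
# 120GB -> ~120MB, 240GB -> ~240MB, 480GB -> ~480MB (vs. 32MB of SRAM)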
 
Last edited:

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
The strange thing is, I had thought that I had read that the PNY CS900 was DRAM-less, but darn it, those drives feel pretty speedy. Talking about the 128GB ones specifically, I haven't used the higher capacities.

I've used Sandisk SSD Plus DRAM-less SSDs, and those you could kind of tell, they were kind of sluggish.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The strange thing is, I had thought that I had read that the PNY CS900 was DRAM-less, but darn it, those drives feel pretty speedy. Talking about the 128GB ones specifically, I haven't used the higher capacities.

The PNY CS900 uses the Phison S11 DRAM-less controller (with 32MB SRAM).

I have a Patriot Flare 60GB (planar MLC) that uses that controller and it seems quite fast to me as well. I have been wondering if this is because it has 32MB SRAM or because it has a decent memory-to-NAND ratio of 1:2 (MB to GB).

I've used Sandisk SSD Plus DRAM-less SSDs, and those you could kind of tell, they were kind of sluggish.

Do you remember which ones you had and the capacity? I remember SSD Plus switched NAND at one point from MLC to TLC-->

https://www.anandtech.com/show/8827/sandisk-announces-entrylevel-ssd-plus-ultra-ii-msata (19nm MLC, possible low bin according to Anandtech)

https://www.tweaktown.com/reviews/7726/sandisk-ssd-plus-z410-sata-iii-review/index2.html (15nm TLC with dram-less SM2256S)

More info on that in the link below:

https://pcpartpicker.com/forums/top...g-in-g26-have-different-nand-flash-quick-rant

P.S. I have wondered if the newer SM2258XT, being built on 40nm, has more SRAM than the DRAM-less SM2256S built on 55nm.
 
Last edited:

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
"Unfortunately, DRAMless SSDs also have a sinister side. Updating the map directly on the flash requires small random writes, which takes a bite out of the SSD's endurance. This is a particularly vexing issue with low endurance planar 2D TLC NAND flash. At Computex last June, one SSD vendor told us about an OEM 2D TLC SSD that will burn through the rated endurance in a little over a year. The SSD has to last a year because of the notebook's one-year warranty, but anything beyond a year's worth of use is up to the user to fix. Tactics like that are the driving forces behind putting cheap DRAMless SSDs in $500 notebooks."

I hope that OEM isn't planning on selling that in the EU. Here warranty is mandated for two years.

Of course, they might like replacing SSDs for free...