
Any DRAM-less SSD drives that DON'T suck at 4K QD32 reads?

VirtualLarry

No Lifer
Aug 25, 2001
50,897
6,313
126
Just thinking about my $55 Team Group L7 EVO TLC 240GB SSD. It only has 25K read IOPS (true to specs).

Or is this an inherent limitation of DRAM-less SSDs? Can we detect DRAM-less SSDs by their low read-IOPS ratings?
 

hojnikb

Senior member
Sep 18, 2014
562
45
91
Just thinking about my $55 Team Group L7 EVO TLC 240GB SSD. It only has 25K read IOPS (true to specs).

Or is this an inherent limitation of DRAM-less SSDs? Can we detect DRAM-less SSDs by their low read-IOPS ratings?
sandforce drives usually perform pretty well, despite the dram less nature.
 

VirtualLarry

No Lifer
Aug 25, 2001
50,897
6,313
126
sandforce drives usually perform pretty well, despite the dram less nature.
SandForce drives are DRAM-less? I didn't know that.

Edit: I knew that they don't store user data in any DRAM cache, but I thought that they kept the mapping tables in DRAM. No?

Anyways, my VisionTek GoDrive 120GB SSDs (according to specs, they have Async NAND) only score 50MB/sec in CDM for 4K QD32 reads (12.5K IOPS).
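For reference, converting between CDM's MB/s figure and IOPS at a 4K block size is simple arithmetic. A quick sketch (the exact result depends on whether "MB" means 10^6 or 2^20 bytes, which is why the numbers above come out slightly rounded):

```python
def iops_from_mbps(mb_per_s: float, block_bytes: int = 4096) -> float:
    """Convert throughput in MB/s (decimal megabytes) to IOPS at a given block size."""
    return mb_per_s * 1_000_000 / block_bytes

def mbps_from_iops(iops: float, block_bytes: int = 4096) -> float:
    """Convert IOPS at a given block size back to MB/s (decimal megabytes)."""
    return iops * block_bytes / 1_000_000

# 50 MB/s of 4KiB reads is ~12.2K IOPS (close to the ~12.5K quoted above).
print(round(iops_from_mbps(50)))      # 12207
# The L7 EVO's 25K IOPS spec corresponds to only ~102 MB/s at 4KiB.
print(round(mbps_from_iops(25_000)))  # 102
```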
 
Last edited:

hojnikb

Senior member
Sep 18, 2014
562
45
91
SandForce drives are DRAM-less? I didn't know that.

Edit: I knew that they don't store user data in any DRAM cache, but I thought that they kept the mapping tables in DRAM. No?

Anyways, my VisionTek GoDrive 120GB SSDs (according to specs, they have Async NAND) only score 50MB/sec in CDM for 4K QD32 reads (12.5K IOPS).
sandforce drives have no physical dram on the board, so no they dont store any data whatsoever on dram :cool:
 

VirtualLarry

No Lifer
Aug 25, 2001
50,897
6,313
126
Phison claims up to 95K random read IOPS and 85K random write IOPS for the S11 DRAM-less controller:

http://www.phison.com/English/newProductView.asp?SortID=63&ID=259

Will be interesting to see how this pans out.
hmm, given some of Phison's past controllers' performance issues, those claimed specs seem almost pie-in-the-sky.

I thought that their S10 controller was a quad-core, but S11 is a single core, only 2-channel, with 16 CEs, and it can hit 95K/85K IOPS, with BOTH MLC and TLC?

Sounds too good to be true.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
hmm, given some of Phison's past controllers' performance issues, those claimed specs seem almost pie-in-the-sky.

I thought that their S10 controller was a quad-core, but S11 is a single core, only 2-channel, with 16 CEs, and it can hit 95K/85K IOPS, with BOTH MLC and TLC?

Sounds too good to be true.
I noticed the DRAM-less SM2256S, used in both the planar-TLC SanDisk SSD Plus and the Z410, is hitting a wall on 4K QD32 reads at the 240GB capacity, according to the following review:

http://www.tweaktown.com/reviews/7726/sandisk-ssd-plus-z410-sata-iii-review/index5.html



So it does raise the question of how well an SSD with the Phison S11 will do with up to sixteen 256Gb MLC dies (i.e., 480GB capacity).

With that said, I do think an SSD with this controller will most probably ship with Toshiba 15nm TLC (rather than MLC), with 120GB and 240GB (rather than 480GB) as the most common capacities.
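One reason capacity matters here: with a typical page-level FTL, the mapping table grows to roughly 1/1024 of the drive's capacity (assuming 4 bytes per 4KiB page; real controllers vary), so it quickly outgrows anything a DRAM-less controller can hold on-chip. A quick sketch of that arithmetic:

```python
def mapping_table_bytes(capacity_bytes: int, page_bytes: int = 4096,
                        entry_bytes: int = 4) -> int:
    """Size of a page-level FTL mapping table: one entry per NAND page.
    The 4-byte-entry / 4KiB-page figures are typical assumptions, not specs."""
    return capacity_bytes // page_bytes * entry_bytes

GB = 1024**3
# A 240GB drive needs ~240MiB of map -- far larger than controller SRAM,
# so a DRAM-less controller must page map segments in from flash on misses.
print(mapping_table_bytes(240 * GB) // 1024**2)  # 240
```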
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I think the best chance for DRAM-less will probably be with the M.2 form factor and NVMe 1.2 (Host Memory Buffer).

The Marvell 88NV1140 controller is a 28nm PCIe 3.0 x1, dual-core, dual-channel design capable of supporting this.

The question is how soon we get these Marvell 88NV1140 drives, and whether they will initially come with the drivers needed for Host Memory Buffer support.
 

Joepublic2

Golden Member
Jan 22, 2005
1,114
6
76
I'm just curious: what are you doing that needs good performance at high queue depths, but is also cost-sensitive enough that you're pinching pennies on the SSD? As far as I've read, good performance at high queue depths is much more beneficial to heavy server-type workloads and has little benefit for more sequential, standard desktop workloads.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
449
126
I'm just curious: what are you doing that needs good performance at high queue depths, but is also cost-sensitive enough that you're pinching pennies on the SSD? As far as I've read, good performance at high queue depths is much more beneficial to heavy server-type workloads and has little benefit for more sequential, standard desktop workloads.
You must be new to Larry threads. :)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Right now I am using the Hectron 120GB 2.5" SSD (SM2256 and SK Hynix 16nm TLC) in an old Dell Optiplex 360 desktop. Since this particular desktop model doesn't support AHCI, the CrystalDiskMark 5.1.2 4K Q32T1 score is only 32 MB/s (~8,000 IOPS); however, it still works well from a subjective standpoint.
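The lack of AHCI is likely the bottleneck there: in legacy IDE mode there is no NCQ, so the drive never actually sees 32 outstanding commands and a QD32 test degrades to roughly QD1. On Linux you can sanity-check the controller mode from `lspci` output; a minimal heuristic sketch (the sample strings below are made up, not real chipset names):

```python
def sata_mode(lspci_text: str) -> str:
    """Rough guess at the storage controller's operating mode from `lspci` output.
    Heuristic only: looks for 'AHCI' first, then a legacy 'IDE interface' entry."""
    for line in lspci_text.splitlines():
        if "AHCI" in line:
            return "AHCI"
        if "IDE interface" in line:
            return "IDE"
    return "unknown"

# Hypothetical lspci lines for the two cases:
print(sata_mode("00:1f.2 IDE interface: Example IDE Controller"))    # IDE
print(sata_mode("00:1f.2 SATA controller: Example AHCI Controller")) # AHCI
```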
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The ADATA SU700, using the Maxiotek MK8115 DRAM-less SSD controller and 3D TLC NAND, is posting around 85,000 IOPS for 4K Q32T1:

 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I'm just curious: what are you doing that needs good performance at high queue depths, but is also cost-sensitive enough that you're pinching pennies on the SSD? As far as I've read, good performance at high queue depths is much more beneficial to heavy server-type workloads and has little benefit for more sequential, standard desktop workloads.
You must be new to Larry threads. :)
:biggrin::biggrin:
I think it is a great question.
 

Joepublic2

Golden Member
Jan 22, 2005
1,114
6
76
Just ordered up one of these for a friend:

http://www.thessdreview.com/our-reviews/adata-premier-sp550-ssd-review-240gb/5/

Performance apparently falls off a cliff when copying large files, but she's an amateur shutterbug who doesn't copy huge files (and she's on a tight budget, obviously); the biggest file I rescued off her old 160GB spinning-rust drive was around 600MB, according to SpaceSniffer. It has a 256MB DDR3 cache, which is a bit of a step up from the drives being asked about in the OP, but it costs just $2 more, so I figured it was worth bringing up.
 

Joepublic2

Golden Member
Jan 22, 2005
1,114
6
76
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.32655 s, 170 MB/s
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 25.0635 s, 85.7 MB/s
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=3G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 13.3894 s, 160 MB/s
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 12.7255 s, 169 MB/s
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=3G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 13.008 s, 165 MB/s
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=2G count=2
dd: warning: partial read (2147479552 bytes); suggest iflag=fullblock
0+2 records in
0+2 records out
4294959104 bytes (4.3 GB, 4.0 GiB) copied, 47.9629 s, 89.5 MB/s
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=2G count=3
dd: warning: partial read (2147479552 bytes); suggest iflag=fullblock
0+3 records in
0+3 records out
6442438656 bytes (6.4 GB, 6.0 GiB) copied, 121.224 s, 53.1 MB/s
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=2G count=4
dd: warning: partial read (2147479552 bytes); suggest iflag=fullblock
0+4 records in
0+4 records out
8589918208 bytes (8.6 GB, 8.0 GiB) copied, 161.529 s, 53.2 MB/s
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=2G count=5
dd: warning: partial read (2147479552 bytes); suggest iflag=fullblock
0+5 records in
0+5 records out
10737397760 bytes (11 GB, 10 GiB) copied, 165.255 s, 65.0 MB/s
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=2G count=6
dd: warning: partial read (2147479552 bytes); suggest iflag=fullblock
0+6 records in
0+6 records out
12884877312 bytes (13 GB, 12 GiB) copied, 214.798 s, 60.0 MB/s
ubuntu@ubuntu:~/Desktop$ sudo dd if=/dev/zero of=/dev/sda bs=2G count=20
dd: warning: partial read (2147479552 bytes); suggest iflag=fullblock
0+20 records in
0+20 records out
42949591040 bytes (43 GB, 40 GiB) copied, 749.313 s, 57.3 MB/s
ubuntu@ubuntu:~/Desktop$
So apparently it has ~2GB of the TLC configured as an SLC cache: sequential write performance is still better than a mechanical drive (~170MB/s) until you write more than about 2GB in one go, then it falls to around 55MB/s. (The "dd: warning: partial read" messages aren't the drive, by the way; Linux caps a single read()/write() at 2,147,479,552 bytes, which is why the bs=2G and bs=3G runs both produce 2.0 GiB records.) It doesn't get any slower than that, though, even with a simulated 43GB file.
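A gentler way to probe the same behavior without writing to the raw device (a sketch; point it at a scratch file on the filesystem under test, and note that page-cache effects make this rougher than dd with direct I/O):

```python
import os
import time

def write_speed_profile(path, chunk_mib=256, total_mib=2048):
    """Write total_mib MiB in chunk_mib pieces, fsync'ing after each, and
    return the MiB/s achieved per chunk. On an SLC-cached TLC drive the
    per-chunk speed should drop once the cache fills."""
    buf = os.urandom(chunk_mib * 1024 * 1024)  # incompressible data
    speeds = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        for _ in range(total_mib // chunk_mib):
            t0 = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # force the chunk out of the page cache
            speeds.append(chunk_mib / (time.perf_counter() - t0))
    finally:
        os.close(fd)
    return speeds
```

Watching where the per-chunk MiB/s figure collapses gives a rough estimate of the SLC cache size, similar to the dd runs above.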
 

jrichrds

Platinum Member
Oct 9, 1999
2,531
3
81
I avoid any drive with the DRAM-less Phison S9 because the performance is terrible (have some Corsair Force LS drives with S9...the ones with S8 are better). The S10 seems solid.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I avoid any drive with the DRAM-less Phison S9 because the performance is terrible (have some Corsair Force LS drives with S9...the ones with S8 are better). The S10 seems solid.
The S9 did indeed have problems:

Here was one of the less-flattering reviews of a Phison S9-equipped SSD (the Patriot Blaze 120GB):

http://www.tweaktown.com/reviews/6924/patriot-blaze-120gb-low-cost-ssd-review/index.html

Here were the sequential-read results:



Notice that only the Phison S9 drives (the Patriot Blaze and Patriot Torch) show variation between minimum, average, and maximum read speeds. Here is what TweakTown wrote about that:

The Blaze 120GB was unable to read sequential data at a consistent pace in our test. The Torch 120GB was the same way when we tested it. I think the lack of a DRAM buffer to cache the table data played a role in the wide separation between minimum and maximum performance.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Some preliminary performance info on a DRAM-less SSD (using the Host Memory Buffer from NVMe spec 1.2):

http://www.anandtech.com/show/10546/toshiba-announces-new-bga-ssds-using-3d-tlc-nand

Toshiba has shared some details about how they plan to make use of HMB and what its impact on performance will be. The BG series uses a DRAM-less SSD controller architecture, but HMB allows the controller to make use of some of the host system's DRAM. The BG series will use host memory to implement a read cache of the drive's NAND mapping tables. This is expected to primarily benefit random access speeds, where a DRAM-less controller would otherwise have to constantly fetch data from flash in order to determine where to direct pending read and write operations. Looking up some of the NAND mapping information from the buffer in the host's DRAM—even with the added latency of fetching it over PCIe—is quicker than performing an extra read from the flash.

Toshiba hasn't provided full performance specs for the new BG series SSDs, but they did supply some benchmark data illustrating the benefit of using HMB. Using only 37MB of host DRAM and testing access speed to a 16GB portion of the SSD, Toshiba measured improvement ranging from 30% for QD1 random reads up to 115% improvement for QD32 random writes.

Table from the AnandTech link above, titled "Performance improvement from enabling HMB":

Random Read: QD1: 30%, QD32: 65%
Random Write: QD1: 70%, QD32: 115%


While it looks like HMB can do a lot to alleviate the worst performance problems of DRAM-less SSD controllers, the caveat is that it requires support from the operating system's NVMe driver. HMB is still an obscure optional feature of NVMe and is not yet supported out of the box by any major operating system, and Toshiba isn't currently planning to provide their own NVMe drivers for OEMs to bundle with systems using BG series SSDs. Thus, it is likely that the first generation of systems that adopt the new BG series SSDs will not be able to take full advantage of their capabilities.
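The mechanism AnandTech describes can be sketched as a read cache sitting in front of the flash-resident mapping table (a toy model for illustration, not Toshiba's implementation; real controllers cache map segments, not single entries, and manage eviction):

```python
class DramlessFTL:
    """Toy model of a DRAM-less controller's logical-to-physical lookup.
    Without a cache, every lookup costs an extra flash read just to fetch
    the map entry; an HMB-style cache in borrowed host DRAM (reached over
    PCIe, but still faster than a NAND read) absorbs repeat lookups."""

    def __init__(self, flash_map, hmb_entries=0):
        self._flash_map = flash_map   # mapping table stored in NAND
        self._hmb = {}                # cache living in host DRAM
        self._hmb_entries = hmb_entries
        self.map_flash_reads = 0      # extra NAND reads spent on the map

    def lookup(self, lba):
        if lba in self._hmb:
            return self._hmb[lba]     # served from host memory
        self.map_flash_reads += 1     # must read the map from flash
        ppa = self._flash_map[lba]
        if len(self._hmb) < self._hmb_entries:
            self._hmb[lba] = ppa      # populate the host-memory cache
        return ppa

# Re-reading the same 100 blocks ten times over:
flash_map = {lba: lba + 1000 for lba in range(100)}
workload = list(range(100)) * 10

no_hmb = DramlessFTL(flash_map)
with_hmb = DramlessFTL(flash_map, hmb_entries=100)
for ftl in (no_hmb, with_hmb):
    for lba in workload:
        ftl.lookup(lba)
print(no_hmb.map_flash_reads, with_hmb.map_flash_reads)  # 1000 100
```

The gap between the two counters is the class of overhead the 30-115% improvements in the table above are recovering.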
 
