Just thinking about my $55 Team Group L7 EVO TLC 240GB SSD. It only has 25K read IOPS (true to spec).
Or is this an inherent limitation of DRAM-less SSDs? Can we detect DRAM-less SSDs by their low read IOPS ratings?

SandForce drives usually perform pretty well, despite their DRAM-less nature.

SandForce drives are DRAM-less? I didn't know that.

SandForce drives have no physical DRAM on the board, so no, they don't store any data whatsoever in DRAM.

Edit: I knew that they don't store user data in a DRAM cache, but I thought they kept the mapping tables in DRAM. No?
Anyway, my VisionTek GoDrive 120GB SSDs (async NAND, according to the specs) only score 50 MB/s in CDM for 4K QD32 reads (12.5K IOPS).
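For anyone wanting to sanity-check these numbers: throughput and IOPS at a fixed block size convert directly into each other (IOPS = KB/s ÷ block size in KB). A quick sketch, using the figures quoted above:

```python
def iops(throughput_mb_s: float, block_kb: float = 4) -> float:
    """Convert a throughput figure (MB/s) at a given block size (KB) to IOPS."""
    return throughput_mb_s * 1000 / block_kb

# 50 MB/s of 4K reads, as measured in CDM:
print(iops(50))  # 12500.0 -> the 12.5K IOPS quoted above

# Conversely, the L7 EVO's rated 25K read IOPS at 4K works out to:
print(25000 * 4 / 1000)  # 100.0 MB/s
```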
Phison claims up to 95K random read IOPS and 85K random write IOPS for the S11 DRAM-less controller.
Will be interesting to see how this pans out.

Hmm, given some of Phison's past controllers' performance issues, those claimed specs seem almost pie-in-the-sky.
I thought their S10 controller was a quad-core, but the S11 is a single core with only 2 channels and 16 CEs, and it can hit 95K/85K IOPS with BOTH MLC and TLC?
Sounds too good to be true.

I noticed the DRAM-less SM2256S in both the planar-TLC SSD Plus and the Z410 is hitting a wall on 4K QD32 reads at the 240GB capacity, according to the following review.
I'm just curious: what are you doing that needs good performance at high queue depths but is also cost sensitive enough that you're pinching pennies on the SSD? As far as I've read good performance on high queue depths is much more beneficial to heavy server type workloads and has little benefit for more sequential standard desktop workloads.
You must be new to Larry threads.
I think it is a great question. :biggrin: :biggrin:
I avoid any drive with the DRAM-less Phison S9 because the performance is terrible (I have some Corsair Force LS drives with the S9...the ones with the S8 are better). The S10 seems solid.

The S9 did indeed have problems:

> The Blaze 120GB was unable to read sequential data at a consistent pace in our test. The Torch 120GB was the same way when we tested it. I think the lack of a DRAM buffer to cache the table data played a role in the wide separation between minimum and maximum performance.
> Toshiba has shared some details about how they plan to make use of HMB and what its impact on performance will be. The BG series uses a DRAM-less SSD controller architecture, but HMB allows the controller to make use of some of the host system's DRAM. The BG series will use host memory to implement a read cache of the drive's NAND mapping tables. This is expected to primarily benefit random access speeds, where a DRAM-less controller would otherwise have to constantly fetch data from flash in order to determine where to direct pending read and write operations. Looking up some of the NAND mapping information from the buffer in the host's DRAM—even with the added latency of fetching it over PCIe—is quicker than performing an extra read from the flash.
>
> Toshiba hasn't provided full performance specs for the new BG series SSDs, but they did supply some benchmark data illustrating the benefit of using HMB. Using only 37MB of host DRAM and testing access speed to a 16GB portion of the SSD, Toshiba measured improvement ranging from 30% for QD1 random reads up to 115% improvement for QD32 random writes.
Table from the Anandtech link above, "Performance improvement from enabling HMB":

| | QD1 | QD32 |
|---|---|---|
| Random read | 30% | 65% |
| Random write | 70% | 115% |
While it looks like HMB can do a lot to alleviate the worst performance problems of DRAM-less SSD controllers, the caveat is that it requires support from the operating system's NVMe driver. HMB is still an obscure optional feature of NVMe and is not yet supported out of the box by any major operating system, and Toshiba isn't currently planning to provide their own NVMe drivers for OEMs to bundle with systems using BG series SSDs. Thus, it is likely that the first generation of systems that adopt the new BG series SSDs will not be able to take full advantage of their capabilities.
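The mechanism described above—an extra flash read just to find where the data lives, avoided when the mapping entry is cached in host DRAM—can be sketched as a toy model. All latency constants, class names, and the cache policy here are illustrative assumptions, not Toshiba's actual design:

```python
import random

# Toy model of why an HMB mapping-table cache helps a DRAM-less controller.
# Latencies are illustrative assumptions, not measured figures.
NAND_READ_US = 60.0   # assumed: fetching a mapping-table entry from flash
HMB_READ_US = 5.0     # assumed: fetching it from host DRAM over PCIe

class DramlessFtl:
    """Flash translation layer that may cache mapping entries in host memory."""
    def __init__(self, hmb_entries=0):
        self.cache = {}               # LBA -> physical page, held in host DRAM
        self.hmb_entries = hmb_entries

    def lookup(self, lba):
        """Return (physical_page, lookup_latency_us) for a logical block address."""
        if lba in self.cache:
            return self.cache[lba], HMB_READ_US
        # Cache miss: the controller pays an extra flash read just to learn
        # where to direct the pending I/O.
        phys = hash(lba) % (1 << 20)  # stand-in for the real mapping
        if len(self.cache) < self.hmb_entries:
            self.cache[lba] = phys
        return phys, NAND_READ_US

def avg_lookup_latency(ftl, hot_lbas, n=10_000):
    """Average mapping-lookup latency over n random reads of a hot working set."""
    random.seed(0)
    return sum(ftl.lookup(random.choice(hot_lbas))[1] for _ in range(n)) / n

hot = list(range(256))  # a small hot set of logical block addresses
print(avg_lookup_latency(DramlessFtl(hmb_entries=0), hot))    # 60.0 (always hits flash)
print(avg_lookup_latency(DramlessFtl(hmb_entries=256), hot))  # close to 5 after warm-up
```

The second figure lands near the HMB latency because the hot mappings are cached after first touch, which mirrors why HMB helps random reads most: the mapping lookup dominates when the data accesses themselves are small.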