Discussion: Optane client products, current and future


IntelUser2000

Elite Member
Oct 14, 2003
OK, so it's likely entirely software. The slide above says two physical drives, so that follows.

I'm disappointed with how Intel is driving the Optane product line. I hope it works with all NVMe-equipped systems, including AMD platforms, but it may not be able to.

Greedy mega corporations.
 

cbn

Lifer
Mar 27, 2009
I wonder if it will use write-through caching rather than write-back caching?

As shown below, the 660p can write 4K QD1 at over 330 MB/s.

[Image: burst 4K random read/write results for the 660p]

The 32GB Optane, in contrast, writes 4K QD1 at 207 MB/s.

[Image: burst 4K random read/write results for 32GB Optane Memory]
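The distinction being mulled over here can be sketched in a few lines of Python. This is only a toy model of the two policies; the class names and dict-backed "devices" are illustrative, not anything from Intel's actual caching driver:

```python
class WriteThroughCache:
    """Every write goes to both the cache and the backing store
    immediately, so write speed is bounded by the slower device,
    but the backing store is never stale."""
    def __init__(self, cache, backing):
        self.cache, self.backing = cache, backing

    def write(self, key, value):
        self.cache[key] = value
        self.backing[key] = value  # synchronous: no dirty data in cache


class WriteBackCache:
    """Writes land in the cache first and are flushed later, so bursts
    run at cache speed, but dirty data exists only in the cache until
    the next flush."""
    def __init__(self, cache, backing):
        self.cache, self.backing = cache, backing
        self.dirty = set()

    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)        # backing store is now stale

    def flush(self):
        for key in self.dirty:
            self.backing[key] = self.cache[key]
        self.dirty.clear()
```

With write-through, the backing QLC sees every write at write time; with write-back, the Optane absorbs the burst and the QLC only sees it at flush time.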
 

IntelUser2000

Elite Member
Oct 14, 2003
@cbn We have no idea how a 32GB Carson Beach (now known as Optane Memory M10) performs, so it might do a lot better than that. If it does 50% better, the 32GB version will already outperform QLC NAND.

@Billy Tallis Hmm, here's my take on the weird advertised capacities for the SSD and Memory parts.

It's due to the idiocy that is marketing and lawyers. With Memory, they can advertise it as 64GB because it won't be user-accessible anyway. With the SSD, it has to follow the convention that has been used in storage for years. Companies like this protect themselves with legal jargon because they might otherwise be hit with class-action lawsuits.
 

cbn

Lifer
Mar 27, 2009
@cbn We have no idea how a 32GB Carson Beach (now known as Optane Memory M10) performs, so it might do a lot better than that. If it does 50% better, the 32GB version will already outperform QLC NAND.

Here is a picture of the drive (from the Intel website):

[Image: Intel Optane Memory H10 drive, front view]

It looks to me like it has the same size controller used by Stony Beach:

[Image: Stony Beach Optane Memory module]


(Wish I had a better picture for Stony Beach)
 

cbn

Lifer
Mar 27, 2009
For the 16GB Optane with 256GB 3D QLC, I wonder if it still uses the 256MB DRAM cache from the larger-capacity models of the 660p?

Or did Intel decide to use a smaller DRAM cache?
 

nosirrahx

Senior member
Mar 24, 2018
This sounds a lot like what I talked with them about in the Optane AMA session.

I also understand why Intel is dead set against officially supporting 800P + SATA SSD, as that would directly compete with this new solution.
 

IntelUser2000

Elite Member
Oct 14, 2003
Here is a picture of the drive (from the Intel website):

It looks to me like it has the same size controller used by Stony Beach:

That doesn't tell us much. On the surface, Sandy Bridge and Skylake look similar too. That's merely the packaging; the die is inside.

Stony Beach with firmware updates and maybe an improved 3D XPoint die resulted in the M10 and 800P SSDs with twice the throughput.

Also, I think Intel knows that having sequential throughput lower than a QLC NVMe drive is a no-go, so they would have thought about this. Hopefully.
 

cbn

Lifer
Mar 27, 2009
Stony Beach with firmware updates and maybe an improved 3D XPoint die resulted in the M10 and 800P SSDs with twice the throughput.

The 32GB M10 and the 32GB original Optane have the same throughput... with the exception of the 32GB original Optane actually having a 150 MB/s higher sequential read:

https://ark.intel.com/products/9974...Series-32GB-M-2-80mm-PCIe-3-0-20nm-3D-Xpoint-

https://ark.intel.com/products/1354...Series-32GB-M-2-80mm-PCIe-3-0-20nm-3D-XPoint-

So the reason the 58GB 800p beats both 32GB Optanes is because it has more Optane in parallel.
 

cbn

Lifer
Mar 27, 2009
That doesn't tell us much. On the surface, Sandy Bridge and Skylake look similar too. That's merely the packaging; the die is inside.

That could be true, but the Optane controller package seen on Teton Glacier is pretty darn small.

Much smaller than the PCIe 3.0 x 4 Optane controller used for the 900p and 905p... and even smaller than the PCIe 3.0 x 4 SM2263 controller used for the 3D QLC part of the drive (which is actually one of the smallest, if not the smallest, PCIe 3.0 x 4 NAND controllers out there).

With that mentioned, anything is possible. Maybe they are indeed using a much smaller lithography... and thus the new Carson Beach PCIe 3.0 x 4 controller can fit inside the same size package as the old Stony Beach PCIe 3.0 x 2 controller. But the name H10 (among other things) implies to me that the drive uses an M10 PCIe 3.0 x 2 Optane controller in a hybrid fashion.
 

IntelUser2000

Elite Member
Oct 14, 2003
That could be true, but the Optane controller package seen on Teton Glacier is pretty darn small.

We went through this before. By the way, storage controllers are nowhere near as demanding as the CPUs we are used to in terms of silicon real estate. In fact, they are far smaller. Intel probably doesn't even use its own process for the controller. I'd bet on 40nm, or maybe 28nm TSMC.

Bare die pic:
https://www.anandtech.com/show/12899/intel-previews-m2-optane-ssd-905p-380gb

Package pic:
https://www.tweaktown.com/news/63181/intel-answers-question-905p-2-optane-release/index.html

Indeed, the package is large because it's a heatspreader. The die is quite small; in fact, it doesn't look any larger than the one used in Optane Memory. However, it may need to dissipate 3x the power.

It also doesn't address the likelihood that Intel knows this. This is actually the biggest reason why I'm taking a wait-and-see approach. I think they are very aware that having 150MB/s sequential throughput on their latest NVMe SSD would be bonkers.

Big companies do make stupid decisions, but I'll at least give them some credit and assume the final product will end up doing 1GB/s sequential writes at minimum. 1GB/s sequential is what the lowly regarded 660p is rated at.
 

cbn

Lifer
Mar 27, 2009
We went through this before. By the way, storage controllers are nowhere near as demanding as the CPUs we are used to in terms of silicon real estate. In fact, they are far smaller. Intel probably doesn't even use its own process for the controller. I'd bet on 40nm, or maybe 28nm TSMC.

Bare die pic:
https://www.anandtech.com/show/12899/intel-previews-m2-optane-ssd-905p-380gb

Package pic:
https://www.tweaktown.com/news/63181/intel-answers-question-905p-2-optane-release/index.html

Indeed, the package is large because it's a heatspreader. The die is quite small; in fact, it doesn't look any larger than the one used in Optane Memory. However, it may need to dissipate 3x the power.

It also doesn't address the likelihood that Intel knows this. This is actually the biggest reason why I'm taking a wait-and-see approach. I think they are very aware that having 150MB/s sequential throughput on their latest NVMe SSD would be bonkers.

Big companies do make stupid decisions, but I'll at least give them some credit and assume the final product will end up doing 1GB/s sequential writes at minimum. 1GB/s sequential is what the lowly regarded 660p is rated at.

The 16GB Optane does have a long way to go to catch up to the sequential write and 4K QD1 write of 3D QLC in SLC mode... particularly the sequential write. This is one reason why I suspect Intel will use a write-through cache method rather than the write-back we see with Optane caching of SATA drives. Two more reasons would be increased reliability and more room on the Optane for read cache.

P.S. You make a good point about a new controller offering benefits (this is obvious), but I can't imagine it would be enough for 16GB to catch up to 256GB of QLC running in SLC mode for writes. With that mentioned, AnandTech has noted that the M15 Optane (PCIe 3.0 x 4) will be available in capacities of 16GB to 128GB... so maybe it does have the newest controller running with only PCIe 3.0 x 2. I just wonder, why name it H10? Why not H15?
 

cbn

Lifer
Mar 27, 2009
Regarding the M15 Optane memory (which is PCIe 3.0 x 4) I do wonder how it will compare to 1st Gen Optane Memory and Optane SSDs using the current PCIe 3.0 x 2 controller.

As shown below, the PCIe 3.0 x 2 Optanes (which include the 800p) did substantially better in 4K QD1 reads compared to the PCIe 3.0 x 4 Optanes (900p and P4800X).

https://www.anandtech.com/show/12512/the-intel-optane-ssd-800p-review/5

[Image: burst 4K random read results]


I'm guessing a better design will allow the new PCIe 3.0 x 4 controller to have lower latency and thus do better than the old PCIe 3.0 x 4 Optane controllers.
 

IntelUser2000

Elite Member
Oct 14, 2003
Write-through makes more sense for the new product, but it will likely not help with endurance.

Also, I compared the results between the 660p and the Optane Memory. The QLC SSD is really only ahead in burst writes. Combined sequential writes are superior on Optane Memory, because the 660p drops an enormous amount when the drive is full. The drive-full scenario may be a bit pessimistic, but not overly so.

I am interested in how it will turn out. I hope the combination allows for a compelling SSD. I wonder where it's going to fit in Intel's SSD product stack.

Also: Carson Beach is officially Optane Memory M15, thanks to Billy. The Optane SSD 815P is called Bombay Beach.
 

cbn

Lifer
Mar 27, 2009
Write-through makes more sense for the new product, but it will likely not help with endurance.

That is true, and while I bet the write buffer on the 16GB and 32GB Optane is not large, I'm guessing it (if used as write-back) would be essentially additive to the SLC cache on the 3D QLC portion. So that would be good.

However, I'm thinking that because of the relatively low write performance of the 16GB/32GB Optane, coupled with the decently sized SLC cache of the 3D QLC, they will not use write-back:

https://www.anandtech.com/show/13078/the-intel-ssd-660p-ssd-review-qlc-nand-arrives

[Image: Intel 660p variable SLC cache size table]
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Regarding Optane integration into a NAND SSD at the firmware level, I wonder if the Ruler form factor NAND SSDs will be the first place we see this happen?

The reason I bring this up is that at 32TB of NAND capacity, the normal requirement for the DRAM buffer would be 32GB (i.e., 1GB of DRAM per 1TB of NAND).

So instead of using 32GB of DRAM, Intel would enable the controller (which is special anyway because of the high NAND capacity) to use 32GB or more of Optane.
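For what it's worth, the 1GB-per-1TB rule of thumb falls out of the flash translation layer's mapping table: roughly one 4-byte physical-address entry per 4KiB logical page, i.e. about 1/1024 of the NAND capacity. A quick sanity check (the page and entry sizes below are the typical textbook assumptions, not confirmed for any specific controller):

```python
# Back-of-the-envelope check of the "1GB DRAM per 1TB NAND" rule:
# a typical page-level FTL keeps a 4-byte entry per 4KiB logical page,
# so the mapping table is about capacity / 1024.

def ftl_dram_bytes(nand_bytes, page_size=4096, entry_size=4):
    return nand_bytes // page_size * entry_size

TB = 1024**4
GB = 1024**3
print(ftl_dram_bytes(32 * TB) / GB)  # 32TB Ruler SSD -> 32.0 (GB of map)
```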

---

Then, after this, Intel evolves the integration to design a DRAM + Optane + NAND NVDIMM-P for blade servers?
 

IntelUser2000

Elite Member
Oct 14, 2003
I know this article is a bit dated now, but it has more information about the Optane Memory H10 drives.

https://www.tomshardware.com/news/intel-optane-h10-qlc-ssd,38387.html
Optane Memory H10 SSDs have a higher endurance rating than a standard QLC SSD and are backed by a five-year warranty.

They are saying it's meant for OEMs to put in their laptops, and it does need PCIe bifurcation support, so maybe it's better that it stays there rather than coming to retail.

It's good that the endurance is higher. I don't know what will happen with the sequential read/write performance, but it'll likely be limited to x2 speed. If they can somehow maximize the utilization of the x2 link, then the loss will not be drastic, as in practical scenarios the x2 link can sustain about 1.6GB/s. I'm not confident on this, though.

Pricing will be the most important factor. It has to be lower than the 760p to have any merit, or it'll cannibalize/be cannibalized by other Intel SSD products.
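The ~1.6GB/s figure is plausible from first principles: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line coding, and real transfers lose some of that to packet and flow-control overhead. A rough estimate (the 82% efficiency factor below is my assumption, not a measured value):

```python
# Rough PCIe 3.0 link bandwidth estimate for an x2 connection.
# 8 GT/s per lane, 128b/130b encoding; usable efficiency after
# TLP/flow-control overhead is commonly quoted around 80-85%.

def pcie3_bandwidth_mb(lanes, efficiency=1.0):
    gts = 8e9                      # 8 GT/s per lane
    raw = gts * (128 / 130) / 8    # bytes/s per lane after line coding
    return lanes * raw * efficiency / 1e6

print(round(pcie3_bandwidth_mb(2)))        # ~1969 MB/s on the wire
print(round(pcie3_bandwidth_mb(2, 0.82)))  # ~1615 MB/s usable, roughly
```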
 

cbn

Lifer
Mar 27, 2009
I remember seeing that article when it first came out along with the mention of higher endurance.

So with the Intel 660p 512GB having a TBW of 100, we should expect a TBW greater than 50 for the H10 256GB model and greater than 100 for the H10 512GB model.

That seems very believable considering the DRAM-less SATA SU630 Ultimate (which uses Intel 3D QLC) has TBWs of 50 and 100 for 240GB and 480GB respectively, despite the lack of a DRAM buffer.

The DRAM buffer, as pointed out to me earlier, helps reduce write amplification, and write amplification is part of the formula for TBW:

[Image: TBW formula including write amplification factor]
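The TBW relationship referred to here is commonly written as capacity times P/E cycles divided by the write amplification factor (WAF). A toy calculation, where the QLC cycle count and WAF values are illustrative guesses on my part, not Intel's numbers:

```python
# Sketch of the common TBW formula:
#   TBW ~= capacity * P/E cycles / write amplification factor.
# Cycle count and WAF below are illustrative, not vendor specs.

def tbw_tb(capacity_gb, pe_cycles, waf):
    return capacity_gb * pe_cycles / waf / 1000  # GB written -> TB

# A 512GB QLC drive with ~400 P/E cycles lands near a 100 TBW rating
# only if the average write amplification stays around 2:
print(tbw_tb(512, 400, 2.0))  # -> 102.4
```

This is why anything that lowers WAF (a DRAM buffer, an SLC cache, or an Optane cache in front) translates directly into a higher rating.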
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
The 660p SSD uses the variable-sized SLC cache for both performance and endurance duty. In NAND SSDs, performance and endurance are related, so improving one area will improve the other.

I'm hoping the combination isn't merely a first-gen Optane Memory + 660p on one PCB.

The 900p/905P Optane drives can do 2GB/s sequential writes with 7 channels, so each channel can sustain at least ~300MB/s, since multi-channel devices usually have some losses. So the media is capable of doing 300MB/s minimum; it's just that the particular implementation on Optane Memory limited it to 145MB/s.
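That per-channel arithmetic is easy to check (2GB/s and 7 channels are the figures from the post; no controller internals are assumed):

```python
# Lower bound on per-channel media throughput, given an aggregate
# sequential write rate over N channels. Real per-channel capability
# is at least this, since channel interleaving has some losses.

def min_per_channel_mb(total_mb, channels):
    return total_mb / channels

print(round(min_per_channel_mb(2000, 7)))  # -> 286, vs 145 MB/s on
                                           # first-gen Optane Memory
```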
 

cbn

Lifer
Mar 27, 2009
The 660p SSD uses the variable-sized SLC cache for both performance and endurance duty. In NAND SSDs, performance and endurance are related, so improving one area will improve the other.

That is a really interesting point. So you think maybe for the H10 they will use a more aggressive SLC cache system than the one shown below for the Intel 660p:

https://www.anandtech.com/show/13078/the-intel-ssd-660p-ssd-review-qlc-nand-arrives

[Image: Intel 660p variable SLC cache size table]


As an example of more aggressive, see the AnandTech Mushkin Source review, where the 500GB 3D TLC drive acts almost entirely like a small SLC drive until writes surpass 150GB:

https://www.anandtech.com/show/13421/the-mushkin-source-sata-ssd-review/5

[Image: Mushkin Source 500GB sequential fill results]

The Mushkin Source lasts a very long time before its SLC cache is filled. While the Toshiba TR200's write speed ends up in the gutter very quickly, it will be almost impossible for a real-world consumer workload to fill the cache of the Source unless the drive starts out nearly full. From an empty drive, the apparent initial SLC cache size is well over 150GB—essentially the entire drive operating as SLC until free space runs out.
 

IntelUser2000

Elite Member
Oct 14, 2003
Well, since the H10 has Optane Memory, they'd be better off improving the caching side of that rather than the individual controllers inside the separate devices.

I'm really uncertain how the H10 will work out. Hopefully what they are doing is right and we'll see some impressive results out of it.

By the way, CES had some laptops on display at the Intel venue using H10 SSDs.
 

cbn

Lifer
Mar 27, 2009
Well, since the H10 has Optane Memory, they'd be better off improving the caching side of that rather than the individual controllers inside the separate devices.

I'm really uncertain how the H10 will work out. Hopefully what they are doing is right and we'll see some impressive results out of it.

With the Optane software, initially we had the ability to cache a SATA primary drive... then we could also cache a secondary SATA drive.

With the H10, I wonder if eventually a person could use the Optane not only as cache for the 3D QLC NVMe but also for a SATA drive at the same time? (I was thinking how awesome it would be to use the 3D QLC part of the H10 primarily as SLC*, with most storage going to either a SATA HDD or SATA SSD. So both NVMe NAND and SATA would be cached.**)

*Assuming the cache scheme is write-through.

**Think Primocache type update.
 

cbn

Lifer
Mar 27, 2009
For AMD users, I wonder if any of them are using (or planning to use) the lower-capacity 900p or 905p Optane SSDs with StoreMI, and then using the remaining portion of the SSD (after the 256GB fast tier is allocated) to make separate partitions aimed at certain functions.

Example: a 380GB 905P with 256GB of Optane tiered to either a SATA SSD or SATA HDD, plus 100GB of Optane made into a separate partition for Primocache (cache for a RAID volume, etc.), plus 24GB of Optane made into a separate partition for the page file, etc. (Likewise, the 280GB Optane could be used as a 256GB fast tier with 24GB allocated to a separate partition for the page file.)
 

kurosaki

Senior member
Feb 7, 2019
(...use the 3D QLC part of H10 primarily as SLC* with most storage going to either a SATA HDD or SATA SSD. So both NVMe NAND and SATA cached**)

*Assuming the cache scheme is write-through.

**Think Primocache type update.

This is implemented in Windows, though heavily nerfed in Win 10. It's called S2D, Storage Spaces Direct, in Windows Server. There you can throw in NVMe as cache for SATA SSDs or ordinary HDDs, you can use SATA SSDs as cache for regular HDDs, and so on. I would die for this single feature; I can't see why MS has nerfed Win 10 by removing this completely awesome capability.
Cache in Win Server

It's horrible that you can't buy this function for money, not even in Win 10 Enterprise, which I gladly would have upgraded to.
 

kurosaki

Senior member
Feb 7, 2019
It's worse than I thought! They have even deprecated the ability to format new volumes in ReFS for normal users!
Insane (old) news

So here I was hoping to get the feature in a coming update, and they have already made it worse! I'm lucky to already have a 64TB ReFS volume in Storage Spaces. It would not be possible for me to create it today.
 