Discussion: Optane Client products, current and future

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Ice Lake, Tremont, and Zen 2 all have instructions for persistent memory. Now, Optane DIMMs require more than just those instructions, but at least we know those cores have the basics.
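For reference, those instructions boil down to writing a cache line back and fencing so the store actually reaches the DIMM's persistence domain instead of sitting in the CPU cache. A minimal sketch of the idea in C (the DAX-mapped buffer and the exact platform guarantees are assumptions, not shown here):

```c
#include <immintrin.h>  // _mm_clwb, _mm_sfence (build with -mclwb)
#include <stdint.h>
#include <string.h>

// Copy 'len' bytes into 'dst' and make the write durable. 'dst' is assumed
// to point into a DAX-mapped region backed by a persistent-memory DIMM
// (the mapping setup is hypothetical and not shown).
static void persist_copy(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);

    // Write back every 64-byte cache line covering the range. CLWB pushes
    // the line toward memory without necessarily evicting it.
    uintptr_t p   = (uintptr_t)dst & ~(uintptr_t)63;
    uintptr_t end = (uintptr_t)dst + len;
    for (; p < end; p += 64)
        _mm_clwb((void *)p);

    // Order the write-backs; after this the data is in the persistence
    // domain on platforms whose ADR covers the memory controller.
    _mm_sfence();
}
```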

The 128GB Optane PMM costs about $600. Dropping ECC could boost the usable capacity to around 160GB. This has interesting implications for the future: say the price is cut to $400 for a client version. I think it could work for enthusiasts when Cascade Lake-X arrives.

$400 for 128GB is too pricey for an SSD, but in a DIMM format it makes sense.
 

nosirrahx

Senior member
Mar 24, 2018
304
75
101
I am looking forward to larger capacity DDR5 to go along with this. Needing only 1 DRAM stick per channel would leave an entire empty channel for these PMMs.

Add in some 8TB QLC M.2 22110 sticks and you could build a high-capacity, high-performance workstation with zero physical wires connected to your storage. These builds would be very clean and conducive to airflow.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
I am looking forward to larger capacity DDR5 to go along with this. Needing only 1 DRAM stick per channel would leave an entire empty channel for these PMMs.

I knew you'd like it!

But you might want to temper your hopes of needing only a single channel. Dual-channel support will continue to exist.

I'm talking in terms of how it is on current platforms. I have 2 slots occupied and 2 free, and I'd be willing to use those 2 free slots for Optane PMMs. Even as a block storage device, the latency is comparable to the best RAM disks. The PCIe penalty has turned out to be greater than initially thought.

Heck, I'd be willing to pay $600 for the 128GB module! Or, more likely, $300 each for two. I paid $800 for the first good MLC SSD, the X25-M. That's how future-proofing is really done.
 

nosirrahx

Senior member
Mar 24, 2018
304
75
101
But you might want to temper your hopes of needing only a single channel. Dual-channel support will continue to exist.

I worded that poorly; I was trying to say only one DRAM stick per channel. Meaning if you had 8 RAM slots across 4 channels, you could use 4 DDR5 DIMMs and still reach massive capacity, leaving the other 4 slots open for PMMs.

From what I am reading, 32GB DDR5-4000 DIMMs should totally be a thing, maybe even 64GB.

I wonder if the DDR5+PCIe4/5 generation boards will address something else that has been bugging me for quite some time. Perhaps it is time to move most if not all wire connectors to the back of motherboards.

We already route cables to the back side of the case, but inexplicably we pull them back to the front to connect to the motherboard. Imagine how clean a system could be if all the wired connectors were on the back of the motherboard, exposed through holes in the motherboard tray.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
I worded that poorly; I was trying to say only one DRAM stick per channel

I thought that might have been what you meant, but I wasn't sure. So you're on the same line of thought as me. Yes, I agree.

Not sure what to say about the cables. It could be because ATX cases always have the power cables at the front. A change in the ATX format is not going to happen because it's so established.
 

nosirrahx

Senior member
Mar 24, 2018
304
75
101
Not sure what to say about the cables. It could be because ATX cases always have the power cables at the front. A change in the ATX format is not going to happen because it's so established.

I am talking about using the generational break that is already coming with PCIe 4/5 and DDR5 to make a physical change at the same time.

PCs would be so clean if PSUs sent all cables out the back side behind the motherboard tray and then connected to connectors on the back of motherboards. All connectors could be moved to the back of motherboards, so all cable management would be 100% back side and completely out of sight. You could even add a power connector next to the PCIe x16 slots, connected to the back side, so graphics cards would have no visible power cabling.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Optane Memory H10 with Solid State Storage specifications published by AnandTech.

https://www.anandtech.com/show/14196/intel-releases-optane-memory-h10-specifications

Introduced: Today!!

Intel's caching software will support reading and writing data from both Optane and QLC simultaneously, so they rate the H10 for up to 2.4GB/s sequential reads even though neither half can exceed 2GB/s on its own.

Interesting.

Sequential
2400 MB/s read, 1800 MB/s write

Random
32K/55K IOPS read (QD1/QD2)
30K/50K IOPS write (QD1/QD2)

300TB endurance.

Price will determine its success.

Also did not catch this:
Intel's caching functionality for the Optane Memory H10 requires their RST version 17.2 drivers, the first release series to enable Optane Memory caching of NVMe SSDs instead of just SATA drives.

Update- From Tomshardware:
That's because, under certain conditions, Intel's RST software can read and write from both SSDs simultaneously using a bandwidth aggregation technique.

I have to guess this may be a result of greater integration.
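Intel hasn't published how RST actually splits the traffic, but the bandwidth aggregation idea itself is simple: issue parts of a large transfer to both halves of the drive at once and the throughputs roughly add. A toy sketch in C (the device paths, the 50/50 split, and exposing the halves as separate NVMe devices are all assumptions, not RST's real behavior; build with -pthread):

```c
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct half { const char *path; char *buf; size_t len; };

// Read one half of the request from one of the two underlying devices.
static void *read_half(void *arg)
{
    struct half *h = arg;
    int fd = open(h->path, O_RDONLY);
    if (fd < 0) { perror(h->path); return NULL; }
    if (pread(fd, h->buf, h->len, 0) < 0)
        perror("pread");
    close(fd);
    return NULL;
}

int main(void)
{
    // Hypothetical device nodes for the Optane half and the QLC half.
    struct half halves[2] = {
        { "/dev/nvme0n1", NULL, 64u << 20 },  // 64 MiB from the Optane side
        { "/dev/nvme1n1", NULL, 64u << 20 },  // 64 MiB from the QLC side
    };
    pthread_t tid[2];

    for (int i = 0; i < 2; i++) {
        halves[i].buf = malloc(halves[i].len);
        pthread_create(&tid[i], NULL, read_half, &halves[i]);
    }
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);

    // With both reads in flight at the same time, observed throughput is
    // roughly the sum of what each device delivers on its own.
    puts("both halves read in parallel");
    return 0;
}
```

That would also explain why the rated 2.4GB/s read only shows up when there is enough queued work to keep both halves busy at once.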

An Intel SSD Toolbox user guide states it shows up as two drives and can be used in that mode. You *may* be able to use the device on unsupported platforms as two separate devices, as long as the platform supports PCIe bifurcation.

Update 2-

It's already listed on the ARK page. Interesting that it lists both Enhanced Power Loss Data Protection and End-to-End Data Protection. The latter is featured on Optane Memory, but not the former; the same is true of the 8xxP/9xxP series SSDs.
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Regarding the H10.

They have so many lineups it's difficult to keep track. The H10 can be run without the Optane acceleration active, so you can use it as two distinct drives.

So the H10 may be on the qualification list for H310 chipsets, but that doesn't necessarily mean they'll support acceleration. It may just mean the drive will be recognized and show up in the SSD Toolbox.

I wonder if that'll be true on unsupported platforms as well? Because right now official support is 8th-generation Core only, and that's quite limiting. It should be able to work with 7th-generation Core too.
 

biostud

Lifer
Feb 27, 2003
19,914
7,018
136
Regarding the H10.

They have so many lineups it's difficult to keep track. The H10 can be run without the Optane acceleration active, so you can use it as two distinct drives.

So the H10 may be on the qualification list for H310 chipsets, but that doesn't necessarily mean they'll support acceleration. It may just mean the drive will be recognized and show up in the SSD Toolbox.

I wonder if that'll be true on unsupported platforms as well? Because right now official support is 8th-generation Core only, and that's quite limiting. It should be able to work with 7th-generation Core too.

The problem is that Intel uses its Optane cache drives to push system upgrades. They could support it on both older platforms and AMD platforms without a problem, but they choose not to. Even if there actually are some technical limitations on other platforms, they could still make software that runs nearly as well as on the top platforms.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
The problem is that Intel uses its Optane cache drives to push system upgrades. They could support it on both older platforms and AMD platforms without a problem, but they choose not to.

It's not really without problems. Since it uses RST, they'd need to bring RST to non-Intel platforms, or make whole new software that includes the portion of RST that makes the H10 work.

Those platforms would also need to be validated and tested. That's why the real solution is full integration, so it'll work on any system with NVMe support. I'm pretty sure we'll get there. Full integration will also make the whole thing more robust, without weird software issues.

Anyways, price is the biggest factor. If it's cheap enough, and it exposes the two halves as two distinct storage devices, then people will buy it for AMD systems and older Intel platforms and use third-party software to make it work. If it's too expensive (say, more than decent NAND SSDs like the 760p), then not even users with supported chips/platforms will bite.

This PCIe bifurcation will not be present on older Intel platforms, but I have been wondering if it is present on certain AMD AM4 motherboards...

That's not true. According to this thread, even some Z87 chipset boards support it, as do most high-end chipset boards.

https://smallformfactor.net/forum/threads/pcie-bifurcation-motherboards.834/

AMD motherboards support it too, but you'll likely have to check the motherboard's specifications to be sure.

In the end, the H10 is probably going to sell mostly to OEMs.
 
Last edited:

Billy Tallis

Senior member
Aug 4, 2015
293
146
116
That's not true. According to this thread, even some Z87 chipset boards support it, as do most high-end chipset boards.

You cannot speak about PCIe bifurcation in general. You have to be specific about the degree of bifurcation. Being able to divide an x16 slot into 2 x8 or 4 x4 ports doesn't say anything about whether you can further split an x4 into 2 x2 ports. I'm not sure if any CPU PCIe ports support bifurcation down to x2. And even on chipsets where x2 and x1 ports are supported, it might still be the case that you don't get to see both controllers on the H10 (I haven't gotten around to testing on unsupported platforms yet, but I plan to check with everything I have in stock.)
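For anyone who does try it on an unsupported board, the quick check is simply whether the OS enumerates one NVMe controller or two. A small Linux sketch (assuming the usual /sys/class/nvme layout; Device Manager tells you the same thing on Windows):

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>

// Count the NVMe controllers the kernel has enumerated. An H10 in a slot
// bifurcated to x2+x2 should show up as two controllers (Optane + QLC);
// on a board that can't split the x4, you'd expect to see only one.
int main(void)
{
    DIR *d = opendir("/sys/class/nvme");
    if (!d) { perror("/sys/class/nvme"); return 1; }

    struct dirent *e;
    int count = 0;
    while ((e = readdir(d)) != NULL) {
        if (strncmp(e->d_name, "nvme", 4) != 0)
            continue;
        char path[288], model[128] = "";
        snprintf(path, sizeof path, "/sys/class/nvme/%s/model", e->d_name);
        FILE *f = fopen(path, "r");
        if (f) {
            if (fgets(model, sizeof model, f))
                model[strcspn(model, "\n")] = '\0';
            fclose(f);
        }
        printf("%s: %s\n", e->d_name, model);
        count++;
    }
    closedir(d);
    printf("%d NVMe controller(s) visible\n", count);
    return 0;
}
```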
 

nosirrahx

Senior member
Mar 24, 2018
304
75
101
You cannot speak about PCIe bifurcation in general. You have to be specific about the degree of bifurcation. Being able to divide an x16 slot into 2 x8 or 4 x4 ports doesn't say anything about whether you can further split an x4 into 2 x2 ports. I'm not sure if any CPU PCIe ports support bifurcation down to x2. And even on chipsets where x2 and x1 ports are supported, it might still be the case that you don't get to see both controllers on the H10 (I haven't gotten around to testing on unsupported platforms yet, but I plan to check with everything I have in stock.)

BIOS support is also going to be critical. If the BIOS isn't upgraded, a lot of capable boards won't get support.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I noticed the Optane H10 uses 512Gbit 64-layer QLC dies, compared to the 1024Gbit 64-layer QLC dies in the 660p.

So faster sequential writes once the 3D QLC's SLC cache is exhausted?

Also, I wonder what will happen when AnandTech runs the following test:

https://www.anandtech.com/show/13078/the-intel-ssd-660p-ssd-review-qlc-nand-arrives/6

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage, where wear leveling and modifications to some existing data will create some internal fragmentation that degrades performance, but usually not to the extent shown here.

[Chart: 660p sustained sequential read performance]

How much does the combination of write-back caching and faster sequential writes affect fragmentation under the conditions described by AnandTech?

And if fragmentation does occur (the Optane write-back cache should reduce it), how much does the smaller 512Gbit 3D QLC die of the Optane H10 help sequential reads compared to the 1024Gbit QLC die of the 660p?
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Optane Memory H10 512GB benchmark: https://zhuanlan.zhihu.com/p/62659157

Update: A few sites are showing prices for the Optane Memory H10.

Pricing seems to be somewhat higher than the 970 EVO, but that could be because it's pre-release. I believe they are going to settle at 970 EVO prices.

Not sure how this is going to work out in retail. Maybe not well, because the H10 is quite restrictive in compatibility. If they don't have plans to expand the list of compatible platforms, they'd better get on it. At least make it work with 7th Gen Core platforms!

Otherwise, this is a product that may only be fit for OEMs.
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
Optane Memory H10 512GB benchmark: https://zhuanlan.zhihu.com/p/62659157

Update: A few sites are showing prices for the Optane Memory H10.

Pricing seems to be somewhat higher than the 970 EVO, but that could be because it's pre-release. I believe they are going to settle at 970 EVO prices.

Not sure how this is going to work out in retail. Maybe not well, because the H10 is quite restrictive in compatibility. If they don't have plans to expand the list of compatible platforms, they'd better get on it. At least make it work with 7th Gen Core platforms!

Otherwise, this is a product that may only be fit for OEMs.

I hope we see some H310 boards with the necessary x2+x2 PCIe bifurcation.

Even if limited to PCIe 2.0 x2 per half, it should work extremely well.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
H10 review up on TweakTown. The review is not detailed, but it's in the HP Spectre x360 ultrabook, so it might be difficult to test.

https://www.tweaktown.com/articles/8974/intel-optane-memory-h10-hybrid-ssd-review/index.html

The Intel Optane Memory H10 is a system builder product that will likely never come to the channel market in significant volume.
The Optane Memory H10 will not be a retail product like the Optane Memory or SSD 660p. The drives will ship in notebooks primarily.

The reviewer does seem to miss the fact that the 512GB version has significantly lower sequential bandwidth than the 1TB version. A bit of a miss on his part, since you can easily look it up on ARK.

It's still a bit difficult to say; there's not enough information. Volume-wise, if it ends up mostly in notebooks and is competitively priced there, it could work out decently for Intel. The system requirement limitations severely reduce its attractiveness for DIY systems, though.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Ah, I have my conclusions. The H10 isn't worth buying as a standalone drive. They need to stop with the shady "Memory" branding for what is really a cache and make it work as slow DRAM instead. And the H10's successors need to be a fully integrated solution so it works in all systems; then they can unify the controller and get rid of the DRAM buffer to lower costs.

It could still work in laptops if the price is attractive enough. The biggest issue is still the software. I feel the client caching solution could have been better if they had partnered with PrimoCache the way AMD did for StoreMI. That would require setting aside their egos and telling the marketing team they are not always the #1 team.

They just seem to treat enterprise much better. The DC PMM actually makes sense and has a lot of potential. Client is either thrown aside, or Intel just doesn't know how to make cheap devices.

Volume-wise, the DC PMM will likely sell more than all the non-DC Optane products combined.
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
Here is a video by HardwareCanucks comparing the H10 to the 760p:


Very interesting how the sequential write gets faster with Optane enabled:

[Chart: sequential write performance with Optane enabled]


And we see the same thing in the TweakTown review:

https://www.tweaktown.com/articles/8974/intel-optane-memory-h10-hybrid-ssd-preview/index2.html

[Chart: TweakTown sequential write results]


This is even though AnandTech finds the 32GB Optane portion slow at sequential writes on its own:

https://www.anandtech.com/show/14249/the-intel-optane-memory-h10-review-two-ssds-in-one/9

[Chart: burst sequential write performance]


This implies that the sequential write speed of the Optane is additive to the sequential write speed of the 3D QLC!

Wow! How is this happening?!?!
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
This is even though AnandTech finds the 32GB Optane portion slow at sequential writes on its own:

I think their burst tests are too optimistic, but it's interesting information nonetheless.

The Tom's Hardware review shows more of what's going on. The rated transfer rates are hit at 128KB transfer sizes. I wonder what makes the AT results lower, though?

I'm disappointed that TH decided to use a power meter rather than doing a battery life test. I've seen enough to know that power meter tests don't always translate well into battery life results.

Ah, at least we should see actual laptops and get to see how it does in the battery life department.

This implies that the sequential write speed of the Optane is additive to the sequential write speed of the 3D QLC!

Were you not expecting this after they released the specs on ARK? This bit is a pleasant surprise, but the benefits are still quite muted. It seems you can easily find scenarios where they don't add up.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
This implies that the sequential write speed of the Optane is additive to the sequential write speed of the 3D QLC!

Were you not expecting this after they released the specs on ARK? This bit is a pleasant surprise, but the benefits are still quite muted. It seems you can easily find scenarios where they don't add up.

I only looked at the 1TB H10 and 1TB 660p:

https://ark.intel.com/content/www/u...32gb-1tb-m-2-80mm-pcie-3-0-3d-xpoint-qlc.html

https://www.intel.com/content/www/u...60p-series/660p-series-1-tb-m-2-80mm-3d2.html

Both are rated at 1800 MB/s. (This is in contrast to the leaked spec you posted in #507, which showed up to 2000 MB/s for the H10.)

But yes, looking at the 512GB H10 and 512GB 660p, there is a 300 MB/s difference in sequential write:

https://ark.intel.com/content/www/u...gb-512gb-m-2-80mm-pcie-3-0-3d-xpoint-qlc.html

https://www.intel.com/content/www/u...p-series/660p-series-512-gb-m-2-80mm-3d2.html

512GB H10 has Sequential write up to 1300 MB/s

512GB 660p has Sequential write up to 1000 MB/s
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
Two things:

1. Optane additive sequential write: Is this caused by the H10 writing a file from opposite ends and meeting in the middle? If so, then hybrid drives based on the M15 (rather than the M10) should be even faster.

2. Here is another review comparing H10 to 760p:

https://www.pcmag.com/article/367892/intel-optane-h10-tech-preview-heres-what-happens-when-you

[Chart: PCMag application launch times, H10 vs. 760p]


Notice how Adobe Photoshop CC has the same 3-second launch time for both the H10 and the 760p on Run 2 and Run 3.

I believe this is because the app (by the time Run 2 comes along) is cached in the HP Spectre x360's 16GB of RAM. (So the performance advantage of Optane is eliminated, because the Windows RAM cache supersedes the underlying storage media.)

In addition to the Optane H10, our 13-inch Spectre x360 test unit includes a quad-core Intel Core i7-8565U from Intel's latest "Whiskey Lake" generation and 16GB of main system memory.
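That RAM-cache effect is easy to demonstrate, by the way: once the OS has a file cached, the second read barely touches the SSD at all. A rough sketch (the file path is just a placeholder; this is an illustration, not PCMag's methodology):

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

// Read a file start to finish and return the elapsed time in seconds.
static double read_once(const char *path)
{
    static char buf[256 * 1024];
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); exit(1); }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (read(fd, buf, sizeof buf) > 0)
        ;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
    // Hypothetical test file; the first pass comes from the drive (if it
    // isn't already cached), the second is served from the OS page cache.
    const char *path = argc > 1 ? argv[1] : "testfile.bin";
    printf("cold read: %.3f s\n", read_once(path));
    printf("warm read: %.3f s\n", read_once(path));
    return 0;
}
```

On the second pass the underlying storage barely matters, which is exactly why Run 2 and Run 3 converge regardless of the drive.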
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
If so, then hybrid drives based on the M15 (rather than the M10) should be even faster.

I'd hope that by the next generation they use full integration: a QLC drive with a 32-64GB Optane buffer and a single controller. Otherwise it'll be another DOA drive.

The only problem is that if they started about now, making such a chip would take 3-4 years, and by then it might not be relevant at all.

Is this caused by the H10 writing a file from opposite ends and meeting in the middle?

It does seem to perform pretty well when you are doing heavy multitasking. The big benefits come when you are transferring a massive file while opening an application. It seems like they are using the Optane portion for one and the QLC portion for the other. Ideally, the two parts work independently, so the losses are small compared to doing things one at a time.

The algorithm to achieve this seems quite fancy and neat, but it needs to be on a single controller.
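The actual heuristics aren't public, but the kind of policy being described can be sketched in a few lines: keep small and cached requests on the Optane half, and let big sequential streams flow to the QLC half so two concurrent workloads don't fight over one medium. Purely a toy illustration of the idea, with invented thresholds, not Intel's algorithm:

```c
#include <stdbool.h>
#include <stddef.h>

enum target { TO_OPTANE, TO_QLC };

// Toy I/O routing policy for a hybrid Optane+QLC drive. The thresholds are
// invented for illustration; Intel's real RST heuristics are not documented.
static enum target route(size_t req_bytes, bool is_sequential, bool cached_in_optane)
{
    if (cached_in_optane)
        return TO_OPTANE;                    // cache hits are served from Optane
    if (is_sequential && req_bytes >= 512 * 1024)
        return TO_QLC;                       // big streaming I/O bypasses the cache
    return TO_OPTANE;                        // small/random I/O benefits most
}
```

With a split like this, a big file copy and an application launch land on different media at the same time, which lines up with the "additive" numbers in the reviews.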
 
Last edited: