Discussion: Optane Client products, current and future


nosirrahx

Senior member
Mar 24, 2018
Good point about the clock speed and IPC.

Beyond that, I wonder how much CPU core count matters.

Is loading something that could be maximized with an overclocked (5+ GHz) Core i3 in most (or all) cases?

There is likely no universal answer; every app/game would be different. The only way to account for this is to go right to the limit of non-exotic cooling on top-end parts:

DDR4-4000 CL12, 8 cores/16 threads at 5.2 GHz, and a top-of-the-line graphics card with overclocked RAM and core.

If storage does not show a performance delta under these conditions, then you can feel fairly confident that your use case does not require blazing-fast storage.

If storage reviews were done on a system like this, the results would be more valuable for people looking to build a bleeding-edge system without spending money on something unimportant.
 

cbn

Lifer
Mar 27, 2009
Some information from Flash Memory Summit 2018 on the latency of various media:

https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2018/20180806_PreConC_Kulkarni.pdf

[Image: Screenshot-20.png – FMS 2018 slide comparing the latency of various memory/storage media]


As you can see, phase-change memory (PCM) has lower latency than DRAM.

And with Optane being based on phase-change memory (according to PC Perspective), I wonder how long until we see Optane stacked on top of a GPU die (e.g., Intel Xe or AMD Radeon)?

How much would a wider bus (similar to HBM, perhaps) increase latency?

How much of an increase in endurance is needed? Would one lithography shrink be enough? Or a lithography shrink in combination with DRAM DIMMs* (as a write buffer)?

If we could get Optane stacked on the GPU die, maybe that is the breakthrough needed to move past SATA SSDs being "good enough" for load times?

*HBM in the case of a dGPU.
 

nosirrahx

Senior member
Mar 24, 2018
How much would a wider bus (similar to HBM, perhaps) increase latency?

Bus width should not impact latency at all, but since a wider bus increases the odds of getting all of the requested data in one transfer, it is highly synergistic with lower latency.

Reading your post, you have to wonder: will we see RAM with a cache? It kind of makes sense from the standpoint of reducing latency and cost. You could put bleeding-edge PCM right against the bus and then have a controller move data between the PCM and DRAM.
 

cbn

Lifer
Mar 27, 2009
Reading your post, you have to wonder: will we see RAM with a cache? It kind of makes sense from the standpoint of reducing latency and cost. You could put bleeding-edge PCM right against the bus and then have a controller move data between the PCM and DRAM.

Yep, that is NVDIMM-P.

P.S. Regarding NVDIMM-P, I have been wondering if Intel is working on the equivalent of that but on the GPU die. (Imagine a GPU using TSVs (i.e., stacked GPU dice*) with DRAM right above it and Optane right above the DRAM.)

*Think of a GPU designed for high performance per watt at low voltage, with the shortfall in absolute performance made up by stacking the dice.
 

nosirrahx

Senior member
Mar 24, 2018
*Think of a GPU designed for high performance per watt at low voltage, with the shortfall in absolute performance made up by stacking the dice.

GPUs would do well with internal storage for graphics data. You could eliminate load times and texture pop-in as the primary storage feeds the on-GPU storage.

Imagine a typical 12GB frame buffer and then another 64GB of low-latency storage right on the GPU.

Game code would need to change to take advantage of this, but I bet that would not be that hard, as all you are doing is changing where the data is, not the actual data.
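To make that idea concrete, here is a minimal, purely hypothetical sketch (Python, with invented names like TieredAssetStore and load_from_disk) of the kind of tiered lookup an engine might do if a GPU carried its own persistent asset pool; nothing here is a real API, the point is just that only the location of the data changes, not the data itself.

```python
# Hypothetical two-tier asset fetch: a persistent on-GPU pool (e.g. stacked
# Optane) backed by ordinary system storage. All names are illustrative.

class TieredAssetStore:
    def __init__(self, gpu_pool_bytes=64 * 2**30):
        self.gpu_pool = {}          # asset_id -> bytes, the "on-GPU" persistent pool
        self.capacity = gpu_pool_bytes
        self.used = 0

    def load_from_disk(self, asset_id):
        # Placeholder for the slow path (SATA/NVMe read + decompress).
        return b"texture-data-for-" + asset_id.encode()

    def get(self, asset_id):
        # Fast path: asset already resident next to the GPU, no load screen needed.
        if asset_id in self.gpu_pool:
            return self.gpu_pool[asset_id]
        # Slow path: stream it in once, then keep it resident for next time.
        data = self.load_from_disk(asset_id)
        if self.used + len(data) <= self.capacity:
            self.gpu_pool[asset_id] = data
            self.used += len(data)
        return data

store = TieredAssetStore()
texture = store.get("castle_wall_albedo")  # first call hits disk, later calls don't
```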
 

IntelUser2000

Elite Member
Oct 14, 2003
The problem with that idea, like with all the others, is cost.

64GB of Optane is quite pricey. If you want developers to code for it, you'd want it widespread. You would want it not just on GTX 1060-class cards, but much lower down the stack. Literally every new discrete GPU out there would need to have it on board, so of course AMD's as well. Look at what's happening with RTX and DLSS.

How much would a wider bus (similar to HBM, perhaps) increase latency?

Optane's media latency dominates any DRAM technology's latency, so that's not really an issue.

As you can see, phase-change memory (PCM) has lower latency than DRAM.

We don't really have anything other than 3D XPoint for mass-produced storage-class memory. There are some promising alternatives, but right now the others are only available in capacities that are negligible. Some have densities that are even worse than DRAM! Until they are available in usable capacities and at somewhat marketable prices, the only available PCM is 3D XPoint.

Western Digital had a slide showing that as memory speed gets closer to DRAM, volatility also increases. Tape storage is excruciatingly slow but proven to have long-term retention characteristics. We know NAND can theoretically only hold data for about a decade before it degrades to an unknown state. With Optane, Intel said in an AMA that NAND or HDDs may be better for cold storage.

Micron had a slide where they split 3D XPoint into two different device classes. One is for fast, affordable SSDs with long cold-storage retention, latencies perhaps 10x better than NAND, and endurance closer to NAND. The second has much higher volatility, but performance and endurance much closer to DRAM, with a higher price as well.

So DRAM being the fastest memory with the highest endurance may just be the result of its volatility.
 

cbn

Lifer
Mar 27, 2009
The problem with that idea, like with all others is cost.

64GB Optane is sold for quite a bit. If you want developers to code for it, then you'd want it widespread. You would want it not just on GTX 1060 class cards, but much lower. Literally every new discrete GPU out there needs to have it on board, so of course AMD's as well.

If Intel Xe ends up being modular (i.e., 3D chip stacking and/or EMIB), then I would think the tech could be on every Intel processor with an iGPU (with the CPU able to share that memory as well*).

(I'm thinking of a basic building block (or two) that could be adapted from an Atom iGPU up to a high-end dGPU by adding units. AMD included.)

*It would be very nice for the low-end chips, as even today's Gemini Lake is quite capable.

P.S. Regarding cost, how do you feel about it if Intel releases a small-die Optane? https://forums.anandtech.com/thread...d-gen-optane-will-have.2555507/#post-39753460
 

IntelUser2000

Elite Member
Oct 14, 2003
@cbn Oh no, EMIB is not a cheap solution. EMIB is a relatively affordable solution for high-performance dies. Even using regular DDR4 DRAM with it would be a waste, because EMIB connections are for terabit-class (many hundreds of GB/s) links.

The cheapest option is not EMIB chiplets or silicon interposers of any kind, but regular packages. Those are what is used to package the eDRAM on Iris Plus and Pro parts. The on-package PCH on U and Y chips also uses the cheapest packaging.

They could be as modular as they want, and the implementation could be practically free. But you still need to pay for the pricey 3D XPoint dies.
 

cbn

Lifer
Mar 27, 2009
They could be as modular as they want, and the implementation could be practically free. But you still need to pay for the pricey 3D XPoint dies.

Please see my edit in post #457.

What do you think?
 

IntelUser2000

Elite Member
Oct 14, 2003
They'll need to do everything they can to hope to get there. The word is that Intel doesn't make money (if they do, not much) on the 905P models, so you can get an idea of how much it costs them to make.

If the 2nd generation comes with 4 layers as rumored, it'll double the density. The increased complexity and research required means it will not net the full 50% cost reduction. Think 30%. Over time, with high volume, it may get closer, maybe 45%.

DDR4 is available with 8Gb ICs today, and will come with 16Gb later. So 3D XPoint right now has 16x the density. DDR4 is a slim-margin product, so the cost to manufacture Optane chips is probably about a quarter that of DDR4 DRAM.

If they can sell it in high volume, then they can reduce costs by another 35-45%, depending on volume.* 2x per GB over MLC NAND would be very good. High volume doesn't mean "1 million per quarter" like they announced with Optane Memory; it means 5-10 million.

*Tech Report's review concluded that they might have recommended Optane Memory in their value guide with the Pentium G4560 CPU. So it's unfortunate, this artificial segmentation. There's a $30 difference between the Pentium and the cheapest Core i3. Really, though, they should have enabled it on Celeron and Gemini Lake.
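As a rough back-of-the-envelope check on those numbers (my own arithmetic compounding the post's estimates, not any official figure):

```python
# Back-of-the-envelope check of the density and cost estimates above.
xpoint_die_gbit = 128          # first-gen 3D XPoint die
ddr4_die_gbit = 8              # current DDR4 IC
print(xpoint_die_gbit / ddr4_die_gbit)          # 16.0 -> the "16x the density" claim

relative_cost = 1.0            # today's cost per GB, normalized
relative_cost *= (1 - 0.30)    # 4-layer 2nd gen: ~30% reduction (not the full 50%)
relative_cost *= (1 - 0.40)    # high volume: another 35-45%, take ~40%
print(round(relative_cost, 2)) # ~0.42, i.e. a bit under half of today's cost per GB
```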
 

cbn

Lifer
Mar 27, 2009
HBM package sizes:

Gen 1: 39.9 mm²
Gen 2: 92 mm²

https://www.anandtech.com/show/9969/jedec-publishes-hbm2-specification

[Image: hbm2_mechanical_575px.png – HBM2 package mechanical dimensions from the JEDEC specification]



I wonder if a Gen 2 "medium-size" Optane die is being developed to act as a second layer for HBM2? (My humble guess/speculation is that the Gen 2 "large" and "small" dies would be conventional Optane, i.e., not for adding capacity/persistence to HBM.)

So imagine an 8- to 12-high stack, with the first four dies being DRAM (determining total DRAM bandwidth) and a second layer of 4 to 8 medium Gen 2 Optane dies above those four DRAM dies.

This medium-size "second layer of HBM" Optane would be used for NNP, FPGA, and high-end GPUs (Intel Xe, AMD Radeon, maybe Nvidia too?), with the added bonus of scaling downward. (Thinking of a high-end dGPU with stacked Xe dies connected together via EMIB x4 as the starting point.) This scalable architecture would be tied together via Intel oneAPI.
 

cbn

Lifer
Mar 27, 2009
Referencing the chart from post #452, here is Samsung MRAM (actually eMRAM):

https://www.anandtech.com/show/14056/samsung-ships-first-commercial-emram-product

Capacity per 28nm test chip is very low, but we don't know the die size and therefore the density yet:

On the flip side, however, MRAM's density and capacity both fall far short of 3D XPoint, DRAM, and NAND flash, which greatly reduces its addressable markets. Samsung is not formally disclosing the capacity of its new eMRAM module; the company is only saying that it has yet to tape out a 1 Gb eMRAM chip in 2019, which strongly suggests that the current offering has a lower capacity.

P.S. I brought this up because MRAM is non-volatile memory (like Optane), and I believe it will probably overlap (to some degree) with Intel's Foveros SoCs:

https://www.tomshardware.com/reviews/intel-sunny-cove-gen11-xe-gpu-foveros,5932-2.html

Intel also added a memory chip to the top of the stack using a conventional PoP (Package on Package) implementation. The company envisions even more complex implementations in the future that include radios, sensors, photonics, and memory chiplets.

Optane memory chiplets? (To save space, despite the higher latency? Or is lower latency (in the near future) part of the internal Optane roadmap? According to the chart in post #452, write latency is what needs the most improvement.)

SRAM chiplets?

Another indication (besides the 16GB Optane M15) that small-die Gen 2 Optane is coming? Or maybe chiplet Optane is aiming more for Gen 3?
 

cbn

Lifer
Mar 27, 2009
A recent news post from Blocks and Files about Optane DIMMs:

https://blocksandfiles.com/2019/03/08/optane-persistent-memory-mystery/

Here is a part speculating on a translation layer:

The Optane DIMM controller
If there is an Optane DIMM controller, what does it look like? We think the Optane DIMM has the equivalent of an FTL, a Flash Translation Layer. In this case it would be like an XTL, an XPoint Translation Layer. This XTL would add latency to XPoint data accesses.


Intel has not clarified this point but we presume it would look after wear-levelling and over-provisioning to extend the DIMM’s endurance. That requires a logical-to-physical mapping function with logical byte, not block, addresses. With SSDs the translation layer operates at the block level. Optane DIMMs are byte-addressable so the mapping would be at byte-level.


Optane DIMMs come in 128GB, 256GB and 512GB capacities and so byte-level mapping tables contain entry numbers equivalent to the byte-level capacity. For instance, 512 x 1024 x 1024 x 1024 bytes: roughly 550bn, plus the over-provisioned bytes. It would need DIMM storage capacity to hold these mapping table entries.


Roughly speaking it is as if an Optane DIMM is a DIMM-connected XPoint SSD.

Notice that the last sentence in the first paragraph of the above quote mentions the XTL adding latency, and that the last sentence of the quote claims "Roughly speaking it is as if an Optane DIMM is a DIMM-connected XPoint SSD". (SIDE NOTE: NVDIMM-F is an SSD in a DIMM slot.)

But what if that XPoint translation layer could be removed? That would mean Optane has the potential for a latency reduction purely from a software change, or from removing software alone! That is pretty exciting!

I wonder how much progress is needed for that to happen? Maybe instead of a DIMM controller, the Optane controller would be in the processor?
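A quick sketch of why a byte-granular translation layer looks expensive, using the 512GB figure from the quote (my own estimate with an assumed entry size; the real controller almost certainly maps at some coarser granularity):

```python
# Rough size of a logical-to-physical map for a 512GB Optane DIMM at different
# mapping granularities. The granularities are pure assumptions; the math is the point.
capacity_bytes = 512 * 1024**3          # ~5.5e11 bytes
entry_bytes = 5                         # assume ~40-bit physical addresses per entry

for granularity in (1, 64, 256, 4096):  # bytes covered per mapping entry
    entries = capacity_bytes // granularity
    table_gib = entries * entry_bytes / 1024**3
    print(f"{granularity:5d} B granularity -> {entries:>15,d} entries, ~{table_gib:8.2f} GiB of map")
```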
 

IntelUser2000

Elite Member
Oct 14, 2003
The "XTL" is similar to FTL on NAND, so its not like drivers you install in your system, but more like an algorithm driven by the controller. It can't really be removed. What they are saying is XTL is a necessary function required to have endurance at acceptable levels and the unavoidable downside is the latency. Not that you want to remove it anyway on a DIMM device if you want it lasting more than a few months.

NVDIMM-F is not what Optane DC PM uses. It uses Intel's own. So at least initially it'll likely end up being different. We don't know the exact details yet.

Cascade Lake already has changes to accommodate for Optane DC PM. The actual controller in the DIMM is likely quite large and will remain on the DIMM. The first generation DIMM at least has on-board DRAM as well(in addition to requiring a minimum of 1:1 DRAM to Optane pairing) and thus prevents it from being on the CPU.

The 375GB P4800X with its 60 DWPD rating would be able to write non-stop if the average write speed is ~260MB/s. My hope is that the DIMM versions double or triple this. Even then, artificially limiting speeds isn't enough. On-board DRAM to reduce writes to the 3D XPoint media, and the 1:1 DRAM pairing, will hopefully reduce the writes much more.

At 60 DWPD = 260MB/s sustained
At 180 DWPD = 780MB/s
DRAM buffers reducing writes by 50% = 1.5GB/s
Read/write ratio limited to 60/40 = 3.75GB/s

You'd want the device to last longer than the minimum warranty specs, so limit the writes to 2.5GB/s.

Seems acceptable if they can do this at much better prices than DRAM. 2.5GB/s of writes seems low, but it'll do that at much lower latencies than any SSD.
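For anyone who wants to reproduce the figures above, the conversion is just the endurance rating spread over a 24/7 duty cycle (a sketch of the arithmetic, nothing more):

```python
# Sustained write rate a drive can absorb while staying inside its DWPD rating.
def sustained_write_mb_s(capacity_gb, dwpd):
    bytes_per_day = dwpd * capacity_gb * 1e9
    return bytes_per_day / 86_400 / 1e6       # MB/s, writing 24/7

print(round(sustained_write_mb_s(375, 60)))   # P4800X at 60 DWPD  -> ~260 MB/s
print(round(sustained_write_mb_s(375, 180)))  # at 180 DWPD        -> ~780 MB/s
# With a DRAM buffer absorbing 50% of writes, host writes can double (~1.5 GB/s),
# and at a 60/40 read/write mix total traffic can reach ~3.75 GB/s.
```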
 

cbn

Lifer
Mar 27, 2009
The 375GB P4800X with its 60 DWPD rating would be able to write non-stop if the average write speed is ~260MB/s. My hope is that the DIMM versions double or triple this. Even then, artificially limiting speeds isn't enough. On-board DRAM to reduce writes to the 3D XPoint media, and the 1:1 DRAM pairing, will hopefully reduce the writes much more.

At 60 DWPD = 260MB/s sustained
At 180 DWPD = 780MB/s
DRAM buffers reducing writes by 50% = 1.5GB/s
Read/write ratio limited to 60/40 = 3.75GB/s

You'd want the device to last longer than the minimum warranty specs, so limit the writes to 2.5GB/s.

Seems acceptable if they can do this at much better prices than DRAM. 2.5GB/s of writes seems low, but it'll do that at much lower latencies than any SSD.


Earlier in the thread I was wondering about increasing parallelism (and reducing voltage) to increase endurance (with the increased parallelism regaining some, or all, of the write speed). However, Billy Tallis isn't entirely convinced that lowering power would increase endurance. His comments are here:

I'm not sure if this throttling necessarily has any effect on endurance. The controller might not be changing the media access parameters at all, and could instead just be choosing to insert small amounts of idle time between operations—too small to enter a low-power idle state, but maybe long enough for us to measure. It might be possible to notice the consequences of something like this by looking at the latency distribution. The Quarch power module samples at 4µs intervals which might not be quite fast enough to discern the difference between a reduced duty cycle of operations and a slower but steadier media access (it's great for looking at individual hard drive seeks). My oscilloscope should be plenty fast to catch individual media accesses, but probably won't have sufficient ADC resolution to show anything interesting.

But let's say it did work. Maybe Intel Xe (using a tall stack of TSV-connected Optane) could be the first place we see the "XTL" (the XPoint version of an FTL) removed? This in combination with some number of endurance-boosting lithography shrinks*.

Using PCM to replace DRAM is a formidable challenge, because very fast switching times in the nanoseconds range and extremely high cycle numbers of ∼10^16 present a combination of requirements that have not been achieved by phase change materials. DRAM replacement is a special case since DRAM is a volatile memory, whereas PCM is a non-volatile memory.

If PCM were to achieve DRAM-like performance, it would open up possibilities to realize completely new computer architectures. Very fast switching times have been achieved for several phase change materials, including Ge2Sb2Te5 and GeTe in actual PCM devices. The high cycle number remains an enormous challenge, but it appears that scaling to smaller dimensions of the phase change material is beneficial for cycling. Data measured on highly scaled PCM cells using an Sb-rich Ge-Sb-Te phase change material demonstrated 10^11 cycles under accelerated testing conditions using a switching power of 45 pJ, which leads to an extrapolated cycle number of 6.5 × 10^15 cycles under normal switching conditions using 3.6 pJ.

*And a DRAM buffer.

P.S. To help determine the likelihood of Intel Xe essentially acting as a "bridge" for bringing low-latency (i.e., memory-gap-closing), high-endurance Optane to the CPU, how does writing to a GPU differ from writing to a CPU? Anyone? (I will do some research on this.)
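To put the ~10^16 cycle figure from the quote in perspective, here is a quick sketch of my own arithmetic (assuming a worst-case cell that is rewritten back-to-back for the device lifetime, with no wear-leveling):

```python
# Write cycles a single memory cell accumulates if rewritten continuously
# for the device lifetime (worst case, no wear-leveling).
def lifetime_cycles(write_interval_ns, years):
    seconds = years * 365 * 24 * 3600
    return seconds / (write_interval_ns * 1e-9)

print(f"{lifetime_cycles(100, 5):.1e}")   # ~1.6e15 cycles at one write per 100 ns over 5 years
print(f"{lifetime_cycles(50, 10):.1e}")   # ~6.3e15 at one write per 50 ns over 10 years
# Both land in the neighborhood of the ~10^16 DRAM-replacement requirement
# and of the 6.5e15 extrapolated figure for the scaled Ge-Sb-Te cells.
```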
 

IntelUser2000

Elite Member
Oct 14, 2003
Earlier in the thread I was wondering about increasing parallelism (and reducing voltage) to increase endurance (with the increased parallelism regaining some, or all, of the write speed).

I'm pretty sure things can be improved in the future, but as it stands, things are where they are.

But let's say it did work. Maybe Intel Xe (using a tall stack of TSV-connected Optane) could be the first place we see the "XTL" (the XPoint version of an FTL) removed? This in combination with some number of endurance-boosting lithography shrinks*.

Just so we are clear, you are aware that Xe is basically Intel's name for the post-Gen graphics architecture, right?

There are LOTS of things on paper that do not pan out. If you go back to the article, it says they were getting >1 million cycles in simulations, but when they made the actual product it dropped a lot. Memory technologies also tend to be on the absolute bleeding edge and are many times denser than CPUs/GPUs, so shrinks aren't really the solution here. And we probably need a 100x minimum gain in endurance to even consider dropping the translation layer.
 

cbn

Lifer
Mar 27, 2009
Just so we are clear, you are aware that Xe is basically Intel's name for the post-Gen graphics architecture, right?

Yes.


There are LOTS of things on paper that do not pan out. If you go back to the article, it says they were getting >1 million cycles in simulations, but when they made the actual product it dropped a lot.

I was unclear on whether Mark Webb was referring to pre-production samples or simulations. Here is the quote:

https://blocksandfiles.com/2019/03/08/optane-persistent-memory-mystery/

Mark Webb, an independent semiconductor analyst, says the Intel 3DXP cycling performance “was >1M cycles… until they actually made a product. Then it started to drop.”

This is especially interesting given the following, also from that same article:

According to Handy, SK Hynix recently suggested, using internal modelling, that Optane endurance is 10 million cycles, with a slide deck image showing this:


[Image: SK-HYnix-Optane-Endurance.jpg – SK Hynix slide modelling SCM endurance]

“Existing SCM” means Optane.
 

cbn

Lifer
Mar 27, 2009
Memory technologies also tend to be on the absolute bleeding edge and are many times denser than CPUs/GPUs, so shrinks aren't really the solution here. And we probably need a 100x minimum gain in endurance to even consider dropping the translation layer.

Some information on the scaling of phase-change memory (according to PC Perspective, Optane is phase-change memory):

https://ieeexplore.ieee.org/document/6330621/

PCM devices using solution-processed GeTe nanoparticles in the 1.8-3.4nm diameter range have been demonstrated. A highly scaled (<2nm) PCM cross-point device using a carbon nanotube as the electrode has been fabricated, proving the scalability of PCM to ultra-small dimensions.

https://ieeexplore.ieee.org/document/7122300/

The PCM, another promising SCM candidate, relies on electronic switching between the low-resistance crystalline and high-resistance amorphous phases of chalcogenide alloys [5]. The superb scalability of PCM to ultrasmall dimensions (<5 nm) has also been explored in [9]–[11].

Right now Optane is at 20nm, but what will it look like at 16nm, 14nm, 10nm, 8nm, 7nm, and so on?

P.S. It is pretty amazing that, unlike DRAM and NAND, phase-change memory can scale both lithography and layers.
 

IntelUser2000

Elite Member
Oct 14, 2003
I was unclear on whether Mark Webb was referring to pre-production samples or simulations. Here is the quote:

Yeah, the Hynix slide is likely referring to theoretical numbers. Mark is saying it about Intel's actual 3D XPoint.

Right now Optane is at 20nm, but what will it look like at 16nm, 14nm, 10nm, 8nm, 7nm, and so on?

I'm not as optimistic. There are things that can easily be done in labs, but brought to production they change a lot. Shrinks might bring a few times better endurance, but that doesn't change the fundamentals.

We had 3nm transistors demonstrated more than a decade ago, but right now companies are using fake nm conventions. TSMC's 16nm isn't really 16nm, but has a density about equal to 20nm. Their 7nm also seems to fall significantly short of 2x over 10nm; rather than 7nm being 16x the density of 28nm, in actuality we have maybe 7x. And we have further muddying of the numbers, with Samsung 8nm being a 15% reduction from their 10nm (rather than 56%), and TSMC 12nm 10% better (rather than 37%). Performance gains are even smaller than this.
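A quick sanity check on those node numbers (my arithmetic; the 7x figure is the estimate above, not an official one):

```python
# Ideal area scaling goes with the square of the node name; marketing nodes don't.
ideal_28_to_7 = (28 / 7) ** 2      # 16x if "7nm" scaled the way the name implies
claimed_actual = 7                  # the rough density gain cited above
print(ideal_28_to_7, claimed_actual, round(claimed_actual / ideal_28_to_7, 2))
# -> 16.0  7  0.44  (i.e. less than half of the name-implied density gain)
```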

Also, Micron themselves said that due to pure scale, DRAM has beaten all the so-called DRAM killers. The fact that 3D XPoint is already in production and Intel is pushing it is a plus, but right now NAND has the massive scale advantage.

So despite the theoretically better future, in reality NAND continues to scale using vertical stacking, and we've yet to see even 4-layer 3D XPoint.

I'm just saying, keep your expectations in check.
 

IntelUser2000

Elite Member
Oct 14, 2003
There was some criticism on the interwebs about manufacturers (probably with support from Intel) labelling 16GB Optane Memory + 4GB system memory computers as having 20GB of memory.

While it currently amounts to misleading advertising, perhaps it is also an indication of what Optane Memory could become: the brand "Optane Memory", with future versions (perhaps premium versions with endurance and performance like the 9xxP) using a consumer version of Memory Drive Technology to act as a cheap memory extender.

cbn's experience of using Optane in low-memory situations* seems to suggest that, at least in casual usage scenarios, it may be viable. Eventually they can move to using DIMM versions for real system memory extension. This isn't just speculation, because some Intel presentations allude to this.

*This is where Optane's 10x latency advantage will really show itself.
 

cbn

Lifer
Mar 27, 2009
There was some criticism on the interwebs about manufacturers (probably with support from Intel) labelling 16GB Optane Memory + 4GB system memory computers as having 20GB of memory.

While it currently amounts to misleading advertising, perhaps it is also an indication of what Optane Memory could become: the brand "Optane Memory", with future versions (perhaps premium versions with endurance and performance like the 9xxP) using a consumer version of Memory Drive Technology to act as a cheap memory extender.

cbn's experience of using Optane in low-memory situations* seems to suggest that, at least in casual usage scenarios, it may be viable. Eventually they can move to using DIMM versions for real system memory extension. This isn't just speculation, because some Intel presentations allude to this.

*This is where Optane's 10x latency advantage will really show itself.

A consumer version of Memory Drive Technology could be very useful.

https://www.intel.com/content/www/u...te-drives/optane-ssd-dc-p4800x-mdt-brief.html

Additionally, Intel® Memory Drive Technology intelligently determines where data should be located within the pool to maximize speed, enabling servers to deliver performance across many workloads—even when DRAM is only supplying one-third to one-eighth of the memory pool capacity.

Based on what I am reading above, I am assuming it could or would be more aggressive at paging out than Windows. Being more aggressive than Windows, I think, would really help for Internet browsing, based on my own experience (reason: at higher levels of page-out, having enough free DRAM for writes becomes a problem).
 

IntelUser2000

Elite Member
Oct 14, 2003
Optane Memory's main criticism is that NAND SSDs are getting really cheap, and having an SSD-only system makes more and more sense.

If it can supplement system memory, it can become much more attractive.

Regarding endurance, both the 16GB and 32GB Optane Memory were rated at 182.5TBW. Interestingly enough, for the M10 all capacities from 16GB to 64GB are rated at 365TBW. So at least for the 16GB version, the endurance is doubled. I think what's happening is that it's still early for Intel to characterize the media, and this is a family-wide TBW rating. There's no reason the 64GB should have 1/4 the endurance of the 16GB one.

Per capacity, the 16GB M10 would then have better endurance than the 900P/905P series.
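As a rough comparison of those ratings (my own arithmetic, assuming the usual 5-year warranty window and Intel's 10 DWPD rating for the 900P/905P):

```python
# Drive writes per day implied by a TBW rating over an assumed 5-year warranty.
def dwpd(tbw, capacity_gb, years=5):
    return tbw * 1e12 / (capacity_gb * 1e9) / (years * 365)

print(round(dwpd(365, 16), 1))    # 16GB M10 at 365TBW      -> ~12.5 DWPD
print(round(dwpd(182.5, 16), 1))  # 16GB Optane Memory      -> ~6.2 DWPD
print(round(dwpd(365, 64), 1))    # 64GB M10                -> ~3.1 DWPD
print(round(dwpd(8760, 480), 1))  # 480GB 905P (10 DWPD spec, i.e. 8.76PBW) -> ~10.0
```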
 

cbn

Lifer
Mar 27, 2009
Regarding endurance, both the 16GB and 32GB Optane Memory were rated at 182.5TBW. Interestingly enough, for the M10 all capacities from 16GB to 64GB are rated at 365TBW.

Manufacturing must be getting better.

365TBW should be plenty for a memory extender (for a basic PC).
 

IntelUser2000

Elite Member
Oct 14, 2003
A nice article about the H10 from PCWorld: https://www.pcworld.com/article/3332199/intel-optane-memory-h10-nand-flash-hybrid.html

The YouTube video link shows some demo laptops that feature the H10. Now, the PCWorld author says it's not a guarantee we'll see those models ship with the H10. Nevertheless, they are demonstrated on popular ultrabook models, so it's a good sign. "First week of April," it says.

So for the H10, the sequentials don't need to be limited to the sequential speed of Optane Memory just because we see that happen with the discrete parts. The DRAM used in NAND SSD controllers is much faster in sequentials than the drives themselves, yet we don't see DRAM-level speeds there either.

And yes, I am saying the H10 will have 905P-like sequentials.
 