What capacity do you think the first Intel/Micron 3DXpoint DIMMs will have?

cbn

Lifer
Mar 27, 2009
12,968
221
106
The current maximum capacity for a single DDR4 ECC RDIMM is 128GB. (<--- These use TSVs to achieve this capacity.)

P.S. The first processor to support 3DXpoint DIMMs is Intel Cascade Lake, due out in 2018.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
[Slide: Intel Optane / Intel Memory Drive, "More Memory Per Server"]



The slide above shows an Intel Memory Drive example where a 2P system with 24 DIMM slots (each holding a 128GB DIMM) is paired with 21TB worth of Optane SSDs. That is 3TB of DRAM plus 21TB of Optane, or 24TB in total.

So duplicating that capacity purely in DIMMs would require sticks with 1TB capacity each (24TB spread across 24 slots).
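
A back-of-envelope check of that math (the slot count and capacities are taken straight from the slide):

```python
# Total memory in the slide's config vs. a hypothetical DIMM-only build.
dimm_slots = 24            # 2P system, 12 DIMM slots per socket
dram_per_dimm_gb = 128     # largest DDR4 ECC RDIMM today
optane_ssd_gb = 21 * 1024  # 21TB of Optane SSD via Intel Memory Drive

total_gb = dimm_slots * dram_per_dimm_gb + optane_ssd_gb
print(total_gb)               # 24576 GB (24TB) combined
print(total_gb / dimm_slots)  # 1024.0 GB -> a 1TB stick in every slot
```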
 

Bier667

Member
Oct 31, 2017
35
1
11
It's not cost sensitive, that's for sure.

In the end, all it takes to answer your question is knowing how much Baidu or Google needs...
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
One factor affecting Optane DIMM capacity is the DRAM limit (per socket) on Intel Cascade Lake.

Although the Skylake Xeons have a DRAM limit of only 1.54TB (M models) and 768GB (base models), the Broadwell Xeon E5 v4 had a DRAM limit of 1.54TB per socket, and the Xeon E7 v4 had a DRAM limit of 3.07TB per socket.

M = Supports 1.5 TB DRAM per socket, up from 768GB as standard

So for Cascade Lake does the DRAM limit (per socket) return to 3TB? Increase to 4.5TB? 6TB? Greater than 6TB?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Assuming a DRAM limit of 3TB, my guess is that the Optane DIMM size will be 512GB (maximum).

So 6 x 512GB Optane DIMMs occupying the six slots closest to the processor.

Compare this to the following combinations (a quick arithmetic check appears after the list):

1. 12 x 128GB DDR4 ECC DIMMs with 1536GB Optane SSD (ie. 3072GB Memory Drive)

2. 18 x 128GB DDR4 ECC DIMMs with 768GB Optane SSD (ie. 3072GB Memory Drive)

3. 12 x 64GB DDR4 ECC DIMMs with 2304GB Optane SSD (ie. 3072GB Memory Drive)

4. 18 x 64GB DDR4 ECC DIMMs with 1920GB Optane SSD (ie. 3072GB Memory Drive)

5. 12 x 32GB DDR4 ECC DIMMs with 2688GB Optane SSD (ie. 3072GB Memory Drive)

6. 18 x 32GB DDR4 ECC DIMMs with 2496GB Optane SSD (ie. 3072GB Memory Drive)

(For #2, I do wonder how much distance from the processor will affect latency: a system with a larger number of DRAM DIMMs farther from the socket vs. a system with a smaller number of Optane DIMMs closer to the socket? For this, think about data sets (2304GB and smaller) that don't require the Optane SSD.)
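
For reference, here is a minimal sketch confirming that every combination above lands on the same 3072GB total (all numbers come straight from the list):

```python
# (DIMM count, DIMM size in GB, Optane SSD in GB) for each combo above.
configs = [
    (12, 128, 1536),
    (18, 128,  768),
    (12,  64, 2304),
    (18,  64, 1920),
    (12,  32, 2688),
    (18,  32, 2496),
]

for dimms, size_gb, optane_gb in configs:
    total_gb = dimms * size_gb + optane_gb
    print(f"{dimms} x {size_gb}GB DRAM + {optane_gb}GB Optane = {total_gb}GB")
    assert total_gb == 3072  # every combo is a 3072GB Memory Drive
```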
 

Bier667

Member
Oct 31, 2017
35
1
11
Does Intel have plans for Socket R4 with Cascade Lake as well? And what about support for NVDIMM-P?

Is it reasonable to wait for Cascade Lake-X with Optane DIMMs in the HEDT sector in early H2 next year?
 

Charlie22911

Senior member
Mar 19, 2005
614
231
116
Do NVDIMMs have a place in enterprise though?
Wouldn’t that sector be better served with NVMe storage pools?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Do NVDIMMs have a place in enterprise though?
Wouldn’t that sector be better served with NVMe storage pools?

I assume by "enterprise" you mean servers in general. What I'm going to describe applies specifically to the Enterprise sector.

Yes they do.

The E7 platform does not have a successor in the current Xeon Scalable ("Purley") platform. E7s used buffer chips, which not only increased cost but also lowered memory performance. The advantage is that they double the memory capacity, which must be very important to them.

They were saying on their earnings call that the Enterprise sector declined. I wouldn't be surprised if buyers were waiting on Optane DIMM support (which comes with Cascade Lake), which would increase memory capacity.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Some info from the following article:

http://www.techinsights.com/about-t...int-memory-die-removed-from-intel-optane-pcm/

TechInsights recently acquired and tore down an Intel Optane™ M.2 80mm 16GB PCIe 3.0 and discovered a 3D X-Point memory die in the package. This is the first commercial 3D Xpoint product from Intel and Micron. The Intel 3D X-Point memory package size is 241.12 mm² (17.6 mm x 13.7 mm) and a single X-Point memory die is contained within. The 3D X-Point memory die measures 206.5 mm², with a 16.16 mm length and a 12.78 mm width.

Memory efficiency in the die is 91.4%, which is higher than the Samsung 3D 48L V-NAND (70.0%) and the Intel/Micron 3D FG NAND (84.9%). Memory density of the 3D Xpoint memory is 0.62 Gb/mm², which is lower than commercial 2D and 3D NAND products (2.5 Gb/mm² for Toshiba/SanDisk and Samsung 3D 48L TLC NAND, and 1.28 Gb/mm² for Toshiba/SanDisk 2D 15nm TLC NAND). However, compared to DRAM products, the 3D Xpoint memory density is 4.5 times higher than DRAM products on the same 20 nm technology, or 3.3 times higher than Samsung's 1x nm DDR4.

Xpoint memory products use the 20 nm technology node for both WL and BL, with a 0.00176 µm² cell size that is about half of the DRAM cell size. This is due to a stackable memory cell and 4F² instead of 6F² being used for the memory cell array design.

So with the density being 4.5 times higher than DRAM at the same 20 nm node (or 3.3 times higher than Samsung's 1x nm DDR4), I am assuming a 512GB 3DXpoint DIMM would need TSVs... but a 256GB DIMM would not.
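
A rough sketch of why that split seems plausible, using the 128Gb (16GB) die from the teardown; the 16-package DIMM layout is my assumption, not something from the article:

```python
# Density sanity check: 128Gb over 206.5 mm^2 matches the article's figure.
die_gbits, die_area_mm2 = 128, 206.5
print(die_gbits / die_area_mm2)  # ~0.62 Gb/mm^2

# How many 128Gb (16GB) 3D XPoint dies does each DIMM capacity need?
die_gb = 16             # one 128Gb die per the TechInsights teardown
packages_per_dimm = 16  # assumed number of package sites on the DIMM

for capacity_gb in (256, 512):
    dies = capacity_gb // die_gb
    per_package = dies // packages_per_dimm
    print(f"{capacity_gb}GB DIMM: {dies} dies, {per_package} per package")
# 256GB -> 16 dies, one per package: no die stacking needed.
# 512GB -> 32 dies, two per package: stacked dies (e.g. TSVs) needed.
```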
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Using the numbers in the article from the previous post: if Samsung Z-NAND is 48-layer SLC (with some tweaks), then by my "back of napkin math" (2.5 Gb/mm² TLC ÷ 3 ≈ 0.83 Gb/mm² SLC) 20nm 3DXpoint would have 74% as much density. Put another way, Z-NAND would have 34% more density per mm² than 20nm 3DXpoint.

If Z-NAND ends up using 64 layers, then the density would be 33% higher than it would be at 48 layers (re: 64/48 = 1.333). This would make 64-layer Z-NAND roughly 79% denser than Intel/Micron 20nm 3DXpoint.

Of course, if Intel/Micron were to go from two layers to four layers on 3DXpoint, its density would double to ~1.24 Gb/mm², putting it about 12% higher than 64-layer Z-NAND.
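
The same napkin math as a sketch, under the assumptions above (Z-NAND is 48L TLC density divided by 3, and first-generation 3DXpoint is two layers):

```python
# Density figures in Gb/mm^2, from the TechInsights article.
xpoint_2l = 0.62       # 20nm 3D XPoint, two memory layers
znand_48l = 2.5 / 3    # Samsung 48L TLC treated as SLC -> ~0.83

print(xpoint_2l / znand_48l)       # ~0.74: XPoint is ~74% of Z-NAND density
print(znand_48l / xpoint_2l - 1)   # ~0.34: Z-NAND ~34% denser per mm^2

znand_64l = znand_48l * 64 / 48    # 64 layers: 33% more than 48 layers
print(znand_64l / xpoint_2l - 1)   # ~0.79: 64L Z-NAND ~79% denser

xpoint_4l = xpoint_2l * 2          # two -> four layers doubles density
print(xpoint_4l / znand_64l - 1)   # ~0.12: 4-layer XPoint ~12% denser
```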
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
E7s used buffer chips, which not only increased cost but also lowered memory performance. The advantage is that they double the memory capacity, which must be very important to them.

I didn't realize the buffer chips did this by doubling the number of DIMM slots per processor.

The example below shows how the Oracle Exalytics In-Memory Machine gets 24 DIMM slots per E7 processor, according to the specs:

2 TB of memory with sixty four 32 GB DDR3 ECC registered low-voltage LRDIMMs, upgradable up to 96 DIMMs (24 DIMMs per processor) to a maximum memory capacity of 3 TB

So if some Cascade Lake Xeon processors follow with buffer chips, that would boost DIMMs per processor from 12 to 24.

This brings up a lot of questions on how various combinations of DIMMs will work.
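
A minimal sketch of what that doubling would mean for per-socket DRAM capacity, using the 128GB RDIMM figure from earlier in the thread:

```python
# Per-socket DRAM capacity with and without E7-style buffer chips.
dimm_gb = 128          # largest DDR4 ECC RDIMM capacity
channels = 6           # memory channels per Skylake/Cascade Lake socket
dimms_per_channel = 2  # 12 DIMMs per socket without buffer chips

plain_gb = channels * dimms_per_channel * dimm_gb
buffered_gb = plain_gb * 2  # buffer chips double the slot count

print(plain_gb)     # 1536 GB per socket (12 x 128GB)
print(buffered_gb)  # 3072 GB per socket (24 x 128GB)
```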
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The following article was originally posted by Dayman1225 in another thread:

http://www.tomshardware.com/news/intel-xd-xpoint-dimm-lenovo-thinksystem,36573.html

A 512GB Optane DIMM capacity is mentioned.

Some additional info:

"The 3D DIMM[...] has a higher power profile and is slightly wider so most servers would not be able to accept them in a standard DDR4 slot without specific changes to the board to accommodate it."

The four 3D XPoint slots are also DDR4-capable, meaning they adhere to JEDEC spec, and you could use regular memory in them as well. But it is an important distinction that the 3D XPoint DIMMs will not function in standard memory slots in most servers. That means that accommodating the DIMMs will likely require specialized motherboard designs.

Lenovo designed the SD650 specifically to maximize density through effective thermal management, so like the rest of the system, the DIMMs are water cooled. The system uses gap pads, which are thermal pads that make contact with the DIMMs, draped over a waterblock (seen above). We've seen similar techniques in other watercooled servers, but we aren't sure if water cooling will be a strict requirement for 3D XPoint DIMMs.

The 3D Xpoint memory can consume about 3X the power of a standard 8/16GB DDR4 DIMM. The actual value could range from 15-18W depending on the workload.

Cooling for this memory required special focus and optimization of the cooling loop to ensure the next generation device is capable of being supported in the server with the warm water cooling technology. The memory cooling loop extracts heat from all heat transfer surfaces of the 3D XPoint memory efficiently with the shortest conduction path to the critical device from the water flow channel.

Power consumption is a huge consideration in the data center, and gaining 32x the memory capacity in exchange for a 3x increase in power consumption is a dramatic improvement. Intel's first prototype also featured an FPGA to manage the underlying media. Squeezing in the FPGA's power draw and 512GB of 3D XPoint into a 15W-18W envelope is impressive.

We also learned that the Apache Pass DIMMs, much like NVDIMMs, require a DRAM "chaperone," meaning the 3D XPoint DIMMs have to be accompanied by at least one standard DDR4 stick in the same memory channel. The block diagram shows the four Apache Pass DIMMs (also known as AEP) riding along with standard DDR4 DIMMs on the same channel.

Lenovo's video states that the water cooling system has some additional headroom to tackle processors with a higher TDP "should Intel come out with those." In the server documentation, we found that the system supports up to a 240W TDP. We've seen several indicators that Intel may have processors coming with much higher TDPs, so that functionality may prove useful.
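
As a quick sanity check of the article's capacity-per-watt claim, here is the arithmetic; the ~5W figure for a DDR4 DIMM is my assumption, while the 512GB and 15-18W figures come from the article:

```python
# "32x the memory capacity in exchange for a 3x increase in power."
ddr4_gb, ddr4_watts = 16, 5          # assumed ~5W for a 16GB DDR4 DIMM
xpoint_gb, xpoint_watts = 512, 16.5  # midpoint of the quoted 15-18W range

print(xpoint_gb / ddr4_gb)           # 32.0x the capacity
print(xpoint_watts / ddr4_watts)     # ~3.3x the power
gain = (xpoint_gb / xpoint_watts) / (ddr4_gb / ddr4_watts)
print(gain)                          # ~9.7x the capacity per watt
```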




[Animation: Lenovo SD650 water-flow loop]


P.S. I am guessing the controller on the Apache Pass DIMM will eventually be integrated into the Xeon processor. If that happens, I think it would both reduce power and maybe even free up room to increase the capacity of the 3DXpoint DIMM (although I think it is more likely DRAM would be added instead of extra 3DXpoint).