What do you think of Host Memory Buffer (HMB) technology?

It is a new technology where the usual DRAM cache on the SSD is removed and its role is taken over by the host: the flash mapping tables are cached in normal system memory.
Kind of scary at first. What happens when the OS hangs or blue-screens, or when a power failure happens because of an aging PSU? Then there will be data corruption or data loss, I guess. More often than with an SSD controller that has its own DRAM.

I seriously wonder if we are waiting for the inevitable: a dedicated core (a beefed-up MMU) placed on the CPU die handling the storage activities. With 3D XPoint memory this comes closer.
 

Billy Tallis

The host DRAM is used only as a cache for the mapping information. There's no added risk from power loss, because drives are required to be prepared for a sudden loss of access to the HMB if the OS decides it needs that memory back. Every drive supporting HMB can also operate as a normal DRAMless SSD.

OS hangs shouldn't cause any problem, because the SSD's use of the host DRAM only involves the PCIe root complex and the memory controller. It doesn't matter what the CPU cores are doing because the software running on the CPU cores doesn't even know when the SSD is accessing its little slice of host memory. Once the memory for HMB has been allocated and the drive is informed of what region of memory it may use, there is literally zero CPU time spent on providing the HMB feature, and only a very tiny bit of memory bandwidth and capacity.
 
Billy Tallis said:
The host DRAM is used only as a cache for the mapping information. There's no added risk from power loss, because drives are required to be prepared for a sudden loss of access to the HMB if the OS decides it needs that memory back. Every drive supporting HMB can also operate as a normal DRAMless SSD.

OS hangs shouldn't cause any problem, because the SSD's use of the host DRAM only involves the PCIe root complex and the memory controller. It doesn't matter what the CPU cores are doing because the software running on the CPU cores doesn't even know when the SSD is accessing its little slice of host memory. Once the memory for HMB has been allocated and the drive is informed of what region of memory it may use, there is literally zero CPU time spent on providing the HMB feature, and only a very tiny bit of memory bandwidth and capacity.

Interesting. But when you write "shouldn't", it means it is not impossible. But I guess that really has to be a corner case, where a driver going haywire starts writing strange things to the root complex's PCIe configuration registers, for example changing the link power-management settings. Seems OK.

But if the controller can also work in DRAM-less mode, it does mean that the mapping table in system DRAM is the most up-to-date version. If the controller cannot write that latest version from system DRAM to the flash in the SSD, in an event where the SSD controller cannot access system DRAM, does that not mean lost or corrupted files?
 

Billy Tallis

William Gaatjes said:
But if the controller can also work in DRAM-less mode, it does mean that the mapping table in system DRAM is the most up-to-date version. If the controller cannot write that latest version from system DRAM to the flash in the SSD, in an event where the SSD controller cannot access system DRAM, does that not mean lost or corrupted files?

The NVMe spec says:
The controller shall ensure that there is no data loss or data corruption in the event of a surprise removal while the Host Memory Buffer feature is being utilized.

When the controller writes updated mapping information to the HMB, it must still keep a copy on the SSD itself. That information has to be saved to the flash before the controller can flush it from its on-board memory. The HMB cannot be used as a writeback cache for metadata; it must be treated as a write-through cache.
 
Billy Tallis said:
The NVMe spec says: "The controller shall ensure that there is no data loss or data corruption in the event of a surprise removal while the Host Memory Buffer feature is being utilized."

When the controller writes updated mapping information to the HMB, it must still keep a copy on the SSD itself. That information has to be saved to the flash before the controller can flush it from its on-board memory. The HMB cannot be used as a writeback cache for metadata; it must be treated as a write-through cache.

Write-through: that means the HMB is primarily good for reading the cached mapping data, yes?
Low-latency reading, and to some degree higher latency for writing.
When writing, it means the SSD controller must first commit any change to the mapping table to flash and only then to the HMB, yes?