Question: NVME M.2 capacities and possibilities

BonzaiDuck

Lifer
Jun 30, 2004
15,709
1,450
126
I've been building my systems to employ a range of storage devices: NVME M.2s, SATA SSDs, and SATA HDDs.

Now, in another thread, someone drew my attention to the fact that you can buy a 4TB NVME M.2 for between $480 and $700.

The limitations on using NVMEs would seem to be the number of PCIE lanes available in your system, plus the number of M.2 slots on your motherboard -- and finally -- the ease of swapping them in and out.

Has someone designed and produced some kind of hot-swap bay device -- probably 3.5" or something that fits a bay in a standard computer case? How would it connect to a motherboard? There are plenty of USB 3.0 external plug-in devices for NVME M.2s.

In any case -- "in any case" [pun] -- NVMEs that can be easily swapped in and out remove one more reason for standard mid-tower -- possibly even ITX -- cases. You still need space for an ATX PSU and an ATX motherboard. But is it possible we could see the end of even needing or wanting a 2.5" HDD or 2.5" SSD?
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,672
578
126
I might be mis-interpreting what you're asking for, but IcyDock makes several NVMe products that fit in 3.5" or 5.25" external bays, as well as a product that turns an open PCIe slot into an M.2 storage slot with external removal capability. https://www.icydock.com/goods.php?id=319

For interconnection, almost all these solutions use some form of either Oculink, HD Mini SAS, or U.2 connector to connect between the drive bay and the motherboard. There are of course adapter cables and such to get between a lot of these variants. To connect between a bay and, say, a consumer motherboard that might have an M.2 slot but none of the other mentioned connectors (because those are more enterprise oriented), you could use a PCIe to U.2 adapter or an M.2 to U.2 adapter like this one: https://www.startech.com/en-us/hdd/m2e4sff8643
 

BonzaiDuck

Lifer
Jun 30, 2004
15,709
1,450
126
thecoolnessrune said:
I might be mis-interpreting what you're asking for, but IcyDock makes several NVMe products that fit in 3.5" or 5.25" external bays, as well as a product that turns an open PCIe slot into an M.2 storage slot with external removal capability. https://www.icydock.com/goods.php?id=319

For interconnection, almost all these solutions use some form of either Oculink, HD Mini SAS, or U.2 connector to connect between the drive bay and the motherboard. To connect between a bay and, say, a consumer motherboard that might have an M.2 slot but none of the other mentioned connectors, you could use a PCIe to U.2 adapter or an M.2 to U.2 adapter like this one: https://www.startech.com/en-us/hdd/m2e4sff8643
Very informative -- thank you. I'll look into the links you posted, and comment after I've looked at them.
 

Steltek

Diamond Member
Mar 29, 2001
3,042
753
136
I'll warn you about my experience with the IcyDock MB840M2P-B PCIe racks as discussed in this thread. It was a great idea with enormous potential, but ultimately it is hobbled by a very poor tray design, IcyDock's inability to actually produce them for sale (even before supply shortages), and by pricing too high for what they do.

Due to the poor design of the trays, mounting an NVMe drive in a tray can be a pain because of the required thermal pad (the tray won't work without it). Also, only certain drives are guaranteed to work with the trays, due to tiny physical size variances between brands that the thermal pad can't compensate for -- variances that otherwise wouldn't be an issue when installing the drives in an actual M.2 slot.

Further, repeatedly opening a single tray to swap NVMe drives in and out just isn't feasible for the long term: the thermal pad gets damaged, which eventually affects proper tray function, and IcyDock doesn't bother to sell the thermal pads separately at present. The pads also have to be perfectly sized for the tray, so third-party pads probably won't work very well, if at all. And again, even very slight variances in the sizes of different brands of NVMe drives affect how well the trays work. On top of that, extra trays are practically unavailable for purchase (for which there is no excuse, as they contain no electronics -- except that the base product probably isn't selling well enough to justify producing them), and spare trays are excessively overpriced relative to the cost of a whole new unit.

If you are using Windows, there are also OS limitations related to hot-plug operation that hurt their usefulness for some use cases (especially if you want to use two of them in the same system, which is why I wanted them).

If you want to use something like this and you can afford it (unless your needs are very modest), go with an enterprise SAS enclosure that accepts U.2 NVMe drives and interfaces through a SAS card, rather than this product.
 

BonzaiDuck

Lifer
Jun 30, 2004
15,709
1,450
126
StarTech is a reliable maker of peripherals, and I trust their products. But the U.2 to M.2 adapter card seems less appealing, and I'm not so sure it's compatible with the NVME M.2 cards we use.

The other product -- a PCIE card with a slot to insert an NVME in a caddy from the computer backside -- looks more promising. Perhaps just a little bit pricey, it is a realistic expense for the convenience. It's maybe double the price of a decent PCIE NVME adapter card.

I'm very impressed with ICY DOCK's products. Their build quality is rock solid. I've been building my PCs recently with a 5.25" ICY DOCK device that deploys a slim laptop ODD with two 2.5" hot-swap bays. The extra bay caddies can be had with a lock and standard key, or more cheaply with a lever-switch to extract the SSD or 2.5" HDD from its bay.

The worst shortcoming I've seen is the 40mm fan in the bay device, which occasionally makes some noise.

They make good stuff.
 

BonzaiDuck

Lifer
Jun 30, 2004
15,709
1,450
126
Steltek said:
I'll warn you about my experience with the IcyDock MB840M2P-B PCIe racks as discussed in this thread. It was a great idea with enormous potential, but ultimately it is hobbled by a very poor tray design, IcyDock's inability to actually produce them for sale (even before supply shortages), and by pricing too high for what they do. ...
Also -- very informative. As I said in my last post, I only have experience with ICY DOCK's more conventional bay devices. The product for NVME's apparently has some bugs to be ironed out.

1TB NVME M.2 drives now seem to be as cheap as KOOL cigarettes, if I may exaggerate. I'd previously gravitated toward Samsung's 960 line, and hadn't bought any more of them as the 970s and 980s were released. The Sammies have that encryption feature -- I forget the acronym at the moment.

I remember the 1TB 960 Pro I purchased and its price-tag. It was close to $600. Now, I find that I can get 1TB SK Hynix Gold P31 NVME M.2s for around $135 apiece. They lack the encryption feature, but supposedly are very power-efficient and cool-running.
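To put that price drop in per-gigabyte terms -- just quick illustrative arithmetic on the prices mentioned in this thread:

Code:
# Quick $/GB comparison using the approximate prices quoted in this thread.
drives = {
    "Samsung 960 Pro 1TB (at launch)": (600, 1000),  # (price USD, capacity GB)
    "SK Hynix Gold P31 1TB (now)":     (135, 1000),
    "4TB NVME M.2 (low end)":          (480, 4000),
}

for name, (price, gb) in drives.items():
    print(f"{name}: ${price / gb:.3f}/GB")
# Samsung 960 Pro 1TB (at launch): $0.600/GB
# SK Hynix Gold P31 1TB (now):     $0.135/GB
# 4TB NVME M.2 (low end):          $0.120/GB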

I stockpile parts for new builds at the peril of being unable to return them except under factory warranty. I suppose I'll find out about these SK Hynix units within the next month. I'm still trying to decide whether to put sinks on them.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,841
3,189
126
I believe the current physical limit for M.2 is 8TB, due to the size of the form factor.
If you want to scale up a bit more to U.2, I've heard of 100TB NVMEs in the enterprise sector.

BonzaiDuck said:
The limitations on using NVMEs would seem to be the number of PCIE lanes available in your system, plus the number of M.2 slots on your motherboard -- and finally -- the ease of swapping them in and out.

It's mostly the PCI-E lanes, then the physical PCI-E slot, and then whether the board supports something called bifurcation or not.
Cards can be used instead of the physical M.2 slots on motherboards.
 

BonzaiDuck

Lifer
Jun 30, 2004
15,709
1,450
126
aigomorla said:
I believe the current physical limit for M.2 is 8TB, due to the size of the form factor. If you want to scale up a bit more to U.2, I've heard of 100TB NVMEs in the enterprise sector.

It's mostly the PCI-E lanes, then the physical PCI-E slot, and then whether the board supports something called bifurcation or not. Cards can be used instead of the physical M.2 slots on motherboards.
Happy to see you contributed something to this discussion.

1TB NVME PCIE 3.0 SSDs seem to be pretty cheap these days. I was revisiting the availability of multi-M.2-NVME expansion cards today, and in a quick and dirty search, turned up this thread I posted on the "Motherboards" forum back in 2018:

"PCIE Bifurcation on Z170 Chipset boards?"

which I updated with a post a few hours ago.

The easy conclusion would seem to be that you need motherboard CPU/PCH support for bifurcation, which lets you divvy up the lanes in a PCIE slot and allocate them to specific devices connected to that slot.
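To put rough numbers on that divvying-up -- a minimal sketch, assuming PCIe 3.0 (about 0.985 GB/s usable per lane after 128b/130b encoding) and an even x4/x4/x4/x4 split of an x16 slot; illustrative arithmetic only:

Code:
# Rough arithmetic for bifurcating a PCIe slot across several NVME drives.
# Assumes PCIe 3.0: ~0.985 GB/s usable per lane after 128b/130b encoding.

PCIE3_GBPS_PER_LANE = 0.985

def lanes_per_drive(slot_lanes: int, drives: int) -> int:
    """Even split of a slot's lanes across drives (what x4/x4/x4/x4 means)."""
    return slot_lanes // drives

def per_drive_throughput(lanes: int) -> float:
    return lanes * PCIE3_GBPS_PER_LANE

# An x16 slot bifurcated for four M.2 drives:
lanes = lanes_per_drive(16, 4)
print(f"x{lanes} per drive, ~{per_drive_throughput(lanes):.1f} GB/s each")
# -> x4 per drive, ~3.9 GB/s each: full speed for a PCIe 3.0 x4 NVME,
#    but only if the board/CPU can actually split the slot x4/x4/x4/x4.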

So, despite all the other difficulties I'm having today, I ran some more searches and turned up some expansion card products:

10GTek NVMe SSD Adapter for M.2 4x M.2(M Key) built-in a PEX-8724 controller PCI-E X8
(for four M.2 NVME drives)


10GTek M Key M.2 NVMe/NGFF SSD to PCI-E X8 Adapter Card PEX-8724 Controller Bifurcation
(for two M.2 NVME drives)

Ableconn PEXM2-130 Dual PCIe NVMe M.2 SSDs Carrier Adapter Card - PCI Express 3.0 x8 Card Support 2X M.2 NGFF PCIe NVMe SSD for Mac & PC (ASMedia ASM2824 Switch) - Support Non-Bifurcation Motherboard

It seems odd, but I don't remember any discussion of these or other similar devices here on this forum. Maybe there had been some, but I missed them -- or haven't taken the trouble to look.

But it's also odd that there don't seem to be many people buying the cards, because the seller sites don't show any reviews.

Further searching on the PEX 8724 chip/controller/switch explains why the garbled English on the 10GTek web page refers to "PEX" and "PLX" as if they were the same, causing confusion. The PEX 8724 is a PLX Technology part; PLX was acquired by Broadcom, which is why the chip is now associated with the Broadcom name.

The Ableconn expansion card uses an Asmedia "switch" or controller -- the ASM2824.

I'm tempted to buy one of these devices -- not so much because I have expectations for using it as because I'm just curious. Even the most modest offering is about $180 -- which some could say is a bunch-a-bucks for a single expansion card.

I've got an ASUS Z170-WS board, with the extra PCIE lanes through a bridge chip. Even with four x16 slots, where I can use two at x16 and the other two at x8, it seems sort of wasteful to use any of the three remaining (beyond one dGPU in the top slot) for x4 cards. On the other hand . . . between $180 and $250 . . . I already spent my stimulus money on my SUV tires and repairs.

But I've always been a spendthrift with PC parts.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,841
3,189
126
BonzaiDuck said:
Further searching on the PEX 8724 chip/controller/switch explains why the garbled English on the 10GTek web page refers to "PEX" and "PLX" as if they were the same, causing confusion. The PEX 8724 is a PLX Technology part; PLX was acquired by Broadcom, which is why the chip is now associated with the Broadcom name.

Mah, to sum it up short... these were introduced way back, I believe during Intel's P38 chipset, which followed right after nVidia's nForce4.
It was Intel's way of supporting SLI/Xfire at x16 bandwidth on both slots.

Now they are mostly used on either workstation/server boards or NVME RAID controllers.
 

BonzaiDuck

Lifer
Jun 30, 2004
15,709
1,450
126
aigomorla said:
Mah, to sum it up short... these were introduced way back, I believe during Intel's P38 chipset, which followed right after nVidia's nForce4. It was Intel's way of supporting SLI/Xfire at x16 bandwidth on both slots.

Now they are mostly used on either workstation/server boards or NVME RAID controllers.
That's useful to know, I suppose.

The only question I'd pose is: "How do a particular NVME's bench-tests compare between a standard PCIE x4 adapter card, a motherboard's M.2 slot, and one of these bifurcation PCIE devices?" Maybe that's even a naive question, because the results are likely to be identical. Why wouldn't they be?
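If anyone actually wants to measure it, here's a minimal sequential-read timing sketch -- assuming Linux, Python 3, and root access; the device path is a placeholder, not a recommendation. The usual tools (fio, CrystalDiskMark) do this properly; this just shows the idea:

Code:
# Minimal sequential-read timing sketch (Linux, Python 3, run as root).
# DEVICE is a placeholder -- point it at the drive under test. Reading a
# raw block device is non-destructive, but double-check the path anyway.
import mmap
import os
import time

DEVICE = "/dev/nvme0n1"      # hypothetical path; adjust for your system
BLOCK = 4 * 1024 * 1024      # 4 MiB per read
TOTAL = 4 * 1024 ** 3        # read 4 GiB in total

# O_DIRECT bypasses the page cache so we time the drive, not system RAM.
# It requires an aligned buffer; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, BLOCK)
fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
try:
    done = 0
    start = time.perf_counter()
    while done < TOTAL:
        n = os.readv(fd, [buf])
        if n <= 0:
            break
        done += n
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)

print(f"~{done / elapsed / 1e9:.2f} GB/s sequential read from {DEVICE}")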

Of course, what you're telling me is that the controller/chip/switch models -- Broadcom/PLX or Asmedia -- are established enough that I'm not likely to find any problem with them. They're just a better way to use PCIE resources, so you can take maximum or near-maximum advantage of a single PCIE slot.
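One wrinkle with the switch cards, though: the uplink can be narrower than the sum of the downstream links. Back-of-envelope math for a card like the four-drive 10GTek above (x8 uplink, four x4 M.2 slots), assuming PCIe 3.0 numbers:

Code:
# Oversubscription math for a PCIe switch card: a PEX 8724-style layout
# with an x8 uplink and four x4 M.2 slots downstream (PCIe 3.0 assumed).

GBPS_PER_LANE = 0.985   # usable throughput per PCIe 3.0 lane

uplink = 8 * GBPS_PER_LANE          # slot-to-CPU link
downstream = 4 * 4 * GBPS_PER_LANE  # four drives at x4 each

print(f"uplink:     ~{uplink:.1f} GB/s")
print(f"downstream: ~{downstream:.1f} GB/s ({downstream / uplink:.0f}:1 oversubscribed)")
# One or two drives at a time run at full speed; all four flat-out
# have to share the x8 uplink. For most desktop use, that's a non-issue.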

I should probably take another look at NVME RAID devices, as well. Who knows? Future storage devices may not look like a 2280 stick. Maybe they'll look like Lincoln pennies, and you would put them in a coin slot to use them . .

UPDATE: YA GOTTA KNOW HOW TO SEARCH FOR THESE THINGS.

If there's a common "name" or description for them, there is an abundance of names and descriptions. I just ran a search on ASM2824 -- the Asmedia "switch-chip" featured as an alternative to the chip used by the 10GTek devices. There are several such devices available -- none of which require motherboard PCIE bifurcation. [Bee-Cau-use -- they use the ASM2824 switch chip.]

I've bought a lot of Startech parts and peripherals over the years, and I've had helpful customer support "encounters" with them -- many different products, which all worked properly:

StarTech.com Dual M.2 PCIe SSD Adapter Card - x8 / x16 Dual NVMe or AHCI M.2 SSD to PCI Express 3.0 - M.2 NGFF PCIe (M-Key) Compatible - Supports 2242, 2260, 2280 - RAID & JBOD - Mac & PC (PEX8M2E2)

Its price-tag is only $134, compared to the others. That is, an alternative unit I linked earlier is about $177. Well -- go for the gold, I say. Lemme see if they have a 4-drive module . . . Nope . . . I'll have to look further, maybe on Startech's web-site . . . Huh . . . wonder if this Startech model can be found at NewEgg . . . We'll see . . . Ha-ha! YES! Newegg is selling it for $160. But I'm pulling the Amazon string.

AND! Doesn't the Startech product look identical to the "Ableconn" unit? What do you think -- they came off the same assembly line?
 

BonzaiDuck

Lifer
Jun 30, 2004
15,709
1,450
126
A FURTHER UPDATE

It's always important to think carefully through your essential hardware needs and deployment strategy for a PC build.

I had this idea that I wanted to put one SK Hynix 1TB drive in the motherboard's M.2 slot, in the "PCIE" configuration that shares bandwidth with the U.2 plugs -- which I won't use. Another SK Hynix 1TB NVME would need to use a PCIE x4 (or x8 or x16) slot.

And I had this idea that I wanted a SEPARATE NVME -- a cheap $50 250GB unit -- which would ALSO need a PCIE x4 slot. The idea for this small NVME was to cache some 2.5" spinners. So I pulled the string to buy the StarTech dual NVME PCIE (x8/x16) board. At least -- I got the cheapest one.

But it occurs to me that the only spinners in the system do not need any caching. One will be a media drive; the other a backup drive accessed solely by Macrium Reflect.

So I could've gotten by just fine with a $15 to $25 single-NVME-to-PCIE card.

Also, I'm looking at a case's three-dimensional "real estate". I'll never use a 3.5" spinner again in a computer build. Even SATA SSDs have limited value. The only thing I need in the enclosed drive bays is airflow from intake fans . .

NVMEs and multi-NVME-to-PCIE adapters change the ball-game for storage. They REALLY change the ball-game.

And -- OF COURSE -- with technology offered this year -- one shouldn't even need extra RAM to cache those drives. Unless you don't have PCIE v4.0 and Sammy 980s. But that's still so new, and my Z170 is so old -- well -- you understand my meaning, don't you?