The $31 KryoM.2 we have been discussing in this thread is a solid choice:
https://forums.anandtech.com/threads/is-this-53-degrees-idle-normal-for-nvme-ssd.2497459/
The fact that you can find multi-M.2 NVMe PCIe cards like these -- this one appears to be x8 -- gives one pause. I like to finalize my computer builds, and I can always set aside a part I paid less than $30 for and replace it with something like that. A device like that would mean a complete shift in my storage strategy, although I'd still do SSD caching for my slow SATA devices. For instance, if you could RAID the drives on that card -- I assume that's what it's primarily intended for -- and then cache to RAM, you could double your effective RAM, pull the switch on the Star Trek warp drive, and disappear into the black hole of your virtual reality.
What PCIe add-in cards can boot an NVMe M.2 SSD? (assuming the PC has the necessary BIOS, etc.)
STM said: The reason this card, and possibly the HP Z Turbo Quad Pro, will be a bad fit for most systems is not just “BIOS locking” but an issue of PCIe bifurcation. We have a picture of the two cards we used in our article on adding 2.5″ NVMe to a desktop/server from a few months ago.
The Supermicro AOC-SLG3-2E4R will only work in motherboards that have PCIe switches onboard; otherwise the CPU's PCIe lanes cannot handle two PCIe devices on a single PCIe slot. The Supermicro AOC-SLG3-2E4, by contrast, has an Avago/PLX switch chip onboard and can therefore take a single PCIe x8 slot and use it for two devices. (See the piece on the Supermicro AOC-SLG3-2E4R and AOC-SLG3-2E4 differences.) When we look at the Dell card, it does not have a PCIe switch chip, which is why it is not compatible with every system. The big questions are whether the HP Z Turbo Quad Pro has a PCIe switch chip and, if so, whether the cards are BIOS locked to prevent them from being used in other systems. If they do have the PCIe switch chip and are not BIOS locked, they will be good solutions. If they do not have the PCIe switch chip, or are BIOS locked, they will be just as hard to work with as this Dell 4x M.2 PCIe x16 card.
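For anyone who wants to check which camp their system falls into, here is a minimal sketch (my assumption: a Linux box, using only standard sysfs PCI attributes) that lists NVMe controllers and the link width each one actually negotiated. Behind a properly bifurcated slot or a PLX switch, each drive on one of these multi-M.2 cards should report its own x4 link.

```python
#!/usr/bin/env python3
"""List NVMe controllers and their negotiated PCIe link widths.

Linux-only sketch: reads standard sysfs attributes. If bifurcation
(or a PLX/Avago switch) is working, each M.2 drive behind a
multi-drive adaptor should show up with its own x4 link.
"""
import glob
import os

NVME_CLASS = "0x010802"  # PCI class code: mass storage, NVM Express


def attr(dev: str, name: str) -> str:
    """Read one sysfs attribute, or '?' if it is not exposed."""
    try:
        with open(os.path.join(dev, name)) as f:
            return f.read().strip()
    except OSError:
        return "?"


for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    if attr(dev, "class") != NVME_CLASS:
        continue
    print(f"{os.path.basename(dev)}: "
          f"link x{attr(dev, 'current_link_width')} "
          f"of x{attr(dev, 'max_link_width')}, "
          f"{attr(dev, 'current_link_speed')}")
```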
No legacy BIOS to my knowledge is able to boot from an NVMe device.
That sort of answers a question of mine: whether I would be able to install an NVMe M.2 SSD in my DeskMini PCs and install Win10 in non-UEFI boot mode. Guess not. So I should put my AHCI M.2 SSDs in there instead, and then I could potentially legacy-boot off one.
That is not the question. An add-in adaptor card has zero to do with being able to boot from a device. That is entirely down to UEFI* support, and any OROM present on the device you're trying to boot.
An adaptor will not magically allow you to boot an NVMe SSD on a system without NVMe support in UEFI, and any PCIe-to-M.2 adaptor will allow you to boot on a system with NVMe support. It is simply a passive electrical adaptor.
*No legacy BIOS to my knowledge is able to boot from an NVMe device.
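To make that concrete, here is a small sketch of how you could verify both halves of the claim from a running Linux system (my assumption; the sysfs paths are standard). /sys/firmware/efi only exists when the kernel was started by UEFI firmware, and any NVMe controller the OS can see appears under /sys/class/nvme. Note this only tells you how the running system booted, not what the firmware is capable of booting from.

```python
#!/usr/bin/env python3
"""Check boot mode (UEFI vs legacy) and list visible NVMe controllers.

Linux-only sketch. /sys/firmware/efi is created by the kernel only
when it was started by UEFI firmware; a legacy/CSM boot leaves it
absent. NVMe controllers appear under /sys/class/nvme once the
driver binds, regardless of how the system booted.
"""
import glob
import os

uefi = os.path.isdir("/sys/firmware/efi")
print("Boot mode:", "UEFI" if uefi else "legacy BIOS / CSM")

controllers = sorted(glob.glob("/sys/class/nvme/nvme*"))
print(f"NVMe controllers visible to the OS: {len(controllers)}")
for ctrl in controllers:
    model_path = os.path.join(ctrl, "model")
    model = "?"
    if os.path.exists(model_path):
        with open(model_path) as f:
            model = f.read().strip()
    print(f"  {os.path.basename(ctrl)}: {model}")

if controllers and not uefi:
    # The OS can still use the drives as data disks; it just cannot
    # boot from them without UEFI NVMe support or a legacy OROM.
    print("Note: usable as data drives, but not bootable on this firmware.")
```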
However, I did find a card that will boot on my system - the Kingston HyperX Predator w/ HHHL Adapter (SHPM2280P2H/240G). It uses an Option ROM; my legacy BIOS shows it in the list of disk drives to select from.
The only thing I am not sure about is whether the legacy OROM is on the M.2 card itself or in the adapter card. I am tempted to get another card, try it in the adapter, and see what happens. If I do, I will report back.
You're welcome. I'm actually very familiar with the HyperX PCIe, having used one in my Z77 system for a few years. It is a pretty decent drive, if NVMe support isn't available. But therein lies the catch. It's "only" an AHCI drive, which means you'll lose out on some of the new features of NVMe, and a bit of performance.
BTW it's the M.2 drive that contains the OROM, not the adaptor card. The OROM works by presenting the drive to the BIOS as an additional bootable SATA controller with a single connected drive. You can check this with HWiNFO, for example.
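If you would rather script that check than run HWiNFO, the PCI class code tells the same story: an AHCI-mode M.2 drive like the Predator enumerates as class 0x010601 (a SATA/AHCI controller), while a native NVMe drive is 0x010802. A minimal sketch, assuming a Linux host with standard sysfs:

```python
#!/usr/bin/env python3
"""Tell AHCI-mode M.2 drives apart from NVMe ones by PCI class code.

Linux sysfs sketch. 0x010601 = SATA controller in AHCI mode (how a
drive like the HyperX Predator presents itself); 0x010802 = NVM
Express controller.
"""
import glob

CLASSES = {
    "0x010601": "AHCI (presents as an extra SATA controller)",
    "0x010802": "NVMe",
}

for path in sorted(glob.glob("/sys/bus/pci/devices/*/class")):
    with open(path) as f:
        code = f.read().strip()
    if code in CLASSES:
        addr = path.split("/")[-2]  # e.g. 0000:01:00.0
        print(f"{addr}: {CLASSES[code]}")
```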
Oh, you don't know about VROC? It stands for Virtual RAID On CPU. Nearly a decade ago, Intel started planning to bring more downstream features up onto the CPU die, like the memory controller, which eliminated the need for a northbridge altogether.
Intel's VROC allows you to build extensive SSD arrays with up to 20 devices. That's something you can currently do in software with Windows Storage Spaces, but you can't boot from the software array. Intel's virtual RAID adds a layer of code and hardware ahead of the Windows boot sequence, which is what makes a bootable RAID possible. It's similar to the PCH RAID we have now on more advanced Intel chipsets, but more effective with the hardware directly on the processor.
Storage buffs will get a massive dose of fun when Intel’s X299 chipset launches. The new Core i9 chipset will support up to 20 devices in a bootable RAID partition.
The overlooked feature is called Virtual RAID On CPU (VROC). We got a taste of it courtesy of Asus, which showed the feature running in its new X299 motherboards using a 10-core Skylake-X CPU. Few motherboards support more than three M.2 slots, so Asus used its new Hyper M.2 PCIe card.
The Hyper M.2 lets you load up to four M.2 NVMe PCIe drives into a single x16 card. You don’t have to worry about heat, because the Hyper M.2 features a beefy heat sink, thermal pads, and an active fan.
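Some quick theoretical math on why four drives want a full x16 slot: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so each x4 drive tops out just under 4 GB/s, and four of them together roughly saturate x16. A back-of-envelope sketch (link maximums only, not real-world drive throughput):

```python
#!/usr/bin/env python3
"""Theoretical PCIe 3.0 bandwidth for four x4 NVMe drives on a x16 card.

8 GT/s per lane with 128b/130b line coding means the payload is
8e9 * 128/130 bits per second per lane. These are link ceilings,
not actual drive speeds.
"""
GT_PER_LANE = 8e9      # PCIe 3.0: 8 gigatransfers per second per lane
ENCODING = 128 / 130   # 128b/130b coding: 128 payload bits per 130 sent

bytes_per_lane = GT_PER_LANE * ENCODING / 8   # bits -> bytes
per_drive = 4 * bytes_per_lane                # each drive gets x4

print(f"One x4 drive : {per_drive / 1e9:.2f} GB/s")
print(f"Four drives  : {4 * per_drive / 1e9:.2f} GB/s (the whole x16 slot)")
```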
There’s always a catch, and with Intel’s VROC there are actually multiple catches. The Asus Hyper M.2 card works perfectly fine with, say, a Samsung 960 Pro, and you could see four individual Samsung drives in RAID. If you want to use VROC to create a bootable partition, though, you can use only Intel SSDs.
Here's another catch: X299 will launch with Kaby Lake-X and Skylake-X CPUs, but Intel VROC will work only with Skylake-X.
What may be the biggest catch is this little thing right here:
[Image: the Intel VROC hardware key dongle]
That little dongle is a key that Intel will sell to consumers. Out of the box, if you have Intel drives, an Asus Hyper M.2, and a Skylake-X, you can build a RAID 0 partition. But if you want RAID 1, RAID 5, or other RAID schemes with the redundancy to protect your data, you have to buy this Intel key to enable them. If you scroll back to the first BIOS shot, you can see that “premium” mode is enabled, which means the premium RAID levels are available. How much will the key cost? No one seems to know.
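For reference, here is what those levels mean in usable space. A tiny illustrative sketch (my own numbers: n identical drives of capacity c; which levels sit behind which key tier is Intel's pricing decision, not modeled here):

```python
#!/usr/bin/env python3
"""Usable capacity for the RAID levels mentioned above.

Illustrative only: n identical drives of c_tb terabytes each.
RAID 0 stripes with no redundancy (the free tier), RAID 1 mirrors,
RAID 5 spends one drive's worth of capacity on parity.
"""

def usable_tb(level: str, n: int, c_tb: float) -> float:
    if level == "RAID 0":
        return n * c_tb          # striping: all capacity, no safety net
    if level == "RAID 1":
        return c_tb              # mirroring: one copy's worth usable
    if level == "RAID 5":
        return (n - 1) * c_tb    # one drive's capacity goes to parity
    raise ValueError(f"unknown level: {level}")


for level in ("RAID 0", "RAID 1", "RAID 5"):
    print(f"4x 1 TB drives, {level}: {usable_tb(level, 4, 1.0):.1f} TB usable")
```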
So why would Intel put all this work into a feature and then add a surcharge which could kill its adoption? Intel officials weren’t on hand to say, but the surcharge is related to the enterprise underpinnings of the Core i9 and X299 platform, where the feature is an up-sell. Intel can’t sell it to one customer and give it away to another, can they?