Question SSD RAID, JBOD, Storage Spaces or ?

Midwayman

Diamond Member
Jan 28, 2000
5,723
325
126
I have too many volumes on my PC already.
C:\ is a 512GB NVMe
D:\ is a 512GB NVMe
Z:\ is a 5TB HDD
plus several network drives and a media server PC

Now I'm running out of space frequently again and am looking at getting more SSD space, but it's already annoying and confusing to juggle this many volumes. I really don't want more, and the ones I have are too big to just toss.
  • I want to hopefully get down to no more than two logical volumes.
  • Be able to differentiate between spinning-platter space and SSD space
  • I don't really care much about parity and don't want to dedicate a ton of space to it (so mirroring is out)
  • If a drive fails, it can't take the whole array with it (so no striping), but I'm okay with the risk of losing one drive's worth of data
  • Hopefully not much of a performance penalty.
  • I should be able to expand the array over time.
What is the best solution for this?
Windows 10 looks like it supports 2 solutions
  1. Span
    1. Seems to require erasing new physical drives when you add them, which isn't ideal.
    2. Not sure if I lose everything if one drive fails.
  2. Storage Spaces
    1. Can't integrate the OS drive which is a bummer.
    2. Limited redundancy options.
    3. With a Simple space, I'm not sure whether one drive failing takes out the whole array or just that drive (toy sketch below).
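To make the "do I lose everything?" question concrete, here's a toy sketch in Python (made-up sizes; this is not how NTFS or Storage Spaces actually place data, and a real Windows spanned volume may refuse to mount at all with a dead member). It just shows why striping puts a piece of every large file on every disk, while a span/concatenate layout can leave whole files on the surviving disks:

```python
# Toy model of where file blocks land under "span" (concatenate / JBOD-style)
# vs "stripe" (RAID 0-style). Sizes are made up and this is NOT how NTFS or
# Storage Spaces actually lay data out.
DISK_SIZE = 100                                  # blocks per disk
DISKS = 3
FILES = {f"file{i}": 30 for i in range(8)}       # eight 30-block files

def span_layout(files):
    """Fill disk 0, then disk 1, ... Each file touches as few disks as possible."""
    placement, cursor = {}, 0
    for name, size in files.items():
        placement[name] = {(cursor + b) // DISK_SIZE for b in range(size)}
        cursor += size
    return placement

def stripe_layout(files):
    """Round-robin every block across all disks, so every file touches them all."""
    placement, cursor = {}, 0
    for name, size in files.items():
        placement[name] = {(cursor + b) % DISKS for b in range(size)}
        cursor += size
    return placement

def intact_files(placement, dead_disk):
    """Files with no blocks on the failed disk."""
    return [name for name, disks in placement.items() if dead_disk not in disks]

print("span,   disk 1 dies:", intact_files(span_layout(FILES), 1))
print("stripe, disk 1 dies:", intact_files(stripe_layout(FILES), 1))
```

With one dead disk, the spanned layout still has several files fully on the survivors, while the striped layout has none.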
I don't know much about raid other than I have a NAS that's running mirroring for archival stuff.
My MB says it supports a couple modes (z370)
  • 6 x SATA3 6.0 Gb/s Connectors, support RAID (RAID 0, RAID 1 and RAID 10), NCQ, AHCI and Hot Plug
  • 1 x Ultra M.2 Socket (M2_1), supports type 2230/2242/2260/2280 M.2 SATA3 6.0 Gb/s module and M.2 PCI Express module up to Gen3 x4 (32 Gb/s) (with Ryzen CPU) or Gen3 x2 (16 Gb/s) (with A-Series APU)*
  • 1 x M.2 Socket (M2_2), supports type 2230/2242/2260/2280 M.2 SATA3 6.0 Gb/s module and M.2 PCI Express module up to Gen2 x2 (10 Gb/s)*
  * Supports NVMe SSD as boot disks
  * Supports ASRock U.2 Kit

I was reading about unRAID, which sounds interesting, but it looks like you'd have to run Windows in a VM, which sounds like a performance hog.

Any good solutions?
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
  • I don't really care much about parity and don't want to dedicate a ton of space to it (so mirroring is out)
  • If a drive fails it can't take the whole array with it. (So no striping) but I'm okay with the risk of losing one drive of data

Hate to say it, but those requirements are for all intents and purposes impossible to satisfy together when using M.2 NVMe SSDs, unless you have a HEDT-class system and plenty of drives. That sort of RAID is not really relevant for home users either, outside of the occasional home lab.

I'd advise selling the two drives you already have and buying a 2TB NVMe drive. That'll be the easiest and most practical solution.
 

Midwayman

Diamond Member
Jan 28, 2000
5,723
325
126
Hate to say it, but those requirements are for all intents and purposes impossible to satisfy together when using M.2 NVMe SSDs, unless you have a HEDT-class system and plenty of drives. That sort of RAID is not really relevant for home users either, outside of the occasional home lab.

I'd advise selling the two drives you already have and buying a 2TB NVMe drive. That'll be the easiest and most practical solution.

I'm curious as to why it would be impossible. Something specific about NVMe drives?

  • RAID 5 or 6 would appear to satisfy that, but they have more protection than I really need, and if you go over the fault tolerance you lose everything.
  • unRAID has parity and can lose one drive and rebuild, but you only ever lose the data on the drives that actually die. Big downside is that it's an OS layer and requires Windows in a VM if you run it on the same machine.
  • DrivePool looks like it basically tacks drives end to end so the OS sees one drive. You can mirror specific folders or files automatically, and it can be set up with a parity system like SnapRAID if you like.
My biggest concern specific to NVMe is that parity calculations might not keep up with their write speed. However, I'm not doing a lot of high-performance writes like video editing; I mostly just want the read speed.
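For what it's worth, the parity in RAID 5 / unRAID-style single-parity setups is essentially a byte-wise XOR across the data drives, and that XOR is what has to keep up with the write stream. A minimal sketch with made-up block contents (real implementations differ, e.g. unRAID keeps parity on a dedicated drive while RAID 5 rotates it across members):

```python
# Single-parity idea in miniature: parity is the XOR of the data blocks,
# so any ONE missing block can be rebuilt from the rest.
import os
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [os.urandom(16) for _ in range(3)]   # three "data drives" worth of blocks
parity = xor_blocks(data)                   # what the array computes on every write

# Pretend drive 1 died; rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print("rebuilt block matches the lost one:", rebuilt == data[1])
```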
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
I'm curious as to why it would be impossible. Something specific about NVMe drives?

Mainstream platforms simply do not have the PCIe lanes available for more than two NVMe drives. You'd need at least three for a basic RAID 5 array, and four for RAID 6. There are a few exceptions in the AMD corner, and if you're using Intel you'll eventually be bottlenecked by the DMI link, even if there are "enough" lanes.
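Back-of-envelope arithmetic with typical figures makes the lane problem obvious; the lane counts below are assumptions for illustration, so check your own platform's specs:

```python
# Rough lane budget (assumed, typical mainstream figures).
# Each NVMe drive wants its own PCIe 3.0 x4 link to run at full speed.
LANES_PER_NVME = 4
cpu_lanes = {
    "Intel mainstream CPU": 16,        # normally all feeding the x16 GPU slot
    "AMD AM4 (Ryzen)": 16 + 4 + 4,     # GPU slot + one M.2 + chipset link
}

for drives, level in [(3, "RAID 5 minimum"), (4, "RAID 6 minimum")]:
    print(f"{level}: {drives} drives -> {drives * LANES_PER_NVME} CPU lanes needed")
for platform, lanes in cpu_lanes.items():
    print(f"{platform}: ~{lanes} CPU lanes in total")
```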
 

Midwayman

Diamond Member
Jan 28, 2000
5,723
325
126
Ahhhh. Gotcha. It would be a problem if you wanted like five NVMe drives all working at the same time, and in standard RAID that's what you get. Annoying that consumer boards have so few PCIe lanes. You can combine them with SATA SSDs, or even magnetic disks if you want, in some systems though. I think unRAID will use an NVMe drive to cache all the writes so you don't have to wait on parity.
 

Golgatha

Lifer
Jul 18, 2003
12,381
1,004
126
Mainstream platforms simply do not have the PCIe lanes available for more than two NVMe drives. You'd need at least three for a basic RAID 5 array, and four for RAID 6. There are a few exceptions in the AMD corner, and if you're using Intel you'll eventually be bottlenecked by the DMI link, even if there are "enough" lanes.

Wow, I never considered the DMI link. I was looking at an X299 system with 44 PCIe lanes, but the DMI link is "only" good for 3.93 GB/s. That link would get saturated, or mostly saturated, by a single high-end NVMe drive. Not to mention, even if you had 10Gb Ethernet in your house, that becomes a major bottleneck for NVMe-to-NVMe transfers as well; that's just crazy!
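The ~3.93 GB/s figure falls straight out of the link parameters if you assume DMI 3.0 is electrically a PCIe 3.0 x4 link (8 GT/s per lane, 128b/130b encoding); the drive speed below is an assumed ~3.5 GB/s for a fast Gen3 x4 NVMe drive:

```python
# Rough derivation of the DMI 3.0 ceiling and how much of it one drive can eat.
lanes = 4                     # DMI 3.0 ~ PCIe 3.0 x4 (assumed)
raw_gt_per_lane = 8e9         # 8 GT/s per lane
encoding = 128 / 130          # PCIe 3.0 line-coding overhead
dmi_bytes_per_s = lanes * raw_gt_per_lane * encoding / 8

nvme_seq_read = 3.5e9         # ~3.5 GB/s sequential read, fast Gen3 x4 drive (assumed)

print(f"DMI 3.0 usable bandwidth: {dmi_bytes_per_s / 1e9:.2f} GB/s")
print(f"One fast NVMe drive uses ~{nvme_seq_read / dmi_bytes_per_s:.0%} of it")
```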
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
Annoying that consumer boards have so few PCIe lanes. You can combine them with SATA SSDs, or even magnetic disks if you want, in some systems though.

SATA SSDs work fine for pure storage. They also tend to be cheaper per GB than NVMe drives. It doesn't help in this case, since you've already got two 512GB drives.

But if you're willing to use regular SATA drives in M.2 format, something like this should fit the bill:

https://www.delock.com/produkte/1140_M-2/89588/merkmale.html

It's actually just a PCIe SATA controller card, but with M.2 slots instead of regular SATA ports.

Wow, I never considered the DMI link. I was looking at an X299 system with 44 PCIe lanes, but the DMI link is "only" good for 3.93 GB/s. That link would get saturated, or mostly saturated, by a single high-end NVMe drive. Not to mention, even if you had 10Gb Ethernet in your house, that becomes a major bottleneck for NVMe-to-NVMe transfers as well; that's just crazy!

X299, and X399 on the AMD side, do not have this problem, since there are plenty of lanes available directly from the CPU. You do need bifurcation support and a PCIe x16-to-4x M.2 adapter card.

Edit: AM4 isn't that hard hit either, since it has a separate PCIe complex for one NVMe drive.