These days, with the size of SSDs, it is often a better solution to put the OSes on separate SSDs. Install each one normally so it is individually capable of booting.
Then install EasyBCD on Win10 to make a multi-boot menu. EasyBCD (the freeware version) does not care whether you point it at a partition or at another drive. It takes less than 10 minutes to install and configure it for multiple boot.
https://neosmart.net/wiki/easybcd/dual-boot/
https://neosmart.net/EasyBCD/
It is much safer, more flexible, and better-performing to deal with a few OS installations on independent SSDs.
When changes need to be made, or something goes wrong, it is far easier to deal with multiple drives than with the horror of multiple partitions on one drive.
VirtualLarry said:
I agree, Jack, but my DeskMini units have three drive spots: two 2.5" SATA drive bays, one of which requires removing the entire motherboard to mount a drive, so I'm not using that bay, and one that is easily accessible from the side. Plus, a PCI-E M.2 SSD socket on the top of the mobo.
So that effectively gives me two drive slots. I don't mind a boot menu for my "legacy" OSes.
On this, I've been thumping my chest for the last five or six hours.
This was the first time I ever tried to clone a dual-boot single-disk device, and it took me a while to find the right free software. Then I had to image the drive and restore it to change the partition sizes. Along the way I discovered an alignment anomaly and corrected it. SFC /SCANNOW turns up nothing; CHKDSK on everything is sweet. I have now purged all the red-bangs from the Event Logs, and the remaining double instance of the same EvID 219 is benign, owing to that stupid Windows cloud service. Everything sleeps and wakes properly; Win 7 and Win 10 are both tip-top, with separate program installations on their own associated volumes and a common area where file modification under one OS doesn't create a problem for the other, while the drive is still cached for a given session.
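For anyone curious what the "alignment anomaly" amounts to: it comes down to whether a partition's starting byte offset is an exact multiple of the alignment boundary (modern partitioners use 1 MiB, which also satisfies the 4 KiB pages SSDs care about). A minimal check — the offsets and the 1 MiB boundary below are illustrative values, not the actual figures from my drive:

```python
# Check whether a partition's starting offset is aligned.
# The 1 MiB boundary and the sample offsets are illustrative,
# not taken from the actual drive discussed above.

ONE_MIB = 1024 * 1024  # common alignment boundary used by modern partitioners

def is_aligned(start_offset_bytes: int, boundary: int = ONE_MIB) -> bool:
    """True if the partition start falls exactly on the boundary."""
    return start_offset_bytes % boundary == 0

# A start at sector 2048 (2048 * 512 bytes = 1 MiB) is aligned:
print(is_aligned(2048 * 512))   # True
# A legacy CHS-style start at sector 63 (63 * 512 bytes) is not:
print(is_aligned(63 * 512))     # False
```

Misaligned starts like sector 63 are typical of partitions created under XP-era tools, which is exactly the kind of thing cloning an older dual-boot disk can drag along.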
I'm going to do either a differential or an incremental image of the drive, set up a backup of important non-OS components to the server, and clean it up a bit.
See -- I think it's a personal optimization problem with constraints. You choose a motherboard. You choose a RAM kit. The motherboard only has so many SATA ports, but you'd otherwise like to keep the OSes separate. The NVMe configuration kills one pair of SATA ports either way. A PCIE x1 drive controller likely goes through the chipset on my Z170, and I don't really need the extra ~0.75 to 1.00% difference between running the graphics at x16 or x8.
As long as I keep track of what volumes are backed up on the server in terms of drive letters and volumes, I can restore everything to any non-OS drive that goes bad. And I have the drive image of the 1TB dual-boot disk. Both OSes share a data and media disk cached only to RAM for any given OS session, but I've now made them save and then prefetch caches on restart or return from hibernate. All the hardware and driver bugs are gone now; all the troublesome error and warning messages have been corrected. If there's anything left, I know what it is, I know why it is, and I know that it's benign or can be deferred.
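One low-tech way to keep that "what's backed up where" bookkeeping honest is a small manifest mapping drive letters to volume labels and backup targets, so a restore after a dead disk goes to the right place. Every letter, label, and server path here is a made-up illustration, not my actual layout:

```python
# Hypothetical backup manifest: drive letter -> (volume label, backup target).
# All names and paths below are illustrative, not an actual setup.

manifest = {
    "D:": ("Data",  r"\\server\backups\data"),
    "E:": ("Media", r"\\server\backups\media"),
}

def restore_target(drive_letter: str) -> str:
    """Look up where a failed volume's backup lives, by drive letter."""
    label, target = manifest[drive_letter]
    return f"restore '{label}' from {target} to {drive_letter}"

print(restore_target("D:"))
```

The point is only that the mapping lives somewhere other than in your head, so a restore years later doesn't depend on remembering which letter held what.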
And all my eSATA ports on the case and I/O panel are ready for in-session hot-swap. All my 2.5" drive bays and caddies are operational. This is going to be good . . .
Still, more trouble to set up than some would want, but -- it's done, it's simple enough, and it's good.