NEW DEVELOPMENTS: Seeking thoughts about single GFX-card and choice of PCIE slot

BonzaiDuck

Lifer
Jun 30, 2004
16,364
1,900
126
OK. The system is in my sig -- the Skylake. The sig will need some updates soon.

I wanted to test SSD-caching with Primo on a 250GB EVO, determined that it worked (see my verbose and prolix thread about it), and I learned the ups and downs of trying to integrate any (SATA or NVMe) SSD caching in a dual-boot setup. I'm expecting some feedback soon from another forum which may -- or may not -- resolve that shortcoming.

But because of these results, I decided -- finally -- to spring for a 1TB NVMe. I looked at all the options, compared the speeds and other factors, and agonized over the price. I was on the verge of buying the EVO for something like $480. I finally chose to shell out more than $150 extra and invest in the Pro rather than wait for prices to drop or live with the EVO. But I might have lived well with the EVO -- I've already proven that, too. I just want to buy more endurance -- maybe greater than 1.0 PBW for the Pro.

I suppose I'd been thinking about what to do with the small EVO I have, and I'll have an extra PCIE x4 expansion card as well.

I have decided not to SLI two GTX 1070s. One is plenty fast for my plans over the next few years. So right now, the Giga OC Mini is occupying PCIE-x16_1 and enjoying the full 16-lane bandwidth. And this is all PCIE 3.0, whether the graphics card runs with 8 lanes or 16.

I think it would be wise to set my NVidia overclock back to stock and then test again, only because I don't know for sure what the x8 restriction will imply for the settings. Anyone with any experience about this have an idea?

I assume that I should be able to run the NVMe x4 expansion card in that PCIE x16_2 slot, and I assume that this now leaves the graphics card with only 8 lanes. Does anyone differ?

Thanks for any brief remarks, and -- like I said about the other thread -- yeah -- verbose and prolix -- my apologies. Maybe arthritis in the hands is a good thing -- but they're still just like "Runaway Train."

Extra info as I explore this on my own. Skylake by itself only offers 16 PCIE lanes from the CPU. The Z170 chipset offers another 20, but it communicates with the CPU over a DMI link that is roughly equivalent to four extra PCIE 3.0 lanes on top of the CPU's offering.

If both NVMe cards are configured, that's 8 lanes. The GTX 1070 is another 8 lanes. I have two x1 cards in x1 slots of the system. I count 18 lanes used, but some info from other forums makes me wonder whether the graphics card will get 16 anyway, although my understanding of the performance impact might leave me happy with it using 8.

What will happen to the bandwidth available for those NVMe cards?
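For anyone who wants to check my counting, here's a rough Python sketch of the tally as I understand it. The slot-to-lane mapping is my own assumption about how the Saber Z170-S splits things up, not something I've confirmed against the manual:

# Rough tally of PCIe lane allocation -- slot assignments below are my assumptions, not verified.
CPU_LANES = 16       # Skylake CPU provides 16 PCIe 3.0 lanes
CHIPSET_LANES = 20   # Z170 PCH provides up to 20 more
DMI_EQUIVALENT = 4   # the chipset's DMI 3.0 uplink is roughly a PCIe 3.0 x4 link

devices = [
    ("GTX 1070 in PCIE x16_1", 8, "cpu"),          # drops to x8 once x16_2 is populated
    ("NVMe card in PCIE x16_2", 4, "cpu"),
    ("NVMe card in bottom x4 slot", 4, "chipset"),
    ("Marvell eSATA x1 card", 1, "chipset"),
    ("Hauppauge 2250 tuner", 1, "chipset"),
]

cpu_used = sum(n for _, n, src in devices if src == "cpu")
pch_used = sum(n for _, n, src in devices if src == "chipset")
print(f"CPU lanes used: {cpu_used} of {CPU_LANES}")
print(f"Chipset lanes used: {pch_used} of {CHIPSET_LANES}, all funneled through a DMI link worth ~x{DMI_EQUIVALENT}")

That gives the 18 lanes I counted, split 12 on the CPU and 6 behind the chipset.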
 

BonzaiDuck

Lifer
Jun 30, 2004
16,364
1,900
126
I really am looking for a second opinion about this.

The two slots usually allocated for SLI/CF share a maximum of the 16 CPU PCIE lanes.

I assume that this will degrade the graphics-card performance to x8, which -- we all know -- isn't much of a loss with PCIE-3.0.

I cannot imagine that the graphics card would run at anything other than x8 with an x4 card in the second x16 slot.

The NVMe allocated to the "third PCIE x16" slot, which maxes out at x4, goes through the chipset, with the DMI transfer to the CPU limited to the equivalent of the same four lanes.

And I would now think that two x1 devices will degrade that x4 bandwidth if they are used simultaneously with the other slots. One is an x1 Marvell SATA controller for eSATA and a hot-swap backup disk -- backups weekly. The other is a Hauppauge 2250 tuner card.

Anyone got ideas about my expectations that -- most of the time -- two NVMe PCIE devices will function at near-full bandwidth?
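To put some numbers behind the question, here's the arithmetic I'm working from: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so call it just under 1 GB/s per lane in each direction, and DMI 3.0 is treated as roughly a PCIe 3.0 x4 link. A quick sketch, ignoring protocol overhead:

# Back-of-the-envelope PCIe 3.0 bandwidth, one direction, ignoring protocol overhead.
GB_PER_LANE = 8 * 128 / 130 / 8   # 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane

def link_bw(lanes):
    return lanes * GB_PER_LANE

print(f"GPU at x16:  {link_bw(16):.1f} GB/s")
print(f"GPU at x8:   {link_bw(8):.1f} GB/s (still more than a GTX 1070 can use)")
print(f"NVMe at x4:  {link_bw(4):.1f} GB/s (about what a 960 Pro peaks at on reads)")
print(f"DMI uplink:  {link_bw(4):.1f} GB/s shared by everything hanging off the chipset")

If those numbers are right, the x8 drop costs the graphics card essentially nothing in practice, and an NVMe drive on CPU lanes doesn't even touch the DMI budget.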
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
I assume that I should be able to run the NVMe x4 expansion card in that PCIE x16_2 slot, and I assume that this now leaves the graphics card with only 8 lanes. Does anyone differ?

There shouldn't be any issues doing that. The NVMe drive will only use 4 of the available lanes of course. I have run this way for years now, haven't had any issues so far.

And I would now think that two x1 devices will degrade that x4 bandwidth if they are used simultaneously with the other slots. One is an x1 Marvell SATA controller for eSATA and a hot-swap backup disk -- backups weekly. The other is a Hauppauge 2250 tuner card.

Anyone got ideas about my expectations that -- most of the time -- two NVMe PCIE devices will function at near-full bandwidth?

The only limit is what you can put through the DMI link, but if you only run a single NVMe drive there should be plenty of bandwidth for everything. It's only multiple high-performance SSDs over the DMI link that might have issues with peak bandwidth.

Unless you're running a really massive RAID array, HDDs aren't an issue at all.
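To put a rough number on it (ballpark spec-sheet peaks, not measurements), a quick sketch of the worst case over the DMI link:

# Worst-case simultaneous demand on the DMI 3.0 uplink (~3.9 GB/s).
# Figures are ballpark spec-sheet peaks, not measurements.
DMI_BW = 3.9  # GB/s, roughly a PCIe 3.0 x4 link

peak_demand = {
    "chipset-attached NVMe (seq. read)": 3.2,
    "SATA SSD": 0.55,
    "backup HDD over eSATA": 0.15,
    "TV tuner": 0.01,
}

total = sum(peak_demand.values())
print(f"Everything at once: {total:.2f} GB/s against a ~{DMI_BW} GB/s ceiling")

In other words, you only brush up against the ceiling if a chipset-attached NVMe drive and everything else peak at exactly the same moment, which ordinary desktop use almost never does.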
 

BonzaiDuck

Lifer
Jun 30, 2004
16,364
1,900
126
There shouldn't be any issues doing that. The NVMe drive will only use 4 of the available lanes of course. I have run this way for years now, haven't had any issues so far.



The only limit is what you can put through the DMI link, but if you only run a single NVMe drive there should be plenty of bandwidth for everything. It's only multiple high-performance SSDs over the DMI link that might have issues with peak bandwidth.

Unless you're running a really massive RAID array, HDDs aren't an issue at all.

I started out with a $140 experiment to see how NVMe works as a caching drive for other SATA devices with PrimoCache. It was more or less successful, but I ran into a boot-time BSOD problem when implementing Primo for the Win 10 part of dual-boot. I was careful to define a separate partition and caching volume for each OS.

To resolve the boot-time problem, I had to pull the NVMe, drop the caching tasks, and put it back in.

Primo tech-support suggests that I did all this properly, and that two separate caching partitions -- one for each OS -- was the way to make it work. But the problem leaves us puzzled.

However -- I have been using the NVMe M.2 PCIE expansion card in the bottom "long" slot that is x4. That slot is exceptional in that it must go through the chipset and the DMI link. It could even be part of the cause of the boot-time BSOD problem -- I cannot say.

But if I can put the 1TB 960 Pro in the 2nd ("SLI") PCIE x16/[x8/x8] slot, I suspect I won't need to worry about such problems, and I can think of a better way to do the caching, insofar as I'd only need it for SATA devices purely for data storage -- and less so for SSDs.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,364
1,900
126
Looking for more insights from fellow members. NVMe M.2 PCIE has thrown some new twists into my established understanding of prior technology. But this is about my GTX 1070.

Motherboard Saber-Z170-S in the sig with everything else except the new Samsung 960 Pro 1TB. [And after I'm finished with things in general, I'll be glad I bought the Pro, because it's already showing 2 TBW for a total of ~500GB in partitions and volumes.]

I had overclocked my graphics card with Afterburner, getting more familiar than I'd wish with artifacts and NVidia driver resets -- and I can count one system reset and two BSODs during that process. I found a sweet spot using the "MHz/V" curve in Afterburner. Under all stress tests and a favorite game, the AB monitor shows the core clock hovering at 2,038MHz and the memory clock at 4,448MHz (DDR 8,896, though AB shows 8,904 under stress once the slider and "Apply" have been set for +448).

Now that my 960 Pro is functioning in the PCIE x4 slot, using chipset lanes over the DMI link to the CPU, I find after a few days of "fixing things" that the GFX OC is no longer stable. I might go into BIOS and change all the PCIE "version" auto settings to "PCIE 3.0." I had seen these settings when configuring BIOS, but would assume that "auto" defaults to PCIE 3.0 according to the device in the slot.
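Before touching the BIOS, I figure I can at least confirm what the card has actually negotiated. nvidia-smi (it installs with the driver) can report the current and maximum PCIE generation and link width; here's a small Python wrapper, assuming nvidia-smi is on the PATH -- the command works fine on its own, too:

# Check the GPU's negotiated PCIe generation and link width via nvidia-smi
# (ships with the NVIDIA driver; assumes it is on the PATH).
import subprocess

fields = "pcie.link.gen.current,pcie.link.gen.max,pcie.link.width.current,pcie.link.width.max"
result = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
gen_cur, gen_max, width_cur, width_max = [v.strip() for v in result.stdout.split(",")]
# Note: the link can idle down to a lower generation when the card isn't loaded,
# so run this under load to see the real negotiated state.
print(f"PCIe gen {gen_cur} (max {gen_max}), width x{width_cur} (max x{width_max})")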

Next, I'll do a clean re-install of the NVidia driver and software. Since I've reset the GFX card to default, there should be no worry about OC implications for installation of the driver.

Could this instability arise from adding the additional PCIE M.2 drive, even in that slot?

Has anyone had any experience with this situation personally?

Any other comments, second-opinions, recommends or advice on this?

Not a disaster, but a setback, until I can either figure this out, or go through the process of finding a new stable overclock setting for the GTX 1070.