Junior Member
May 9, 2018
I've seen many reviews of motherboards briefly point out, when a board has one or more M.2 slots, which slots use PCIe lanes from the CPU and which use lanes from the chipset. I agree that lanes which interface directly with the CPU sound better than lanes routed through the chipset: placing a storage device behind a bottleneck like the DMI x4 connection between the CPU and chipset (at least on Intel-based boards) would, at best, still mean a higher-latency connection.
What I haven't seen is a breakdown of how much performance suffers when a high-performance (or low-performance, for that matter) M.2 NVMe SSD is placed in a chipset-fed slot versus a CPU-fed slot. Is this really something to worry about, or is it negligible?


Diamond Member
May 6, 2012
The answer to that really depends on what sort of workload you're running. For regular desktop usage you will not notice any difference whatsoever, unless all you run is benchmarks; even then it is so small as to be completely negligible. It only becomes an issue if you're doing something exotic like RAID, or need multiple drives.

Tom's Hardware did an article on this issue some years back, but it's not updated:,3826-3.html

The DMI 3.0 link (really PCIe 3.0 x4, with a few extra features) on LGA1151 and newer has enough bandwidth to support a single NVMe drive just fine, but as you write there is a small but noticeable latency penalty from going through the PCH. Older AMD (pre-Zen) and Intel (pre-LGA1151) platforms only have a PCIe 2.0 x4 link (AMD A-Link Express III, Intel DMI 2.0) between the CPU and PCH/FCH*, which means everything connected to it has to share 2 GB/s (realistically ~1600 MB/s) each way. Newer NVMe drives can saturate that all by themselves.
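For reference, the link speeds quoted above fall out of a quick back-of-the-envelope calculation from the PCIe transfer rates and line encodings; the ~80% "realistic" efficiency factor for protocol overhead is an assumption, not a measured figure:

```python
# Rough per-direction bandwidth of the CPU-to-chipset links discussed above.
# Encoding overheads come from the PCIe specs; the 0.8 "realistic" factor
# for packet/protocol overhead is an assumption.

def link_bandwidth_mb_s(gt_per_s, encoding_efficiency, lanes):
    """Raw payload bandwidth in MB/s for one direction of a PCIe link."""
    bits_per_s = gt_per_s * 1e9 * encoding_efficiency * lanes
    return bits_per_s / 8 / 1e6

# DMI 2.0 / A-Link Express III: PCIe 2.0 x4, 8b/10b encoding
dmi2 = link_bandwidth_mb_s(5.0, 8 / 10, 4)     # ~2000 MB/s raw

# DMI 3.0: PCIe 3.0 x4, 128b/130b encoding
dmi3 = link_bandwidth_mb_s(8.0, 128 / 130, 4)  # ~3938 MB/s raw

print(f"DMI 2.0: {dmi2:.0f} MB/s raw, ~{dmi2 * 0.8:.0f} MB/s realistic")
print(f"DMI 3.0: {dmi3:.0f} MB/s raw, ~{dmi3 * 0.8:.0f} MB/s realistic")
```

Which is why a single PCIe 3.0 x4 NVMe drive fits behind DMI 3.0, but completely swamps the older PCIe 2.0 x4 links.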

*AMD's FM1/2/2+ sockets do have 4 general-purpose lanes directly from the CPU, but I've never seen them used that way. The AM3(+) socket would be able to provide more than enough lanes (the 990FX features 38) for multiple drives, but it's only PCIe 2.0 and an effectively obsolete platform.


Senior member
Mar 19, 2005
Keep in mind also that other devices (depending on implementation) hang off that link as well (SATA, USB, Ethernet, etc.). That can lead to bandwidth contention in some, admittedly unlikely, scenarios.


Jun 30, 2004
I didn't do any rigorous comparisons, but I have a 960 Pro NVMe connected through a PCIe x4 slot on chipset lanes, and another 960 EVO NVMe in an x8 slot. The benchies show a slight advantage for the EVO, but the Pro is fast enough through the DMI 3.0 link. The Pro might show 3,000 MB/s sustained read where you'd expect 3,500 from the drive's spec.

Jason Carmichael

Junior Member
Mar 20, 2019
This is a RAID 0 setup of Samsung 960 EVOs on Z270. Before I understood how the chipset link worked, I thought I was going to get much better performance out of this. Actually, I am getting 1/3rd the IOPS with the RAID 0 setup, about a 5% gain in read, and about a 1.7x gain in write.
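Those numbers line up with the DMI bottleneck discussed above: two striped drives behind the chipset can't deliver more than the one shared link. A toy model (an assumption, not a measurement; the per-drive and link figures are illustrative round numbers) shows why sequential reads barely improve:

```python
# Why RAID 0 behind the chipset doesn't double sequential throughput:
# both members funnel through the single DMI link. Figures are assumed
# round numbers for a 960 EVO and a DMI 3.0 link, not measurements.

def raid0_throughput(drive_mb_s, members, link_mb_s):
    """Sequential throughput: the lesser of the array's total and the link."""
    return min(drive_mb_s * members, link_mb_s)

single = raid0_throughput(3200, 1, 3500)   # one drive: drive-limited
striped = raid0_throughput(3200, 2, 3500)  # RAID 0: link-limited
print(f"single: {single} MB/s, striped: {striped} MB/s")
print(f"gain: {striped / single:.2f}x")   # only a few percent, not 2x
```

Writes can still scale better, as in your result, because a single drive's write speed sits further below the link limit than its read speed does.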


  • Attachment: 2019-03-20 (5).png (28.2 KB)