
Question: What is the bandwidth between chiplets in EPYC 7742?

t1gran

Member
Jun 15, 2014
Does anyone know the bandwidth between chiplets (via I/O die) in EPYC 7742?

And just to be clear - there are two types of connections in the EPYC 7742:

1) Infinity Fabric - connects the 8 chiplets (CCDs) to the I/O die
2) 128 lanes of PCIe 4.0 - connects the I/O die to the motherboard (the other socket, memory, etc.)

Is that correct? Or are there additional memory channels?
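For concreteness, here is my current understanding as a rough sketch (the names and structure are just my own shorthand, not AMD terminology) - please correct me if the memory or socket-to-socket parts are wrong:

```python
# My rough understanding of the EPYC 7742 (Rome) package topology.
# Names are my own shorthand, not AMD terminology.
EPYC_7742_TOPOLOGY = {
    "ccd_to_io_die": {
        "link": "Infinity Fabric on-package (IFOP)",
        "count": 8,  # one link per CCD
        "note": "CCD-to-CCD traffic also goes through the I/O die",
    },
    "io_die_to_memory": {
        "link": "DDR4 memory controllers on the I/O die",
        "channels": 8,  # up to DDR4-3200
    },
    "io_die_to_rest_of_system": {
        "link": "128 lanes of PCIe 4.0",
        "note": "in 2P systems part of these lanes runs as socket-to-socket Infinity Fabric (xGMI)",
    },
}

for name, info in EPYC_7742_TOPOLOGY.items():
    print(f"{name}: {info['link']}")
```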
 

nicalandia

Member
Jan 10, 2019
Going by that, and the fact that the CCXs now don't talk to each other directly but go through the I/O die, I would say we can start from the first-gen Infinity Fabric die-to-die performance numbers.

Die-to-die, same package: 42.6 GB/s, so Infinity Fabric 2 roughly doubles this to about 85-90 GB/s.
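Back-of-the-envelope version of that estimate (the per-link figure is the first-gen number above; the exact doubling factor is just an assumption, not an AMD-published spec):

```python
# Scale the first-gen (Naples) die-to-die figure by the assumed
# Infinity Fabric 2 link-rate increase. The 2x factor is an assumption.
naples_die_to_die_gbps = 42.6   # GB/s, die-to-die on package (IF gen 1)
if2_scaling = 2.0               # assumed roughly 2x faster links in IF2

rome_ccd_link_gbps = naples_die_to_die_gbps * if2_scaling
print(f"Estimated CCD <-> I/O die link: ~{rome_ccd_link_gbps:.0f} GB/s")
# -> ~85 GB/s, i.e. the 85-90 GB/s ballpark above
```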
 

t1gran

Member
Jun 15, 2014
Die-to-die, same package: 42.6 GB/s, so Infinity Fabric 2 roughly doubles this to about 85-90 GB/s.
I've read this review, among others, while searching for an answer to my question, but doesn't 85-90 GB/s seem too small for die-to-die bandwidth when 8-channel DDR4-3200 bandwidth (between the I/O die and the motherboard's DIMMs) is 297 GB/s?

Besides, the per-lane bandwidth doubled in PCIe 4.0 (vs PCIe 3.0), and Infinity Fabric 2 only comes with PCIe 4.0. But Infinity Fabric itself is quite a different interface - the one that connects the 8 chiplets (CCDs) to the I/O die. Is it not?
 

t1gran

Member
Jun 15, 2014
Also:
1) Nvidia NVSwitch's bandwidth between two GPUs is 300 GB/s
2) Quadro GV100 memory bandwidth is up to 870 GB/s.

How can AMD's die-to-die bandwidth be so small (85-90 GB/s)?
 

Gideon

Senior member
Nov 27, 2007
Also:
1) Nvidia NVSwitch's bandwidth between two GPUs is 300 GB/s
2) Quadro GV100 memory bandwidth is up to 870 GB/s.

How can AMD's die-to-die bandwidth be so small (85-90 GB/s)?
They don't want to blow their entire power budget on chiplet-to-chiplet communication. Their Infinity Fabric already runs at an 18 GT/s transfer rate, not the theoretical maximum of 25.6 GT/s that it reaches on the MI50/MI60 GPUs.
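To put a very rough number on the power argument: the figure that has been cited for first-gen on-package IF links is around 2 pJ per bit; whether Rome is in the same ballpark is my assumption here.

```python
# Rough power cost of keeping all CCD <-> I/O die links busy, assuming
# ~2 pJ/bit (figure cited for first-gen on-package IF links; applying it
# to Rome is an assumption) and the ~85 GB/s per-link estimate from above.
PJ_PER_BIT = 2.0
per_ccd_link_gbps = 85.0
num_links = 8

bits_per_second = per_ccd_link_gbps * 1e9 * 8 * num_links
watts = bits_per_second * PJ_PER_BIT * 1e-12
print(f"~{watts:.0f} W just for die-to-die traffic at full tilt")  # ~11 W
# That is already a noticeable chunk of a 225 W TDP; running the links at
# 25.6 GT/s would cost correspondingly more.
```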

Some discussion about the matter here:
 

moinmoin

Golden Member
Jun 1, 2017
Interesting topic.
I've read this review, among others, while searching for an answer to my question, but doesn't 85-90 GB/s seem too small for die-to-die bandwidth when 8-channel DDR4-3200 bandwidth (between the I/O die and the motherboard's DIMMs) is 297 GB/s?
Giving full 8-channel DDR4-3200 bandwidth to every single die would be insane. :)
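Quick arithmetic on the numbers already in this thread (the per-link figure is the rough estimate from above, not an official spec):

```python
# Compare the estimated per-CCD link bandwidth with the aggregate numbers
# quoted in this thread. The per-link figure is a rough estimate, not a spec.
per_ccd_link_gbps = 85.0   # ~GB/s per CCD <-> I/O die link (thread estimate)
num_ccds = 8
memory_gbps = 297.0        # 8-channel DDR4-3200 figure quoted above

aggregate_gbps = per_ccd_link_gbps * num_ccds
print(f"All CCD links combined:  ~{aggregate_gbps:.0f} GB/s")   # ~680 GB/s
print(f"Quoted memory bandwidth:  {memory_gbps:.0f} GB/s")
# The 85-90 GB/s number is per CCD; the eight links together far exceed the
# memory bandwidth, so no single CCD needs anywhere near the full 297 GB/s.
```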

Note that while Rome is UMA, each group of CCDs is technically still tied to specific memory controllers (though the latency difference is far smaller than it was on Naples).

From WikiChip:
"Due to the performance sensitivity of the on-package links, the IFOP links are over-provisioned by about a factor of two relative to DDR4 channel bandwidth for mixed read/write traffic. They are bidirectional links and a CRC is transmitted along with every cycle of data. The IFOP SerDes do four transfers per CAKE clock."
(As Gideon mentions above, the over-provisioning in IFv2 moved from ~100% to ~40%.)
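Sanity-checking that "factor of two" against the figure quoted earlier in the thread (assuming DDR4-2666, the top supported speed on Naples):

```python
# One DDR4-2666 channel is 64 bits wide, so ~21.3 GB/s; the WikiChip text
# says the IFOP links are over-provisioned by roughly 2x relative to that.
ddr4_2666_channel_gbps = 2666 * 8 / 1000   # MT/s * 8 bytes -> ~21.3 GB/s
ifop_link_gbps = ddr4_2666_channel_gbps * 2

print(f"DDR4-2666 channel: ~{ddr4_2666_channel_gbps:.1f} GB/s")
print(f"IFOP link (~2x):   ~{ifop_link_gbps:.1f} GB/s")
# ~42.7 GB/s, i.e. essentially the 42.6 GB/s die-to-die figure quoted above.
```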

Also, the IMC bandwidth utilization can be increased further from your quoted 297 GB/s by turning on NPS4 (4 NUMA domains per socket), making each 2-channel memory controller local to the two adjacent CCDs. From the same AT article:
"AMD can reach even higher numbers with the setting "number of nodes per socket" (NPS) set to 4. With 4 nodes per socket, AMD reports up to 353 GB/s. NPS4 will cause the CCX to only access the memory controllers with the lowest latency at the central IO Hub chip."

The same text also explains why even more bandwidth than this isn't necessary:
"Those numbers only matter to a small niche of carefully AVX(-256/512) optimized HPC applications. AMD claims a 45% advantage compared to the best (28-core) Intel SKUs. We have every reason to believe them but it is only relevant to a niche."
 
