The ASUS manual is particularly unclear on this point.
If I have two sockets and 4 DIMMs, the ASUS guide wants me to put two DIMMs in the "A and B" banks, and the other two DIMMs in the "E and F" banks. That works out to a 128-bit memory bus per CPU, right?
How can each socket get 52 GB/s of memory bandwidth with only 2 DIMMs per socket?
It would seem we need 8 DIMMs, not 4, to populate the full 256-bit bus, right?
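To make the arithmetic behind my confusion concrete, here is a quick sketch of the peak-bandwidth math, assuming hypothetical quad-channel DDR3-1600 (each channel is 64 bits wide, i.e. 8 bytes per transfer); the numbers are my own illustration, not from the ASUS manual:

```python
# Theoretical peak memory bandwidth = channels * bytes-per-transfer * transfers/sec
# Assumed figures: DDR3-1600 (1600e6 transfers/s), 64-bit (8-byte) channels.
def peak_bandwidth_gbs(channels, transfers_per_s=1600e6, bytes_per_channel=8):
    """Peak bandwidth in GB/s for a given number of populated channels."""
    return channels * bytes_per_channel * transfers_per_s / 1e9

print(peak_bandwidth_gbs(4))  # all four channels populated -> 51.2 GB/s
print(peak_bandwidth_gbs(2))  # only two DIMMs, two channels -> 25.6 GB/s
```

So with only two of the four channels populated per socket, I'd expect roughly half the advertised figure, which is exactly what I don't understand.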
The only explanation I can think of is that QPI does something to give CPU#2 direct access to the DIMMs attached to CPU#1. There certainly aren't traces running from socket 2 over to socket 1's memory banks. In my mind the memory bus width is fixed, like any geometry: there's a fixed number of wires.