How come there's no "SLi" for CPUs?!

ibex333

Diamond Member
Mar 26, 2005
4,090
119
106
So I was doing some research on the Ryzen Threadripper, and it looks like their top-of-the-line chip is basically two 1800Xs "glued" together into one. (I'm sure it's more complicated than that, but that's the gist of it.)

I know there are some dual-CPU boards out there, but those don't seem to be very widespread or accessible, especially in the gaming world. I wonder why that's the case.

It would make a lot of sense, at least to me, if I could combine TWO LGA 775 chips or TWO LGA 1155 chips to match, or come close to, the performance of newer chips and extend the life of an older platform. Even if doing something like this wouldn't increase the base clocks, it would at least double the number of available cores doing the processing. Surely this would improve overall performance at least somewhat?
 

BSim500

Golden Member
Jun 5, 2013
1,480
216
106
I know there are some dual-CPU boards out there, but those don't seem to be very widespread or accessible, especially in the gaming world. I wonder why that's the case.
I used to own an Abit BP6. Insert two 366MHz Celerons, OC both +50% to 550MHz, and enjoy a dual-"core" CPU years before dual-cores were invented. But that was back when overclocking meant gaining +50% for nothing in any old case, before it became an industry in itself to be milked with premium "leet overclocker" or "gamer" branding. Remember, though, at the time the Win 9x (uniprocessor-only) "branch" of Windows was prevalent, and you had to use Windows NT 3.51 / 4.0 (which wasn't too good for gaming, as DX versions lagged behind Win 9x at the time) to use the second CPU. It took until XP to make multi-core viable for consumers at the OS level, by which time we had actual multi-cores, not just multiple sockets of single cores. Dual sockets then simply got replaced by dual cores. Then process nodes got smaller, so it became easy to simply add more cores on the same socket for less cost.
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
Depending on your workload, more cores may not offer more performance anyway. Also keep in mind that dual-socket motherboards cost way more than single-socket boards, and you'll need an E-ATX case and a better PSU on top of that.
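To put a rough number on the "may not offer more performance" part: Amdahl's law caps the speedup by whatever fraction of the work is serial. A minimal sketch (the 80% parallel fraction is just an assumed example workload, not a measured figure):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the workload and n is the number of cores.
def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (1, 2, 4, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.8, cores):.2f}x")
# With only 80% of the work parallel, 16 cores tops out around 4x,
# before counting any cross-socket memory or latency penalties.
```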
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
What is every 2P+ server board ever?
QPI, HT and IF are all a thing.
It's way more complicated than two 1800Xs. The new Ryzens use Infinity Fabric to connect all their CCXs. The 2990WX is four 8-core CCXs, and it's a monster.
They stitch together Zeppelin dies.
A CCX is just a macro-block consisting of four cores + L3$.
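If you want to see those CCX boundaries on a real Zen box, each CCX shows up as its own L3 domain. A minimal sketch, assuming a Linux system exposing the standard sysfs cache topology (cache/index3 = L3; the layout can vary by kernel and CPU):

```python
# Group logical CPUs by which L3 they share; on Zen 1 / Zen+ each
# resulting group is one CCX (4 cores / 8 threads sharing an L3 slice).
from pathlib import Path

domains = set()
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    shared = cpu / "cache" / "index3" / "shared_cpu_list"
    if shared.exists():
        domains.add(shared.read_text().strip())

for i, cpus in enumerate(sorted(domains)):
    print(f"L3 domain {i}: CPUs {cpus}")
```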
 

DrMrLordX

Lifer
Apr 27, 2000
21,617
10,826
136
So I was doing some research on the Ryzen Threadripper, and it looks like their top-of-the-line chip is basically two 1800Xs "glued" together into one. (I'm sure it's more complicated than that, but that's the gist of it.)

I know there are some dual-CPU boards out there, but those don't seem to be very widespread or accessible, especially in the gaming world. I wonder why that's the case.

It would make a lot of sense, at least to me, if I could combine TWO LGA 775 chips or TWO LGA 1155 chips to match, or come close to, the performance of newer chips and extend the life of an older platform. Even if doing something like this wouldn't increase the base clocks, it would at least double the number of available cores doing the processing. Surely this would improve overall performance at least somewhat?

It all has to do with whatever interconnect you would have between the sockets.

The legendary multi-CPU Celeron systems ran on an FSB design. Correct me if I'm wrong, old-schoolers, but I'm pretty sure each individual socket had its own bus link to the chipset. The CPUs were not linked to one another, and there was no on-die or on-package L3 to help with cache coherency, so there was no real need for inter-CPU links. The downside was that if you were trying to do anything like inter-thread communication between CPUs on a system like that, the latency for something like a cache snoop had to be awful. You were basically going out to system RAM no matter what. On the plus side, you could basically use the same die in the MP systems of the day as you used in single-socket systems.

Compare this to a fairly modern MP system where individual CPUs have their own memory controllers and large L3 caches. Each socket has to have links to other sockets (for cache coherency protocols to work, at a minimum). That means more pinouts and a different design from a single-socket desktop variant. The only way to have "SLI"-like behavior would be to sell a 4P or 8P-capable die to everyone, using a package that could satisfy the pinout requirements of a 4P/8P system. Due to cost controls and segmentation, nobody wants to do that.
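As a concrete aside on what "each CPU has its own memory controller" means for software: on a modern MP box each socket typically appears as a NUMA node with its own local RAM, and touching the other node's memory costs a hop over the inter-socket link. A minimal sketch, assuming a Linux machine with the usual sysfs NUMA layout (node numbering is machine-specific, so this is illustrative only):

```python
# List the CPUs belonging to each NUMA node, then pin this process to
# node0's CPUs so its memory traffic stays on that socket's controller.
import os
from pathlib import Path

def parse_cpulist(text):
    """Expand a sysfs cpulist like '0-7,16-23' into a set of CPU ids."""
    cpus = set()
    for part in text.strip().split(","):
        lo, _, hi = part.partition("-")
        cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"))
for node in nodes:
    cpus = sorted(parse_cpulist((node / "cpulist").read_text()))
    print(f"{node.name}: CPUs {cpus}")

if nodes:  # pin to the first node's CPUs (Linux-only call)
    os.sched_setaffinity(0, parse_cpulist((nodes[0] / "cpulist").read_text()))
```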
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,346
1,525
136
This absolutely exists and is widely used in servers. The reason it isn't used for desktops is that both of the major CPU makers want to segment that feature off to the server space, where they can get a higher ASP for it.

If you could effectively string together a bunch of cheap desktop CPUs to make large systems, plenty of the "cheap crappy webhost" style operators would buy those instead of actual server chips, and the CPU makers would lose a lot of money.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,326
10,034
126
but I'm pretty sure each individual socket had its own bus link to the chipset. The CPUs were not linked to one another, and there was no on-die or on-package L3 to help with cache coherency, so there was no real need for inter-CPU links
Actually, the FSB was a shared bus between the CPUs and the chipset, IIRC. That's why you didn't need a specific "server" chipset on those dual-CPU PGA Celeron boards.

The same shared-FSB approach to "sharing" CPU cores/dies was also used on-package for the Core 2 Quad CPUs: each one is two dual-core dies on a single package, MCM-style, sharing the FSB connection to the board/chipset and using that same shared FSB to talk to each other.
 

Ichinisan

Lifer
Oct 9, 2002
28,298
1,234
136
I used to own an Abit BP6. Insert two 366MHz Celerons, OC both +50% to 550MHz, and enjoy a dual-"core" CPU years before dual-cores were invented. But that was back when overclocking meant gaining +50% for nothing in any old case, before it became an industry in itself to be milked with premium "leet overclocker" or "gamer" branding. Remember, though, at the time the Win 9x (uniprocessor-only) "branch" of Windows was prevalent, and you had to use Windows NT 3.51 / 4.0 (which wasn't too good for gaming, as DX versions lagged behind Win 9x at the time) to use the second CPU. It took until XP to make multi-core viable for consumers at the OS level, by which time we had actual multi-cores, not just multiple sockets of single cores. Dual sockets then simply got replaced by dual cores. Then process nodes got smaller, so it became easy to simply add more cores on the same socket for less cost.
Same here. The BP6 with overclocked Celerons was pretty damn amazing.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Actually, the FSB was a shared bus between the CPUs and the chipset, IIRC. That's why you didn't need a specific "server" chipset on those dual-CPU PGA Celeron boards.

The same shared-FSB approach to "sharing" CPU cores/dies was also used on-package for the Core 2 Quad CPUs: each one is two dual-core dies on a single package, MCM-style, sharing the FSB connection to the board/chipset and using that same shared FSB to talk to each other.
As far as I remember, the Core Duo and C2Q didn't have a crossbar, so all communication between the dies had to go out through the chipset first.

But to answer the main point: the idea of multi-socket desktops basically died with the X58/X79 platforms and the increase in single-CPU core counts.

Intel's segmentation policies pushed this along. But in the end a 1950X is going to be better than two 1700Xs. The expensive X399 board is still going to be cheaper than a dual-AM4 board would be. You can fit a Threadripper build in a regular ATX case; you couldn't with two AM4 sockets.

In the end there are platforms for more cores, and Threadripper is a perfect example: it's faster, better, and roughly the same price as doubling up on CPUs.
 

DrMrLordX

Lifer
Apr 27, 2000
21,617
10,826
136
Actually, the FSB was a shared bus between the CPUs and the chipset, IIRC. That's why you didn't need a specific "server" chipset on those dual-CPU PGA Celeron boards.

The same shared-FSB approach to "sharing" CPU cores/dies was also used on-package for the Core 2 Quad CPUs: each one is two dual-core dies on a single package, MCM-style, sharing the FSB connection to the board/chipset and using that same shared FSB to talk to each other.

That sounds right. There may have been some high-end server boards with multiple FSBs, but I don't remember exactly. In any case, the direct CPU-to-CPU links that later showed up with things like AMD's HyperTransport didn't exist yet.

With Opterons, the major difference between the 1xx, 2xx, and 8xx parts was the number of HT links, so obviously you had to buy the ultra-expensive 8xx CPUs for 4P and 8P systems, just to get dies with the extra HT links built in.
 

thigobr

Senior member
Sep 4, 2016
231
165
116
Actually, the AMD 760MP/MPX had two independent FSBs, one for each CPU. It was a shame it didn't have dual memory controllers...
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
Same here. It was really a sad day when Abit went out of business; they were on par with Asus boards back in the day.
At the time I was considering doing a build with that board and two Celerons, but alas, I didn't have the money for it.