I predict the return of cartridge-based CPUs (poll inside)

Page 2

YES OR NO

  • YES
    Votes: 6 (7.5%)
  • NO
    Votes: 74 (92.5%)
  • Total voters: 80

firewolfsm

Golden Member
Oct 16, 2005
1,825
2
81
#27
The system clearly works for GPUs, which are even larger and more energy-intensive. And bandwidth, with PCIe 4.0 and eventually 5.0, will start to become sufficient even for CPUs. I could see the card format being shared by both GPUs and CPUs in server farms that are going for density.

If you had, say, 64 lanes of PCIe per CPU, it could connect to the two nearest GPUs with 16 lanes each and keep 32 lanes for RAM. These three components could be interleaved across the motherboard and stick to their neighbors for most communication.
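A rough back-of-envelope comparison of that lane split, as a sketch only (per-direction raw PCIe link rates and a common DDR4-3200 speed are assumed; protocol overhead is ignored):

```python
# Back-of-envelope bandwidth comparison for the lane split above.
# Illustrative numbers only: per-direction raw link rates, ignoring
# protocol overhead; the DDR4 figure assumes a common JEDEC speed.

PCIE_GBS_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938}  # GB/s per lane

def pcie_bw(gen: str, lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    return PCIE_GBS_PER_LANE[gen] * lanes

def ddr4_bw(channels: int, mt_per_s: int = 3200) -> float:
    """Approximate DDR4 bandwidth in GB/s (8 bytes per transfer per channel)."""
    return channels * mt_per_s * 8 / 1000

if __name__ == "__main__":
    print(f"PCIe 4.0 x32: {pcie_bw('PCIe 4.0', 32):.0f} GB/s")   # ~63 GB/s
    print(f"PCIe 5.0 x32: {pcie_bw('PCIe 5.0', 32):.0f} GB/s")   # ~126 GB/s
    print(f"Dual-channel DDR4-3200: {ddr4_bw(2):.1f} GB/s")      # ~51 GB/s
```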
 

moinmoin

Senior member
Jun 1, 2017
652
173
96
#28
The system clearly works for GPUs
Which have RAM on board for a reason. While new PCIe versions multiply bandwidth, latency is usually still measured in thousands of nanoseconds, a far cry from the sub-100 ns common for accessing RAM (which is what CPUs need so many pins for).
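A quick illustration of why that latency gap matters more than raw bandwidth for a CPU: with dependent (pointer-chasing) accesses, each load has to wait for the previous one, so throughput is bounded by 1/latency. The latencies below are assumed, round figures:

```python
# Why latency, not bandwidth, is the sticking point for CPU memory
# access over a PCIe-style link. Latencies are assumed, round figures.

def dependent_loads_per_second(latency_ns: float) -> float:
    """For a chain of dependent loads (pointer chasing), each access must
    wait for the previous one, so throughput is simply 1 / latency."""
    return 1e9 / latency_ns

for name, latency in [("local DRAM (~80 ns)", 80), ("PCIe round trip (~1000 ns)", 1000)]:
    print(f"{name}: {dependent_loads_per_second(latency) / 1e6:.1f} M dependent loads/s")
```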
 
Apr 27, 2000
10,805
534
126
#29
Which have RAM on board for a reason. While new PCIe versions multiply bandwidth, latency is usually still measured in thousands of nanoseconds, a far cry from the sub-100 ns common for accessing RAM (which is what CPUs need so many pins for).
AMD used to have an "HT over PCIe" function for older server platforms. I think the only iteration that ever saw the light of day was HTX. They used to use it for InfiniBand controllers and similar.
 

Kenmitch

Diamond Member
Oct 10, 1999
7,129
53
126
#30
Which have RAM on board for a reason. While new PCIe versions multiply bandwidth, latency is usually still measured in thousands of nanoseconds, a far cry from the sub-100 ns common for accessing RAM (which is what CPUs need so many pins for).
Nothing says you couldn't put RAM slots on the back of the CPU cartridge. It would probably wind up faster, and motherboard makers would rejoice as it would cut down their end cost.
 

whm1974

Diamond Member
Jul 24, 2016
6,306
243
96
#31
Nothing says you couldn't put RAM slots on the back of the CPU cartridge. It would probably wind up faster, and motherboard makers would rejoice as it would cut down their end cost.
You would need to use SO-DIMMs to keep the cartridge at a reasonable size.
 
Oct 27, 2006
19,531
84
106
#32
If done well it could be a really cool thing that solves a number of problems. In fact, with the right design, it could even be badass for enthusiasts if you could get good bidirectional cooling going (e.g. give the CPU package a conductive shell that faces both directions, perhaps 16 cores on side 'A' and 16 cores on side 'B'), with the HSF radiating in both directions.
 

TheELF

Platinum Member
Dec 22, 2012
2,665
61
106
#33
They already did a card-based CPU with Knights Landing / Xeon Phi; they don't need to reinvent the wheel. If hell ever freezes over and this becomes attractive for the mainstream market, they already have a working model.

For now these are completely useless for normal people.
 

Veradun

Senior member
Jul 29, 2016
254
16
86
#34
Chiplet design (whatever you want to call it) is the new cartridge design.

Back then they used the cartridge design to pull cache off the die, the limiting factor being process technology. We are now facing the same problems with miniaturisation, and chiplet design is our new and better answer.
 
Oct 9, 1999
12,502
7
91
#35
Heh. I still remember putting nickels between my Athlon's cache memory and the heat spreader.
 
Oct 9, 1999
11,236
39
126
#36
Yes, just because I think that would be cool to see again. The PII/III era was back when I was assembling computers for a living; I probably installed several thousand slot CPUs in my lifetime.
 

NTMBK

Diamond Member
Nov 14, 2011
8,171
148
126
#38
That's not the host CPU on a cartridge, that's an accelerator card that happens to use CPUs. About as relevant as the old graphics cards powered by an Intel i860.

EDIT: Or the Intel Visual Compute Accelerator, that put 3 Xeon E3s on one card so that you could use their Quicksync encoders.

Or the Xeon Phi, that hosted an entire separate Linux OS on the PCIe card (even if the host was Windows!).
 
Last edited:

moonbogg

Diamond Member
Jan 8, 2011
9,744
22
126
#39
Come on man. This is happening. Totally happening. You know this.
 

Batmeat

Senior member
Feb 1, 2011
679
6
91
#40
The last time we saw cartridge-style CPUs, I bet many of the newer users on here weren't even born yet.
Remember flip chips? You hooked a socket-based CPU onto a board that you would then insert into the cartridge slot. It allowed faster CPUs in motherboards that otherwise wouldn't support them... if my memory serves me right.
 

TheELF

Platinum Member
Dec 22, 2012
2,665
61
106
#41
That's not the host CPU on a cartridge, that's an accelerator card that happens to use CPUs. About as relevant as the old graphics cards powered by an Intel i860.

Or the Xeon Phi, that hosted an entire separate Linux OS on the PCIe card (even if the host was Windows!).
What's that got to do with anything? Windows can only see so many CPUs/cores; that won't change even with cartridge CPUs. If you go for more cores than Windows can see, you will have to run some software layer for it.
 

NTMBK

Diamond Member
Nov 14, 2011
8,171
148
126
#42
What's that got to do with anything? Windows can only see so many CPUs/cores; that won't change even with cartridge CPUs. If you go for more cores than Windows can see, you will have to run some software layer for it.
My point is that these accelerator boards aren't extra cores that the OS can just schedule an arbitrary thread on; they're standalone devices that are managed by drivers and run their own software internally. The old-school cartridge CPUs, by contrast, were just the same as a socketed CPU; they just happened to have a weirdly shaped socket.

EDIT: From the Mustang-200 product description:

With a dual-CPU processor and an independent operating environment on a single PCIe card (2.0, x4),
These are standalone compute systems, which can communicate with the host over PCIe.
 
Last edited:

NTMBK

Diamond Member
Nov 14, 2011
8,171
148
126
#43
Come on man. This is happening. Totally happening. You know this.
CPUs are way too memory-bandwidth heavy; you can't get enough pins on the edge of a card. The only way the CPU would move onto a "card" is if main memory moved onto it too.

Also bear in mind that I'm talking about mainstream, high-performance CPUs. There are plenty of examples out there in embedded computing of "compute modules" that put the CPU and memory onto a daughterboard, which slots into a bigger board that is basically just an I/O expander. Take a look at COM Express, or Q7.
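For a sense of scale on the pin-count point above, here is a very rough sketch using assumed, order-of-magnitude signal counts for a DDR4 channel and the contact counts of two familiar edge connectors:

```python
# Very rough sketch of the pin-count argument: approximate signal counts
# per DDR4 channel (assumed, order-of-magnitude figures) versus the
# contacts available on familiar edge connectors.

SIGNALS_PER_DDR4_CHANNEL = 64 + 8 + 18 + 30   # data + ECC + strobes + addr/cmd (approx.)
POWER_GROUND_OVERHEAD = 2.0                    # assume roughly one power/ground pin per signal

EDGE_CONNECTORS = {
    "PCIe x16 slot": 164,          # total contacts
    "Slot 1 / SC242 cartridge": 242,
}

def edge_contacts_needed(channels: int) -> int:
    """Crude estimate of edge contacts needed to route DDR4 channels."""
    return int(channels * SIGNALS_PER_DDR4_CHANNEL * POWER_GROUND_OVERHEAD)

for ch in (2, 4, 8):
    print(f"{ch} DDR4 channels: ~{edge_contacts_needed(ch)} edge contacts needed")
for name, pins in EDGE_CONNECTORS.items():
    print(f"{name}: {pins} contacts available")
```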
 

