Question: Why don't they (or people) start making graphics cards with several old processors on them?

SaltyNuts

Platinum Member
May 1, 2001
With graphics card prices being INSANE these days, I mean LITERALLY insane, why haven't the graphics card makers, or even aftermarket people like you guys, started pulling processors off old graphics cards and putting them all together on a single graphics card to make graphics cards that are very powerful but still very cheap because the processors were pulled off decade-plus-old graphics cards? Like hell, get 20 RIVA 128 cards and pull off the processors. Sure, each was badass back in its day, but they are slow by today's standards. But... what about 20 of them soldered onto a single graphics card? Write some simple drivers to make them work in concert, and I bet it would be fast as fuk, and cheap, or hell, probably free if you accept old PC donations. Why has no one done this?
 

BFG10K

Lifer
Aug 14, 2000
RIVA 128 is a DirectX 5.0 part, so it physically cannot run anything made in the last ~25 years.

"Simple drivers", heh. Even 2x multi-GPU required millions of lines of code to maintain and still had a lot of problems. Good luck scaling that to 20 GPUs.

And where's the VRAM going to come from? How about 4K video decoding? WDDM acceleration? 32-bit color?

You'd have much better luck (and performance) using the WARP software rasterizer on a Threadripper.
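
For reference, here is what that fallback looks like in practice: a minimal sketch of creating a Direct3D 11 device backed by WARP (Microsoft's CPU-based rasterizer), assuming a plain Win32/C++ setup with the Windows SDK.

```cpp
// Minimal sketch: create a Direct3D 11 device backed by WARP, the
// CPU-based software rasterizer, instead of a hardware GPU.
// Assumes Windows SDK headers; link against d3d11.lib.
#include <d3d11.h>
#include <cstdio>

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL level;

    // D3D_DRIVER_TYPE_WARP selects the software rasterizer; it scales
    // with CPU cores, which is why a high-core-count Threadripper helps.
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
        nullptr, 0, D3D11_SDK_VERSION,
        &device, &level, &context);

    if (SUCCEEDED(hr)) {
        std::printf("WARP device created, feature level 0x%x\n",
                    static_cast<unsigned>(level));
        context->Release();
        device->Release();
    }
    return 0;
}
```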
 

VirtualLarry

No Lifer
Aug 25, 2001
Write some simple drivers to make them work in concert
Eh, that's the rub.

And technically (well, maybe, if they were DX12-spec), you could get a quad-GPU PCI-E riser adapter and four risers, plug four GPUs into the PCI-E x1 slots on your mobo, and DX12 games (if written properly to take advantage of multi-GPU) will use them automagically.
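
The "if written properly" part is the catch: with DX12 explicit multi-adapter, the game itself has to enumerate every GPU and create a device on each, then manage all cross-GPU work on its own. A rough sketch of just the enumeration step, assuming the standard DXGI/D3D12 headers:

```cpp
// Rough sketch: enumerate every GPU with DXGI and create a D3D12 device
// on each one. Under DX12 explicit multi-adapter, the game (not the
// driver) owns all the cross-GPU scheduling and copying after this.
// Assumes Windows SDK headers; link d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters() {
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices; // splitting the frame across these is entirely on the game
}
```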
 
Feb 4, 2009
Manufacturing-wise, even assuming old designs could be updated to modern components like memory and currently available board parts: how would this solve the manufacturing bottleneck? It seems like it would be less efficient; making an older chip/card design would actually slow manufacturing.
 

DeathReborn

Platinum Member
Oct 11, 2005
2,746
740
136
I keep seeing ATI Rage XL 8MB cards pop up. If they wanted to bring back, say, the 1070/1080, they'd be stuck with other component shortages, like GDDR5 being hard to get now, let alone GDDR5X and other small components. Even if they made the GPUs, they'd end up with nothing to sell due to shortages anyway.
 

NTMBK

Lifer
Nov 14, 2011
10,232
5,013
136
Even ignoring the (huge) problem of splitting tasks between that many cards, modern GPUs are massively more efficient. A GeForce 560 Ti is about 7 GFlops/Watt, and a GeForce RTX 3060 is about 55 GFlops/Watt. The power consumption would be insane.
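
To put rough numbers on that (using the GFlops/Watt figures above, and assuming an RTX 3060 board power of around 170 W):

```latex
% Power needed to match one RTX 3060 with 560 Ti-class silicon at equal
% FLOPS, from the efficiency ratio above. All figures are approximate.
\[
\frac{55\ \text{GFlops/W}}{7\ \text{GFlops/W}} \approx 7.9
\qquad\Rightarrow\qquad
P_{\text{560 Ti array}} \approx 7.9 \times 170\ \text{W} \approx 1.3\ \text{kW}
\]
```

That's a kilowatt-class space heater to match one midrange card.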
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136
started pulling processors off old graphics cards and putting them all together on a single graphics card to make graphics cards that are very powerful but still very cheap because the processors were pulled off decade-plus-old graphics cards?

This doesn't work on so many levels.

For practical physical reasons you probably won't get more than 4 GPUs working on a card. The most GPU chips old multi-GPU cards ever had was two (maybe some 3dfx prototypes escaped with more).

Who is going to create the cards? It's hard to build a serious business manufacturing a card based on old chips that you need to scour eBay for.

CF/SLI software is not that good or flexible. You can barely get a couple of modern cards working together, let alone 4+ ancient ones.

Go back far enough and you lack software support entirely.

Even if the software could make it work, memory doesn't stack across SLI/CF. So at minimum you would need 4GB cards, or more likely 8GB cards, limiting how far back you can go.

So what is the realistic expression of this idea?

You buy 2 or 3 older cards and use SLI/CF.

Let's look at Nvidia cards:

Nvidia has killed driver support for Kepler and older cards. So Maxwell is the cutoff.

980 4GB: selling around $300 on eBay.
980 Ti 6GB: selling around $400 on eBay.

Buying multiples of these suddenly doesn't look so hot...

Right now I think your best bet for AAA gaming is to become a console gamer. Just two days ago I got an email from Microsoft saying they had the XBSX in stock at MSRP...
 
Feb 4, 2009
Even ignoring the (huge) problem of splitting tasks between that many cards, modern GPUs are massively more efficient. A GeForce 560 Ti is about 7 GFlops/Watt, and a GeForce RTX 3060 is about 55 GFlops/Watt. The power consumption would be insane.

If you can make four 560 Tis, why couldn't four 1660 Tis/Supers be made instead?
 

Fallen Kell

Diamond Member
Oct 9, 1999
Because using several old chips will cost more than using a single new chip. The main issue is manufacturing capacity. Old fabs are being shut down or upgraded to the latest (or newer) tech, so there are not a lot of places left that can manufacture the older chips. It is also not cost-effective to make the old chips on new fabrication lines due to the capacity issues (i.e., why build the old design when they could build the newer design for effectively the same fab-line resources?). Add the problem that making enough of the older chips to match the performance of a single new chip would demand much more power and cooling from the computers themselves (to the point that most people could not run them, aside from boutique builders and DIYers, which is a fraction of the market), and it just doesn't make sense or help solve the real problem.

The solution to the problem is not easy. I think no amount of new fabrication will solve the underlying issue, which is crypto mining. Until crypto mining has dedicated hardware that performs better/faster and more cost-effectively than a GPU, GPUs will always be in severe shortage. Otherwise, the only other solution is for the GPU manufacturers to create a true queuing system that sells their GPUs in a way that eliminates scalping. A single queue at Nvidia's and AMD's level would help eliminate this (i.e., you can only get into the queue for a single card, one at a time, and once your name gets to the top of the list, you get a voucher to pass to the retailer to purchase a card of that class/category). But we know that will never happen...
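
To make the "single queue, one card at a time" mechanism concrete, here is a toy sketch; all names and structure are hypothetical, and a real system would need identity verification to stop the resale problem Mopetar raises below.

```cpp
// Toy sketch of a one-voucher-per-person GPU queue. All names are
// hypothetical; this only illustrates the "single queue, one card at a
// time" mechanism described above.
#include <deque>
#include <optional>
#include <string>
#include <unordered_set>

class GpuQueue {
    std::deque<std::string> waiting;          // FIFO of buyer IDs
    std::unordered_set<std::string> enrolled; // enforces one slot per ID
public:
    // Returns false if this buyer already holds a place in line.
    bool join(const std::string& buyerId) {
        if (!enrolled.insert(buyerId).second) return false;
        waiting.push_back(buyerId);
        return true;
    }
    // Called when a card becomes available: the head of the queue gets a
    // voucher to redeem at a retailer, and may only rejoin afterwards.
    std::optional<std::string> issueVoucher() {
        if (waiting.empty()) return std::nullopt;
        std::string buyer = waiting.front();
        waiting.pop_front();
        enrolled.erase(buyer);
        return buyer;
    }
};
```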
 

Mopetar

Diamond Member
Jan 31, 2011
Maybe Intel kept a tight update cycle, but if you look at the financials from TSMC you would see that a substantial part of their revenue is derived from nodes a decade or more old at this point.

The reason is that the older nodes don't have nearly the same density as modern processes, even with Moore's law slowing down. A 980 die isn't any more efficient than it ever was, and there's no support for modern APIs or things like ray tracing.

You can't even use a queue system for sales. I'll gladly sign up for a GPU I don't need but can sell for a tidy profit if the money is worth my time.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
This doesn't work on so many levels.

For practical physical reasons you probably won't get more than 4 GPUs working on a card. The most GPU chips old multi-GPU cards ever had was two (maybe some 3dfx prototypes escaped with more).

Voodoo 2 used three chips (one pixel unit and two texture units), each with its own memory. But it acted as a single GPU to the PC/Mac it was installed in.

But even if you could get twenty(!) Riva 128s to work together, they would still be unusably slow for even 10-year-old games.
 
Aug 16, 2021
A better idea would be to make something like a GTX 1650 on older, bigger lithography, and then replace the VRAM with DDR4 chips, but use more chips than before, so you get a wider bus and thus more bandwidth. And then use the PCB of a higher-tier card like the RTX 3060 with leftover coolers from the 1660 and 1660 Super. Or maybe even go bonkers and put 4 SO-DIMM slots on the card instead of VRAM to keep costs low, so that users could configure it for their needs.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
SLI is broken... it doesn't work as well as it did back in the Voodoo 2 days, and with each passing generation it only got worse.
This is why Nvidia and AMD stopped supporting multi-GPU scaling for gaming.

But we had these cards... in fact I had 2 of them from each generation.

ATI had the 4870X2... it was basically 2 dies on a single card, connected together with a PLX chip.
[photo attachment]

You can see where the 2 dies are here...
[photo attachment]

This allowed the system to run quadfire for the first time. Basically +1 over Nvidia's Tri-SLI...

But Nvidia also had one which was weird... it was 2 GPUs literally sandwiched together, with an internal SLI cable that tied the two PCBs together, making them one card.

Very difficult to cool, hence why it was recommended to water-cool it.

[photo attachment]

In conclusion though, SLI/Crossfire doesn't work the way we gamers hoped it would.
And it only got WORSE each generation, to the point that even Nvidia decided it was not worth the time or effort to make drivers for it.
It all came down to getting each card to sync properly and split the display output.
The problem is they never synced properly, and you would almost always see microstutter.

I would assume the same would happen now, and it would be even worse with monitors that aren't properly G-Synced/FreeSynced, because now you have both the display and the GPUs needing to sync properly.
 

bigboxes

Lifer
Apr 6, 2002
SLI is broken... it doesn't work as well as it did back in the Voodoo 2 days, and with each passing generation it only got worse. This is why Nvidia and AMD stopped supporting multi-GPU scaling for gaming. [...]

Hey, thanks for sharing. I know I like seeing my old builds. I'm sure you've had some cool ones over the years. Is this mobo inverted or is the picture inverted?
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
Hey, thanks for sharing. I know I like seeing my old builds. I'm sure you've had some cool ones over the years. Is this mobo inverted or is the picture inverted?

It's from a TJ-07.
So yeah, it's an inverted layout.
It was one of the few cases which allowed an inverted build, along with some Lian Li V-series.
 

bigboxes

Lifer
Apr 6, 2002
It's from a TJ-07.
So yeah, it's an inverted layout.
It was one of the few cases which allowed an inverted build, along with some Lian Li V-series.

I built a few inverted ones back in the day. I found a case with an air duct that moved air over the inverted board. I think it was a Silverstone. Heavy AF EATX case.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
I think it was a Silverstone. Heavy AF EATX case.

That's probably the same case shown above...
The Temjin 07 by Silverstone... aka the TJ-07.

I honestly think it's probably the best case Silverstone ever made, followed by the TJ-09 with its unique 90-degree rotated layout.
 

CP5670

Diamond Member
Jun 24, 2004
5,510
588
126
In conclusion though, SLI/Crossfire doesn't work the way we gamers hoped it would.
And it only got WORSE each generation, to the point that even Nvidia decided it was not worth the time or effort to make drivers for it.
It all came down to getting each card to sync properly and split the display output.
The problem is they never synced properly, and you would almost always see microstutter.

I would assume the same would happen now, and it would be even worse with monitors that aren't properly G-Synced/FreeSynced, because now you have both the display and the GPUs needing to sync properly.

Microstuttering was a real mess. Many games ran at a high framerate but subjectively felt much choppier than the numbers suggested. SLI/CF basically only worked properly in a few AAA games from around the time a new card came out; everything else either had this microstuttering or only a minor performance increase over one card. The only game I recall where it worked well was Doom 3 with SFR. As single cards became more powerful in recent years, it became even more niche.
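
A toy simulation of why alternate-frame rendering (AFR) feels choppy even at a high average framerate; all numbers here are invented for illustration:

```cpp
// Toy illustration of AFR microstutter: two GPUs alternate frames, but
// their completion times interleave unevenly, so the *presented* frame
// intervals alternate short/long even though average FPS looks fine.
#include <cstdio>

int main() {
    const double kFrameTimeMs = 25.0; // each GPU alone renders 40 FPS
    const double kOffsetMs = 5.0;     // GPU 1 runs 5 ms behind GPU 0

    double present[8];
    for (int frame = 0; frame < 8; ++frame) {
        int gpu = frame % 2; // AFR: even frames on GPU 0, odd on GPU 1
        present[frame] = (frame / 2 + 1) * kFrameTimeMs + gpu * kOffsetMs;
    }
    // Prints intervals alternating 5 ms / 20 ms instead of a smooth
    // 12.5 ms, which the eye reads as stutter despite ~80 FPS average.
    for (int frame = 1; frame < 8; ++frame)
        std::printf("frame %d interval: %.1f ms\n",
                    frame, present[frame] - present[frame - 1]);
    return 0;
}
```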
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
How will you reliably pull the GPU off a really old card? The Riva 128 has hundreds of ball-grid-array pins. Not as complex as modern GPUs, but from the standpoint of reusing it, it makes no sense.

And "soldering 20 on one card": you need to route all those connections coming from each and every one of the 20 chips. That means paying an engineer to do all the layout work, and sending it to a PCB manufacturing house to build.

And you likely need some sort of multiplexer so the computer sees it all as one card. Never mind creating a modern Windows driver!

By the way, the far more advanced Riva TNT2 gets 2000-3000 in 3DMark01. Intel's much-criticized GMA 900 gets 9000 points.

Riva 128 - 1 pixel pipeline, 100MHz = 100 MTexels/s
Riva TNT2 Pro - 2 pixel pipelines, 143MHz = 286 MTexels/s
Intel GMA 900 - 4 pixel pipelines, 333MHz = 1.3 GTexels/s
Intel Iris Xe G7 - 24 ROPs/48 TMUs, 1.1GHz = 53 GTexels/s (100x GMA 900 performance)
GTX 1650 - 32 ROPs/56 TMUs, 1.5GHz = 84 GTexels/s

A better idea would be to make something like a GTX 1650 on older, bigger lithography, and then replace the VRAM with DDR4 chips, but use more chips than before, so you get a wider bus and thus more bandwidth. And then use the PCB of a higher-tier card like the RTX 3060 with leftover coolers from the 1660 and 1660 Super. Or maybe even go bonkers and put 4 SO-DIMM slots on the card instead of VRAM to keep costs low, so that users could configure it for their needs.

That doesn't work in a practical sense.

Bigger lithography = higher cost and higher power consumption. And even if the costs were not higher, newer processes lower power use.

Also, the GTX 1650 doesn't have DDR4 support, meaning it would have to be redesigned. DDR4 also has a lot less bandwidth per channel, since it officially tops out at 4GT/s, while GDDR5 runs at 8GT/s. You would need quad-channel DDR4 to achieve the same bandwidth, which means significantly increased board complexity due to double the amount of traces required.
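
Rough numbers behind the quad-channel claim, using the transfer rates above and the GTX 1650's 128-bit GDDR5 bus (peak bandwidth = transfer rate × bus width / 8):

```latex
% Approximate peak-bandwidth comparison from the figures above.
\[
B_{\text{GDDR5}} = 8\ \text{GT/s} \times \frac{128\ \text{bit}}{8\ \text{bit/byte}} = 128\ \text{GB/s}
\qquad
B_{\text{DDR4}} = 4\ \text{GT/s} \times \frac{64\ \text{bit}}{8\ \text{bit/byte}} = 32\ \text{GB/s per channel}
\]
```

So it takes four 64-bit DDR4 channels (a 256-bit bus, double the traces) to match what a single 128-bit GDDR5 bus already delivers.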

Overall, it'll result in higher costs at every level. The cost of production on the GTX 1650 is amortized because it's based on a cut-down version of Turing. If you make a GTX 1650 on an older process, you'll be dedicating production capacity to that chip alone, meaning high costs. No one makes GPU boards with SO-DIMM slots, so again, increased costs.
 
Aug 16, 2021
Bigger lithography = higher cost and higher power consumption. And even if the costs were not higher, newer processes lower power use.
Chips on bigger lithography are much cheaper to make, and do we really care about power consumption during these times? I'm pretty sure those cards would sell out really fast and nobody would give a damn about their power use.

Also, the GTX 1650 doesn't have DDR4 support, meaning it would have to be redesigned. DDR4 also has a lot less bandwidth per channel, since it officially tops out at 4GT/s, while GDDR5 runs at 8GT/s. You would need quad-channel DDR4 to achieve the same bandwidth, which means significantly increased board complexity due to double the amount of traces required.
We are out of silicon, not PCBs.

Overall, it'll result in higher costs at every level. The cost of production on the GTX 1650 is amortized because it's based on a cut-down version of Turing. If you make a GTX 1650 on an older process, you'll be dedicating production capacity to that chip alone, meaning high costs.
No, most of the industry today makes chips on older processes. The GPU and CPU industry doesn't, because saving money isn't as valuable to them as having high performance; they are on the diminishing-returns side of process nodes, and capacity-constrained on top of that.


No one makes GPU boards with SO-DIMM slots, so again, increased costs.
They would probably be assembled in the same plants that assemble laptops and other computer hardware. If you don't solder VRAM, you save cash; if you solder slots instead, you spend more, but since chips are still more expensive than soldering, it should be cheap.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Microstuttering was a real mess. Many games ran at a high framerate but subjectively felt much choppier than the numbers suggested. SLI/CF basically only worked properly in a few AAA games from around the time a new card came out; everything else either had this microstuttering or only a minor performance increase over one card. The only game I recall where it worked well was Doom 3 with SFR. As single cards became more powerful in recent years, it became even more niche.

AMD did solve the micro-stuttering once they moved to using the PCIe bus to transfer the data between the cards. But by that point it was too late, as games were already moving to deferred rendering, which made SLI/CF unusable.