Why stop at Quad SLI/Xfire? Why not 5 way or 6 way SLI/Xfire?

Don Karnage

Platinum Member
Oct 11, 2011
2,865
0
0
I know I'd personally love to run three 690s just to bench. Is driver development that difficult?
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
I think it has to do with the number of PCIe lanes. Most ATX motherboards will be hard pressed to accommodate so many cards; a board would need to be XL-ATX for such a setup.
 

Dark Shroud

Golden Member
Mar 26, 2010
1,576
1
0
AMD just recently (in the last six months) updated their drivers to allow more GPUs in one system because of Bitcoin miners. I've seen people with three 6990s running in CrossFire who were having problems getting the sixth GPU to work before the update.

After that it comes down to motherboard bandwidth and driver profiles. These are very rare setups for both Nvidia and AMD.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
It basically comes down to the available PCI Express lanes. By default most motherboards have enough lanes for 2-way SLI, or quad GPU if you have dual-GPU cards. For more than that you're looking at a specialized motherboard. For quad SLI with single-GPU cards you're looking at an extremely specialized board such as the P8Z77 WS, P8Z77 Premium, or EVGA Z77 FTW with a PLX chip, which will usually cost you $350 or more. Those are some of the very, very few motherboards that support quad SLI with single-GPU cards... I shudder at the thought of how much a 6-way SLI motherboard would cost. Easily $600 or more.
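To put some rough numbers on the lane problem: the table below sketches how the 16 CPU lanes on a typical Z77/Ivy Bridge platform get divided across slots (these splits are the common configurations, my assumption, and boards vary; the PLX note reflects how the PEX 8747 fans 16 uplink lanes out to 32 downstream ones).

```python
# Assumed figures: an Ivy Bridge CPU on Z77 exposes 16 PCIe 3.0 lanes.
# Typical ways boards divide them across physical x16 slots:
splits = {
    1: "x16",
    2: "x8/x8",
    3: "x8/x4/x4",
    4: "x8/x8/x8/x8 (only via a PLX PEX 8747: 16 uplink -> 32 downstream lanes)",
}

for cards, layout in splits.items():
    print(f"{cards} card(s): {layout}")
```

The PLX switch doesn't add bandwidth to the CPU, it just multiplexes the same x16 uplink, which is why those boards help with slot wiring but not with total throughput.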

In the meantime, instead of upgrading GPUs, do something even more fun: upgrade your displays. I would never touch a 1080p TN panel; 2560 panels are so much better. Three 30" 2560x1600 screens are nirvana. :p
 
Last edited:

formulav8

Diamond Member
Sep 18, 2000
7,004
523
126
PCI Express limitations and the diminishing returns after four or so GPUs. Not to mention the complexity.
 

pantsaregood

Senior member
Feb 13, 2011
993
37
91
The issue definitely isn't PCI-E bandwidth. A Radeon HD 6990 runs only marginally slower on PCI-E 2.0 x4 than it does on x16, and PCI-E 2.0 x4 provides about the same bandwidth as AGP 8x did.

If a motherboard can support two x16 PCI-E 3.0 slots, then the bandwidth should be there for plenty of PCI-E 2.0 x4 slots.
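A quick back-of-envelope check of those bandwidth claims (per-direction peak rates; encoding overheads are from the respective PCIe generations, and AGP 8x peak is taken as ~2.1 GB/s):

```python
def pcie_bw_gbs(transfer_gt_s, payload_bits, total_bits, lanes):
    """Per-direction bandwidth in GB/s: raw rate * encoding efficiency / 8 bits per byte."""
    return transfer_gt_s * (payload_bits / total_bits) / 8 * lanes

pcie2_x4  = pcie_bw_gbs(5.0, 8, 10, 4)      # PCIe 2.0: 5 GT/s, 8b/10b encoding
pcie3_x16 = pcie_bw_gbs(8.0, 128, 130, 16)  # PCIe 3.0: 8 GT/s, 128b/130b encoding
agp_8x    = 2.133                           # AGP 8x peak: 32-bit bus at 533 MT/s

print(pcie2_x4)   # 2.0 GB/s -- indeed right around AGP 8x territory
print(pcie3_x16)  # ~15.75 GB/s per slot
```

So two PCIe 3.0 x16 slots carry roughly as much as fifteen PCIe 2.0 x4 slots would, which supports the point that raw bandwidth isn't the bottleneck.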
 

borisvodofsky

Diamond Member
Feb 12, 2010
3,606
0
0
They don't do it because of poor scaling and lack of market.

I mean, the manufacturers want your money, but they know that there's only like 3 to 4 Don Karnages out there whose computer fetish has reached the last level of Hell.

So you see, they can't possibly create products for a market of 3 - 4 people.
 

Dark Shroud

Golden Member
Mar 26, 2010
1,576
1
0
Seriously, I remember people who had two 5970s in CrossFire complaining about various issues that weren't getting fixed just because it was such a rare setup.

AMD didn't really start paying attention until Bitcoin miners started buying 5970s and 6990s by the boatload and running 5+ cards in one system.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Frame syncing is already a huge problem. Just look at the problems with two cards.

Then imagine six cards. An utter nightmare.
 

Don Karnage

Platinum Member
Oct 11, 2011
2,865
0
0
Seriously, I remember people who had two 5970s in CrossFire complaining about various issues that weren't getting fixed just because it was such a rare setup.

AMD didn't really start paying attention until Bitcoin miners started buying 5970s and 6990s by the boatload and running 5+ cards in one system.

AMD CrossFire drivers have always been horrible though. :whistle:
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
AMD CrossFire drivers have always been horrible though. :whistle:


Did you start this thread just so you could say that somewhere?

I had two 5870s and had zero problems. I got into it when it was fairly mature, but nonetheless, I have nothing but good things to say about my experience.

To answer your question in the OP, as others have said, I think it comes down to a lack of market. The vast majority of the market doesn't use even four GPUs; even SLI/CF is probably a pretty small percentage. So how many resources will AMD/Nvidia devote to making what is likely a very small minority, and a very vocal one at that (enthusiasts), happy with their potentially pretty tricky six-way GPU problems? My guess is one of them weighed what was 'easy enough' to do and still worth doing, and the other had to match that to keep up.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
AMD CrossFire drivers have always been horrible though. :whistle:

I understand some people had issues, but CrossFire was problem-free for me on 7970s. I know it's the popular thing to hate on AMD these days, but anyway, I went to Nvidia because, like you, I always love having the latest and greatest nerd toys. I enjoyed CF 7970s here while I had them.
 
Last edited:

sandorski

No Lifer
Oct 10, 1999
70,858
6,394
126
Diminishing returns. Tri-Fire/3-way SLI is only beginning to make sense. Quad makes no sense at all, beyond bragging rights.
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
The real reason is that multi-GPU actually sucks. It baffles me how so many people can't see microstutter; I can't friggin' stand it. I have/had just about every conceivable multi-GPU configuration, and it's something I've always been able to pick out.

Powerful single GPUs or bust.
 

pcm81

Senior member
Mar 11, 2011
598
16
81
1. Compute tasks don't use SLI/CF. In SLI/CF, one large task is implicitly parallelized by the driver across 2+ GPUs. In compute tasks, each GPU runs its own process.
2. CF/SLI has diminishing returns because of limited bandwidth and per-iteration overhead. When you double the number of cores you essentially halve the time it takes to run one iteration of the algorithm, but the synchronization delay between the cores stays the same. This means you reach a point where the actual computation takes less time than synchronizing the execution queue. Currently there are no data sets in the graphics world that can keep more than three GPUs busy, hence a 4-GPU setup like mine is overkill for graphics. In scientific computing, however, each GPU runs its own task, so there is no shortage of data; the only limit is the PCIe bus, and even that isn't a problem for scientific computing, since most of the data is stored on the cards anyway.

The excess of computing power relative to data-set size in graphics can also be seen in low-resolution, low-AA benchmarks: 2+ GPU setups score close to single-GPU setups, but they pull away at high resolutions / high AA settings. In those extreme cases there is enough data to feed the multiple GPUs and keep each iteration significantly longer than the GPU synchronization overhead.
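The diminishing-returns argument above can be sketched with a toy model (the numbers are made up for illustration, not measurements): the per-frame compute divides across GPUs, while a fixed synchronization cost is paid every iteration.

```python
def frame_time_ms(work_ms, n_gpus, sync_ms):
    # compute splits across GPUs; the synchronization overhead does not
    return work_ms / n_gpus + sync_ms

SYNC_MS = 2.0  # assumed fixed sync cost per frame

for n in (1, 2, 3, 4, 6):
    small = frame_time_ms(10.0, n, SYNC_MS)  # low res / low AA
    large = frame_time_ms(40.0, n, SYNC_MS)  # high res / high AA
    print(f"{n} GPUs: small workload {small:5.2f} ms, large workload {large:5.2f} ms")
```

With the small workload, going from one to four GPUs only cuts frame time from 12 ms to 4.5 ms (a 2.7x speedup, not 4x), while the large workload still drops from 42 ms to 12 ms (3.5x), which matches the observation that multi-GPU setups only pull away at high resolution/AA.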
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
Yes, SLI isn't needed, but you can run a process on each GPU.
This is why it's not uncommon to use an adapter so that seven dual-slot cards can be connected to the system. This would obviously be useless for gaming.

In this arrangement one could use seven 590 cards in a single board for 14 simultaneous tasks. Water cooling highly recommended, of course.
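A minimal sketch of that one-process-per-GPU pattern (pinning via `CUDA_VISIBLE_DEVICES` is the standard trick for NVIDIA compute; the worker command here is just a stand-in for a real compute job, and the GPU count is assumed):

```python
import os
import subprocess
import sys

NUM_GPUS = 4  # e.g. 14 for seven dual-GPU cards; assumed here

procs = []
for gpu_id in range(NUM_GPUS):
    # each worker sees exactly one GPU, so no SLI/CrossFire is involved
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    worker = [sys.executable, "-c",
              "import os; print('worker pinned to GPU', os.environ['CUDA_VISIBLE_DEVICES'])"]
    procs.append(subprocess.Popen(worker, env=env))

exit_codes = [p.wait() for p in procs]
```

Because each process gets its own device, the driver never has to coordinate frames between cards, which is exactly why compute farms scale past the SLI/CrossFire limits.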
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Yes, SLI isn't needed, but you can run a process on each GPU.
This is why it's not uncommon to use an adapter so that seven dual-slot cards can be connected to the system. This would obviously be useless for gaming.

In this arrangement one could use seven 590 cards in a single board for 14 simultaneous tasks. Water cooling highly recommended, of course.


I imagine that would have to be quite the power supply for seven current high end dual-GPU cards.
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
I imagine that would have to be quite the power supply for seven current high end dual-GPU cards.

It's easy to split them.

Say 800 W for the motherboard and peripherals and 1200 W for every three cards. For continuous loads it's best to keep the supplies loaded at the peak of their efficiency curve; it saves energy and the supplies will last longer.
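Tallying that split for the seven-card example (these are just the post's own numbers, added up):

```python
import math

cards = 7
base_w = 800         # motherboard and peripherals
per_group_w = 1200   # one 1200 W supply per group of three cards

supplies = math.ceil(cards / 3)            # supplies needed for the GPUs
total_w = base_w + supplies * per_group_w  # overall capacity
print(f"{supplies} GPU supplies, {total_w} W total capacity")
```

So seven dual-GPU cards would sit behind three 1200 W units plus the 800 W base supply, about 4.4 kW of capacity overall.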
 

borisvodofsky

Diamond Member
Feb 12, 2010
3,606
0
0
The real reason is that multi-GPU actually sucks. It baffles me how so many people can't see microstutter; I can't friggin' stand it. I have/had just about every conceivable multi-GPU configuration, and it's something I've always been able to pick out.

Powerful single GPUs or bust.

There are two schools of thought on microstutter: competitive play vs. visual experience.

In competitive play, all that matters is maximum frames per second.

SLI + minimum settings + vsync off can generate some crazy frame rates in modern games; 200-300 fps is easily possible.

At those frame rates gameplay is extremely fluid, which gives the player a competitive edge. Microstutter isn't a problem when you have vsync off, because frames are jumping around anyway; the point is it's refreshing fast enough that you'd pwn your enemies. ^_^

As for the visual experience camp, they do what they do: look at pretty pictures. So obviously they need SLI just to hit 60. Not much of a choice.
 

OVerLoRDI

Diamond Member
Jan 22, 2006
5,490
4
81
Yes, SLI isn't needed, but you can run a process on each GPU.
This is why it's not uncommon to use an adapter so that seven dual-slot cards can be connected to the system. This would obviously be useless for gaming.

In this arrangement one could use seven 590 cards in a single board for 14 simultaneous tasks. Water cooling highly recommended, of course.

What OS would such a system run? I was under the impression that Linux maxed out at 8 GPUs.