
Seven way 580GTX SLI

bbs lm-r

Senior member
Jan 25, 2011
301
0
0
I thought 4-way was the most possible (because of software limitations, maybe)?
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
For 3D probably but for CUDA it should allow all of 'em.
I saw 8X SLI on the P6T7 before but that was with GTX295s.
Come to think of it 12X should be possible with 590s but it requires an expensive offset jig because of the dual slot cooler design. A single slot waterblock for the 590 would fix that.
That's a lot of flops! :eek:
 

artvscommerce

Golden Member
Jul 27, 2010
1,143
17
81
Would it still be in SLI mode if you were using multiple cards for CUDA? I was under the impression that when using multiple cards for CUDA you would disable SLI.
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
Would it still be in SLI mode if you were using multiple cards for CUDA? I was under the impression that when using multiple cards for CUDA you would disable SLI.
It works with it enabled; however, it would probably work with it disabled too. I'm sure one of the DC guys could chime in with some information.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
998
126
Even if it is technically possible, my wallet dictates that 7 x $700 in video cards is in fact not possible. ;)
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
I don't think it's feasible to water cool 7 of them for 24/7 load. Each card puts out ~300W of heat, so that's 2100W of heat to deal with.

If you somehow manage to parallelize all 7 cards into one loop, the flow rate will be terrible. If you serialize them, the water will be boiling hot by the time it reaches the last card. I don't think you'd have space for more than one loop because the cards will be packed tightly together.
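A quick sanity check on the serial-loop temperature rise, assuming a hypothetical ~300W per card and a fairly typical 4 L/min loop flow rate (both assumptions, not measured figures):

```python
# Temperature rise of water across a single serial loop carrying all
# seven cards, assuming ~300 W per card and a 4 L/min pump flow rate.
WATTS_PER_CARD = 300
NUM_CARDS = 7
SPECIFIC_HEAT_WATER = 4186      # J/(kg*K)
FLOW_LPM = 4.0                  # liters per minute (assumed loop flow)

total_heat = WATTS_PER_CARD * NUM_CARDS          # 2100 W
mass_flow = FLOW_LPM / 60.0                      # kg/s (1 L of water ~ 1 kg)
delta_t = total_heat / (mass_flow * SPECIFIC_HEAT_WATER)
print(f"Water temperature rise across the loop: {delta_t:.1f} C")
```

At that flow rate the inlet-to-outlet rise is only about 7-8 degrees, so the bigger practical problem with a serial loop is the flow restriction of seven blocks in series, not boiling.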
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
do they even make a mobo that will do 7x pcie in at LEAST x8?
Dual NF200 is no problem. Even at 4X PCI-E it would be fine.

I don't think it's feasible to water cool 7 of them for 24/7 load. Each card puts out ~300W of heat, so that's 2100W of heat to deal with.

If you somehow manage to parallelize all 7 cards into one loop, the flow rate will be terrible. If you serialize them, the water will be boiling hot by the time it reaches the last card. I don't think you'd have space for more than one loop because the cards will be packed tightly together.
That load is a piece of cake for a central cooling supply that has megawatts of capacity at nine degrees Centigrade supply over 65 psig. If anything they will be too cold and have to be insulated. :eek:
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
Dual NF200 is no problem. Even at 4X PCI-E it would be fine.



That load is a piece of cake for a central cooling supply that has megawatts of capacity at nine degrees Centigrade supply over 65 psig. If anything they will be too cold and have to be insulated. :eek:

Oh well I was going to suggest a chiller, but I see that you're on top of that.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
I believe there are 2/3/4-way SLI bridges, but not 5/6/7-way, and the driver supports up to 4-way SLI only. You could do a setup with two 3-way SLI groups plus one card, or 4-way SLI + 3-way SLI. You'll probably need two 1kW PSUs to power them, and a normal household outlet will trip under that draw. Also, X58 and the upcoming X68 provide 32 PCIe lanes, so putting 7 video cards in a system means each card will get around 4-5 lanes, assuming the mobo BIOS supports it.
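The back-of-the-envelope numbers behind the PSU and outlet concern, assuming ~300W per GTX 580 under load and a 120V/15A household circuit (both assumed round figures):

```python
# Rough power and PCIe-lane math for a 7-card setup, assuming ~300 W
# per card under load and 32 chipset PCIe lanes shared across the slots.
NUM_CARDS = 7
WATTS_PER_CARD = 300
PCIE_LANES = 32

total_power = NUM_CARDS * WATTS_PER_CARD    # 2100 W for the GPUs alone
lanes_per_card = PCIE_LANES // NUM_CARDS    # 4 lanes each, 4 left over
outlet_amps = total_power / 120             # current on a 120 V circuit
print(total_power, lanes_per_card, round(outlet_amps, 1))
```

2100W of GPUs alone is ~17.5A at 120V, which is already past a typical 15A breaker before you add the CPU and drives.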
 

artvscommerce

Golden Member
Jul 27, 2010
1,143
17
81
I don't think it's feasible to water cool 7 of them for 24/7 load. Each card puts out ~300W of heat, so that's 2100W of heat to deal with.

If you somehow manage to parallelize all 7 cards into one loop, the flow rate will be terrible. If you serialize them, the water will be boiling hot by the time it reaches the last card. I don't think you'd have space for more than one loop because the cards will be packed tightly together.
It actually can be done quite easily with air. Tyan sells a system that packs 8 Tesla C2070s into a 4U. Those may not run quite as hot as a GTX 580, but they're pretty close, and that system was designed for a 24/7 load.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Dual NF200 is no problem. Even at 4X PCI-E it would be fine.



That load is a piece of cake for a central cooling supply that has megawatts of capacity at nine degrees Centigrade supply over 65 psig. If anything they will be too cold and have to be insulated. :eek:
I'm a little confused. You seem to have all the bases covered, what's the question here?
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
It actually can be done quite easily with air. Tyan sells a system that contains 8 Tesla C2070's in a 4U. These may not run quite as hot as a GTX 580, but they are pretty close. And this was designed for a 24/7 load.
Except that a server room has independent air conditioning, usually kept around 20 degrees Celsius. The heat produced by the servers won't raise the ambient temp, since that's exactly what the dedicated air conditioning is there for. Within each of those servers you'd probably want to mount 3x 10k RPM 120mm fans, each of which probably makes more noise than every fan in a normal home system running at max.
12CM 10,000 RPM CASE FAN TURBO
Yes, it's still air cooling.
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
I'm a little confused. You seem to have all the bases covered, what's the question here?
I'm wondering if anyone has done/tried it and what the outcome was.
I'd hate to have to uncrate fourteen cards for nothing. ;)
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
0
0
wow. i just looked it up. 7 x PCIe 2.0 (6 @ x8, 1 @ x16) on an Asus p6t7 ws
The slightly less wallet-crushing Asus Revolution model in my sig does 6 slots at 8x. You have to use single slot cards or custom coolers, though. The main draw for these boards is that they can do tri-SLI and give each card x16. I was running three GTX 260s in it until recently.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
0
0
Fairly sure that the Nvidia drivers can only handle up to 3 cards (not GPUs) in SLI due to the bridge designs. Quad and hex SLI are then just 2 or 3 dual-GPU cards in SLI.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
I'm wondering if anyone has done/tried it and what the outcome was.
I'd hate to have to uncrate fourteen cards for nothing. ;)
...and I'm just trying to get you to DO IT because I want to know how it turns out. ():)
 

OVerLoRDI

Diamond Member
Jan 22, 2006
5,491
3
81
Whatever operating system you are using should see them as 7 separate devices, so as long as whatever job you're running can be split into 7 CUDA "threads" or instances, you should be fine.

SLI isn't a factor.
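A minimal sketch of the "7 separate devices" idea: each GPU gets its own slice of the job by device index. The `chunk_for_device` helper is hypothetical, and the device work itself is a stand-in for real CUDA code (where each worker would call `cudaSetDevice(idx)` before launching kernels):

```python
# Split one job of TOTAL_ITEMS work items across NUM_DEVICES GPUs by
# device index. No SLI involved; each device just processes its own range.
NUM_DEVICES = 7
TOTAL_ITEMS = 1000

def chunk_for_device(idx, num_devices=NUM_DEVICES, total=TOTAL_ITEMS):
    """Return the half-open [start, end) range device idx should process."""
    per_dev = total // num_devices
    start = idx * per_dev
    # The last device picks up the remainder so every item is covered.
    end = total if idx == num_devices - 1 else start + per_dev
    return start, end

chunks = [chunk_for_device(i) for i in range(NUM_DEVICES)]
```

In a real run you'd spawn one process or thread per device, and each one would select its GPU and work through its own chunk independently.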
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
Whatever operating system you are using should see them as 7 separate devices, so as long as whatever job you are using these for can be split into 7 cuda "threads" or instances you should be fine.

SLI isn't a factor.
That's what I'm hoping for! :)
 
