Can you disable SLI on DualGPU card?

Timmah!

Golden Member
Jul 24, 2010
1,571
933
136
As the title says, can anyone please answer this question?

I am thinking about selling my 2 GTX 460s and replacing them with a GTX 590; however, the possibility of switching SLI off is a critical feature for me, as I do not want it for gaming but for producing archviz with Octane Render. Even with my current setup I need SLI to be off... and I am not sure if it can be done with a dual-GPU card.

Second question: will a 750W PSU be enough for that card? The exact model is a Seasonic S12-D 750W Silver. I do not plan to overclock the card, and my CPU is a 980X OCed to 3.78 GHz with Turbo (I do not plan to overclock that one any higher either).

Thanks in advance for your answers
 

pcm81

Senior member
Mar 11, 2011
598
16
81
I would not bet on your PSU surviving the load.

Don't know the answer about SLI, but I would be surprised if you can. Also, I am surprised that your apps can't access the GPUs separately. Usually CF or SLI is a graphics feature rather than a limiting factor for utilization of the individual GPUs... but clearly it depends on your setup.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Yes, you can. You can disable SLI and make the 2nd GPU work as a dedicated PhysX GPU.

Edit: disregard my earlier PSU advice; I was thinking of the GTX 460 2Win card for a moment. Sorry.

If you don't o/c the 590 at all, the 750W should be enough. Figure 420 watts for the 590 at stock, or thereabouts.

Here is a screenshot from the NV control panel of a GTX 295, the previous dual-GPU card.

Maybe Keys, who owns the 590, can chime in on exactly what you want to do; there is an area in the NV panel concerning the CUDA GPUs.
[Screenshot: 295.jpg]
 

Timmah!

Huh, I am a bit confused now. You posted a pic with 2 GTX 295s, and one of them is set for PhysX. Or am I reading it wrong?
I want only one 590, and I need to disable the internal SLI bridge between its 2 cores.
 

notty22

Maybe this thread will help :
http://forums.nvidia.com/index.php?showtopic=196269
It sounds like applications can have control of the individual GPUs, but it depends on the application, especially if you want one application to use 2 at once.
A fellow in that thread is using a GTX 295 (a dual-GPU card) and a couple of 480s, with all the GPUs running CUDA apps.
 

Cuular

Senior member
Aug 2, 2001
804
18
81
Owning a 590, yes you can easily just check the box in the control panel that turns off SLI mode.

You then have the option of forcing physx to use the second gpu, or allowing the driver to determine which GPU gets used for physx.

So it is extremely easy to turn off SLI if you decide you don't want dual GPUs running.
 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
Yes, you can "disable SLI" with a dual-GPU Nvidia card. The Nvidia Control Panel just does not call it SLI when it is one physical card with two GPUs; it is labeled something like "multi-GPU acceleration" instead.
 

Ghiedo27

Senior member
Mar 9, 2011
403
0
0
If you don't mind me asking, what makes you interested in a dual gpu card when you can't use the 2nd gpu? It seems like you'd be much better off buying a gtx 580 (with its faster clocks) than a crippled 590. :confused:
 

lsv

Golden Member
Dec 18, 2009
1,610
0
71
He probably wants to save power and money, which sounds stupid considering he wants a 590 in the first place. OP, your PSU will have a hard time with a 590 anyway. Get a 580 and be happy.
 

Timmah!

Thank you all for the answers.
I want to use both GPUs, and I will, but not in SLI mode. I currently have 2 460s with SLI turned off. Octane Render still sees the second card (the one not plugged into the monitor) and I can use it for rendering. For some reason Octane requires SLI to be off (I do not know why, but when I render with both cards, the frame is divided into 2 halves and each card renders half of the final image; with SLI I suppose there might be a problem with this).
So that's why. The other thing: when I render with both cards, I can do nothing else with the computer, as the card which works as the display adapter is busy with raytracing :D I have to constantly pause and unpause the render; it's a real pain. So it's always good to have 2 GPUs, one working as the display adapter and the second for rendering... and for the final renders, just use both GPUs and cut the render time in half.
BTW the GTX 460 is really weak for this; apparently Octane can use only 224 of its 336 cores, probably because of its superscalar nature (each SM has 48 cores but only 2 warp schedulers, or something along those lines)... therefore my 2 460s are as good for this as one 470, but hotter and louder. So I have every reason to replace them; I just needed answers to those 2 questions.
Regarding the PSU, I think it will be enough... it's a Seasonic FFS, not some crappy no-name unit. The card apparently needs about 450 W at load, so 750W seems to be plenty. I am not going to have both the CPU and both GPUs at load at the same time, as Octane is a GPU-only renderer, not a hybrid CPU+GPU renderer (a la iray or Arion).
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Yes, there is a "Disable multi-GPU mode" option in the "Configure Multi-GPU, PhysX, Surround" section of the Nvidia Control Panel.

There isn't any guarantee that Octane will "see" two GPUs on one card even when SLI is disabled. It may require 1 GPU per PCIe slot. Before spending 700 clams on something, make sure it's going to work for your purposes. Do some more research. You now know that SLI can be disabled on a single card, but that's only one part of the puzzle, unless you know the answer already.

Finally, 700W seems to be the minimum power requirement for the 590. I feel you would be pushing things a bit with an o/c'd system on top of that.
 

Timmah!

Yes, there is a "Disable multi-GPU mode" option in the "Configure Multi-GPU, PhysX, Surround" section of the Nvidia Control Panel.

There isn't any guarantee that Octane will "SEE" two GPU's on one card even when SLI is disabled. It may require 1 GPU per PCI-e slot. Before spending 700 clams on something, make sure it's going to work for your purposes. Do some more research. You now know that SLI can be disabled on a single card. But that's only one part of the puzzle, unless you know the answer already.

Finally, 700W seems to be the minimum power requirement for the 590. I feel you would be pushing things a bit with an o/c'd system on top of that.

Yeah, thanks. I already posted the question on the Octane forums and am now waiting for an answer from the devs. In the meantime I found a guy over there who has a GTX 295 and can apparently run it on both cores. I suppose the GTX 295 and 590 are the same in this regard then, despite the 295 being dual-PCB.
The PSU, though, I do not want to change... hm, we will see about that. Anyway, I have to sell at least one of the current 460s first and earn some money (I have some extra work lined up) to finance this, so I have time to do more research. But I have to say, I like the idea :D It's funny: now that I finally have the money to buy a really high-end GPU, it's not for gaming, and in fact there is no game I could put it to good use in (none I would like to play, anyway).
 

sgrinavi

Diamond Member
Jul 31, 2007
4,537
0
76
Put a single 570 in there for use with Octane and keep one of your 460s as your display adapter.
 

Timmah!

Put a single 570 in there for use with octane and keep one of your 460's as your display driver.

Thanks for the suggestion; to be fair, this was my initial idea, but sadly I suffer from delusions of grandeur :D When I thought about the GTX 570, inevitably I had to think about the 580 as well, then again about the 570 with the possibility of buying a second one later and running 570 SLI... but the interior renders really take ages, so I would like the best HW available.
For the moment I like the idea of the 590, especially with the extra money coming, but in the end maybe I will be reasonable and go for the cheaper option. It really won't be much of an improvement over the 2 460s performance-wise (compared to the 590), though, and the heat/noise issue will still be there.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
BTW gtx460 is really weak for this, apparently it can use only 224 out of its 336 cores, probably cause of its superscalar nature (it does only 2 warp schedulers for every 3 SM blocks or something along the line...),
Actually it's because manufacturing isn't magic; there are defects. CPUs and GPUs are built with redundancy and the ability to disable portions that contain a defect. In addition, parts get disabled to more easily create a wide array of cards: to make the cheaper ones they take the exact same die and just disable portions of it, even if there is nothing wrong with them. Everyone does it, not just Nvidia.

therefore my 2 460s are as good for this as one 470, but hotter and louder. So i have all the reasons to replace them. just needed to answer those 2 questions.

How in the world do you figure the 470 is more powerful than your 2x 460? 2x 460 are much more powerful than one 470
http://www.anandtech.com/bench/Product/314?vs=311
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,275
46
91
Your power supply should be enough. It's a really, really good 750W unit. 65A on the +12V rails is plenty for a non-overclocked single 590. You have overhead to keep the power supply within its optimal and safe operating loads.
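That headroom claim can be sanity-checked with rough arithmetic. The wattage figures below are the estimates quoted in this thread (65A on +12V, ~450W for a stock 590); the CPU and rest-of-system numbers are ballpark assumptions, not measurements:

```python
# Rough power-budget check for a stock GTX 590 on the Seasonic S12-D 750W.
# All figures are thread estimates or ballpark assumptions, not measurements.

rail_12v_amps = 65                    # combined +12V rating of the S12-D 750W
rail_12v_watts = rail_12v_amps * 12   # 780 W available on +12V

gpu_load = 450          # worst-case stock GTX 590 draw (thread's estimate)
cpu_load = 200          # i7-980X @ 3.78 GHz under full load (assumption)
rest_of_system = 75     # motherboard, drives, fans, RAM (assumption)

total = gpu_load + cpu_load + rest_of_system
headroom = rail_12v_watts - total

print(f"+12V capacity: {rail_12v_watts} W")   # 780 W
print(f"Worst-case load: {total} W")          # 725 W
print(f"Headroom: {headroom} W")              # 55 W
```

Even this pessimistic case fits on the +12V rail, and since Octane loads the GPUs while leaving the CPU mostly idle, the real draw should sit well below it.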
 

Cuular

How in the world do you figure the 470 is more powerful than your 2x 460? 2x 460 are much more powerful than one 470

Based on his previous message, for the job he is doing the 460 doesn't get fully used, whereas the 470 probably does.

BTW gtx460 is really weak for this, apparently it can use only 224 out of its 336 cores, probably cause of its superscalar nature (it does only 2 warp schedulers for every 3 SM blocks or something along the line...), therefore my 2 460s are as good for this as one 470, but hotter and louder. So i have all the reasons to replace them. just needed to answer those 2 questions.
 

taltamir

Based on his previous message, for the job he is looking for the 460 doesn't get fully used, where the 470 probably does.

His previous statement was full of incorrect assumptions and guesses that have nothing to do with how it actually works, and it does not in any way explain why he thinks the 470 is more powerful for his workload.

I am not saying it isn't possible, I just don't see anything in his preceding statement to support it.
 

Timmah!

Actually its because manufacturing isn't magic, there are defects. CPUs and GPUs are build with redundancy and the ability to disable portions which contain a defect. In addition they disable parts because they want to more easily create a wide array of cards, to create the cheaper ones they take the exact same die and just disable portions on it, even if there is nothing wrong with them. Everyone does it, not just nvidia.



How in the world do you figure the 470 is more powerful than your 2x 460? 2x 460 are much more powerful than one 470
http://www.anandtech.com/bench/Product/314?vs=311


Nope, I know what I am talking about :) The GTX 460 has 384 CUDA cores, but only 336 are active; the last SM is deactivated, as it was probably defective... that's what you say. However, Octane does not use all of those 336 cores. There are probably more reasons; one of them was that in the beginning it was coded using CUDA 3.0, which did not support the 460's shader-block layout (48 cores per block instead of 32 on GF100/110). When CUDA 3.2 showed up, people expected the missing 1/3 of the cores to start working and basically a 1/3 performance improvement, but unfortunately it did not happen. The most likely reason is the superscalar nature of the 460, but it's only a guess; I cannot confirm this.

And yes, because of this, one 470 with 448 cores is performance-wise about the same as my 2 460s (2x224=448). There is indeed a slight advantage on the 460s' side because of their higher core frequency, but it's not enough to make up for the noise/temp issues, which I won't have with one 470. I feel sick when I think about the fact that I bought the second 460 at the same price a 470 cost at the time.
The benchmark you posted is unfortunately irrelevant. I do not care about gaming performance; obviously in games the card can use all its cores, as graphics work can perhaps be coded more efficiently around that superscalar thing than the path tracing in Octane.
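The 2x224=448 arithmetic above can be laid out explicitly. The core counts and reference shader clocks are public specs; the assumption that only 32 of each GF104 SM's 48 cores are reachable without superscalar dispatch is this thread's hypothesis, not a confirmed explanation:

```python
# Sketch of the effective-core comparison for Octane on Fermi cards.
# The 32-of-48 "usable" figure for GF104 is the thread's hypothesis.

def effective_cores(sms: int, cores_per_sm: int, usable_per_sm: int) -> int:
    """Cores per card that a kernel can actually keep busy."""
    return sms * min(cores_per_sm, usable_per_sm)

# GTX 460 (GF104): 7 active SMs x 48 cores, but supposedly only 32 per SM
# reachable when the kernel doesn't exploit superscalar dual-issue.
gtx460 = effective_cores(sms=7, cores_per_sm=48, usable_per_sm=32)    # 224

# GTX 470 (GF100): 14 SMs x 32 cores, all reachable (not superscalar).
gtx470 = effective_cores(sms=14, cores_per_sm=32, usable_per_sm=32)   # 448

assert 2 * gtx460 == gtx470   # two 460s match one 470 on effective cores

# Reference shader clocks (MHz) tip the balance slightly toward the pair:
pair_rate = 2 * gtx460 * 1350    # GTX 460 shader clock
single_rate = gtx470 * 1215      # GTX 470 shader clock
print(f"460 pair vs 470: {pair_rate / single_rate:.2f}x")  # ~1.11x
```

This matches the observation that the 460 pair holds only a slight clock-speed edge over a single 470 in Octane, despite being far faster in games where all 336 cores per card stay busy.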
 

taltamir

Nope, i know what i am talknig about:) GTX460 has 378 CUDA cores, but only 336 are active, the last SM block is deactivated, as it was probably defective...thats what you say. However Octane does not use all these 336 cores. There are probably more reasons, one of them was, that in the beginning it was coded using CUDA 3.0, which did not support the 460 shaderblock system (48 cores per block instead of 32 on GF100/110). When CUDA 3.2 showed up, people expected the missing 1/3 of cores to start working and basically 1/3 performance improvement, but unfortunately it did not happen. The most possible reason is the superscalar nature of 460, but its only a guess, cannot confirm this.

I see, so Octane itself is limited to an older CUDA version that is limited to fewer than the max cores on the 460? How bizarre...
But why would it not have the same limitation with the 590? The architecture didn't change... what changed is the use of medium-leakage transistors selectively replacing high-leakage ones wherever appropriate.
 

Timmah!

I see, so octane itself is limited to an older CUDA version that is limited to fewer then max cores on the 460? how bizzare...
but why would it not have the same limitation with the 590? the architecture didn't change... what changed is the use of medium leakage transistors selectively replacing high leakage ones where-ever appropriate.

It does not have this older CUDA limitation anymore; the latest version is built on CUDA 3.2.
AFAIK there is indeed an architectural difference between GF104 and GF100/110... it's explained in the 460 review right here on Anandtech as well... basically GF110 is not superscalar, and each of its SMs has 32 cores compared to 48 on GF104.

BTW, looking at the benchmark you posted, is there any way to get this message to the people who do the benchmarks/reviews here on Anandtech? I would really love to see them test GPUs for their GPGPU capabilities as well. There are a bazillion other HW sites on the internet doing exactly the same stuff as Anand: they test cards for their gaming performance, but that is sooo 2005 :D GPGPU is the future; Octane is not the only app that runs on CUDA, and people do not buy cards just for games anymore. There are thousands of people around the globe who make a living doing visualisations, and IMHO in 3 years all of them will render their stuff on GPUs. So why not start taking them seriously now and provide some info on which GPU they should buy? If I had been aware of the GPGPU limitation of the 460, I would never have bought the pair, but there was really not a single HW site on the net drawing attention to it... all of them were busy comparing FPS numbers in Call of Duty.
 

cusideabelincoln

BTW looking at the benchmark you posted, is there any way to filter this message to people who do this benchmarks/reviews here on Anandtech? I would really love to see, if they tested GPUs for their GPGPU capabilities as well. There is bazillion of other HW sites on the internet, who do exactly the same stuff as Anand, they test the cards for their gaming performance, but this is sooo 2005 GPGPU is future, the Octane is not the only app, which runs on CUDA and people do not buy cards anymore just for games. There are thousands of people around the globe, who make living from doing visualisations and IMHO in 3 years all of them will render their stuff on GPUs. So why not start taking them seriously now and provide some info for them, which GPU shoud they buy? If i was aware of the GPGPU limitation of 460, i would never bought them, but there was really no single HW site on the net to draw attention to this...all of them were busy with comparing number of FPS in Call of Duty.

They probably don't do it because the demand isn't as high. They only have limited resources, so they cover the things most people want.

Also, you were able to piece together the performance problem with GF104 by doing research; I suppose that could have been done before you made your purchase. And since these applications are so specific, covering them creates even more of a workload for websites looking to get some attention.

I see, so octane itself is limited to an older CUDA version that is limited to fewer then max cores on the 460? how bizzare...
but why would it not have the same limitation with the 590? the architecture didn't change... what changed is the use of medium leakage transistors selectively replacing high leakage ones where-ever appropriate.

You should try to understand what the OP is saying and how GF104 differs from GF110/100 before contesting him. You brought gaming benchmarks into the discussion, which had no relevance at all. The OP clearly explained the problem, with a specificity we normally don't get.
 

Timmah!

They probably don't do it because the demand isn't as high. They only have limited resources so they cover the things most people want.

Also you were able to piece together the performance problem with GF104 by doing research. I suppose this could have been done before you made your purchase. Since these applications are so specific, the specificity creates even more of a workload for websites looking to get some attention.


Well, it's understandable, and I do not expect them to test every GPGPU app out there; after all, applications like Octane require some knowledge of how to use them in the first place just to run the benchmark... but they could at least do a test with Nvidia's Design Garage demo. I suppose it works in a similar manner to Octane, Arion, iray, etc., as it is basically raytracing a car model, so it could tell the difference between the architectures.

Anyway, I did not mean to say I blame the HW sites for my bad choice; actually I followed the advice of the Octane devs. They recommended the 460, as it was the card with the largest RAM available, and you need a lot of RAM for Octane, as the scene has to fit inside it.
But now, after a few months and a few projects done with it, I see that speed matters more to me personally than RAM capacity, as I do mostly smaller-scale scenes which do not require so much RAM. But who could have known that at the time.
 

taltamir

You should try to understand what the OP is saying and how GF104 differs from GF110/100 before contesting him. You brought gaming benchmarks into the discussion which didn't have any relevancy at all. The OP clearly explained the problem, with specificity, which we normally don't get.

1. As far as I understand, they don't differ the way he describes. And if I did not "confront" people whose knowledge differs from my own, I would never:
a. Help someone learn when they are wrong.
b. Find out when I am wrong and learn things myself.
It's not like I am insulting him or personally attacking him; we have a disagreement, one of us is wrong... I wish to know who and why.
2. I did not bring gaming benchmarks; I linked to Anandtech Bench, which contains gaming benchmarks AND noise AND power AND GPGPU benchmarks. He wants the GPGPU.
 

Timmah!

1. as far as I understand they don't differ the way he describes. And if I did not "confront" people on knowledge which differs from my own I will never:
a. Help someone learn when they are wrong.
b. Find out when I am wrong and learn things myself.
Its not like I am insulting him or personally attacking him, we have a disagreement, one of us is wrong... I wish to know who and why.
2. I did not bring gaming benchmarks, I linked to anandtech bench which contains gaming benchmarks AND noise AND power AND GPGPU benchmarks. He wants the GPGPU

I am sorry then; I only looked at it briefly and saw just the games and the power consumption/noise at the end.
Anyway, I suppose by GPGPU benchmark you mean the CyberLink transcode test... well, that basically proves my point, as 460 SLI posts the exact same score as one 470 (40 seconds), in contrast to the gaming benchmarks, where 460 SLI is obviously about 1/3 faster.

You have to admit you were a bit "sharp" in your initial post :D but it's okay.