[Again -- not an SLI expert . . ]: slight discrepancies in core clocks

BonzaiDuck

Lifer
Jun 30, 2004
16,382
1,911
126
So . . . "We" got it all sorted out about my driver problem -- Moral of the story: Always do a "clean" install.

Now I'm flirting with AfterBurner again, in the known range I verified for a single card. I've applied "modest" settings to core and memory clocks -- between +100 and +120 for the core and +250 for the memory. +120 and +250 gives me ~ 1,440 core and 7,500 memory.

Problem is not the memory, and I'm not sure there IS a problem, but I see that GPU1 loads up to a 1,450 clock with GPU2 @ ~1,430+. The reported voltages seem about right: 1.218 for one card; 1.209 for the other.

I quickly discovered this thread at the GeForce Forum:

https://forums.geforce.com/default/...y-lower-voltage-than-the-other-driver-bug-/1/

I'd noticed the 50mV voltage discrepancy before I got my driver problem sorted out; it's gone now. But what about this small clock discrepancy?
 

wilds

Platinum Member
Oct 26, 2012
2,059
674
136
http://www.overclock.net/t/1267918/...-disable-boost-fixed-clock-speed-undervolting

I would download NvidiaInspector and create a few .bat files forcing Pstate and clockspeeds.

Pstate allows the GPU to reach certain voltages, so forcing p5/p0 and a fixed clock speed will provide the consistent performance you are looking for.

I would create a gaming .bat file, an idle one, and a reset .bat to disable fixed voltage/clocks.

It looks rather complicated, but each .bat you create is really just a single long command line.

A single GPU example:
nvidiaInspector.exe -setBaseClockOffset:0,0,135 -setMemoryClockOffset:0,0,500 -setpowertarget:0,111 -setVoltageOffset:0,0,187500 -setGpuClock:0,2,1311 -setMemoryClock:0,2,3500 -forcepstate:0,2

The 0 after each set is the GPU. Since you have 2, you'd basically repeat the whole thing again with a 1 to include GPU 2.
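As a sketch of what that two-GPU version might look like (the offsets and clocks below are placeholder values taken from the numbers in this thread, not verified settings for your cards -- check them against the overclock.net guide before running anything):

```shell
:: gaming.bat -- hypothetical two-GPU sketch, not tuned values.
:: The first index in each argument selects the GPU (0 or 1).

:: GPU 0
nvidiaInspector.exe -setBaseClockOffset:0,0,120 -setMemoryClockOffset:0,0,250 -setGpuClock:0,2,1430 -forcepstate:0,2

:: GPU 1 -- same settings, with the index changed to 1
nvidiaInspector.exe -setBaseClockOffset:1,0,120 -setMemoryClockOffset:1,0,250 -setGpuClock:1,2,1430 -forcepstate:1,2
```

A reset.bat would then force pstate 16 on both GPUs (`-forcepstate:0,16` and `-forcepstate:1,16`) to hand clock/voltage management back to the driver -- again, per the linked guide, so verify on your own setup.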

I know this works, but I bet there are other more elegant solutions for you.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Each card will boost to the clocks it needs to do its job. Because it is dynamic, as it has been since the 600 series, your cards may not match, but with frame metering, it simply doesn't matter that they are slightly different.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,382
1,911
126
Each card will boost to the clocks it needs to do its job. Because it is dynamic, as it has been since the 600 series, your cards may not match, but with frame metering, it simply doesn't matter that they are slightly different.

Not dismissing ZGR's answer -- since the GeForce Forum thread I linked deals with that and similar issues (voltage) through NVInspector. But their problem was more extreme, with one GPU running at 1.18V and the other running 0.050V higher. They didn't know if it was a BIOS bug, a driver problem, or something else.

But with regard to this matter of speed, I think you may be right.

Do you think this may derive from the selection of the Master card among the two?
 

BonzaiDuck

Lifer
Jun 30, 2004
16,382
1,911
126
Well . . . this is interestin' . . .

The discrepancy I mentioned shows up in the Kombustor stress test with the tweak made to the Kombustor 3D settings so both graphics cards are loaded.

I just ran GRID2 with the AB monitor running in the background. The core speed on both cards is 1430 under those conditions. . . . . identically . . .
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Not dismissing ZGR's answer -- since the GeForce Forum thread I linked deals with that and similar issues (voltage) through NVInspector. But their problem was more extreme, with one GPU running at 1.18V and the other running 0.050V higher. They didn't know if it was a BIOS bug, a driver problem, or something else.
Thanks to the magic of statistics, the actual operating characteristics of a GPU vary on a chip-by-chip basis. Some will require more voltage than others to operate, often as a result of variations in leakage (fun fact: chips that are operating at a lower voltage are typically leakier; you reduce the voltage to cut down on the leakage).

As a result of this, and the fact that maximum attainable clockspeeds are partly dictated by voltage (you don't want to fry your transistors), there will be minor variations in the maximum clockspeed attainable by any card. With NVIDIA cards adjusting clocks in 13MHz increments, a variation of 13-26MHz is to be expected.
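To illustrate why a one-bin gap is normal (a toy model, not NVIDIA's actual boost algorithm, and the MHz figures are made-up examples loosely based on this thread): each card independently settles on the highest 13MHz step its own silicon supports, so two cards with slightly different headroom land one or two bins apart.

```python
# Toy model: boost clocks snap to 13 MHz bins above a base clock.
BIN_MHZ = 13  # NVIDIA boost clocks step in ~13 MHz increments

def highest_bin(base_mhz: int, max_stable_mhz: int) -> int:
    """Highest 13 MHz bin at or below this card's stable limit."""
    return base_mhz + BIN_MHZ * ((max_stable_mhz - base_mhz) // BIN_MHZ)

# Hypothetical per-card limits -- one chip is slightly better silicon:
gpu1 = highest_bin(1329, 1452)
gpu2 = highest_bin(1329, 1445)
print(gpu1, gpu2, gpu1 - gpu2)  # → 1446 1433 13 (one bin apart)
```

Both cards are doing exactly what they were binned to do; frame metering smooths over the difference.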