
F@H on NVidia: How much is too much?

Insidious

Diamond Member
So we are all amazed at the PPD that comes from our NVidia cards and want to OC to get it maxed out.... Here are a couple of tips that I have found useful when trying to decide 'how much is too much'.

First of all, you have the option of OC'ing the core, the memory, and the shaders. To the best of my knowledge, only the shader clock will have any significant effect on your PPD. So when I overclock, I uncheck the (RivaTuner) box for linking the shader clock to the core clock and just leave the core and memory at their defaults. This saves quite a bit in terms of heat (and of course, power consumption).

Secondly, the NVidia GPU client is actually pretty graceful when it EUEs. Except in extreme circumstances, you won't even know it happened unless you go scrolling through that huge FAHlog.txt file... that's a genuine PITA.
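If you'd rather not scroll the log by hand, a few lines of Python can flag the failures for you. This is just a sketch: the "EARLY_UNIT_END" / "UNSTABLE_MACHINE" marker strings are my assumption about what the cores write on a failed unit, so check one of your own logged failures and adjust them to match.

```python
# Sketch: scan FAHlog.txt for failed work units instead of scrolling.
# ASSUMPTION: the cores log markers like "EARLY_UNIT_END" on failure;
# verify against your own log and edit MARKERS to match.
MARKERS = ("EARLY_UNIT_END", "UNSTABLE_MACHINE")

def count_eues(log_path):
    """Return (line_number, line) for every log line that looks like an EUE."""
    hits = []
    with open(log_path, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if any(marker in line for marker in MARKERS):
                hits.append((lineno, line.strip()))
    return hits
```

Point it at the client's FAHlog.txt and you get the line number of every failure, which also tells you roughly when each one happened.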

Soooooooo, here's a shortcut.

The GPU client actually does a good job of cleaning out its work folder when it is working properly (unlike the SMP client). If you are using FahMon to monitor your clients, just double-click on any client and it will open an Explorer window for that client's folder. Open the work folder. It is obvious which files are associated with a given 'queue slot'. If you see files for more than one slot (slots are numbered 0 thru 9), you have had EUEs!

Further, if you leave the remnants of these failed WUs in the folder, it can lead to misinterpretation the next time that slot is used (not always), and your client will become more and more unstable as more remnants pile up. So, look at the FAHlog.txt file to see which slot is currently being processed and then delete the files associated with any other slot. This will help prevent some future EUEs.
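The slot check and cleanup can be scripted too. A minimal sketch, assuming the classic client's work files embed the two-digit slot number (e.g. wudata_03.dat) and that you've already read the active slot out of FAHlog.txt yourself; it defaults to a dry run so nothing gets deleted until you're sure.

```python
# Sketch: find queue slots with leftover files in a F@H work folder and
# delete everything that isn't from the active slot.
# ASSUMPTION: work files embed a two-digit slot number
# (wudata_03.dat, wuinfo_03.dat, ...); check your own folder first.
import os
import re

SLOT_RE = re.compile(r"_(\d{2})\.")  # e.g. matches the "03" in wudata_03.dat

def slots_in_folder(work_dir):
    """Return the set of slot numbers that have files in the work folder."""
    slots = set()
    for name in os.listdir(work_dir):
        m = SLOT_RE.search(name)
        if m:
            slots.add(int(m.group(1)))
    return slots

def clean_stale_slots(work_dir, active_slot, dry_run=True):
    """Remove files for every slot except active_slot. Dry run by default."""
    removed = []
    for name in os.listdir(work_dir):
        m = SLOT_RE.search(name)
        if m and int(m.group(1)) != active_slot:
            removed.append(name)
            if not dry_run:
                os.remove(os.path.join(work_dir, name))
    return sorted(removed)
```

Stop the client first, run with dry_run=True to see what would go, then flip it to False.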

Then........... lower your clocks for that client because that's where the EUE came from (almost) every time!

-Sid

YMMV :beer:
 
Thanks Sid. No EUEs from my GTX 260 (so far) and the shaders have been up a few notches. It has been cranking out the numbers @ 7000 PPD +/-. Didn't know about the core clock though. That is great info to know. Some pretty helpful hints..... :thumbsup:

Oh btw, you could have thrown this out to us before the race started😀 :laugh:
(😉 you know I'm kidding around 😉 )
 
I'm a necro!

Sorry to res this oldie but I have a directly related question.

I've got two 8800GS/9600GSO cards running F@H. 8800GS in the second slot on P5Q Pro with e8400 and the 9600GSO in a Shuttle with Opty 165 @ 2GHz.

They are both clocked to 702/?/950 and both pass FurMark & ATITool testing with absolute stability. However, in F@H, the 9600GSO cranks out 3800 ppd and actually finishes WUs. The 8800GS keeps giving errors and restarting.

Now, from the thread above, it looks like I could back down my core & memory clocks and just push the shaders to the max? That would certainly help temps & possibly stability. Can anyone confirm that shaders are the driving force behind F@H performance?
 
What's an EUE? Execution Unit Error? You wouldn't believe what Wikipedia pops up with when you search for "EUE"!

I also wonder, is it possible to underclock the core clock while overclocking the shaders? Might save some more power and heat.
 
That's exactly my question, is it the core/RAM speed that drives F@H performance or the shaders (or all three?).

Saw in another thread that a GTX 260 ran unstable at 620/1296/950 (factory OC); they turned it down to 576/1296/1000 (stock speeds) and the EUEs quit. Thinking I will try this tonight with my 9600GSO to see what happens to ppd when the core is decreased and the shaders stay the same.

Now, if I can remember how to use nVflash...
 
Confirmation:

Yes, it's the shaders that are the predominant driver of PPD on Video Cards.

I (got over myself) and returned my video cards to stock clocks and (manual) 100% fan. Haven't had an Early Unit End since (although it is VERY normal to have one crap out every once in a while as they release new families of work units).

-Sid
 
Yup, since I've been running stock speeds, no EUEs!😛

I missed this thread...thanks for the bump!
 
So, I have a GeForce 260 with current settings of 693/1536/999. My core and shader clocks are not linked. I don't remember what the stock speeds were. How far should I decrease my core clock and how far can I push my shaders?

I'm getting no EUEs, btw.

Thanks,

Gravity
 
My best advice... use NVIDIA System Tools to find the settings that are optimal, then reduce the core clock by 20 MHz. Very fast, and it works very well for me.
 
Confirmed - core clock has very little impact on F@H ppd.

9600GSO @ 702/1712/999 cranks ~3850 ppd on WU5764 (384 point WU)
8800GS @ 612/1524/954 does 3770 ppd on same WU

I had to back down the 8800GS because it kept failing with system-unstable errors. Adjusting the core speed had virtually no impact on ppd, while the shader clock has a very strong influence. I clocked both down until it would complete WUs, then slowly cranked up the shaders until ppd was nearly back to normal. May fine-tune tomorrow, then I think I will nVflash to make it permanent.

Anyone know how to get around a card that doesn't have fan control? And no temp monitoring either... I want to crank up the fan but cannot figure out how. I think that's why this one is failing at the speeds the 9600GSO runs fine.
 
Uh, do you have EVGA Precision or RivaTuner? I have a conflict on one box with nTuneService.exe. I have to kill it each time I reboot, otherwise it tries to dominate the fan controls. The most current driver and the most current version of Precision should let you control the fan.

Stay cool,

Gravity
 