Hi, I recently jumped on the Zotac 8800GTS 512 deal at fxvideocards. I've been out of the loop for a while on overclocking the latest cards (my previous card, a 7900GS, is obviously quite different from the unified-shader architecture of the GeForce 8000 series and beyond).
Anyway, I briefly used the Nvidia Control Panel to overclock the core, memory, and shader on my card, while running the ATITool stress test to check for any errors.
While testing each individually (leaving the other two at stock settings), I reached a maximum core clock of 812MHz, memory of 1105MHz, and shader of 1963MHz, "maximum" being defined as the highest setting where I did not run across any graphical corruption (yellow dots, etc.) in the ATITool window. Granted, I didn't leave the stress test running very long at these "max" levels, so they're probably closer to the highest settings that didn't produce any errors within the first minute or so.
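In rough pseudo-code terms, the per-component search I did by hand boils down to the Python sketch below. To be clear, set_clocks() and stress_test_passes() are hypothetical placeholders for the manual steps (the Nvidia Control Panel slider plus ATITool's artifact scan), and the stock values are this card's reference clocks:

```python
# Sketch of raising one clock at a time while the other two stay at stock.
# set_clocks() and stress_test_passes() are hypothetical stand-ins for the
# overclocking tool and artifact scanner driven by hand in practice.

STOCK = {"core": 650, "memory": 970, "shader": 1625}  # 8800 GTS 512 reference clocks
STEP_MHZ = 9           # how much to raise the clock per attempt
TEST_SECONDS = 60      # ~1 minute per step, as described above

def set_clocks(core, memory, shader):
    """Hypothetical: apply the given clocks via your overclocking tool."""
    raise NotImplementedError

def stress_test_passes(seconds):
    """Hypothetical: run an artifact scan; return False on yellow dots etc."""
    raise NotImplementedError

def find_max(component):
    """Raise one clock, leaving the other two at stock, until errors appear."""
    clocks = dict(STOCK)
    best = clocks[component]
    while True:
        clocks[component] += STEP_MHZ
        set_clocks(**clocks)
        if not stress_test_passes(TEST_SECONDS):
            return best           # last error-free setting
        best = clocks[component]
```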
After reaching those "max" levels, I then overclocked all three components at the same time, with the following results (the step-down procedure is sketched after the list):
Error @ 803/1105/1963
Error @ 803/1087/1963
Error @ 794/1062/1941
Error @ 785/1044/1918
Fine @ 776/1027/1896 (no yellow dots popped up in ATITool after running for several minutes)
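For reference, the combined step-down above amounts to roughly this; it reuses the same hypothetical helpers as the earlier sketch, and the ~1% step size is just what my numbers happened to work out to:

```python
# Same hypothetical helpers as the earlier sketch:
def set_clocks(core, memory, shader):
    raise NotImplementedError

def stress_test_passes(seconds):
    raise NotImplementedError

def find_combined_stable(start=(812, 1105, 1963), step_pct=0.011):
    """Back all three clocks off together until the stress test passes."""
    core, mem, shader = start
    while True:
        set_clocks(core=core, memory=mem, shader=shader)
        if stress_test_passes(300):       # several minutes per candidate
            return core, mem, shader      # mine landed at 776/1027/1896
        # step all three down proportionally (~1% per attempt here)
        core = round(core * (1 - step_pct))
        mem = round(mem * (1 - step_pct))
        shader = round(shader * (1 - step_pct))
```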
So it appears that last setting above is my highest stable overclock for the card (stable for several minutes, anyway). Still, I imagine it would not be a very good idea to regularly run the card just below a setting that produced graphical corruption. Which brings me to several questions:
First off, I was wondering whether there is an approximate ratio I should stick to between core and shader speeds. I know a while back Nvidia's software (and probably most other overclocking tools at the time) would not let you adjust the core and shader speeds individually, but instead adjusted both to maintain a specific ratio. I thought I had read at some point that raising one significantly out of proportion to the other results in little performance gain, but that may just be my imagination.
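For concreteness, this card's reference clocks are 650MHz core / 1625MHz shader, a 1:2.5 ratio. If the old linked behavior simply scaled the shader clock from the core clock (that's an assumption on my part about how the linking worked, not something documented), it would amount to something like:

```python
# Worked example of a ratio-linked shader clock; the linking rule here is
# assumed, not a documented formula.
STOCK_CORE, STOCK_SHADER = 650, 1625   # 8800 GTS 512 reference clocks
RATIO = STOCK_SHADER / STOCK_CORE      # 2.5

def linked_shader_clock(core_mhz):
    """Shader clock a ratio-linked tool would presumably pick."""
    return round(core_mhz * RATIO)

print(linked_shader_clock(776))   # -> 1940, vs. the 1896 I settled on
```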
Beyond that, how do users here limit their max overclocks for everyday gaming use? Is there a basic rule of thumb to follow?
Anyway, hope that's all legible after a long day at work. Thanks for your help.
Edit: In case it matters for some reason, I'm running driver 177.66 on Vista 64-bit.
