
67.10 xg (beta) overclock

I've yet to overclock my Ultra, but I tried an auto overclock last night. What was strange is that after accepting the disclaimer in Coolbits, the auto overclock option never illuminated. Any suggestions other than "don't use beta modified drivers"? Additionally, if I manually overclock, should my 2D and 3D clocks match? Thanks.
 
Your 2D and 3D memory clocks will match, but the cores shouldn't. Also, I'm using the XGs as well, and Coolbits is rather buggy with the displays. For example, things will be greyed out but you can still adjust them. I'd try it again and pretty much ignore the greyed-out-ness.

Also, as far as OCing is concerned, you may want to manually OC them. Auto OC in Coolbits doesn't exactly give an accurate picture of your card's sweet spot... so to find the sweet spot, try OCing in conservative increments and running some graphically intense program to make sure there are no artifacts, BSODs, etc.
 
On my driver I have to select manual overclock, then select "Performance 3D" before the "Determine optimal frequency" button will light up. Honestly, though, I don't find this feature that great; you're better off manually bumping the core speed little by little. The reason is the tool seems to bump the core and memory speeds simultaneously until it fails, and doesn't try bumping one by itself.
 
Originally posted by: yliu
On my driver I have to select manual overclock, then select "Performance 3D" before the "Determine optimal frequency" button will light up. Honestly, though, I don't find this feature that great; you're better off manually bumping the core speed little by little. The reason is the tool seems to bump the core and memory speeds simultaneously until it fails, and doesn't try bumping one by itself.

I also feel it is not very accurate. With the automatic setting, it says my GPU core can hit 455MHz... if I attempt that, I get huge amounts of artifacts. The core does 440MHz stable.
 
Thanks. I was planning on bumping the core clock 5MHz at a time, hopefully to around 445. Then I guess I would bump the memory clock 5MHz at a time to around 1,175. Is that how y'all do it?
 
Originally posted by: Pipes
Thanks. I was planning on bumping the core clock 5MHz at a time, hopefully to around 445. Then I guess I would bump the memory clock 5MHz at a time to around 1,175. Is that how y'all do it?

I bump the core/memory to a certain amount, but nothing absurd.

I put my core at 430MHz, for example, then run 3DMark. If there are artifacts, I lower it by 5MHz until they are gone. If it runs just fine, I raise it 5MHz until they start appearing. Once I find its limit, I back it down one step (5MHz) below its highest stable setting. For example, my core has artifacts at 445MHz but does 440MHz stable, so I run 435.

For the memory, I do the same thing. I'd probably put it somewhere around 1.2GHz and do intervals of 10MHz. My memory ran 3DMark stable at 1.26GHz, but I have it at 1.2GHz for everyday settings. However, I am debating running it at 1.25GHz for everyday use.

I tend not to push things right to the borderline, but pretty close to it.
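The stepping procedure above can be sketched as a small search loop. This is just an illustration, not anything from the drivers: `passes_stress_test` is a hypothetical stand-in for "run 3DMark and check for artifacts," and the numbers are the examples from this thread.

```python
def find_stable_clock(start_mhz, step_mhz, passes_stress_test):
    """Raise the clock one step at a time until the stress test fails,
    then settle one step below the highest speed that passed."""
    clock = start_mhz
    highest_stable = None
    while passes_stress_test(clock):
        highest_stable = clock
        clock += step_mhz
    if highest_stable is None:
        return None  # even the starting clock was unstable
    # back off one step from the highest stable setting for headroom
    return highest_stable - step_mhz

# Matching the example above: artifacts at 445, stable at 440, run 435
print(find_stable_clock(430, 5, lambda mhz: mhz <= 440))  # 435
```

You would run the same loop separately for core and memory (e.g. memory starting around 1200MHz with 10MHz steps), which is exactly why bumping both at once, like the auto tool does, can't tell you which one actually failed.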
 