Memory Clock speed at half?

Aug 9, 2006
51
0
0
Alright, so I've installed the latest version of gainward's expertool, the program I was using to overclock my 7900GT.

It lists my clock speeds
625MHz Core
900MHz Memory
1566MHz Shader

The memory clock seems low- and I looked on websites where it states the memory data rate is 1800MHz. What's fishy about this?

The other thing is, I've never done a shader overclock before. My previous overclock on the 7900GT was done by keeping the core at stock, then inching the memory up until it failed, noting the last speed it was stable at. Then I'd do the same with the core clock, keeping the memory at stock. Finally, I'd subtract 20MHz from the last stable speeds and run the card at that. Would overclocking the shader work the same way?

I appreciate the help.
 

vhx

Golden Member
Jul 19, 2006
1,151
0
0
Nothing fishy about it. 900 x 2 (DDR) = 1800. That's just how the actual clock vs. effective data rate works.
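The doubling works out like this (a quick sketch using the numbers from this thread):

```python
# DDR memory transfers data on both the rising and falling clock edges,
# so the "effective" data rate quoted by spec sites is twice the actual
# memory clock reported by overclocking tools.
def effective_rate(memory_clock_mhz):
    return memory_clock_mhz * 2

print(effective_rate(900))  # 1800, matching the rate listed on websites
```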
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Uninvited Guest
The other thing is, I've never done a shader overclock before. My previous overclock on the 7900GT was done by keeping the core at stock, then inching the memory up until it failed, noting the last speed it was stable at. Then I'd do the same with the core clock, keeping the memory at stock. Finally, I'd subtract 20MHz from the last stable speeds and run the card at that. Would overclocking the shader work the same way?

Yep your memory clock is fine, the 1800 you're seeing on websites is the effective rate on 900MHz DDR. Those Qimonda chips tend to cap out between 1000-1050MHz. No need to push them higher as there's very little benefit from increasing bandwidth on G92.

As for shader clocks, if you use a program like RivaTuner it'll keep the core/shader clocks linked by default. You can unlink them by clearing a checkbox, but keeping them linked is safe. If you use a program that doesn't keep them linked, you can calculate the stock ratio and then apply it as you raise the core clock to find a "safe" range. For instance, 1566/625 ≈ 2.5, so basically run your shader clock at ~2.5x your core clock and that'll keep the same ratio. If one or the other is holding you back, you can adjust each higher or lower manually.
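The ratio math above can be sketched like this (a rough illustration; the ~2.5x ratio comes from the stock clocks in this thread):

```python
# Keep the shader clock at its stock ratio while raising the core clock.
STOCK_CORE = 625     # MHz, from the OP's readings
STOCK_SHADER = 1566  # MHz

RATIO = STOCK_SHADER / STOCK_CORE  # ~2.5

def linked_shader_clock(core_mhz):
    """Shader clock that preserves the stock core:shader ratio."""
    return round(core_mhz * RATIO)

# e.g. raising the core to 700MHz suggests a shader clock around 1754MHz
print(linked_shader_clock(700))
```

This only keeps the ratio constant; as chizow notes, you can still nudge either clock independently if one of them turns out to be the stability limit.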
 

CP5670

Diamond Member
Jun 24, 2004
5,657
760
126
The "effective" clock concept doesn't make much sense anymore, given that practically all the memory out there these days is DDR. I guess it's good for marketing departments to be able to double the numbers though, so the convention remains in use. :p
 

betasub

Platinum Member
Mar 22, 2006
2,677
0
0
Originally posted by: chizow
Yep your memory clock is fine, the 1800 you're seeing on websites is the effective rate on 900MHz DDR. Those Qimonda chips tend to cap out between 1000-1050MHz. No need to push them higher as there's very little benefit from increasing bandwidth on G92.

:confused: I thought the G92's weak spot (if you can call it that) compared to the 8800GTX was memory bandwidth. If so, isn't OCing the memory a good step?
 

error8

Diamond Member
Nov 28, 2007
3,204
0
76
There's something fishy about these Qimonda chips. On my 8800 GT, after reseating the heatsink and turning the fan speed up to 75%, I started doing some overclocking. I got the RAM stable at 2000MHz after an hour and a half of testing with ATITool. After that I ran all sorts of games to see if the video card was "healthy". I played Crysis and Unreal 3 for hours with no problems at all, but when I played Test Drive Unlimited for just 20 minutes or so, the image turned green and the computer froze. I tried it again and the same thing happened. Then I dropped the RAM about 50MHz lower and the problem went away. It could be something to do with the temperature variations in my room, I really don't know, but I will never try running the memory at 2.0GHz again.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: betasub
Originally posted by: chizow
Yep your memory clock is fine, the 1800 you're seeing on websites is the effective rate on 900MHz DDR. Those Qimonda chips tend to cap out between 1000-1050MHz. No need to push them higher as there's very little benefit from increasing bandwidth on G92.

:confused: I thought the G92's weak spot (if you can call it that) compared to the 8800GTX was memory bandwidth. If so, isn't OCing the memory a good step?

Some would have you think that, but I've yet to see a single bench, published or from personal experience (with a GTX, GTS and GT), that substantiates it. My guess is that NV went back to 256-bit for a few reasons:

1) It was cheaper, as they could cut 2x64-bit memory controllers and 8 ROPs (the two are linked: 4 ROPs feed each 64-bit memory controller).
2) 512MB/1GB configurations suit current and future needs better than 320/640 or 384/768.
3) Better core yields/process allowed them to hit higher clock speeds/performance to make up for the ROP hit.
4) There was little to no benefit from more bandwidth on this generation of parts, so it was a win-win for NV and the end-user.
5) Similar performance with "less" due to the higher core clocks leaves room for a future high-end part.
Again, none of this is "official" from NV; I've just kept a close eye on specs and relative performance between the parts, both in reviews and when I had them in hand. If you have a GT though, it's really quite simple to see just by moving the memory slider. Run some benchmarks with your GT at stock clocks, then with the RAM raised to max, then with the RAM back at stock and the core raised to max, then with both RAM and core at max, and compare results.
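That four-run test matrix could be organized like this (a sketch; the clock values are placeholders, not real stock or max-stable speeds for any particular card):

```python
# Four benchmark runs to isolate the effect of core vs. memory overclocks.
# Replace the placeholder clocks with your own stock and max-stable values.
STOCK_CORE, MAX_CORE = 600, 700   # MHz (placeholders)
STOCK_MEM, MAX_MEM = 900, 1000    # MHz (placeholders)

configs = [
    {"name": "stock",    "core": STOCK_CORE, "mem": STOCK_MEM},
    {"name": "mem max",  "core": STOCK_CORE, "mem": MAX_MEM},
    {"name": "core max", "core": MAX_CORE,   "mem": STOCK_MEM},
    {"name": "both max", "core": MAX_CORE,   "mem": MAX_MEM},
]

for cfg in configs:
    print(f"{cfg['name']:>8}: core {cfg['core']}MHz, mem {cfg['mem']}MHz")
    # ...set these clocks, run your benchmark, and record the result...
```

Comparing the "mem max" run against "stock" isolates the bandwidth gain; comparing "core max" against "stock" isolates the core gain.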