
Theoretical: Core or Memory?

SpeedZealot369

Platinum Member
Theoretical question: what yields more performance, overclocking the core or the memory? Let's say you have a 7800GT clocked at 500/500. Now what would give you better results, 600/500 or 500/600? (GDDR3 memory)

Opinions, facts, let's hear 'em
 
Depends on whether you are fillrate/shader or bandwidth-limited.

Generally speaking, core is probably going to do more for you on newer cards, especially if you are on a card with very fast memory to begin with. On an older card with slower memory (or a 128-bit memory interface), you're more likely to end up bandwidth-limited.
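To make "fillrate- vs. bandwidth-limited" concrete, here's a rough back-of-the-envelope sketch in Python. The 16-ROP count and 256-bit bus for the 7800GT are assumptions, and real-world performance rarely reaches these theoretical peaks:

```python
# Rough theoretical peaks for a 7800GT-class card at 500/500.
# GDDR3 is double data rate, so a 500 MHz memory clock moves
# data at an effective 1000 MHz.

def pixel_fillrate_mpix(core_mhz, rops):
    """Theoretical pixel fillrate in megapixels/s (core clock x ROPs)."""
    return core_mhz * rops

def bandwidth_gbs(mem_mhz, bus_bits, ddr=True):
    """Theoretical memory bandwidth in GB/s."""
    effective_mhz = mem_mhz * (2 if ddr else 1)
    return effective_mhz * (bus_bits / 8) / 1000

print(pixel_fillrate_mpix(500, 16))  # 8000 Mpix/s at 500 MHz core
print(bandwidth_gbs(500, 256))       # 32.0 GB/s at 500 MHz GDDR3
```

Raising one clock only helps until the other side of that ratio becomes the limit.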
 
It depends on where your bottleneck is located. I have developed an OpenGL application that draws millions of triangles. At home on my 6600GT, it is all vertex limited. I can o/c my CPU by 50% from 1.8 GHz to 2.7 GHz and see absolutely no improvement. I can o/c my GPU memory and see absolutely no improvement. But o/c my GPU core, and the framerate scales linearly. This is just one case. Each application is different. Even if you remove one bottleneck, another will pop up.
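The experiment described above boils down to a tiny sketch: overclock one component, measure the framerate, and see what fraction of the clock gain actually showed up as an fps gain. The numbers below are hypothetical:

```python
def scaling_efficiency(fps_base, fps_oc, clk_base, clk_oc):
    """Fraction of a clock increase that showed up as extra fps.
    ~1.0 means that clock is the bottleneck; ~0.0 means something else is."""
    fps_gain = fps_oc / fps_base - 1
    clk_gain = clk_oc / clk_base - 1
    return fps_gain / clk_gain

# Hypothetical run: core OC from 500 to 600 MHz lifted fps from
# 60 to 71 -- nearly linear scaling, so the app is core-bound.
print(round(scaling_efficiency(60, 71, 500, 600), 2))  # 0.92
```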
 
It depends on the game and the settings. For example, AA puts increased stress on the video memory. Shader effects and high resolutions put more stress on the GPU itself.
 
Which one increases the 3DMark05 score more, core or memory? That's what I'd like to know too. Good question, btw.
 
I noticed increased scores from both mem and core OC, but I think the core would make a bigger difference because of the heavy shader load.
 
Again, it's not one particular card, it's just a general question.
Although, with my 6800gt, would I be better off doing 450/1100 or 400/1200?

*edit*

I mean 400/1100 or 450/1000
 
Originally posted by: SpeedZealot369
Again, it's not one particular card, it's just a general question.
Although, with my 6800gt, would I be better off doing 450/1100 or 400/1200?

*edit*

I mean 400/1100 or 450/1000

Well, the problem is that each card has its own strengths and weaknesses. With a 6600GT, a memory OC will be better than a core OC.
On a 6800GT, I'd say it's balanced. Maybe find a compromise, though I've heard a memory OC will benefit slightly more at higher resolutions, while for pure image quality (like, say, you want to turn on soft shadows), a core clock is better.
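For what it's worth, a quick way to weigh the two options from the post above is to compare the relative gain each gives over stock clocks. The 350/1000 stock figures for a 6800GT are an assumption here:

```python
# Hypothetical comparison of the two 6800GT OC options against
# an assumed stock 350 MHz core / 1000 MHz (effective) memory.
stock_core, stock_mem = 350, 1000

for core, mem in [(400, 1100), (450, 1000)]:
    core_gain = core / stock_core - 1
    mem_gain = mem / stock_mem - 1
    # prints roughly: core +14% / mem +10%, then core +29% / mem +0%
    print(f"{core}/{mem}: core +{core_gain:.0%}, mem +{mem_gain:.0%}")
```

Which one wins still depends on whether the workload is core- or bandwidth-bound, as the rest of the thread says.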
 
cool, right now it's oc'd 50 past core and 50 past mem (as you see in my sig)
*edit*

So I guess the conclusion is that it largely depends on what settings you're gaming at?
 
Originally posted by: SpeedZealot369
cool, right now it's oc'd 50 past core and 50 past mem (as you see in my sig)
*edit*

So I guess the conclusion is that it largely depends on what settings you're gaming at?

It depends on both the settings and the application. AA/AF increase memory bandwidth usage substantially, whereas enabling more demanding in-game visual settings usually increases general shader or texturing load.

Basically, your question is too broad, because it depends on both the hardware and software involved.

Generally, on newer cards using high-speed GDDR3, you have plenty of bandwidth and the limitation is on the performance of the GPU core itself. With older cards (especially ones like the 6600GT that have relatively low bandwidth to begin with), you're more likely to find yourself in a bandwidth-limited situation where a memory OC would help. But you can swing it either way depending on what you are trying to run.
 