Theoretical: Core or Memory?

SpeedZealot369

Platinum Member
Feb 5, 2006
2,778
1
81
Theoretical question: what yields more performance, overclocking the core or the memory? Let's say you have a 7800GT clocked at 500/500. What would give you better results, 600/500 or 500/600? (GDDR3 memory)

Opinions, facts, let's hear 'em.
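
For reference, a quick back-of-the-envelope comparison in Python (a sketch, not gospel: the 16 ROPs and 256-bit bus are assumed from commonly cited 7800GT specs, and GDDR3 transfers twice per clock):

ROPS = 16             # assumed 7800GT ROP count
BUS_BYTES = 256 // 8  # assumed 256-bit bus -> 32 bytes per transfer

def fillrate_gpix(core_mhz):
    # theoretical pixel fillrate in Gpixels/s
    return core_mhz * ROPS / 1000

def bandwidth_gbs(mem_mhz):
    # theoretical bandwidth in GB/s; GDDR3 is double data rate
    return mem_mhz * 2 * BUS_BYTES / 1000

for core, mem in [(500, 500), (600, 500), (500, 600)]:
    print(f"{core}/{mem}: {fillrate_gpix(core):.1f} Gpix/s, "
          f"{bandwidth_gbs(mem):.1f} GB/s")

On paper either OC is a +20% bump on one axis; which one actually shows up in framerate depends on which resource the game saturates.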
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Depends on whether you are fillrate/shader-limited or bandwidth-limited.

Generally speaking, core is probably going to do more for you on newer cards, especially if you are on a card with very fast memory to begin with. On an older card with slower memory (or a 128-bit memory interface), you're more likely to end up bandwidth-limited.
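
A rough way to quantify that: divide theoretical bandwidth by theoretical fillrate to get the bytes of memory traffic the card can afford per pixel drawn. This is a toy model (the clocks and ROP count below are illustrative, and real GPUs compress Z/color, so actual per-pixel needs are lower):

def bytes_per_pixel_budget(mem_mhz, bus_bits, core_mhz, rops):
    # if a scene needs more bytes/pixel than this budget (color write,
    # Z read/write, texture fetches), you're bandwidth-limited;
    # otherwise you're core-limited
    bandwidth = mem_mhz * 2 * (bus_bits // 8)  # MB/s, GDDR3 is DDR
    fillrate = core_mhz * rops                 # Mpixels/s
    return bandwidth / fillrate

# same hypothetical core (400 MHz, 16 ROPs), two bus widths:
print(bytes_per_pixel_budget(500, 256, 400, 16))  # 256-bit: 5.0 B/pixel
print(bytes_per_pixel_budget(500, 128, 400, 16))  # 128-bit: 2.5 B/pixel

Halving the bus halves the budget, which is why 128-bit cards tend to hit the bandwidth wall first.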
 

lifeguard1999

Platinum Member
Jul 3, 2000
2,323
1
0
It depends on where your bottleneck is located. I have developed an OpenGL application that draws millions of triangles. At home on my 6600GT, it is all vertex-limited. I can o/c my CPU by 50%, from 1.8 GHz to 2.7 GHz, and see absolutely no improvement. I can o/c my GPU memory and see absolutely no improvement. But o/c the GPU core, and the framerate scales linearly. This is just one case; each application is different. Even if you remove one bottleneck, another will pop up.
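
That one-clock-at-a-time test is easy to put a number on. A minimal sketch (the clock and FPS figures are hypothetical placeholders, not measurements from my app):

def scaling_efficiency(clock_a, fps_a, clock_b, fps_b):
    # 1.0 = framerate scales linearly with this clock (it's the
    # bottleneck); ~0.0 = no scaling (bottleneck is elsewhere)
    return ((fps_b / fps_a) - 1) / ((clock_b / clock_a) - 1)

# core 350 -> 420 MHz (+20%) took 50 fps to 59 fps (+18%):
print(scaling_efficiency(350, 50.0, 420, 59.0))  # ~0.9: core-limited

# memory 500 -> 600 MHz (+20%) left 50 fps at 50.5 fps (+1%):
print(scaling_efficiency(500, 50.0, 600, 50.5))  # ~0.05: not memory-limited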
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
It depends on the game and the settings. For example, AA puts increased stress on the video memory, while shader effects and high resolutions put more stress on the GPU itself.
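
To put rough numbers on the AA point (an illustrative model only: the 2x overdraw factor is a guess, and real GPUs compress Z/color so actual traffic is lower):

def fb_traffic_mbs(width, height, fps, samples, overdraw=2.0):
    # framebuffer traffic in MB/s: color write + Z read + Z write,
    # 4 bytes each; MSAA stores color and Z per *sample*
    bytes_per_frame = width * height * overdraw * samples * (4 + 4 + 4)
    return bytes_per_frame * fps / 1e6

print(fb_traffic_mbs(1280, 1024, 60, samples=1))  # no AA: ~1,900 MB/s
print(fb_traffic_mbs(1280, 1024, 60, samples=4))  # 4x AA: ~7,500 MB/s

Shader effects, by contrast, burn core cycles per pixel without necessarily touching much more memory.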
 

videopho

Diamond Member
Apr 8, 2005
4,185
29
91
Which raises the 3DMark05 score more, core or memory? That's what I'd like to know too. Good question, btw.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
I noticed increased scores from both mem and core OC, but I think the core would make a bigger difference because of the heavy shader load.
 

SpeedZealot369

Platinum Member
Feb 5, 2006
2,778
1
81
Again, it's not one particular card, it's just a general question.
Although, with my 6800GT, would I be better off doing 450/1100 or 400/1200?

*edit*

I mean 400/1100 or 450/1000
 

mwmorph

Diamond Member
Dec 27, 2004
8,877
1
81
Originally posted by: SpeedZealot369
Again, it's not one particular card, it's just a general question.
Although, with my 6800GT, would I be better off doing 450/1100 or 400/1200?

*edit*

I mean 400/1100 or 450/1000

Well, the problem is that each card has its own strengths and weaknesses. With a 6600GT, a memory OC will be better than a core OC.
On a 6800GT, I'd say it's balanced, so maybe find a compromise. Though I've heard a memory OC benefits slightly more at higher resolutions, while for demanding image-quality settings (like, say, turning on soft shadows), a core OC is better.
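
To put numbers on the two options you listed (assuming commonly cited 6800GT reference clocks of 350/1000; plug in your card's actual stock clocks if they differ):

STOCK_CORE, STOCK_MEM = 350, 1000  # assumed 6800GT reference clocks

for core, mem in [(400, 1100), (450, 1000)]:
    core_gain = (core / STOCK_CORE - 1) * 100
    mem_gain = (mem / STOCK_MEM - 1) * 100
    print(f"{core}/{mem}: core +{core_gain:.0f}%, mem +{mem_gain:.0f}%")

# 400/1100: core +14%, mem +10%
# 450/1000: core +29%, mem +0%

Whichever percentage lands on your actual bottleneck is the one that shows up in the framerate.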
 

SpeedZealot369

Platinum Member
Feb 5, 2006
2,778
1
81
Cool, right now it's OC'd 50 past core and 50 past mem (as you can see in my sig).
*edit*

So I guess the conclusion is that it largely depends on what settings you're gaming at?
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: SpeedZealot369
Cool, right now it's OC'd 50 past core and 50 past mem (as you can see in my sig).
*edit*

So I guess the conclusion is that it largely depends on what settings you're gaming at?

It depends on both the settings and the application. AA/AF increase memory bandwidth usage substantially, whereas enabling more demanding in-game visual settings usually increases general shader or texturing load.

Basically, your question is too broad, because it depends on both the hardware and software involved.

Generally, on newer cards using high-speed GDDR3, you have plenty of bandwidth and the limitation is on the performance of the GPU core itself. With older cards (especially ones like the 6600GT that have relatively low bandwidth to begin with), you're more likely to find yourself in a bandwidth-limited situation where a memory OC would help. But you can swing it either way depending on what you are trying to run.
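
To make the bandwidth gap concrete (a quick sketch using the commonly cited bus widths and effective memory clocks for the cards in this thread):

def bandwidth_from_effective(effective_mhz, bus_bits):
    # effective (DDR) transfer rate x bus width in bytes -> GB/s
    return effective_mhz * (bus_bits // 8) / 1000

cards = [
    ("7800GT, 256-bit, 1000 MHz effective", 1000, 256),
    ("6800GT, 256-bit, 1000 MHz effective", 1000, 256),
    ("6600GT, 128-bit, 1000 MHz effective", 1000, 128),
]
for name, mhz, bus in cards:
    print(f"{name}: {bandwidth_from_effective(mhz, bus):.1f} GB/s")

The 6600GT has half the bandwidth to feed a similar-speed core, which is why a memory OC tends to pay off more there.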