
[VR] NVIDIA GeForce GTX 680 Specifications Revealed

Correct me if I'm wrong, but aren't they all tied together? Otherwise you'd create a bottleneck (the ever-famous "ROP starved" or "bandwidth limited" comes to mind).

I get the feeling that juicing the shaders would require the ROPs/TMUs themselves to kick it up a bit to match their output. If I'm totally wrong on this, by all means let me know.

I don't know, I'm just speculating/thinking out loud. Maybe it can detect that the TMUs are running at, let's say, 70% load, so it doesn't make sense to increase shader power by more than maybe 50%.
But honestly, I doubt it is that sophisticated.
 
I don't know, I'm just speculating/thinking out loud. Maybe it can detect that the TMUs are running at, let's say, 70% load, so it doesn't make sense to increase shader power by more than maybe 50%.
But honestly, I doubt it is that sophisticated.

I get ya. I'd love more info on this too. Consider my interest sparked 😀

Let's go nVidia, roll it out!
 
boxleitnerb said:
But throttling did impact performance, didn't it? Here you may throttle only unused parts of the chip, maintaining TDP and performance.
Except that is not what happens in Intel's turbo boost, for example. Overclocking and SpeedStep are two different things.
Boosting clock speed to maintain an fps target over a given interval is not the same as throttling. It's boosting when necessary, if that is how it ends up working.

You can't magic more performance when needed without overshooting the TDP.

Think about it this way: you've got base clocks that get reduced when you're overshooting the TDP. This is called throttling. Now you've got a "base clock" that's actually just another state and a "turbo clock" that's the real expected base clock, which gets lowered when you're overshooting the TDP. The only thing you gain from the second scenario is that you won't get 200+ fps in less demanding games, thus lowering the power draw and temps.

Unless it goes over the TDP, then we go back to:
Oh boy, it's not like Nvidia's TDP ratings aren't too misleading already. "Hey guys, it's a 250W part, we promise!*"

*Unless you're thinking of actually, you know, using it.
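Purely for illustration, here's a toy model of the two clocking schemes being compared. Every number (TDP, clocks, thresholds) is invented; real cards handle this in firmware:

```python
TDP = 250.0  # advertised board power in watts (assumed for the example)

def throttle_scheme(power_draw, base=1006, floor=700):
    """Classic throttling: run at the base clock, back off when over TDP."""
    return base if power_draw <= TDP else floor

def turbo_scheme(power_draw, boost=1100, base=1006, floor=700):
    """The reframing above: the 'turbo' clock is the real expected clock,
    and the 'base' clock is just another throttle state."""
    if power_draw <= 0.8 * TDP:  # plenty of headroom, e.g. a light game
        return boost
    if power_draw <= TDP:
        return base
    return floor

# Near the power limit both schemes land on the same clock;
# the turbo naming mostly relabels the states.
print(throttle_scheme(240), turbo_scheme(240))  # 1006 1006
```

The only place the two differ in this sketch is the low-load case, which is exactly the "you won't get 200+ fps in light games" point above.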
 
The spin on adaptive power is interesting. So long as you can disable it via the drivers, it's good to have it there for people who are concerned with their power usage.

Adaptive power settings are buggy with Nvidia hardware, to the point that it's best to leave it on maximum performance to avoid the issues it causes, so I would want the same control over this new implementation.
 
I guess magic is what technology appears to be for those that can't fathom it.

What does one think happens when the slider in AMD's powertune is set to +20%.
Magic perhaps ?
 
I guess magic is what technology appears to be for those that can't fathom it.

What does one think happens when the slider in AMD's powertune is set to +20%.
Magic perhaps ?

To be fair, I edited this part in later but still:

Unless it goes over the TDP, then we go back to:
Oh boy, it's not like Nvidia's TDP ratings aren't too misleading already. "Hey guys, it's a 250W part, we promise!*"

*Unless you're thinking of actually, you know, using it.
 
You can't magic more performance when needed without overshooting the TDP.

Think about it this way: you've got base clocks that get reduced when you're overshooting the TDP. This is called throttling. Now you've got a "base clock" that's actually just another state and a "turbo clock" that's the real expected base clock, which gets lowered when you're overshooting the TDP. The only thing you gain from the second scenario is that you won't get 200+ fps in less demanding games, thus lowering the power draw and temps.

Unless it goes over the TDP, then we go back to:

Yes you can get more performance without overshooting the TDP. Think of the turbo that is used on CPUs. Depending on workload you have medium clocks on all cores or high clocks on a few cores. At the same TDP.
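As a rough sketch of that CPU-turbo idea, a fixed power budget split across active cores (the budget and per-core cost figures are invented, not real silicon data):

```python
BUDGET = 95.0        # package power budget in watts (assumed)
WATTS_PER_GHZ = 8.0  # per-core power cost per GHz (assumed)

def turbo_clock(active_cores, max_clock=3.9):
    """Split a fixed power budget across active cores: fewer active
    cores leave more watts each, so each can clock higher."""
    per_core_watts = BUDGET / active_cores
    return min(max_clock, per_core_watts / WATTS_PER_GHZ)

for n in (1, 2, 4, 8):
    print(n, "cores ->", round(turbo_clock(n), 2), "GHz")
```

With these numbers, one or two active cores hit the clock ceiling while eight cores settle around 1.5 GHz, all inside the same TDP.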
 
No you don't get it. Let's say you have a scene that is very heavy on the shaders but not on the TMUs, ROPs etc. Then the saved power from the underused TMUs, ROPs etc. is used to drive the shaders even higher, giving you more fps.

As it seems, this applies not to the whole chip but to parts of the chip, depending on which parts are stressed and which are not (so much).
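If it works that way, the per-block power steering could be sketched like this. All loads and budgets here are hypothetical numbers, just to show the idea:

```python
def shader_boost(tmu_load, rop_load,
                 block_budget=50.0, shader_budget=150.0):
    """Hand the power saved by underused TMUs/ROPs to the shader
    array as extra clock headroom (illustrative model only)."""
    saved = block_budget * (1 - tmu_load) + block_budget * (1 - rop_load)
    return 1.0 + saved / shader_budget  # multiplier on the shader clock

# TMUs and ROPs at 70% load free up enough budget for ~20% more
# shader clock in this toy model; fully loaded blocks free up nothing.
print(round(shader_boost(0.7, 0.7), 2))  # 1.2
```

This also matches the earlier speculation: at 70% TMU load, the reclaimed budget caps how far the shaders can be pushed.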


I doubt it's that sophisticated; do you have a link providing proof? Sounds like it's just something to save power for mobile parts. Doesn't sound exciting to me or beneficial to performance. I could be wrong though. It's happened twice in the past :thumbsup:
 
I doubt it's that sophisticated; do you have a link providing proof? Sounds like it's just something to save power for mobile parts. Nothing more, nothing less... I could be wrong, but I don't see this benefiting performance whatsoever.

Do you have a link providing proof to confirm your doubts?
 
I doubt it's that sophisticated; do you have a link providing proof? Sounds like it's just something to save power for mobile parts. Nothing more, nothing less... I could be wrong, but I don't see this benefiting performance whatsoever.

Just speculation.
@balla: Kudos 😎
 
Yes you can get more performance without overshooting the TDP. Think of the turbo that is used on CPUs. Depending on workload you have medium clocks on all cores or high clocks on a few cores. At the same TDP.

The issue is that CPUs usually deal with totally different workloads, and as such, a similar turbo scheme wouldn't be any good for GPUs. Anything you'd run on a GPU is by definition massively parallel, so unused shaders point to a bottleneck, and that bottleneck definitely isn't single-thread performance. Basically, 500/900 utilized cores @ 800 MHz almost always equal 500/900 utilized cores @ 900 MHz.
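In a toy throughput model (made-up work sizes, one work unit per core per cycle), that claim looks like this:

```python
def frame_time(work_units, utilized_cores, clock_mhz):
    """Idle cores contribute nothing; time comes out in arbitrary
    units. Purely illustrative, not a real GPU performance model."""
    cycles = work_units / utilized_cores
    return cycles / clock_mhz

t_800 = frame_time(9_000_000, 500, 800)
t_900 = frame_time(9_000_000, 500, 900)
# Clocking 500 utilized cores from 800 to 900 MHz gives only the
# ordinary ~12.5% clock speedup; the 400 idle cores change nothing.
print(round(t_800 / t_900, 3))  # 1.125
```

In other words, boosting clocks because cores sit idle buys nothing beyond plain clock scaling, which is the contrast with the CPU case above.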
 
Honestly, I don't know how the workloads on a GPU differ in today's games. Maybe they differ from game to game rather than within a game. Say, Crysis 1 and Crysis 2.
 
Honestly, I don't know how the workloads on a GPU differ in today's games. Maybe they differ from game to game rather than within a game. Say, Crysis 1 and Crysis 2.

My point was that CPU-esque turbo is kind of pointless for GPUs, because you don't care much about single-thread performance.
 
My point was that CPU-esque turbo is kind of pointless for GPUs, because you don't care much about single-thread performance.

As TPU describes it, the turbo doesn't reduce clocks on individual units while clocking the rest higher; it works on different segments of the chip (with different units). The CPU turbo was just an analogy.
 
As TPU describes it, the turbo doesn't reduce clocks on individual units while clocking the rest higher; it works on different segments of the chip (with different units). The CPU turbo was just an analogy.

Having different parts on different clock domains would take up quite a bit of additional die space, so I wouldn't expect it from GK104, but if it's there, I'll be pleasantly surprised.
 
Charlie really can't shove his head any further up there, can he.....
He really does just throw sh*t against a wall. I didn't click on the link. Just assuming it's Charlie D.
 
Charlie really can't shove his head any further up there, can he.....
He really does just throw sh*t against a wall. I didn't click on the link. Just assuming it's Charlie D.
Do you have some info Charlie doesn't? Are we going to see a paper launch on March 12?
 
Are you putting stock into Charlie's well known dart board "journalism" strategy then?
Here is some pity for you, and for him.
Seriously though, you should know better than to take Charlie's word on anything by now.
 
Um, well, he hasn't been proven wrong YET. And his writings on GK104 may be spot on (beat Tahiti in every metric).
 
I thought this was about his paper launch prediction? Shall we lump all of his predictions together now? I'm game.
 
More PCB shots...

[Four attached PCB photos]
Source: http://we.pcinlife.com/thread-1849001-1-1.html
 