[VR] NVIDIA GeForce GTX 680 Specifications Revealed


blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Just wanted to chime in and say that Alan Wake is surprisingly a beast of a game. Even my POS 7970 could use a little more oomph @ 2560x1600 with 8x AA. I'm sure my definition of needing more oomph is very different from a lot of people's, though. I was getting as low as the mid-40s fps in heavy areas of the game.

Yes, it's a great game for benchmarks. I hope more websites pick up on it.

With that said, it looks like NVIDIA has some driver work to do with it.

[benchmark chart: Alan Wake, Very High @ 1920]
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Not that I want to derail your point, but those benchmarks don't include PhysX; needless to say, a 580 or 7970 would need an add-in PhysX card for optimal performance at high res anyway with PhysX enabled.

Too late, you derailed my point :( I was more hoping someone would catch the resolution haha.

As for PhysX, either on (with a companion card) or off, it "shouldn't" affect the load on the primary card, as you said. That and "Radeon's can't do PhysX LOLZ! FAIL!!!!!1!"
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
Yes, it's a great game for benchmarks. I hope more websites pick up on it.

With that said, it looks like NVIDIA has some driver work to do with it.

[benchmark chart: Alan Wake, Very High @ 1920]




Those 5870 numbers are pretty impressive. It's pretty crazy how well that card does in some games and how terrible it is in others. 5870s suck in Skyrim yet are pretty powerful in this game.
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
Too late, you derailed my point :( I was more hoping someone would catch the resolution haha.

As for PhysX, either on (with a companion card) or off, it "shouldn't" affect the load on the primary card, as you said. That and "Radeon's can't do PhysX LOLZ! FAIL!!!!!1!"




I saw the resolution, but I felt that the elephant in the room was the lack of PhysX in a sponsored title. Point taken regardless. The 7970 outclasses the 580, and using a 120Hz monitor demonstrates it perfectly.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Those 5870 numbers are pretty impressive. It's pretty crazy how well that card does in some games and how terrible it is in others. 5870s suck in Skyrim yet are pretty powerful in this game.

They are only impressive because this was not the highest setting:

[benchmark chart]
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
They are only impressive because this was not the highest setting:

<snip>

Still, considering the card is now 2 years old and only chokes on the heavy tessellation, that's still impressive. I played Batman: AC in the 40-50 FPS range on High (Ultra now, I guess) with PhysX on High; the only issues were a few dips into the 20-30 range.

The fact that my PhysX card is a 9800 GTX+ shows there has been little push for more demanding cards at 1920x1080 in PC gaming. Which is sad :(
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
I believe he was talking about the Alan Wake benchmark...

Oh... :'(

I think it's easy to explain. Cypress is a great pre-DX11 card, but with tessellation and some kinds of compute shader workloads it has no chance against Cayman/Fermi/Southern Islands.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Oh... :'(

I think it's easy to explain. Cypress is a great pre-DX11 card, but with tessellation and some kinds of compute shader workloads it has no chance against Cayman/Fermi/Southern Islands.

Since the majority of the games being dished out are DX9, it's easy to see why some old cards still have some value left in them (that and the bitcoin-mining craze).
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Those 5870 numbers are pretty impressive. It's pretty crazy how well that card does in some games and how terrible it is in others. 5870s suck in Skyrim yet are pretty powerful in this game.

With the latest patches, Skyrim works fine for me at 1080p with Ultra + FXAA. I believe the 6950 might perform better, but for its age the 5870 does a good job. I rank it as AMD's heavyweight competitor to the 8800 in terms of fight and longevity.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Latest rumors point to a turbo mode that would utilize the TDP better.
So let's say a 160W TDP: game X, scene A, 158W power -> 705 MHz; scene B, 130W power -> 950 MHz.

Dynamic Clock Adjustment is very similar to the former "hot clocks", with the difference that GK104 comes with several dozen power planes, and will operate on varying clocks depending on the computational load, card temperature and the power consumption. As we posted several days ago, the complete GK104 chip will operate at a lower clock, a very similar figure to the GTX 480: low-power mode is 300 MHz, standard is 705 MHz extendable to 950 MHz, while the cores alone will be able to reach 1411 MHz when the chip is loaded to 100%.
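
If that rumor is accurate, the behavior boils down to converting unused power headroom into extra clock speed. A minimal sketch of that idea, assuming a simple linear mapping: the 705/950 MHz clocks and the 158 W / 130 W scene draws are the figures from the rumor above, while the thresholds and the interpolation rule are purely my own illustration.

```python
# Toy model of the rumored behavior: map measured board power to a clock
# between the base and max boost clocks. Clock and wattage figures come from
# the rumor quoted above; the interpolation itself is just a guess.
BASE_MHZ = 705               # rumored "standard" clock
MAX_BOOST_MHZ = 950          # rumored boost ceiling
NO_HEADROOM_WATTS = 158.0    # hypothetical: at this draw, stay at the base clock
FULL_HEADROOM_WATTS = 130.0  # hypothetical: at this draw, full boost is allowed

def boost_clock(scene_power_watts: float) -> int:
    """Pick a clock based on how far the scene's power draw sits below the limit.

    Purely illustrative: real hardware would step through discrete
    clock/voltage bins using on-board power monitoring, not a straight line.
    """
    if scene_power_watts >= NO_HEADROOM_WATTS:
        return BASE_MHZ
    if scene_power_watts <= FULL_HEADROOM_WATTS:
        return MAX_BOOST_MHZ
    span = NO_HEADROOM_WATTS - FULL_HEADROOM_WATTS
    fraction = (NO_HEADROOM_WATTS - scene_power_watts) / span
    return round(BASE_MHZ + fraction * (MAX_BOOST_MHZ - BASE_MHZ))

print(boost_clock(158.0))  # scene A from the post -> 705
print(boost_clock(130.0))  # scene B from the post -> 950
print(boost_clock(144.0))  # something in between -> 828
```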
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Latest rumors point to a turbo mode that would utilize the TDP better.
So let's say a 160W TDP: game X, scene A, 158W power -> 705 MHz; scene B, 130W power -> 950 MHz.

This is indeed an interesting concept. I wonder at what wattage point it performs like a GTX 580? And how much headroom does this provide for OCing?

So many things left unanswered. WTB leaks dammit!
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I would think TDP would remain the same, but clocks would increase/decrease based on what parts of the GPU itself are being stressed.

Say, for instance, you had one area where there was very little tessellation: the GPU isn't using a lot of power for the poly engines, so it uses that extra wattage to boost clock speeds. Once the poly engines are called upon to do work, the wattage headroom for the rest of the GPU is lessened, and so then are the clock speeds?
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I fail to see how this is any different. Practically all cards throttle depending on use; Fermi had 3 presets, and the past 3 generations of AMD cards also have 3 presets and clock down to 150 MHz during low use.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
The difference is that it dynamically overclocks parts of the chip, not throttles the whole thing. This is during 3D (games, no powerviruses), while the previous implementations are 3D, low 3D and 2D.
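
To make the distinction concrete: the older schemes amount to a small lookup table keyed on load state, while the rumored Kepler behavior keeps adjusting the clock inside the full-3D state based on power feedback. A rough sketch, with made-up state names, clocks and step size (the 705/950 MHz endpoints are from the rumor, the rest is illustrative):

```python
# Old style: a few discrete power states, each with a fixed clock.
P_STATES_MHZ = {"2d": 150, "low_3d": 400, "3d": 772}

def old_style_clock(load_state: str) -> int:
    return P_STATES_MHZ[load_state]

# Rumored new style: while already in the full-3D state, nudge the clock up or
# down each sampling interval depending on how close the board sits to its TDP.
def dynamic_clock(current_mhz: int, board_watts: float, tdp_watts: float = 160.0,
                  base_mhz: int = 705, max_mhz: int = 950, step_mhz: int = 13) -> int:
    if board_watts < tdp_watts and current_mhz + step_mhz <= max_mhz:
        return current_mhz + step_mhz   # headroom left: climb one bin
    if board_watts > tdp_watts and current_mhz - step_mhz >= base_mhz:
        return current_mhz - step_mhz   # over budget: back off one bin
    return current_mhz
```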
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I fail to see how this is any different. Practically all cards throttle depending on use; Fermi had 3 presets, and the past 3 generations of AMD cards also have 3 presets and clock down to 150 MHz during low use.

This isn't throttling per se, the usual "we're going to explode!"

This is more like Intel's Turbo Boost, it seems. I don't understand the workings of a GPU enough to really fathom how it would idle parts of it and ramp up others; I always assumed that, due to parallelism, all parts were active and load-balanced evenly. I didn't know games could be "multi-threaded" to the point where they can focus heavily on specific cores a la how a CPU does it (single-threaded: idle 3 cores on a quad and turbo the remaining one; double-threaded: idle 2, ramp up 2; etc.)

In a GPU, where does the balance come from? I'd guess if not done right, it could bottleneck itself. And if this only benefits games with low GPU usage (i.e. older junk), it seems almost like a waste. With turbo mode you can get an extra 20% in a DX9 game? So going from 100 FPS to 120 FPS would really make it better?

WTB more info! And some benches would be nice.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
The difference is that it dynamically overclocks parts of the chip, not throttles the whole thing. This is during 3D (games, no powerviruses), while the previous implementations are 3D, low 3D and 2D.

So the only benefit is better power consumption? It doesn't benefit 3D applications at all, because most cards go from 150 MHz to full speed with a 3D application. Most games that are worth their salt will use 90%+ of the GPU anyway, so this doesn't seem useful except for maybe mobile parts.
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
The difference is that it dynamically overclocks parts of the chip, not throttles the whole thing. This is during 3D (games, no powerviruses), while the previous implementations are 3D, low 3D and 2D.

The counterargument being that it is essentially throttling, except that the base clock is lower, so it can go both ways.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
So the only benefit is better power consumption? It doesn't benefit 3D applications at all, because most cards go from 150 MHz to full speed with a 3D application. Most games that are worth their salt will use 90%+ of the GPU anyway, so this doesn't seem useful except for maybe mobile parts.

No, you don't get it. Let's say you have a scene that is very heavy on the shaders but not on the TMUs, ROPs, etc. Then the saved power from the underused TMUs, ROPs, etc. is used to drive the shaders even higher, giving you more fps.

As it seems, this is not applicable to the whole chip but to parts of the chip, depending on which parts are stressed and which parts are not (so much).

The counterargument being that it is essentially throttling, except that the base clock is lower, so it can go both ways.

But throttling did impact performance, didn't it? Here you may throttle only unused parts of the chip, maintaining TDP and performance.
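
If it really works per block, the bookkeeping might look roughly like the sketch below: a fixed board budget split across shader/TMU/ROP domains, with any watts a lightly loaded domain doesn't draw handed to the fully loaded one. Everything here (domain names, base budgets, the linear clock-vs-power rule) is my own guess for illustration, not anything confirmed about GK104.

```python
# Toy version of per-block power shifting: watts that lightly loaded blocks
# don't need are handed to the fully loaded block, which then clocks higher.
BOARD_BUDGET_W = 160.0
BASE_ALLOCATION_W = {"shaders": 90.0, "tmus": 40.0, "rops": 30.0}
BASE_CLOCK_MHZ = 705

def domain_clocks(utilization: dict) -> dict:
    """Assign each domain a clock proportional to the power it ends up with."""
    used = {d: BASE_ALLOCATION_W[d] * utilization[d] for d in BASE_ALLOCATION_W}
    spare = BOARD_BUDGET_W - sum(used.values())    # watts nobody is drawing
    hungry = [d for d, u in utilization.items() if u >= 1.0]
    clocks = {}
    for d in BASE_ALLOCATION_W:
        budget = used[d]
        if d in hungry:
            budget += spare / len(hungry)          # leftovers go to the busy blocks
        # Crude assumption: clock scales linearly with the power a block receives.
        clocks[d] = round(BASE_CLOCK_MHZ * budget / BASE_ALLOCATION_W[d])
    return clocks

# Shader-heavy scene: TMUs and ROPs half busy, shaders soak up the spare watts.
print(domain_clocks({"shaders": 1.0, "tmus": 0.5, "rops": 0.5}))
# -> shaders boost well above 705 MHz while the half-idle blocks sit lower.
```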
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
We don't know enough to say anything, so I can't really counter your theoretical assumptions with my own.

I could see it having a great impact in several areas; obviously, with the 4xx series blowing up in NVIDIA's face, power became a much larger deal for them than before. Already we've heard reports that the Kepler card used to demo Samaritan was using less than 200 watts.

I don't think this is a system where it simply throttles itself down to save power. I think it's a system like what Intel uses, or, to greater effect, what AMD is using in their CPUs.

Instead of simply turboing based on core usage, it turbos based on TDP. Different games and different effects must place different power demands on a GPU; perhaps NVIDIA figured out a way to use the TDP that is wasted when less demand is present on one part of the GPU to boost the performance of the part that is in high demand.

No way to know though until real information comes out and it's reviewed.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
No, you don't get it. Let's say you have a scene that is very heavy on the shaders but not on the TMUs, ROPs, etc. Then the saved power from the underused TMUs, ROPs, etc. is used to drive the shaders even higher, giving you more fps.

As it seems, this is not applicable to the whole chip but to parts of the chip, depending on which parts are stressed and which parts are not (so much).



But throttling did impact performance, didn't it? Here you may throttle only unused parts of the chip, maintaining TDP and performance.

Correct me if I'm wrong, but are they all tied together? In a sense you'd create a bottleneck (the ever-famous "ROP-starved" or "bandwidth-limited" comes to mind).

I get the feeling that juicing the shaders would require the ROPs/TMUs themselves to kick it up a bit to match their output. If I'm totally wrong on this, by all means let me know.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
The counterargument being that it is essentially throttling, except that the base clock is lower, so it can go both ways.

Except that is not what happens in Intel's Turbo Boost, for example. Overclocking and SpeedStep are two different things.
Boosting clock speed to maintain an xx fps point over a given interval is not the same as throttling. It's boosting when necessary, if that is how it ends up working.
 