***Official GeForce GTX660/GTX650 Review Thread***


railven

Diamond Member
Mar 25, 2010
6,604
561
126
The card in kitguru's review is overclocked 3%.


3. Percent.

But again, waiting for reviews from more reputable sites.

Thanks for answering my question that I had asked earlier (since that site is blocked at work.)

In the THG review, the 650 lost by ~4%, and I asked what the KG sample was OC'ed by.

While I'm not saying one card is worse than the other, it was omitted by yourself and Keys that the card was OC'ed, which still makes a difference.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
One of the reasons the THG review shows somewhat different results might also be the fact that THG tests the cards at actual playable settings, whereas the other sites have a tendency to test at completely unplayable settings (often getting fps in the 10s or low 20s).
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
One of the reasons the THG review shows somewhat different results might also be the fact that THG tests the cards at actual playable settings, whereas the other sites have a tendency to test at completely unplayable settings (often getting fps in the 10s or low 20s).

Since playable is subjective, I personally prefer a raw "balls to the wall" ideology where nothing can be disputed.

"Sure it only gets 10FPS, but at the same settings it gets 2 FPS more than X-Brand thus 25% faster."

Versus

"At X-options Y-product is 10% faster than Z-product at XX-Options."

But that's just me.
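The arithmetic behind that framing is simple enough to sketch; here's a minimal illustration in Python, using the purely hypothetical fps figures from the post above (not benchmark data):

```python
# Sketch of the "same settings, compare raw fps" comparison described above.
# The fps figures are the hypothetical ones from the post, not measurements.

def relative_speedup(fps_a: float, fps_b: float) -> float:
    """How much faster card A is than card B, as a percentage."""
    return (fps_a / fps_b - 1.0) * 100.0

# "Sure it only gets 10FPS, but at the same settings it gets 2 FPS
# more than X-Brand thus 25% faster."
print(relative_speedup(10, 8))  # 25.0
```

The percentage depends on which card is the baseline: 10 fps vs. 8 fps is 25% faster, even though the absolute gap is only 2 fps.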
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Since playable is subjective, I personally prefer a raw "balls to the wall" ideology where nothing can be disputed.

"Sure it only gets 10FPS, but at the same settings it gets 2 FPS more than X-Brand thus 25% faster."

Versus

"At X-options Y-product is 10% faster than Z-product at XX-Options."

But that's just me.
what kind of logic is that? if you are testing low end cards then you don't put them through the same settings as high end cards. nobody gives a crap about seeing all comparable cards deliver unplayable performance.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
what kind of logic is that? if you are testing low end cards then you don't put them through the same settings as high end cards. nobody gives a crap about seeing all comparable cards deliver unplayable performance.

You misunderstood my post. I never said test them at 1600P w/8xSSAA/16xAF.

I meant more that all settings should be the same, not as HardOCP does with their "highest playable options," where one card uses Settings-X and the other uses Settings-Y. Both should use Settings-X regardless of whether one (or both) turns into a slideshow. Then turn the settings down a tad.

For cards like these, 1080P is where I'd draw the line for resolution then bench with different MSAA options.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
You misunderstood my post. I never said test them at 1600P w/8xSSAA/16xAF.

I meant more that all settings should be the same, not as HardOCP does with their "highest playable options," where one card uses Settings-X and the other uses Settings-Y. Both should use Settings-X regardless of whether one (or both) turns into a slideshow. Then turn the settings down a tad.

For cards like these, 1080P is where I'd draw the line for resolution then bench with different MSAA options.
well Toms used the same settings for the whole group of comparable cards. in most games he just lowered the settings from those used on the higher end card reviews. we all know that a gtx650 is not going to play Metro 2033 on very high settings with 4x MSAA.

btw my gtx560 se is only 2fps better than the gtx650 in Batman yet I beat the gtx650 by 47% in Metro 2033. Batman loves the new Kepler cards.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
well Toms used the same settings for the whole group of comparable cards. in most games he just lowered the settings from those used on the higher end card reviews. we all know that a gtx650 is not going to play Metro 2033 on very high settings with 4x MSAA.

Well, that is my fault for misunderstanding the post I commented on. When he said "playable settings" I assumed HardOCP style, where they pick and choose settings based on some subjective opinion of what "playable" is.

btw my gtx560 se is only 2fps better than the gtx650 in Batman yet I beat the gtx650 by 47% in Metro 2033. Batman loves the new Kepler cards.

Optimization or better tessellation? Batman: AC has a good amount of tessellation.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Wth? I am not labeling RS as pro or anti anything. Why don't you read the whole thread and come to your own conclusions about what really happened: http://forums.anandtech.com/showthread.php?t=2247849

Basically RS was talking smack about GCN arch and saying Kepler was more futureproof, and now it appears he's done a 180, even going so far as to use the same Unigine argument I made four months ago, which he shrugged off at the time.

You talk about DC but look at his comments about MSAA and tess and FP16.

My point is not that RS was/is wrong, and sorry to RS if you think I'm picking on those posts of yours yet again. My point is that things change--sometimes quite quickly in favor of one arch or the other--so don't get too hung up on trying to buy something you think will be more futureproof than the other company's card. They will both be obsolete soon anyway. Can't really futureproof in a fast-moving industry like GPUs.

Hmm, a big part of his argument 4 months ago was using test results of a DEMO of Dirt Showdown. The official release of the game turned out very different:

http://gamegpu.ru/images/stories/Test_GPU/Simulator/DiRT Showdown/ds 1920.png

And his previous argument regarding tessellation must have been made invalid due to the AMD driver improvements? Previously he said the 670 smokes the 7970. Now he's saying they are close, and most reviews do show the cards close in performance in tessellated games.

I think attempting to label Russian pro-AMD or pro-NVIDIA is inaccurate. He's pro-what-he-thinks-at-this-time. And obviously he's subject to change his mind as he gathers more information. That's somewhat of an admirable trait. 4 months ago he was using data from earlier AMD drivers, using benchmarks of a DEMO of a game, and comparing cards at price points before AMD lowered them on the 7900 series. And as I've heard reviewers say, "There isn't a bad card, just a bad price."
 
Feb 19, 2009
10,457
10
76
These cards are definitely NOT going to be obsolete anytime soon, especially the high end ones, i.e. 670/680 & 7950/70.

We have legit specs on the consoles and they are all going to be crap. We are getting another 8 years of consolitis.

DX11 is here to stay for a long, long time. It just seems GCN is the better DX11 architecture, and I have no doubt GK110 will be heaps better than GK104 in that regard.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
It just seems GCN is the better DX11 architecture, and I have no doubt GK110 will be heaps better than GK104 in that regard.

And yet GCN is losing in more DX11 games than it's winning. A long way to go for the "better DX11 architecture".
 
Feb 19, 2009
10,457
10
76
And yet GCN is losing in more DX11 games than it's winning. A long way to go for the "better DX11 architecture".

Where's this magic sauce that makes you think such?

Cos from reputable review sites, I'm not seeing any of that. The 7970 GHz is simply dominant in DX11 games.

http://www.techpowerup.com/reviews/MSI/GTX_660_Twin_Frozr_III/7.html

Not seeing it here either:

http://www.xbitlabs.com/articles/gr...gtx-680-superclocked-signature-2_7.html#sect2

Unless you feel somehow winning in HAWX 2 is so much more relevant than recent DX11 titles?
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
These cards are definitely NOT going to be obsolete anytime soon, especially the high end ones, i.e. 670/680 & 7950/70.

We have legit specs on the consoles and they are all going to be crap. We are getting another 8 years of consolitis.

DX11 is here to stay for a long, long time. It just seems GCN is the better DX11 architecture, and I have no doubt GK110 will be heaps better than GK104 in that regard.

They will be, insofar as the older cards are now; 20-25% faster is just a joke after two years.

My 9800 GT plays BF3 just fine; unbelievable how much arguing takes place over this awful generation.

It just seems GCN is competing with a mid-range Nvidia design with a 256-bit bus; whether it's better or not seems an obtuse question, since it's slower at 1080p and only slightly faster at 1600p, which is a trivial market segment no matter how loud they talk.

Did you notice any staggering IQ differences from the compute shader tax in these recent Gaming Evolved titles? You obviously didn't, and neither does anyone else.

Compute Shader allows programmers to code more efficiently, generally resulting in better image quality through various techniques, and improved performance.

Except in Gaming Evolved titles, where the performance impact is quite large for everyone. Compute Shader is really just DirectCompute, which several titles such as Battlefield 3, DiRT 3, Civilization 5, and of course Metro 2033 all use.

Now let's look at performance in those titles, because aside from DiRT 3 (which released before AMD needed to create an advantage for their card), the rest of them aren't actually Gaming Evolved titles.


[benchmark chart: 49205.png]


Nvidia seems to be doing fine in this DirectCompute 5.0 title.

[benchmark chart: 49195.png]


Nothing wrong here either, a Gaming Evolved DC 5.0 title.

[benchmark chart: 49210.png]


I'm noticing a pattern here.

[benchmark chart: 49193.png]


Nothing odd taking place here, so what's really going on?


Does GCN represent a vastly superior design against Nvidia's mid-range product, or are we seeing AMD work closely with developers they're paying to maximize the bandwidth requirement in a direct attempt not only to reduce their own performance for minimal IQ improvements, but also to cripple the bandwidth-starved mid-range Kepler cards?

You might be right, GCN might be the better DX11 design, or it could simply be a comparison of a $550-600 lackluster next gen product against a mid-range next gen product. A 6970 vs. a GTX 560 Ti with an additional 30W of TDP spent on clock rate, if you will; a comparable situation to this one.
 
Feb 19, 2009
10,457
10
76
You just can't hack it that NV's top dog is losing to AMD's top dog, so you HAVE to bring up the old meme that GK104 is a mid-range product... hate to remind you, it's not priced at mid-range at all.

Funny, depending on how you set up your bench, things look different:

[benchmark chart: bf3_1920_1200.gif]


[benchmark chart: civ5_1920_1200.gif]


[benchmark chart: civ5_5760_1080.gif]

Really? Can't even render.

[benchmark chart: metro_2033_1920_1200.gif]


Welcome back btw.

[benchmark chart: perfrel_1920.gif]

[benchmark chart: perfrel_2560.gif]


All I see is a HALO-priced NV product being beaten by the much cheaper 7970 GHz. Does it matter that it's the mythical mid-range GPU? Not when they are charging you an arm and a leg for it, and their "top dog" GK110 isn't anywhere in sight for consumers.

But the point here, and please refer to this thread: http://forums.anandtech.com/showthread.php?t=2270659

In more recent DX11 titles (Showdown, Sniper Elite V2, Sleeping Dogs), which use more DirectCompute features than the older DX11 games, the lead is clear and massive. It's either a) AMD being tricksters and deliberately coding GE titles to run like crap on NV hardware, b) the more DX11 compute you use, the worse NV hardware runs it, or heck, c) a combination. What's clear is AMD's GE CAN AND WILL hurt NV, whereas NV's TWIMTBP program has nothing on DX11 to abuse AMD with (since it's now faster in DX11 gaming tessellation).. it can only resort to PhysX.

Just based on this alone, which product do you think is more "future proof"? Logic would conclude GCN, especially with their huge array of upcoming AAA games being GE.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Where's this magic sauce that makes you think such?

Cos from reputable review sites, I'm not seeing any of that. The 7970 GHz is simply dominant in DX11 games.

http://www.techpowerup.com/reviews/MSI/GTX_660_Twin_Frozr_III/7.html

Not seeing it here either:

http://www.xbitlabs.com/articles/gr...gtx-680-superclocked-signature-2_7.html#sect2

Unless you feel somehow winning in HAWX 2 is so much more relevant than recent DX11 titles?

The 7970 GHz is using 70-80 watts more than the GTX 680 - I expect that card to be faster.

Why don't we compare the 7970 against the GTX 680, which cost nearly the same in March? This is a list of all the games in which the GTX 680 is faster, and before someone complains: we're talking about the DX11 architecture, not which card has more compute performance or more bandwidth:
Code:
Crysis 2
Dirt 2
Dirt 3
Shogun 2
Batman:AC
Sleeping Dogs
Lost Planet 2
HAWX 2
WoW 
Max Payne 3
Deus Ex 3
Battlefield 3
King Arthur 2
CIV5
The Secret World
Wargame: European Escalation

And these are the games in which the 7970 is faster:
Code:
Anno2070
AvP3
Metro2033 (DoF)
Dirt:Showdown
Sniper Elite 2
Nexuiz

Yeah, where is the better architecture?
 
Feb 19, 2009
10,457
10
76
You need to show proof of such claims and not make a list out of your delusions. Show us recent reviews, because that list of yours is LOL-worthy.

The 7970 GHz is heaps cheaper than the 680, if you didn't notice.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
You need to show proof of such claims and not make a list out of your delusions. Show us recent reviews, because that list of yours is LOL-worthy.

Use Google.
As a start: http://www.techspot.com/review/572-nvidia-geforce-gtx-660/
I think you will find more reviews that show you my list. And don't forget:
I don't care about 8xMSAA in Max Payne 3 or Batman:AC, or 1600p and downsampling in games, because that is not a sign of the "better DX11 architecture".

The 7970 GHz is heaps cheaper than the 680, if you didn't notice.
Yes, right now. But what does the price have to do with the "better DX11 architecture"?
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Jesus @ this thread

Hey guys, if anyone preordered borderlands 2 like I did, it is available for preload so you can play it immediately on release.
 
Feb 19, 2009
10,457
10
76
Use Google.
As a start: http://www.techspot.com/review/572-nvidia-geforce-gtx-660/
I think you will find more reviews that show you my list. And don't forget:
I don't care about 8xMSAA in Max Payne 3 or Batman:AC, or 1600p and downsampling in games, because that is not a sign of the "better DX11 architecture".

Yes, right now. But what does the price have to do with the "better DX11 architecture"?

let's ignore all the other much more reputable review sites and use techspot...

[benchmark chart: GTX-670-POWER-50.jpg]

[benchmark chart: GTX-670-POWER-48.jpg]


GCN is faster in Deus Ex, and has been with recent drivers for a while now.
Same for Crysis 2, Batman, Dirt 3; a huge lead in Sleeping Dogs (and completely LOL that you would claim otherwise after all the threads on here dissing it as a crap unpopular game because NV does badly in it, lol), Civ 5, Max Payne with 4x MSAA (8x would bring Kepler to a slideshow); even BF3 with 4x MSAA is now too close to call with the 7970 GHz.

[benchmark chart: bf3_1920_1200.gif]


But continue to claim the 680 is a competitor to the normal 7970 (because it's losing to the GHz edition, which is cheaper)... not helping your credibility.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
He has a point though. When comparing architectures, price & marketing and position in the market have no place in the discussion. If you were to give Kepler more bandwidth and/or clock it higher so it consumed about the same amount of power as the 7970 GE, it would be at least as fast. Across many reviews, the 7970 GE is maybe 15% faster than the 680 while having 40% more bandwidth and using 40% more power.

http://www.3dcenter.org/artikel/lau...aunch-analyse-amd-radeon-hd-7970-ghz-edition-

Across all chip families, Kepler and GCN, both are pretty much on par. When only looking at Tahiti and GK104, however, on average GK104 is more efficient in perf/W, perf/mm2, perf/memory bandwidth and perf/FLOPs. It's just a smaller chip that was never meant to truly beat a chip class like Tahiti across the board. No one would expect that when looking at the specs.
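As a back-of-the-envelope check of that efficiency claim, here is the arithmetic in Python, using only the rough figures quoted in the post (+15% performance, +40% power for the 7970 GE over the 680); these are approximations from the discussion, not measurements:

```python
# Rough perf-per-watt comparison using the figures quoted above:
# 7970 GHz Edition roughly +15% performance, +40% power vs. the GTX 680.
# Baseline GTX 680 = 1.0 on both axes; all numbers are approximations.

perf_680, power_680 = 1.00, 1.00
perf_ge,  power_ge  = 1.15, 1.40

eff_680 = perf_680 / power_680   # performance per watt, 680
eff_ge  = perf_ge / power_ge     # performance per watt, 7970 GE

# The GE ends up roughly 18% behind in perf/W despite winning on raw fps.
print(round(eff_ge / eff_680, 3))  # 0.821
```

So under these assumed figures, winning the raw-fps comparison and losing the efficiency comparison are both true at once, which is the crux of the disagreement here.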
 
Feb 19, 2009
10,457
10
76
He has a point though. When comparing an architecture, price has no place in the discussion. If you were to give Kepler more bandwidth and/or clock it higher so it consumed about the same amount of power as the 7970 GE, it would be at least as fast.

You cannot just ignore that the 7970 GE uses a lot more power than the 680. A steep 40% at that. An architecture is not about specific configurations but about how it translates raw power into fps, how efficiently it does that.

You would have a point if you argued based on an alternate version of reality where the 680 is made to use a much higher TDP to perform better. But sadly no, reality comes crashing down on your illusions.. what we have now is top tier for both architectures, and the 7970 GHz wins, at the expense of efficiency BUT with the advantage of being more useful for compute outside (and inside) gaming, and cheaper to boot.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
He has a point though. When comparing architectures, price has no place in the discussion. If you were to give Kepler more bandwidth and/or clock it higher so it consumed about the same amount of power as the 7970 GE, it would be at least as fast. Across many reviews, the 7970 GE is maybe 15% faster than the 680 while having 40% more bandwidth and using 40% more power.

http://www.3dcenter.org/artikel/lau...aunch-analyse-amd-radeon-hd-7970-ghz-edition-

Across all chip families, Kepler and GCN, both are pretty much on par. When only looking at Tahiti and GK104, however, GK104 is more efficient. It's just a smaller chip that was never meant to really beat a chip like Tahiti, never mind a higher clocked one.

To play devil's advocate - as a counterpoint, simply clocking Kepler higher is like punching a brick wall; there are issues with voltage adjustment, temperature, and power throttling.

I know some will disagree, but I feel like the 7970 is much more hassle-free in terms of overclocking, especially in multi-GPU configurations. I had a lot of problems with ref 680s working for 15-minute windows before temp throttling or outright instability. Now I own MSI Lightning 680s which I really enjoy - let me make this clear, I love this card to death (I have 2 of them). However, the typical Kepler 680 card being an overclocking beast? I'm not so sure. It's not a set-it-and-forget-it type of thing if you are looking for super high overclocks of 1300MHz or more.

I'm not saying the 680 or 7970 GE is faster or better; I really honestly don't care much these days. They're both good cards. I do feel like Nvidia really pushed GK104 to the max, as hard as they could - and that's why they limited it in terms of voltage adjustment.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
You have a point if you argue based on an alternate version of reality where the 680 is made to use much higher TDP to perform better. But sadly no, reality comes crashing down on your illusions.. what we have now is top tier for both architectures, and 7970 Ghz wins, at the expense of efficiency BUT at the advantage of being more useful with compute outside (and inside) gaming, and cheaper to boot.

Again, when talking architectures, we are not talking about price and/or product placement. We also have to define workloads, in this case gaming. You can try to spin this however you want: GK104 is more efficient than Tahiti.

@blackened23:
The voltage lock has nothing to do with the architecture. You clearly would have to increase the voltage, yes. But then efficiency would go out the window. Chips are designed for a specific segment; if you go outside that segment, you lose efficiency big time, see the 7970 GE. In my opinion, GK104 sits in between Pitcairn and Tahiti. It has been beefed up so much that it is clearly not Pitcairn's competitor, but at the same time it is barely Tahiti's competitor. This generation it is difficult to compare the architectures because Nvidia has 3 chips but AMD has 4, so they don't really match up, but rather overlap. You can stretch a chip to compete against a larger one, but it is still a stretch, because it was not originally intended for that.

I really believe the 7970 GE was never planned this way. After having been more efficient for 2 generations, I highly doubt it was AMD's plan to bring out such a power hungry card. They said the initial clocks on Tahiti were low and they wanted more, but I don't think at such an expense of efficiency.
 
Last edited:
