***Official GeForce GTX660/GTX650 Review Thread***


VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
The GTX 6xx cards are so hard to calculate actual power draw for. With GPU Boost and throttling, I would imagine they could be ±20W in either direction based on how good your cooler is or how bad your case's airflow is.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Yeah, some people were praising how efficient the 680 was when it came out, but it actually used about the same and sometimes even more power than the 7970. If you simply raise the power target, it will exceed the 7970's power consumption in many cases.
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
The other thing is that even though the 7970 GHz Ed. uses slightly over 200W, that's not at all a lot of power for a single-card flagship. The only reason the graphs make the 7970 look like it's using so much is that many reviews didn't put any past-generation cards into the lists.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Where did I say that? I said the 7970 GE uses roughly 40% more power than the 680. Look at the 3DCenter link, where the power measurements of all 5 reviews that measure "card only" are averaged and compared.

GTX680 = 169W
7970 GE = 235W
235/169 = 1.39 -> 39% more

http://www.3dcenter.org/artikel/launch-analyse-nvidia-geforce-gtx-660
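The arithmetic above can be checked in a couple of lines (a throwaway sketch; the wattages are the 3DCenter "card only" averages quoted in the post, not my own measurements):

```python
# "Card only" power comparison, using the 3DCenter averages quoted above.
gtx680_w = 169    # GTX 680 average gaming draw, watts
hd7970ge_w = 235  # HD 7970 GHz Edition average gaming draw, watts

ratio = hd7970ge_w / gtx680_w
print(f"7970 GE draws {ratio:.2f}x the power of the GTX 680 "
      f"({(ratio - 1) * 100:.0f}% more)")
# -> 7970 GE draws 1.39x the power of the GTX 680 (39% more)
```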

Looks like AMD set the voltage on the GE too high:

[Image: power_peak.gif]


The rest of the cards are about on par with NVIDIA on performance/watt or higher:

[Image: perfwatt_1920.gif]


This is forgetting, of course, that AMD did dedicate some of the features on Tahiti towards compute.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I'm sorry but your recent torrents of praise for AMD/GCN make me chuckle when you said, with equal amounts of vigor, the complete opposite less than 4 months ago: http://forums.anandtech.com/showpost.php?p=33484902&postcount=37 (regarding MSAA and tess)

You argued the exact OPPOSITE of what you are saying now with just as much vigor, so who is to say you won't be forced to do another 180 by next spring?

You keep linking that thread over and over, despite knowing full well that you are spinning my words out of context.

For the 100th time: When I said Kepler vs. GCN it was comparing GTX670 vs. HD7850 in that thread. How do you not understand this?

Notice the person asked whether it was better to buy a $400 GTX670 or save some $ and get an HD7850 for $250, for the purposes of keeping the card for 3+ years? You just won't stop taking that GTX670 vs. HD7850 discussion from that thread and applying it in general to GK104 vs. Tahiti XT. After I have repeatedly explained the context, you keep talking about the "Kepler" architecture vs. the "GCN" architecture when you cannot do that -- you have to compare SKUs vs. SKUs. For example, GTX660Ti and GTX670 handle the MSAA performance hit differently. You can't just generalize and apply the words "Kepler" and "GCN" across all members of the family tree.

The fact of the matter is that the Kepler architecture is faster with extreme tessellation and FP16 shaders. Now you can say I predicted wrongly that more and more games would use tessellation and FP16 textures in the near future after the GTX680 launched. I was wrong on that, and that's fair. It still changes nothing about the fact that Kepler is faster at tessellation in synthetic applications. Wait until Crysis 3 before judging. So far, almost no recent games have used tessellation extensively enough to let the GTX680 stretch its legs in geometry processing.

It does smoke the HD7970 in The Secret World, which only proves that GK104 is faster than Tahiti XT in these extreme cases (although the tessellation in that game looks very poor).

Also, how are you applying "Kepler" vs. "GCN" architectures in broad terms to GTX660 vs. HD7870, since both of those are neutered versions of flagship cards? The GTX660 doesn't have the same number of PolyMorph engines (which perform tessellation) as the full-fledged GK104 does. That's similar to using the GTX460 to imply that the "Fermi" architecture sucks at tessellation because the GTX460 sucks at tessellation (and it does, compared to the GTX470/480 parts). I told you this before and you keep ignoring it. The GTX660 does not have the geometry performance of the real Kepler GK104 chip, so it's way too crippled to imply any sort of tangible advantage against a "real" GK104.

As for your pro-AMD rant: calm down.

You said you need a special adapter to drive 3 monitors off AMD cards and it's not entirely true. Just wanted to point that out:

1) Some AMD SKUs include it for free;
2) You can get Sapphire FLEX cards;
3) You can get Asus DirectCUII cards that let you use 3 display port cables to drive 3 monitors.

Just laying out more information there, since your post didn't address those 3 possibilities but still called my post a "pro-AMD rant".

You also have some curious reasoning in it (e.g., the DisplayPort thing holds true for NV cards with DP ports, btw; NV has had SLI tri-monitor for a while now; and there's no denying that adapterless is better, as it resolves convenience issues as well as DP-related screen tearing due to timing mismatches).

You can't play 3D games above 1080P on 3 monitors off a single GTX680. Why is my reasoning curious? You incorrectly implied that you needed to go out and buy an adapter to drive 3 monitors off one HD7000 series card. I never argued that an adapter-less design is not superior. You brought that up as if I had ever stated that I don't prefer NV's adapter-less design.

I think attempting to label Russian pro-AMD or pro-NVIDIA is inaccurate. He's pro-what-he-thinks-at-this-time. And obviously he's subject to change his mind as he gathers more information. That's somewhat of an admirable trait. 4 months ago he was using data from earlier AMD drivers, using benchmarks of a DEMO of a game, and comparing cards at price points before AMD lowered them on the 7900 series. And as I've heard reviewers say, "There isn't a bad card, just a bad price."

Exactly. At the time of that thread, the GTX670 had very good performance and AMD was far away from releasing the Cats 12.7 drivers (HD7970 was still far behind in BF3, Crysis 2, Skyrim, Dirt 3 and other games). Furthermore, that thread was comparing future-proofing (per the OP's request, as I generally don't recommend future-proofing myself) of a $250 HD7850 vs. a $400 GTX670. Blastingcap implied that the OP would have been better off buying the $250 HD7850 and OCing it, since the performance advantage of the GTX670 would become smaller over time as more games used DirectCompute, and that Kepler's performance advantage in tessellation wouldn't be as extensive as I had thought. I told the OP in that thread that if he intended to keep his GPU for 3+ years, it was safer at that point in time to go for the GTX670, since the trend has been more geometry processing and tessellation in games.

The OP also wanted 60 fps in modern games, something that HD7850 already wasn't delivering in the games he mentioned.

As you have mentioned, I change my recommendation because I look at more recent performance, new drivers and price adjustments in the marketplace. At that time the GTX670 seemed like a far better card to me than a $500-550 HD7970.
 
Last edited:

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
You just can't hack it that NV's top dog is losing to AMD's top dog, so you HAVE to bring up the old meme that GK104 is a mid-range product... hate to remind you, it's not priced as mid-range at all.

Just based on this alone, which product do you think is more "future proof"? Logic would conclude GCN, especially with their huge array of upcoming AAA games being GE.

I just don't care; I don't like either lineup right now. NV and AMD are two huge corporations that want to sell me something at a premium only to outdate it a year later, nothing more.

Just a reminder: it's not a meme, it's born of pure fact. Secondly, this isn't a situation where you have to "pick one". If both are bad, both are bad; one might be slightly less bad, but bad is bad.

Only a fanboy would assume a side must be chosen. Maybe if I had made a poor choice with my first DX11 card I'd feel the need to upgrade. However, that is simply not the case, and I'm not the fool who gets duped into buying marginally more powerful video cards for quadruple the cost because someone said compute shaders were the next big thing. All DX11 cards can run compute shaders, and games before these Gaming Evolved titles used compute shaders; only AMD is running a "Crysis 2" of their own. Where are the conspiracy theorists from the red camp now?

Nothing looks different, they're both very comparable in games that use DirectCompute already. Perhaps you were unable to follow what I was saying? Either way it doesn't much matter.

Instead of talking about how AMD has this amazing advantage in a select few games, why not talk about how worthless it is since the games suck? Sleeping Dogs is probably the best title out of all of them, and it's sold 0.02m copies on the PC. If there was ever a non-factor, it's this over-hyped advantage you so wildly cling to.

Which is more future proof? Neither. While the boys have been arguing this up and down since January, trying to sell their employers' products, we all know that once the 8xxx and 7xx series come out, nobody is going to care. Anyone who upgraded from last gen into this garbage gen is going to upgrade next year, and anyone waiting this gen out due to awful performance gains and backwards price/performance from AMD for the first six months is praying something changes.

The best advice we can give people on DX10 or older cards looking to upgrade is to wait, plain and simple. If they can't wait because they don't have a card, then the second best thing we can tell them is to buy something cheap, perhaps even used, to hold them over until the awfulness that is this generation passes us by and we all forget about it, like the GTX 480's dominance of the 5870 in DX11.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
40 watts power draw when gaming is phenomenal, looks like it sits closer to the hd7770 than 7750 in performance. Looks like a really solid design and probably has plenty of OC headroom.

7 months late vs. the HD7750/7770, and it loses in performance to the HD7770 at current market prices. In general, the GTX650 is easily the worst budget gaming card at $117-130.

The power consumption of HD4000 is also phenomenal. The GTX650 using little power means nothing, since it can't play games at acceptable PC gaming settings. If you want to save $ and care that much about electricity costs, you go and buy a PS3 Slim, not a desktop gaming PC. :D

[Images: Crysis 2, DiRT, Max Payne 3, Metro, Skyrim, and WoW benchmark charts at HIGH settings]


GTX650 = $117 (cheapest right now on Newegg)
vs.
HD6870 = $150

$33 more for a 1.75-2x performance increase. :eek:

Even cards such as the $110-120 HD7770 GE, the HD6850, or the GTX560 for $125-130 all make the GTX650 irrelevant at current prices.
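The price/performance claim above can be sketched in a few lines (prices are the Newegg figures quoted in the post; the 1.75-2x performance spread is the post's own estimate, not a measured number):

```python
# Rough price/performance check for the GTX650 vs. HD6870 argument.
gtx650_price, hd6870_price = 117, 150     # Newegg prices quoted in the post
perf_low, perf_high = 1.75, 2.0           # HD6870 performance relative to GTX650 (post's estimate)

extra_cost = hd6870_price - gtx650_price  # extra dollars spent
price_ratio = hd6870_price / gtx650_price # how much more you pay, proportionally
print(f"${extra_cost} more buys {perf_low}-{perf_high}x the performance "
      f"for only {price_ratio:.2f}x the price")
# -> $33 more buys 1.75-2.0x the performance for only 1.28x the price
```

In other words, the performance gain far outpaces the price increase, which is the whole argument.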
 
Last edited:

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
GTX650 using little power means nothing since it can't play games at acceptable PC gaming settings.

$100 card for 1080p Ultra settings that are the only "acceptable" PC settings, anything less you might as well buy a console! That makes perfect sense RussianSensation, your new persona is amazing. :thumbsdown:
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
$100 card for 1080p Ultra settings that are the only "acceptable" PC settings, anything less you might as well buy a console! That makes perfect sense RussianSensation, your new persona is amazing. :thumbsdown:

HD6850/7770/GTX560 all provide superior gaming performance within the price range of GTX650. You missed that part?

Interesting how you are defending a $110 GTX650 part but it's 3x slower than a $40 GTX470 900mhz .....
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
HD6850/7770/GTX560 all provide superior gaming performance within the price range of GTX650. You missed that part?

Interesting how you are defending a $110 GTX650 part but it's 3x slower than a $40 GTX470 900mhz .....

Gaming performance isn't the only benchmark to consider, especially when we're talking about low end, low power, budget cards.

Would I buy one for my main rig? Not a chance, but my needs don't support the role this card was designed to fill.

Neither does the 6850 or the 560 for that matter, at least not nearly as well.

I'm not defending anything, I don't swing the way you do. I'm just pointing out the logical fallacies in your arguments these days. Don't get me started on the silliness that is your new slogan of "a card already filled that performance role".


Already with the personal attacks, way to go. :|

Is that a personal attack? I guess I'm just not super sensitive like you. My apologies if you felt that was a personal attack RS, I never meant to offend you on a personal level.

I see they finally made you change your troll avatar; it's actually more surprising that they finally made you change it than how long you had it for. Anand has a reputation of bias to uphold, you know.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Basically RS was talking smack about the GCN arch and saying Kepler was more future-proof, and now it appears he's done a 180, even going so far as to use the same Unigine argument I made four months ago but which he shrugged off back then. You talk about DC, but look at his comments about MSAA and tess and FP16.

In the context of that thread, it was. Right away I outlined that we were comparing 2 spending scenarios for the OP's purposes (he wanted to spend $350 for the 7870), and we presented him with 2 options:

1) Buying an HD7850 for $250 + overclock
2) Buying a GTX670 for $400

OP's context: keeping the card for 3 years and not upgrading. I went with Option #2 for his 1080P resolution. You recommended Option #1, since you ignored the GTX670's performance advantages and actually claimed the performance lead over the 7850 would narrow.

You said spending extra for a 670 is a waste of $ since you can't future-proof. I said the 670 provides playable frame rates at 1080P and that an OC'd 7850 cannot reach the 670 in performance, which is why it was better for the OP's purposes. Then I showed you modern games where the HD7850 was trailing the 670 by miles. I specifically focused on games that used tessellation, since those were the games where the 7850 at the time had very poor performance (this was reasonable since AMD had not fixed their driver issues, and the 7850 is not even that fast vs. the 670 to begin with).

Here is my post at the beginning of thread to put the context out there and you still keep twisting it:

"Right but he says he intends to keep this card for 3 years and not upgrade. In that case think about BF3 expansions, all the upcoming Dirt games on Ego engine, Crysis 3, and newer games with tessellation. GTX670 will mop the floor with HD7850 in those types of games. In Crysis 2, GTX670 is 69% faster than an HD7850 at 4xMSAA 1080P. In Batman AC, GTX670 is 66% faster with 8AA. That's remarkable. We can sit here and argue that those games are "NV-biased" which is a moot point since there most likely will be more Batman games, more Crysis games and more games in the future with tessellation [I mentioned The Secret World as the first MMO that will use tessellation]."
http://forums.anandtech.com/showpost.php?p=33484059&postcount=24

Even back then I already warned the OP about Secret World and NV's possible advantage in that game. At the very least I laid out this information for him to decide but you blatantly kept ignoring that this is a factor (without even asking what games OP intends to play).

Your response to my post right after:

"In any case by the time massive tessellation matters, which may be in decades because tacking it on as an extra does little, all existing cards will be obsolete anyway. Note that multiple sources say next gen consoles will be powered by something like 6670s."

^ You pretty much went off wanting to discuss tessellation and Kepler vs. GCN, and ignored the main context of that thread, which was helping the OP:

"Was it better to get a GTX670 for 3 years of gaming @ 1080P, or to buy an HD7850 and overclock it?" I went with the former option and explained why I thought it was worth the extra $150.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You might be right, GCN might be the better DX11 design, or it could simply be a comparison of a $550-600 lackluster next-gen product against a mid-range next-gen product. A 6970 vs. a GTX 560 Ti with an additional 30W of TDP in clock rate, if you will: a comparable situation, just as this is.

Even if it is true that GK104 is a mid-range Kepler part, the GTX680 sells for $480 vs. a 1GHz HD7970 after-market card that sells for $380. Just so I understand you correctly: NV held back GK110 purposely (what even I initially thought), but now NV is selling a "mid-range" Kepler for almost $500.

Even if we accept that the GK104 GTX680 is a mid-range product, NV has delayed the manufacture/launch of the GK110 K20 to December 2012. It looks to me like GK110 was simply unmanufacturable at initial 28nm yields and at acceptable profit margins.

Thus, what we have is a "mid-range" NV card for $480 that can't beat an AMD flagship that sells for $100 less.

Where did I say that? I said, the 7970 GE uses roughly 40% more power than the 680. Look at the 3DCenter link where the power measurements of all 5 reviews that do them "card only" are averaged and compared.

GTX680 = 169W
7970 GE = 235W
235/169 = 1.39 -> 39% more

http://www.3dcenter.org/artikel/launch-analyse-nvidia-geforce-gtx-660

HD7970 GE in that test is a reference card. No such card exists in retail channels.

If you have been reading this forum and paying attention, an HD7970 @ 1150-1165MHz draws about 225-238W at full load, while after-market 7970 GE cards draw about 200W or less.

You guys keep linking the HD7970 GE in reviews, and we've repeated 20-30x that it's meaningless, yet you still keep linking it. :hmm:

Where can I buy a reference 1.25V HD7970 GE card? Please let me know.
 
Last edited:

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I only brought up that old thread in this one regarding architectures, NOT specific cards, and how you slammed the GCN architecture. ARCHITECTURE. Only to reverse in less than 4 months. Who's to say things won't change again in another 4 months? That is all. You criticize me for talking about arch, but you were the first to do it and to slip into talking about general GCN vs. Kepler rather than specific cards from time to time.

And for the record, I didn't say to buy and hold a 7850 for 3 years; I said that buying midrange continually is better than trying to buy and hold for 3 years. So buy, sell after 18 months, rinse and repeat. Furthermore, the OP of that thread was way wishy-washy and had set his budget too low for the prices of a GTX 670 back then, and only later changed it.

P.S. We are talking about architecture, not cards. Just as a reminder. If you think it is impossible to talk about architecture outside of specific cards, then please don't make general remarks like how GCN struggles with MSAA vs. Kepler, etc., because that sure sounds like talking about architectures, not specific cards, to me. So does talking about DirectCompute, FP16, and tessellation. You can't have it both ways, and I think it's disingenuous to pretend you are talking about specific instances only when the plain reading of your past posts is that you are discussing architectures.

I linked to the entire thread and do not believe I took anything out of context, given that we wound up discussing a lot more than just the OP's question in that thread. It is disingenuous to hide behind the first page of that thread when the next 2 pages were about GCN vs. Kepler architecture. It is disingenuous to pretend you were talking only about 670 vs. 7850 when we were talking about things like Anandtech's review of the 7970's tessellation power and your dismissing it as not representative of real-life gaming.

Speaking of thread derails though let's just leave it at that lest we derail this thread, too.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You are simplifying the comparison of architectures in a way that often leads to erroneous results:

Does HD7850 have strong DP compute performance? No. Conclusion: GCN sucks at compute.
Does HD7970 have strong DP compute performance? Yes. Conclusion: GCN is great at compute.

Does GTX460 have strong tessellation performance? No. Conclusion: Fermi sucks at tessellation.
Does GTX580 have strong tessellation performance? Yes. Conclusion: Fermi is great at tessellation.

Does HD7870 have strong performance/watt? Yes. Conclusion: GCN has very good performance/watt on 28nm.
Does HD7970 have strong performance/watt vs. the competition? No. Conclusion: GCN has inferior performance/watt on 28nm.

You cannot discuss architectures without discussing specific SKUs in the context of "future-proofing" for next-generation games. The minute you are discussing HD7950 vs. GTX660Ti (for example), you are no longer talking about a pure GCN vs. Kepler architecture comparison but a "neutered GCN" SKU vs. a "neutered Kepler" SKU. Now you have to go into specifics and see if more important aspects have been cut down than just tessellation engines and so on. Even if the GTX660Ti has retained good tessellation performance, if ROP and memory bandwidth have suddenly become the more important bottlenecks, the main bottleneck in the GPU for future games may outweigh that SKU's advantage in some DX11 feature such as tessellation. How do you not understand this?

You can add 30 geometry units but if a card is ROP / memory bandwidth limited, it won't help much unless the entire game is tessellated.

This is the entire aspect you keep missing. You can't discuss GCN vs. Kepler architectures and ignore SKUs in a family of that architecture to make generalizations like you have done for this entire generation.

The tessellation performance for NV is tied to the PolyMorph engines, which are part of the SMX clusters. If you remove SMX clusters from an SKU, you directly impact how well that SKU can perform tessellation (this is why the GTX460 is far inferior at tessellation to the GTX470/480 parts). This is why comparing how GTX660 vs. HD7870 handle tessellation against GTX680 vs. HD7970 is meaningless -- but you want to do it anyway. You can have 1 Kepler SKU that's terrible at tessellation and 1 that's very good at it, simply because of how the chip has been cut down. You keep ignoring that and continue to compare "architectures". You made this exact mistake when you wanted to overlay the discussion of GCN vs. Kepler onto HD7850 vs. GTX670 by trying to prove that the HD7850 won't suffer in next-generation games with tessellation "since GCN does not have a tessellation problem" -- the HD7850 will definitely suffer vs. the GTX670 in real-world games that use tessellation, since it represents a cut-down GCN architecture SKU.
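A toy model of the point above: in Kepler, one PolyMorph (geometry) engine lives in each SMX, so cutting SMX clusters from a SKU cuts geometry throughput with them. The SMX counts below are the commonly cited figures for these parts; the clocks are rounded and purely illustrative.

```python
# Geometry throughput scales roughly with PolyMorph engine count x clock,
# and there is one PolyMorph engine per SMX in Kepler.
kepler_skus = {
    "GTX 680 (full GK104)": {"smx": 8, "clock_mhz": 1006},
    "GTX 660 (GK106)":      {"smx": 5, "clock_mhz": 980},
}

full = kepler_skus["GTX 680 (full GK104)"]
for name, sku in kepler_skus.items():
    rel = (sku["smx"] * sku["clock_mhz"]) / (full["smx"] * full["clock_mhz"])
    print(f"{name}: ~{rel:.0%} of full GK104 geometry throughput")
# With these figures, the GTX 660 lands around 61% of the GTX 680's
# geometry throughput -- which is why a cut-down SKU can't stand in for
# the full chip in a tessellation comparison.
```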

Instead of discussing architectures in the context of SKUs -- which is what you should be doing, because architectures don't play games, SKUs based on those architectures do -- you insist on comparing the GCN and Kepler architectures in a bubble.

Other people have made a similar mistake by comparing performance/watt or performance/mm^2 of the HD7970 vs. the GTX680 and proclaiming that GCN is very inefficient. The GCN SKU Tahiti XT is less efficient for games than GK104 Kepler is (true!), but the Tahiti XT chip is less efficient because it has a bucketload of compute fat tacked on (like Fermi did). The GCN architecture itself, in pure gaming-chip form, is actually pretty efficient: HD7850/7870.
 
Last edited:

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Facepalm.

Does HD7850 have strong compute performance? No. Conclusion: GCN sucks at compute.
Does HD7970 have strong compute performance? Yes. Conclusion: GCN is great at compute.

Does GTX460 have strong tessellation performance? No. Conclusion: Fermi sucks at tessellation.
Does GTX580 have strong tessellation performance? Yes. Conclusion: Fermi is great at tessellation.

You cannot discuss architecture without discussing specific SKUs. The minute you are discussing HD7950 vs. GTX660Ti, you are no longer talking about GCN vs. Kepler architecture but "neutered GCN" vs. "neutered Kepler". Now you have to go into specifics and see if more important aspects have been cut down than just tessellation engines and so on.

You can add 30 geometry units, but if a card is ROP / memory bandwidth limited, it won't help.

This is the entire aspect you keep missing. You can't discuss architectures and ignore SKUs in a family of that architecture to make generalizations like you have done for this entire generation.

This is the entire aspect you keep missing. You can't discuss architectures and ignore SKUs in a family of that architecture to make generalizations like you have done for this entire generation.

If that is what you believe, it's curious why you didn't state it in the old thread, especially in the post where you said you believed Kepler is more advanced due to MSAA/tess/FP16.

http://forums.anandtech.com/showpost.php?p=33484902&postcount=37

You yourself used the word architecture multiple times in that post. YOU. NOT ME. In case you can't be bothered to click that link, this is what you wrote:


You could remove the graphs from my post to make yours a bit shorter! :D



He said his budget was up to $350. That's how the discussion of $400 670 started.



See that's what you are missing: the differences in architectures and what's happening in modern games today.

I am going to address all of these points below.

(1) Tessellation

For example, Crysis 2 on Ultra + DX11 adds tessellation automatically. Why do you think the HD6900 series and HD7800 series get hammered so much? You can't have the GTX670 and HD7850 perform the same in a "hypothetical scenario" as you mentioned, since there are 2 fundamental differences in the architecture that will ensure Kepler is faster: tessellation and FP16 textures, both part of modern DX11 games.

Those 2 things are a large part of the reason why the GTX670 smokes the 7850 by such a large delta in modern games that use tessellation and FP16 textures (there are other reasons, such as driver optimizations, too). If you need more evidence, tessellation and FP16 are huge reasons why the 670 smokes the 7970 in some DX11 games (just like the HD7970's superior bandwidth allows it to lay waste to the 670 in memory-bandwidth-limited situations and with AA at high resolutions where memory bandwidth is a factor).

You keep dismissing tessellation as a non-factor, but it's part of DX11 games and is partly WHAT makes the GTX600 series so fast in modern games that have it!

Here is the evidence:

How do you explain the GTX670 being 39% faster and the GTX680 being 51% faster than the GTX580 in Crysis 2? It sure isn't related to memory bandwidth or pixel performance, of which the GTX580 has plenty vs. the 670/680. It also cannot be explained by texture performance, since that wouldn't explain why the GTX670 is whopping the HD7970, which has gobs of texture performance.

[Image: gigabyte_gtx670w_crysis21920.jpg]


The same if we revisit an older game such as Lost Planet 2
[Image: gigabyte_gtx670w_lp21920.jpg]


So that's Tessellation covered.

(2) FP16 textures (64-bit textures)

I also noted another key advantage of the Kepler architecture: FP16 texture performance. You know which games use FP16 textures? The Dirt games, for example, based on the EGO engine:

[Image: gigabyte_gtx670w_dirt31920.jpg]


Again, in all 3 of these cases the HD7850 is horrendously outclassed by the 670 because of 2 things the Kepler architecture excels at:

1) Tessellation performance
2) FP16 next generation texture performance.

Now before you call it a fluke, I'll even prove it to you using older cards.
Look at the specs of the GTX570 vs. the GTX480:
- The GTX480 has more memory bandwidth, more pixel & shader performance, and more VRAM, while the GTX570 has a tiny texture fill-rate advantage.

Now can you explain to me how in the world the GTX570 can beat the GTX480 by a whopping 5 FPS in Dirt 2 at 2560x1600? This should not happen under any circumstances based on their specs.

[Image: Dirt2_03.png]


Do you know why? Because of the FP16 enhancements of GF110 over GF100. GF110 can filter 4 FP16 texels/clock vs. 2 texels/clock for GF100. It says so right in the GF110 architecture breakdown. The Dirt games use the EGO engine, which uses FP16 textures. Dirt 2 also has tessellation, which GF110 performs better at than GF100.
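A back-of-the-envelope sketch of that FP16 anomaly. The TMU counts and clocks below are the commonly cited reference specs for these two cards, and the per-TMU rates model the GF100-to-GF110 change described above (half-rate vs. full-rate FP16 filtering); treat the exact numbers as illustrative.

```python
# FP16 texture fill rate ~ TMUs x clock x FP16 texels per clock per TMU.
def fp16_gtexels(tmus, clock_mhz, fp16_per_clock_per_tmu):
    return tmus * clock_mhz * fp16_per_clock_per_tmu / 1000  # GTexels/s

gtx480 = fp16_gtexels(60, 700, 0.5)  # GF100: FP16 filtered at half rate
gtx570 = fp16_gtexels(60, 732, 1.0)  # GF110: FP16 filtered at full rate
print(f"GTX 480: {gtx480:.1f} GTexel/s FP16, GTX 570: {gtx570:.1f} GTexel/s FP16")
# Despite nearly identical TMU counts and clocks, the GF110 card ends up
# with roughly double the FP16 throughput -- enough to flip the Dirt 2 result.
```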

GF110 also has improved tessellation performance over GF100: "NVIDIA has improved the efficiency of the Z-cull units in their raster engine, allowing them to retire additional pixels that were not caught in the previous iteration of their Z-cull unit. Z-cull unit primarily serves to improve their tessellation performance by allowing NVIDIA to better reject pixels on small triangles." ~ Source

Ok so let's revisit tessellation with older cards in Metro 2033:

[Image: Metro_02.png]


It should be impossible for GTX570 to beat GTX480 by that much. The answer? Improved Tessellation in GF110.

(3) Deferred MSAA (Frostbite 2.0)

Ok, so what about BF3? I think there is an explanation for that too. AMD's Cayman and Cypress took a larger performance penalty in deferred-MSAA game engines than the Fermi architecture did. This was investigated and proven by Bit-Tech using Battlefield 3. Architecturally, I haven't been able to find an explanation, but it could be that AMD stopped optimizing for MSAA as much because of MLAA. I don't know for sure. It looks like this hasn't changed much with GCN vs. Kepler, which is why Kepler wins against the 7970 in BF3.

In summary, it is my view that Kepler architecture has all 3 facets that are necessary for next generation games covered:

1. Tessellation
2. FP16 textures
3. Deferred MSAA

All 3 of these are going to be trends in next-generation games. This is why it's very likely that AMD is working on improving at least points #1 and #2 with GCN 2.0 / Enhanced, because in its current state GCN will fall apart rather quickly in next-gen games.

Therefore, if I were betting, I'd say the HD7850's performance will get much worse a lot sooner, since it lacks in all 3 of those areas. No amount of overclocking will save the 7850. Whether or not that extra performance is worth $150 depends on the person and his/her budget/upgrade frequency.

I guess it comes down to when you think next-generation games will have more tessellation and higher-resolution FP16 textures. I think it's already happening, which is why I think the performance delta between the 7850 and 670 will only grow larger.

The HD7850 is already falling apart in Dirt Showdown (EGO engine), while the HD7950 can't even outperform the 580. Notice the GTX570 again outperforming the 480?

[Image: dirt s 1920.png]


In conclusion,

People on our forum are often quick to jump to the notion of "NV-biased games". However, if you dissect the architecture and look under the hood of the technology, it's not as simple as "NV paid more $ to make this game run faster". It sure appears to me that NV made Kepler architecture a lot more advanced for games than GCN is in its current form.

Theoretical tests also show that the Kepler architecture does better at tessellation and FP16 textures, and in my opinion that's a large part of the reason why it's so fast in modern games despite its 256-bit memory bus.

[Images: tessmark-x32.gif, unigine.gif, b3d-filter-fp16.gif]


Just my 2 cents. Feel free to present an opposing case.

Whatever, let's move back on topic for this thread. So... GTX 660 competitive, GTX 650 slightly overpriced... discuss.
 
Last edited:

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Even if it is true that GK104 is a mid-range Kepler part


Nvidia is a business, one without x86 rights, and one which is destroying AMD in every aspect of being a business. AMD in GPUs is playing the same failing role they are in CPUs: they let Nvidia dictate the market, they react to Nvidia's pricing, and they follow Nvidia's innovation like sheep. If you were oblivious to that fact before, and boost, adaptive vsync, and cloud gaming didn't wake you up to it, then the red tint on your gaze is clouding your vision.

The very existence of a GK110 Kepler GPU makes GK104 a mid-range card; there should be no argument about that fact at this point in time. Anyone following GPUs who isn't steeped in personal bias would know GK104 was a derivative of GF104/114 and that it was by design a mid-range part.

Whatever Nvidia's reasoning for not bringing out GK110 first to the consumer market, it makes no difference; "unable to create it", though, seems like a remarkably short-sighted comment considering they've already delivered over 30 GK110 cards. I'm sure you saw the thread about the massive order to fill on them. The sheer number of pre-orders for GK110 is staggering; Nvidia is going to have a good year to come.

Amazingly, all this while we've been hearing about low wafer production, about Nvidia getting priority from TSMC, and about AMD's inability to make up much, if any, ground in market share despite having a full 28nm lineup, one they had well before Nvidia even had two cards out. The one card Nvidia did have out, though, was a high-margin card with a small die, allowing more chips to be cut from each wafer at a lower per-die cost.
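The small-die point is easy to sanity-check with the standard dies-per-wafer approximation. The die sizes below (GK104 ~294 mm², Tahiti ~365 mm²) and the 300 mm wafer are my own illustrative assumptions, not numbers from this thread:

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic approximation: wafer area / die area, minus an edge-loss term.
    Ignores defect density and yield, so it's an upper bound on good dies."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Smaller die -> noticeably more candidate chips per (equally priced) wafer.
gk104 = gross_dies_per_wafer(294)   # ~200 gross dies
tahiti = gross_dies_per_wafer(365)  # ~158 gross dies
print(gk104, tahiti, f"{gk104 / tahiti - 1:.0%} more dies per wafer")
```

So before yield even enters the picture, the smaller chip gets you roughly a quarter more dies from the same wafer spend, which is the margin advantage being described above.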

Did you see AMD's Q2 results? Laughable at best. Perhaps, since you missed how having a full 28nm lineup did little to stem the fail that is AMD's business, you also missed the part where Nvidia had an amazing quarter while only really having one $500 card out!

Now, what's even more interesting is that while Nvidia is beating AMD as a business with this mid-range GPU, we see a move towards GK110 right as we receive word that TSMC has quadrupled their 28nm wafer output: more wafers, and the appearance of a huge 7.1 billion transistor "high end" GPU...

It seems to me you're a guy on a forum, and Nvidia is a business making business decisions. Also of note: the market disagrees with just about everything you say these days. Food for thought, I guess.
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
The sexual tension in this post is palpable, I think you should all act on it....

Anyways, yet another thread has degraded into Nvidia this, AMD that. Blah blah blah; it's never ending, with both sides unwilling to see reason and only a few people looking at things purely on performance/$.

It's like the forum rules state that every thread in VC&G must play out exactly like this or everyone gets banned.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Nvidia is a business, one without x86 rights, and one which is destroying AMD in every aspect of being a business. AMD in GPUs is playing the same failing role they are in CPUs: they let Nvidia dictate the market, they react to Nvidia's pricing, and they follow Nvidia's innovation like sheep. If you were oblivious to that fact before, and boost, adaptive vsync, and cloud gaming didn't wake you up to it, then the red tint on your gaze is clouding your vision.

The very existence of a GK110 Kepler GPU makes GK104 a mid-range card; there should be no argument about that fact at this point in time. Anyone following GPUs who isn't steeped in personal bias would know GK104 was a derivative of GF104/114 and that it was by design a mid-range part.

Whatever Nvidia's reasoning for not bringing out GK110 first to the consumer market, it makes no difference; "unable to create it", though, seems like a remarkably short-sighted comment considering they've already delivered over 30 GK110 cards. I'm sure you saw the thread about the massive order to fill on them. The sheer number of pre-orders for GK110 is staggering; Nvidia is going to have a good year to come.

Amazingly, all this while we've been hearing about low wafer production, about Nvidia getting priority from TSMC, and about AMD's inability to make up much, if any, ground in market share despite having a full 28nm lineup, one they had well before Nvidia even had two cards out. The one card Nvidia did have out, though, was a high-margin card with a small die, allowing more chips to be cut from each wafer at a lower per-die cost.

Did you see AMD's Q2 results? Laughable at best. Perhaps, since you missed how having a full 28nm lineup did little to stem the fail that is AMD's business, you also missed the part where Nvidia had an amazing quarter while only really having one $500 card out!

Now, what's even more interesting is that while Nvidia is beating AMD as a business with this mid-range GPU, we see a move towards GK110 right as we receive word that TSMC has quadrupled their 28nm wafer output: more wafers, and the appearance of a huge 7.1 billion transistor "high end" GPU...

It seems to me you're a guy on a forum, and Nvidia is a business making business decisions. Also of note: the market disagrees with just about everything you say these days. Food for thought, I guess.

Implying those are features that are actually any good. Seriously? How in the world is NVIDIA's implementation of GPU Boost any good? All you can use now is an offset for overclocking and over-volting, and on top of that they have much tighter power limits/constraints than AMD.

Adaptive V-Sync? That's a gimmick at most.

Cloud gaming? You can't be serious...

And there is no GK110 gaming card because it can't be manufactured in any decent quantity and it would make more sense to sell those GPUs for the professional market.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Implying those are features that are actually any good. Seriously? How in the world is NVIDIA's implementation of GPU Boost any good? All you can use now is an offset for overclocking and over-volting, and on top of that they have much tighter power limits/constraints than AMD.

Adaptive V-Sync? That's a gimmick at most.

Cloud gaming? You can't be serious...

And there is no GK110 gaming card because it can't be manufactured in any decent quantity and it would make more sense to sell those GPUs for the professional market.

And yet we all see AMD following in their footsteps on every count.

I use adaptive vsync on my i3-540 rig in several games, like Guild Wars 2, where my 4.4GHz i3-540 can't maintain 60 fps, and I really enjoy it. I enjoy it so much, in fact, that I use half-refresh adaptive and clock my reference 470 to 850 core: performance when I need it, cool operation when I don't. +1, would recommend.
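For anyone wondering what "half refresh adaptive" actually does, here's a toy model of the decision as I understand Nvidia's description; the function, names, and numbers are my own illustration, not driver code:

```python
def present_interval(fps: float, refresh_hz: int = 60, half_refresh: bool = False) -> float:
    """Toy model of adaptive vsync: sync to the (half-)refresh rate only when
    the GPU can keep up; otherwise present frames immediately (tearing)
    instead of stalling down to the next vsync divisor (e.g. 60 -> 30 fps)."""
    target = refresh_hz // 2 if half_refresh else refresh_hz
    if fps >= target:
        return 1.0 / target   # vsync engaged: locked to the target rate
    return 1.0 / fps          # vsync dropped: frames shown as rendered

print(present_interval(75))                     # locked to 60 Hz
print(present_interval(45))                     # tears at 45 fps, no 30 fps stall
print(present_interval(45, half_refresh=True))  # half-refresh: locked to 30 Hz
```

That last case is the appeal for a slower card: lock to a steady 30 Hz when you can't hold 60, and the GPU runs cooler instead of racing ahead.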

Hey, thanks for your insight on what Nvidia can and can't do; based on your past predictions, though, I'm going to have to disregard it.
 

thilanliyan

Lifer
Jun 21, 2005
12,040
2,256
126
The very existence of a GK110 Kepler GPU makes GK104 a mid-range card,

The very existence of a 8970 makes the 7970 a mid-range card...

To be honest, those features you mentioned, "boost, adaptive vsync, and cloud gaming", are, well... debatable. I certainly don't want boost since I like OCing myself, I've never tried adaptive vsync so no opinion on that, and cloud gaming has many issues to solve before becoming good enough. nVidia does innovate though, no question about that.
 

sze5003

Lifer
Aug 18, 2012
14,304
675
126
How can you compare an 8970? It's not even out yet. The 7970 GHz would be the last card of the 7 series, not really a mid-range one. Who knows if they will make an 8990 after that, or an 8970 GHz lol.