***Official GeForce GTX660/GTX650 Review Thread***

Page 15

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
The highest performing GPU of a generation is not mid-range; it's the highest performing card. I can't see how anyone can say otherwise. I understand that some people wanted a mammoth huge die like Fermi, but for the 600 series there won't be one. We have seen small-die flagships in the past, and that is the case currently. It makes no sense to deny that the GK104 architecture is used in the 600 series as Nvidia's flagship.


The highest performing GPU of this generation is 30% faster than last generation's highest performing card.

What price bracket a card is located in does not make its design high or low end. If that were the case, I'd have a $2,000,000 Datsun to sell you, comparable to a Ferrari.



Nobody is denying that a mid-range design is currently the flagship card; we're only noting that it's the worst generational performance increase from a full node shrink in the history of GPUs, and quite possibly the most disappointing generation on record.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
That to me is a huge perk for nVidia cards the last two generations. Not sure what AMD is doing (yeah yeah, argue quality components versus *Chinese knock-offs*), but can their cards seriously get any longer?


That can be a serious concern for those who don't have big enough cases. That's a special situation, though.

I wouldn't just dismiss the bold part like it isn't valid. If smaller was simply better, then why do we generally recommend the 670's on the 680 PCB?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I've never claimed to be flawless. It's a bad practice, but there are times when everyone gets tired of endless explaining and will just make the questions go away so that everyone can get back to doing what needs to be done.

Legitimate ideas never generate this response. Typically, it is when the other individual is trying to either a) whine to make their pet view (that I/we've already evaluated and dismissed) be addressed again b) is talking to hear themselves talk (we all work with those kinds), or c) is trying to generate huge masses of work for others with little, no, or negative gain to the organization. The nice thing is, if they can still support their view when given the firehose of info on why a current design or action plan is what it is, then they probably have a good idea that needs more evaluation and the rest of us have missed something. But yes, at its core, it is more of an argument tactic than it is an education or discussion tactic (with heavy roots in arrogance and is used when the interest is no longer in discussing, but making the other party go away).

Then there are the people who make posts that are aimed at the poster rather than the post because it's easier to prejudice people against the individual than refute the facts.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Well, you can clearly see a pattern when looking at Nvidia's chips for the last 2 generations. They had large chips (GT200(b), GF100/110) and smaller ones. The Kepler family consists of GK107, 106, 104 and 110. That was known a long time ago. In that lineup, GK104 is not the fastest. If Nvidia were not releasing a larger chip at all, things would be different. But judging by the past and the fact of GK110's existence, I think things are pretty clear.

There was a GK100 that got canned, or rebadged as GK110, because they couldn't get it to market. That doesn't mean that the 680 isn't nVidia's high end. There were also rumors about a bigger Tahiti running at 1300MHz+. That doesn't mean that what AMD was actually able to bring to market isn't their high-end design.

It means that GK100 and Tahiti+ (for lack of a better name) were pipe dreams. Looks like we might get those designs as a refresh?
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
That can be a serious concern for those who don't have big enough cases. That's a special situation, though.

I wouldn't just dismiss the bold part like it isn't valid. If smaller was simply better, then why do we generally recommend the 670's on the 680 PCB?

I never argued smaller was better, but I included that part as sarcasm since I know that is what someone will counter with.

Use the small case example:
Options - possible shoddy quality Nvidia option or NO AMD option (assuming a specific price / perf point is trying to be met, sure you can say "get an HD 7750")

Hmmm...

From what I've read of the reference vs. GTX 680 PCB debate, it always comes down to "it OC's better because it's a full-bred GTX 680 board." I don't think I've read much about people running stock GTX 670 references having issues. Usually they start to tinker with offsets and realize "hey, this isn't as good as that."
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
Then there are the people who make posts that are aimed at the poster rather than the post because it's easier to prejudice people against the individual than refute the facts.


The issue is, half the time the facts aren't relevant to the discussion, and they're spewed out in such volume that unless you're going to devote hours a day to this site, you don't have the time to point out why they aren't even relevant.

If I tell you the world is round, and in response you post page after page on all the information you were able to find on the Atlantic Ocean along with 50 pictures, I can't really spend the time to go through and figure out what in the world your point is. At that point, the "facts" aren't relevant at all.
 

KingFatty

Diamond Member
Dec 29, 2010
3,034
1
81
The issue is, half the time the facts aren't relevant to the discussion, and they're spewed out in such volume that unless you're going to devote hours a day to this site, you don't have the time to point out why half the time it isn't even relevant.

If I tell you the world is round, and in response you post page after page on all the information you were able to find on the Atlantic Ocean along with 50 pictures, I can't really spend the time to go through and figure out what in the world your point is. At that point, the "facts" aren't relevant at all.

Excellent point. I much appreciate it when people can really articulate a complex/nuanced point in very simple terms, and then also back it up with a link. When the logic/point is that well put, you often don't need to check the link yourself, but you know it's there if you question the point, etc.

But what can you do when the point you are trying to make cannot be simplified any further, so you can't do any more distilling/simplification? Then the wall of text and stream of pics can also be helpful, but it's usually a good idea to preface it with your best attempt at a summary, showing that there is no further reduction to be made before the fact dump.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126

This videocard is clocked at 1100MHz and there is no explanation of what was used to test the power consumption - a game, Furmark, a synthetic test? Regardless, this is not merely HD7970 GE spec; it's even faster. You need to overclock a 680 to match this card in performance.


This card is clocked at 1200MHz and has 6GB of GDDR5. You are not accounting for the huge performance increase over the 680, and you're comparing a 6GB card vs. a 2GB one? Hilarious. The Sapphire Toxic is 19-20% faster than a GTX680 at 1600P. Nice try. You are not even comparing videocards in the same performance bracket at this point, never mind the fact that one has 3x the VRAM.


Not an apples-to-apples comparison. This HIS card uses 1180MHz GPU clocks. Considering the HD7970 GE is 5-6% faster on average at 1080P than a stock 680, an 1180MHz HD7970 is again in an entirely different class, one that requires a 1300MHz GTX680 to match its level of performance.

You can find GTX680 OC models that use less or the same as the reference model, too:

All those factory pre-overclocked 680 models you linked are much slower than all the HD7970 GE after-market SKU cards you linked. Every HD7970 GE card you linked was clocked between 1100-1200MHz. It takes a heavily overclocked 680 to match that level of performance, and it takes a maxed-out 680 to match a 1200MHz 7970 without a volt-mod on the 680.

At 1250MHz, the HD7970 is basically unmatched by an overclocked sub-1300MHz 680. You can see here that a 1250MHz HD7970 beats a 1217MHz GPU core + GPU Boost = 1290MHz GTX680 by 7% at 1080P and by 13% at 1440P:
http://www.xbitlabs.com/picture/?src=/images/graphics/radeon-hd-7970-ghz-edition/zfulltable.png

So again you are not comparing 2 GPUs that perform the same. The extra 40-45W of power consumption on an overclocked 7970 nets 7-13% faster performance over the 1290mhz GTX680.

Sorry, but you are comparing apples and oranges. You are basically linking me the top-tier factory-overclocked after-market HD7970 GE cards, when you originally stated that an HD7970 GE at factory clocks (1050MHz) in after-market versions draws as much power as a GTX480. That's simply wrong and you know it. Yes, at 1200-1250MHz, the HD7970 will start to draw about 250W, but that's 14-19% faster than the HD7970 GE.

Here is a KFA2 GTX680 @ 1267MHz which uses 212W of power.

An HD7970 @ 1150-1165MHz will use 225-238W of power. Like I told you, the difference between them is maybe 40W. HD7970s that are clocked to 1180MHz (like the HIS X you linked) or 1200MHz 6GB (Sapphire TOXIC) push around 250-260W, but they are faster than a 1267MHz GTX680. For that you'd need a 1300MHz GTX680, which will push around 220W of power. Again you are looking at a 40-45W delta on a system that uses almost 400W at this point.

The GTX480 uses 273W of power at load at stock speeds.....^_^ More than an overclocked 1200MHz 7970 6GB.

And once again, you have completely failed to take into account that a person can go out right now, buy a $380 HD7970 1GHz, and overclock it to 1100-1150MHz on the stock voltage of 1.175V.
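For what it's worth, the deltas quoted above can be sanity-checked with a quick perf-per-watt calculation. This is a rough sketch using only the figures cited in this post (the ~212W GTX 680 baseline, +40-45W, and +7-13% performance), not fresh measurements; real draw varies by card, workload, and test method:

```python
# Illustrative arithmetic only, using the rough figures quoted above
# (~212W for a 1267MHz GTX 680, +40-45W and +7-13% for an OC'd HD 7970).

def perf_per_watt(relative_perf, watts):
    """Relative performance (GTX 680 = 1.00) divided by board power."""
    return relative_perf / watts

gtx680 = perf_per_watt(1.00, 212)        # baseline
hd7970_1080p = perf_per_watt(1.07, 252)  # +7% perf at 1080P, +40W
hd7970_1440p = perf_per_watt(1.13, 257)  # +13% perf at 1440P, +45W

for name, ppw in [("GTX 680", gtx680),
                  ("HD 7970 @1080P", hd7970_1080p),
                  ("HD 7970 @1440P", hd7970_1440p)]:
    print(f"{name:15s} {ppw:.5f} perf/W")
```

The takeaway either way: a 40-45W delta is small in absolute terms on a ~400W system, whichever card wins the efficiency ratio.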
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Wait - people are comparing peak consumption of the GTX480 to other cards? Wow. I guess we ignore the fact that the GTX480 has no power limiter like Tahiti does...
 

Termie

Diamond Member
Aug 17, 2005
7,949
48
91
www.techbuyersguru.com
That can be a serious concern for those who don't have big enough cases. That's a special situation, though.

I wouldn't just dismiss the bold part like it isn't valid. If smaller was simply better, then why do we generally recommend the 670's on the 680 PCB?

I never argued smaller was better, but I included that part as sarcasm since I know that is what someone will counter with.

Use the small case example:
Options - possible shoddy quality Nvidia option or NO AMD option (assuming a specific price / perf point is trying to be met, sure you can say "get an HD 7750")

Hmmm...

From what I've read of the reference vs. GTX 680 PCB debate, it always comes down to "it OC's better because it's a full-bred GTX 680 board." I don't think I've read much about people running stock GTX 670 references having issues. Usually they start to tinker with offsets and realize "hey, this isn't as good as that."

It's true that a larger PCB is probably better on average, but the need for longer cards is partly driven by power use.

What really baffles me is why the 7870 has a 9.5" PCB, when it appears to use almost the same power as the 7" 660, and far, far less power than a reference 7" 670. I'm not even convinced the 7870 needs two PCIe power connectors, but it probably helps with overclocking.

For the past few generations, starting with at least the GTX460, nVidia has been able to provide equal performance on a much smaller card than AMD. No one needed a 10" PCB to take a 460 to 850MHz or beyond. The 7870 uses about the same power, and yet it's huge, about the size of a 470. What gives?
 
Feb 19, 2009
10,457
10
76
AMD over-engineers their graphics cards; looking through their past few generations, they always included more VRMs and chokes than required.

NV, on some of their products, includes just enough.

Does it matter? Well, yes, if you overvolt and OC like nuts. Otherwise, probably no difference in longevity.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
For the past few generations, starting with at least the GTX460, nVidia has been able to provide equal performance on a much smaller card than AMD. No one needed a 10" PCB to take a 460 to 850MHz or beyond. The 7870 uses about the same power, and yet it's huge, about the size of a 470. What gives?

From like the Radeon 9700 to about the Radeon 4870, the PCB for Radeon cards was rather small, perhaps 7-8", and even then I think I'm overestimating.

Then I got my Radeon HD 5870 and I had to chisel the 3.5" bay out of my case to get that SoB in, haha.

It seems only the HD x7x0 tier cards have tiny footprints; the rest require a modest-sized case.

AMD over-engineers their graphics cards; looking through their past few generations, they always included more VRMs and chokes than required.

NV, on some of their products, includes just enough.

Does it matter? Well, yes, if you overvolt and OC like nuts. Otherwise, probably no difference in longevity.

I'm sure over-engineering is a win for consumers if it doesn't affect the price, but when I have a PC case with 8" of clearance, I can either buy a new case and add that to the cost of a GPU upgrade, or pick up an nVidia card.
 
Feb 19, 2009
10,457
10
76
LOL, yes, it's compensating for something... their GPUs are always so huge.

I was almost ready to grab a 670 for this SFF case, but luckily found a short enough 7950 that fits perfectly.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
LOL, yes, it's compensating for something... their GPUs are always so huge.

I was almost ready to grab a 670 for this SFF case, but luckily found a short enough 7950 that fits perfectly.

Which PowerColor model, and what's the length of it?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
The issue is, half the time the facts aren't relevant to the discussion, and they're spewed out in such volume that unless you're going to devote hours a day to this site, you don't have the time to point out why half the time it isn't even relevant.

If I tell you the world is round, and in response you post page after page on all the information you were able to find on the Atlantic Ocean along with 50 pictures, I can't really spend the time to go through and figure out what in the world your point is. At that point, the "facts" aren't relevant at all.


I understand your point. I don't debate endless long-winded posts either. That alone, though, doesn't invalidate any point RS has made. If most of it is superfluous, as you've suggested, you should just be able to skip it and still make your point and counter his.

I believe he is just trying to show the large amount of evidence that has led him to his position. I don't think he's doing it to make it difficult to wade through and advance a false position.
 

lopri

Elite Member
Jul 27, 2002
13,317
691
126
Sold my GTX 670* and looking to buy a GTX 650 2GB for ~$100. (hopefully soon) Kepler is such a pleasant piece for 2D work. So this generation I am going to end up with a high-end AMD for 3D gaming (HD 7950) and mid-range NV for everyday computing (GTX 650).

This isn't a slight to NV at all. It's kind of like my last round - GTX 580 and HD 6870. The GTX 580 was faster in games, but I liked the HD 6870 way more for everyday work. This time I will likely spend most of my computing time on the GTX 650 thanks to its outstanding 2D performance and multitasking. The HD 7950 will be reserved for 3D gaming, as my old GTX 580 was.

*The money is going toward getting a Nexus 7, woot.
 
Last edited:

toyota

Lifer
Apr 15, 2001
12,957
1
0
I don't consider the GTX 650 a mid-range GPU at all. It can't even beat the slowest mid-range GPU (GTX 560 SE) from Nvidia's previous gen.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
I don't consider the GTX 650 a mid-range GPU at all. It can't even beat the slowest mid-range GPU (GTX 560 SE) from Nvidia's previous gen.

I don't think this much matters, due to the fact that he isn't using the 650 for gaming but for its superior 2D quality/performance??? o_Oo_O
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
I don't think this much matters, due to the fact that he isn't using the 650 for gaming but for its superior 2D quality/performance??? o_Oo_O
My comment has nothing to do with anything other than the fact that he called it a mid-range GPU, which it clearly is not.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I'm seriously debating picking up a GT 640 or GTX 650 to experiment with Hybrid PhysX. I get the feeling Kepler is a beast for that kind of stuff versus Fermi. Either way, the GTX 460 would go to a good home rather than a drawer once I'm done with BL2 and waiting for the next PhysX title.
 

rolodomo

Senior member
Mar 19, 2004
269
9
81
Sold my GTX 670* and looking to buy a GTX 650 2GB for ~$100. (hopefully soon) Kepler is such a pleasant piece for 2D work. So this generation I am going to end up with a high-end AMD for 3D gaming (HD 7950) and mid-range NV for everyday computing (GTX 650).

I'm interested in a GTX 650 2GB as well for my hackintosh. The whole Kepler line works quite well in Mountain Lion (you would think the latest AMD cards would too, but they're much more hit and miss across model numbers).

I would probably buy a GTX 650 2GB for $100. I don't game, but I might want to with a higher-res display on my Apple rig. Currently running a hot & loud reference ATI 4870 512M.

I'd probably snag the GTX 660 2GB for $180.

I keep telling myself there is no way I need a GTX 660 TI in Apple world, but this model seems to be the most popular Kepler ATM for custom Macs.