Kepler vs GCN: Which is the better architecture?


Better Architecture?

  • Kepler

  • GCN



Ventanni

Golden Member
Jul 25, 2011
1,432
142
106
I voted GCN because it's good for both gaming and compute, but Kepler definitely has an edge in gaming.
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
I'd have to say GCN, because it is better for compute, which means it has more gaming potential (Forward+ uses compute, so "compute vs. gaming" is a false dichotomy), and because Kepler requires too much work on the part of the driver and game devs. The full hardware scheduling in GCN keeps sustained performance closer to peak performance in more apps.

However, I prefer nv's hardware color and depth calcs so I'd buy a 670 over anything AMD has to offer. Of course, it doesn't matter when every monitor out now sucks.
 

Elfear

Diamond Member
May 30, 2004
7,168
826
126
I voted GCN because it's good for both gaming and compute, but Kepler definitely has an edge in gaming.

It does? Any links?

The charts on post #21 show the two architectures pretty much tied in gaming, if you're referring to performance/watt.
 
Feb 19, 2009
10,457
10
76
Kepler is not more efficient as a whole, since some of you forget the 78xx. GCN shows it can scale for either raw performance or efficiency.
 

Rvenger

Elite Member
Super Moderator
Video Cards
Apr 6, 2004
6,283
5
81
I think at this point in time and maturity, GCN may have proved to be the superior architecture based on performance at higher resolutions -- not really surprising given the raw specs and potential.

If the subject is efficiency, Kepler makes a very strong case.

AMD/ATI are almost always impressive when it comes to architectures. World class -- leaders in many respects. There are reasons why ATI/AMD and nVidia have survived through the years, and it's based on their immense talent.


:thumbsup: Indeed they are both very good. Pick your poison. Red or green lol.
 

omeds

Senior member
Dec 14, 2011
646
13
81
I'm going with GCN, apart from efficiency. I feel the 7970 is better hardware-wise, but the 680 is the better overall product.
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,227
2
0
And yet Kepler still keeps up with GCN even when limited...!

Wat? It gets destroyed in compute, and even in games it's slower

And despite all that, according to the charts above, it still only manages to pull the same perf/watt as GCN

Not sure how Kepler has 14 votes... I think you're in for a shock when big Kepler is released...
Then maybe you'll realize how underrated GCN is
 

The Alias

Senior member
Aug 22, 2012
646
58
91
GCN vs Kepler in gaming:

[Chart: relative gaming performance at 2560×1600 (perfrel_2560.gif)]


Now let's process the results:

The 7970 GHz Edition outdoes the 680 while using more power, and that's what you guys use to conclude that Kepler is more efficient than GCN. But that comparison is invalid, because the 7970 does WAY more: the 680 sucks at compute, whereas the 7970 GHz is a beast at it. So let's move to a more even-handed comparison:

The 7750 has about the same gaming performance as the 650, and both use 128-bit interfaces, yet GCN uses less power; so much so that the 7750 doesn't even need more power than the PCIe slot provides, whereas the 650 does. What does that mean, you ask? It means that GCN is a more efficient architecture than Kepler when they are compared on even ground.

My vote goes to GCN.
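The perf-per-watt argument above is easy to sketch numerically. The relative-performance and board-power figures below are placeholders I made up for illustration, not measured data; only the 75 W slot limit comes from the PCIe spec:

```python
# Rough perf-per-watt comparison. The performance and power numbers
# are ASSUMED placeholders for illustration, not measured data.
PCIE_SLOT_LIMIT_W = 75  # a PCIe x16 slot alone supplies up to 75 W

cards = {
    "HD 7750": {"rel_perf": 100.0, "board_power_w": 55.0},  # assumed
    "GTX 650": {"rel_perf": 100.0, "board_power_w": 65.0},  # assumed
}

for name, c in cards.items():
    perf_per_watt = c["rel_perf"] / c["board_power_w"]
    slot_only = c["board_power_w"] <= PCIE_SLOT_LIMIT_W
    print(f"{name}: {perf_per_watt:.2f} perf/W, "
          f"{'slot power is enough' if slot_only else 'needs a 6-pin connector'}")
```

With equal performance, whichever card draws less board power wins perf/W outright, and anything under 75 W can run without an auxiliary connector.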
 

Greenlepricon

Senior member
Aug 1, 2012
468
0
0
Wat? It gets destroyed in compute, and even in games it's slower

And despite all that, according to the charts above, it still only manages to pull the same perf/watt as GCN

Not sure how Kepler has 14 votes... I think you're in for a shock when big Kepler is released...
Then maybe you'll realize how underrated GCN is

Kepler at 100% will probably be something to be reckoned with, but AMD is working on their next big thing now too. It would have been nice if Nvidia didn't have this generation of cards running handicapped, but it's still too early to say how it will do at its best.
Overall I agree with this post (other than being unsure about big Kepler). If we count whatever both companies threw at us this year, then GCN is superior in terms of power while being close to as efficient. If we're talking about the actual architecture and transistor details, then that comes down to who is taking advantage of what. They are both very good, and it's far outside my scope of knowledge to make a purely objective call like that. As for what we have at the moment, I voted GCN.
 

nforce4max

Member
Oct 5, 2012
88
0
0
I voted for GCN on the grounds of compute performance, and because future driver improvements will further improve performance. I look at compute performance, not just gaming performance alone, as I prefer a complete product over something that does well in one area but very poorly in another. Kepler is great in gaming, but it is not really the complete package I want, and in the future gaming will adopt some of the HPC features that high-end compute apps are making use of today. GCN is more balanced in this regard, and power consumption isn't a big deal for anything lower than a 7950. I will never forget the GTX 480 and how well it cooked meat under full load.
 
Last edited:

Haserath

Senior member
Sep 12, 2010
793
1
81
Wat? It gets destroyed in compute, and even in games it's slower

And despite all that, according to the charts above, it still only manages to pull the same perf/watt as GCN

Not sure how Kepler has 14 votes... I think you're in for a shock when big Kepler is released...
Then maybe you'll realize how underrated GCN is

There are rumors that a GK114 is coming instead of GK110, so we may never know.

AMD has a bigger GCN chip coming as well.

Both are close, but Nvidia has always been ahead in real compute fields. Hardware needs software.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
The 7750 has about the same gaming performance as the 650, and both use 128-bit interfaces, yet GCN uses less power; so much so that the 7750 doesn't even need more power than the PCIe slot provides, whereas the 650 does. What does that mean, you ask? It means that GCN is a more efficient architecture than Kepler when they are compared on even ground.

My vote goes to GCN.

Hmm, not necessarily:

[Chart: power consumption (Power.png)]


The 650 comes in with a slightly lower power draw than the 7750. So why does the 650 have a PCI-E power connector and the 7750 doesn't? I don't know, but I can tell you that the 650's power connector hardly ever gets used. What's a little more interesting to me is that the GTX 660 comes in neck and neck with the 7870 and a little bit above the 7850. In any case, there's not much basis for thinking that Kepler has a large advantage in performance per watt, at least not based on this chart.

I do find it rather remarkable how AMD and Nvidia's chips so neatly line up with each other in performance. GK104 roughly matches Tahiti, GK106 roughly matches Pitcairn, and GK107 roughly matches Cape Verde. I mean, it couldn't be closer if they had planned it. :p Still, that is just general performance matched by chip, not the overall quality, scalability, and flexibility of the architectures.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Hmm, not necessarily:

[Chart: power consumption (Power.png)]


The 650 comes in with a slightly lower power draw than the 7750. So why does the 650 have a PCI-E power connector and the 7750 doesn't? I don't know, but I can tell you that the 650's power connector hardly ever would be used.

Mind including a link to the review that's from? Just sitting there out of context doesn't tell us anything.
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
how many people here actually use GCN for compute?

seems like we saw the complete reverse with Fermi

a feature that is often bragged about but seldom used

I suppose there are a decent chunk of people out there who dabble with distributed computing, but I think on the whole that's still very much a minority number.

Regardless, this is still a weighted question. GCN is the better architecture for compute, so it can be the better solution for those who care at all about that aspect, and it's still pretty good in gaming and offers excellent value in that regard. However, Kepler is clearly the better of the two when it comes to gaming efficiency: it's not just performance/watt, but performance/transistor.

And while it's a feature almost as minor as GCN's compute advantage, nVidia still has PhysX.

And so even though I give my vote to Kepler because I care most about gaming, that doesn't mean I think it's currently the best solution; the Kepler architecture might be better for gaming, but that architecture is overpriced.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
The price of the final products depends on market demand and how much the companies want to sell it for; it's not a factor in evaluating the quality of a microarchitecture. Better would be to know just how much resources Nvidia and AMD need to make each chip, but that information isn't made publicly available.

Mind including a link to the review that's from? Just sitting there out of context doesn't tell us anything.

It's from Tom's Hardware's review of the Geforce GTX 660 and 650 when they came out. Toms was one of the few review outlets that actually did some benchmarks with the 650 and not just the 660 (Ryan Smith of AT said a review of the 650 was coming in the next week or so after the 660 review, but it never did...).
 
Last edited:

The Alias

Senior member
Aug 22, 2012
646
58
91
how many people here actually use GCN for compute?

seems like we saw the complete reverse with Fermi

a feature that is often bragged about but seldom used

I suppose there are a decent chunk of people out there who dabble with distributed computing, but I think on the whole that's still very much a minority number.

Regardless, this is still a weighted question. GCN is the better architecture for compute, so it can be the better solution for those who care at all about that aspect, and it's still pretty good in gaming and offers excellent value in that regard. However, Kepler is clearly the better of the two when it comes to gaming efficiency: it's not just performance/watt, but performance/transistor.

And while it's a feature almost as minor as GCN's compute advantage, nVidia still has PhysX.

And so even though I give my vote to Kepler because I care most about gaming, that doesn't mean I think it's currently the best solution; the Kepler architecture might be better for gaming, but that architecture is overpriced.
GCN's compute advantage is huge. Because of it, if you buy a 78xx card and Bitcoin mine with it, it eventually pays for itself!
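The "pays for itself" claim is easy to sanity-check with back-of-envelope arithmetic. Every figure below is an assumption picked for illustration; real 2012-era hash rates, payouts, and electricity prices varied wildly:

```python
# Back-of-envelope mining payback estimate. ALL figures are ASSUMED
# for illustration; actual rates and prices varied widely.
card_cost_usd = 250.0            # assumed 78xx street price
hash_rate_mhs = 300.0            # assumed hash rate, MH/s
power_draw_w = 130.0             # assumed load power
electricity_usd_per_kwh = 0.12   # assumed electricity rate
revenue_usd_per_mhs_day = 0.01   # assumed payout per MH/s per day

daily_revenue = hash_rate_mhs * revenue_usd_per_mhs_day
daily_power_cost = (power_draw_w / 1000.0) * 24 * electricity_usd_per_kwh
daily_profit = daily_revenue - daily_power_cost

if daily_profit > 0:
    payback_days = card_cost_usd / daily_profit
    print(f"card pays for itself in ~{payback_days:.0f} days")
else:
    print("card never pays for itself at these rates")
```

The point of the sketch is the structure, not the numbers: payback time is the card cost divided by (revenue minus electricity), and it blows up quickly as the payout rate falls.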
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
I'd say Nvidia's implementation of Kepler chips was a bit better as a whole package. Boost worked wonderfully out of the box. AMD now does boost as well with GCN, but I remember hearing how poorly it worked at first. That could be fixed by now, but I haven't followed it. At release, Nvidia's software side was ahead of AMD's. It's more even now, but it took six months.

I think Nvidia had a better launch and used their hardware better out of the gate.

That may have little to do with the actual architecture but you can't just throw a GPU onto a board and sell it, you gotta make it work through software as well.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
It's from Tom's Hardware's review of the Geforce GTX 660 and 650 when they came out. Toms was one of the few review outlets that actually did some benchmarks with the 650 and not just the 660 (Ryan Smith of AT said a review of the 650 was coming in the next week or so after the 660 review, but it never did...).

Thanks. I was wondering what app they ran to take that measurement. I can't see where they tell us, though. Not much good without knowing, IMO. :\
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I'd say Nvidia's implementation of Kepler chips was a bit better as a whole package. Boost worked wonderfully out of the box. AMD now does boost as well with GCN, but I remember hearing how poorly it worked at first. That could be fixed by now, but I haven't followed it. At release, Nvidia's software side was ahead of AMD's. It's more even now, but it took six months.

I think Nvidia had a better launch and used their hardware better out of the gate.

That may have little to do with the actual architecture but you can't just throw a GPU onto a board and sell it, you gotta make it work through software as well.

That doesn't speak to which arch. is better.

It's a very vague question, though, and leaves a lot of room for interpretation.
 

Sunny129

Diamond Member
Nov 14, 2000
4,823
6
81
how many people here actually use GCN for compute?
We're here, but we're admittedly in the vast minority. To be honest, some DC projects are far more efficient on nVidia hardware, and other DC projects are far more efficient on AMD hardware... hence the mix of AMD and nVidia hardware you see in my sig. Granted, most of my AMD hardware is VLIW4 architecture, but I do have an HD 7950 in the mix, which runs Milkyway@Home 24/7.
 
Last edited:

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
Which Kepler? In gaming I think the GK104 does rather well. In compute I think GK104 is pretty decent compared to GCN. But the full sized compute Kepler should wipe the floor with GCN in compute.
Mm... good idea... let's compare some mythical, non-existent chip with AMD's freely available production chip... that'll work... :whistle:
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
The question is which Kepler architecture: GK104, or the big Kepler, GK110? Fermi already had hardware scheduling; GK104 does not, to decrease die size and increase margins. GK104 is all about margins. The half-length PCB of the GTX 670 really takes the cake when it comes to cutting manufacturing costs to a minimum. It looks like a PCB taken from a sub-$100 card.

I think GCN is better overall than Kepler as implemented in GK104 when considering both gaming and compute performance; GK110 might be another matter. Tahiti is not that much bigger than GK104, and yet it has reasonable FP64 performance (which takes a lot of die area), hardware scheduling (which also takes die area), and it just wipes the floor with GK104 when it comes to compute performance. If GK104 featured the same FP32-to-FP64 ratio and had hardware scheduling like Fermi had, I don't think it would be any smaller than Tahiti. As I said, GK104 is all about gaming performance and die size.

I think the cost of producing a GTX 680 card is closer to a 7870's than a 7970's, so from that standpoint it is a huge win for nVidia's bottom line. It just has an additional 82mm², and that's it: the same memory bus (wide memory buses are expensive to implement) and the same amount of memory. Memory doesn't come free either; 4GB GTX 680s are ultra expensive.

Personally, I would prefer a doubled GF110: 1024 shader cores clocked at over 2GHz, and so on. The obvious downside would be performance per watt, but personally I couldn't care less about that metric as long as it doesn't interfere with cooling. It's relevant when building multi-GPU rigs with cards stacked right next to each other, but I wouldn't have any problem with a single GPU pulling 400 watts.
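The FP64 trade-off Lepton87 describes can be put in rough numbers. The shader counts, clocks, and FP64 rates below are spec-sheet figures as commonly cited for these chips; treat them as assumptions rather than verified data:

```python
# Theoretical throughput from shader count, clock, and FP64 rate.
# Spec figures below are assumptions (commonly cited launch specs).
def gflops(shaders, clock_mhz):
    # each shader retires one FMA (2 FLOPs) per clock
    return shaders * clock_mhz * 2 / 1000.0

# Tahiti (HD 7970): 2048 shaders @ 925 MHz, FP64 at 1/4 the FP32 rate
tahiti_fp32 = gflops(2048, 925)
tahiti_fp64 = tahiti_fp32 / 4

# GK104 (GTX 680): 1536 shaders @ 1006 MHz, FP64 at 1/24 the FP32 rate
gk104_fp32 = gflops(1536, 1006)
gk104_fp64 = gk104_fp32 / 24

print(f"Tahiti: {tahiti_fp32:.0f} GFLOPS FP32, {tahiti_fp64:.0f} GFLOPS FP64")
print(f"GK104:  {gk104_fp32:.0f} GFLOPS FP32, {gk104_fp64:.0f} GFLOPS FP64")
```

Under these assumptions, the two chips land within ~20% of each other in peak FP32, while Tahiti's 1/4-rate FP64 units give it several times GK104's double-precision throughput, which is exactly the die-area-for-compute trade being argued about.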
 
Last edited: