Kepler vs GCN: Which is the better architecture


Better Architecture?

  • Kepler

  • GCN


Results are only viewable after voting.

omeds

Senior member
Dec 14, 2011
646
13
81
Software=applications in the context of my post. NOT drivers. Read my post again. Nvidia has had some really terrible driver issues of their own over the years.

I'm not referring to quality of drivers per se, but features and support for consumers. With Nv I can have a decent 3D experience, PhysX, AO, SLI AA, far more IQ tweaks and AA modes/combinations via 3rd party apps, set render ahead limits, more control over profiles (SLI, AA, AO etc).

AMD is just lacking on the software side to me as a consumer, although I like the hardware better.
 

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
Let me see... Tahiti XT is faster in frames per second and way faster in compute ability.
Hard to pick a winner, wasn't it? :whistle:
Oops, I forgot to mention it's also cheaper than its counterpart, the GTX 680.
3-0. Game over.
*checks voting by the forum members*
61% in favor of GCN... ouch :oops:
 
Last edited:

Ibra

Member
Oct 17, 2012
184
0
0
You guys forgot that the K20X is also based on the Kepler architecture. So who's faster in compute now?
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Architecture != specific SKUs. If we look at the architecture alone, then price, features, and even absolute performance don't necessarily matter.

This poll is kinda useless, because 95% of the people who voted didn't understand it and because the Kepler architecture is not yet complete. Only when GK110-based GeForce cards launch will we be able to compare. K20(X) exists, but there are of course no independent, comprehensive reviews out there.
 
Last edited:

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Let me see... Tahiti XT is faster in frames per second and way faster in compute ability.
Hard to pick a winner, wasn't it? :whistle:
Oops, I forgot to mention it's also cheaper than its counterpart, the GTX 680.
3-0. Game over.
*checks voting by the forum members*
61% in favor of GCN... ouch :oops:

If only the forum represented the buying public.
 

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
They probably do.
I'd say there are more NVDA owners here than AMD owners, generally.
It's likely a similar percentage to market share among the general public.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Mm... good idea... let's compare some mythical, non-existent chip with AMD's freely available production chip... that'll work... :whistle:

GK110 does exist. Forgot to check your facts? ;)

The problem is that GCN and Kepler have different product solutions. Nvidia is separating its HPC/Quadro and GeForce lines, while AMD is putting it all into one GPU. You would have to compare their respective solutions with each other, e.g. S9000 vs K20.
 

96Firebird

Diamond Member
Nov 8, 2010
5,747
342
126
Everything overall.

In this case...

Kepler excels in the professional compute market now that Nvidia has released its newest professional cards. It is also great for the mobile market and hangs in there in the discrete desktop market.

GCN is better as a discrete desktop card, and does well in compute for both regular users and the professional market. For the bitcoin users out there, it provides enough compute performance to earn a little bit of money.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,817
1,552
136
From what we've seen so far it's pretty obviously GCN.

I mean, just look back at previous compute architectures versus gaming ones. GT200b was 84% larger than RV770 and outperformed it by only 20% in gaming, and it still lost sometimes in compute (simpler tasks that played into AMD's higher theoretical throughput, or basically anything that needed double precision, since GT200 hardly had any).

Fermi was 65% larger and got what, 12-18% higher gaming performance than Cypress? And Cypress was still better at some high profile workloads like bitcoin mining.

Tahiti is 25% larger than GK104, and performs about 8-10% better in games, and unlike everything else mentioned so far GK104 doesn't even make an attempt at doing double precision, which is why a comparison against Pitcairn is even more unflattering. If that's not enough, Tahiti's compute advantage is greater than any of the previous ones Nvidia enjoyed. There is no stronghold for GK104 like bitcoin mining. It's just worse all around at compute, by a massive amount.

There's a chance that GK110 will paint a different picture, but based on what we have in front of us right now, AMD is far in the lead architecturally. If you think otherwise, then you must see Fermi and GT200 as absolute crap, since they both paid a drastically higher "compute tax" yet got far less out of it. Anyone voting for Kepler in this thread willing to make that admission? ;)
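
For anyone who wants to sanity-check the ratios in this post, here is a rough back-of-the-envelope sketch in Python. It uses only the percentages quoted above (taking the midpoints of the quoted ranges), so it is illustrative arithmetic, not measured data; the `comparisons` names and values are just shorthand for the figures cited in the post.

```python
# Rough sketch: gaming performance delivered per unit of die area, using only
# the percentage deltas quoted above (midpoints taken for the quoted ranges).
# "size" = how much larger the first chip is than the second,
# "perf" = how much faster it is in games.
comparisons = {
    "GT200b vs RV770":  {"size": 1.84, "perf": 1.20},
    "Fermi vs Cypress": {"size": 1.65, "perf": 1.15},  # midpoint of 12-18%
    "Tahiti vs GK104":  {"size": 1.25, "perf": 1.09},  # midpoint of 8-10%
}

for name, c in comparisons.items():
    # Perf-per-area of the larger chip relative to the smaller one:
    # a value below 1.0 is the "compute tax" the larger chip is paying.
    ratio = c["perf"] / c["size"]
    print(f"{name}: {ratio:.2f}x perf per unit area vs. the smaller chip")
```

By that crude yardstick (roughly 0.65x for GT200b, 0.70x for Fermi, 0.87x for Tahiti), Tahiti gives up far less performance per unit of area for its compute capability than GT200b or Fermi did, which is the point being made above.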
 
Last edited:

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Adaptive Vsync is not a feature, it's a marketing gimmick, same with PhysX. And what's CUDA? You mean streaming processors? Well, AMD has them as well.

They're actually features. You may not like them, and thankfully there are other choices from IHVs and AIBs.
 

headhumper

Banned
Dec 11, 2012
35
0
0
They're actually features. You may not like them, and thankfully there are other choices from IHVs and AIBs.
Well, Adaptive Vsync is not a feature, as it does not work, and there's nothing at all wrong with good old regular Vsync, as it was never broken, therefore relegating Adaptive Vsync to a marketing gimmick. CUDA cores is marketing speak for streaming processors, and PhysX is in like 18 games, and most are old and/or crappy games.

http://www.physxinfo.com/
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Well, Adaptive Vsync is not a feature, as it does not work, and there's nothing at all wrong with good old regular Vsync, as it was never broken, therefore relegating Adaptive Vsync to a marketing gimmick. CUDA cores is marketing speak for streaming processors, and PhysX is in like 18 games, and most are old and/or crappy games.

http://www.physxinfo.com/
Dude, you have no clue what you are talking about. Stop right there, as it will be beneficial for all of us :thumbsup:
 

Ibra

Member
Oct 17, 2012
184
0
0
Architecture != specific SKUs. If we look at the architecture alone, then price, features, and even absolute performance don't necessarily matter.

This poll is kinda useless, because 95% of the people who voted didn't understand it and because the Kepler architecture is not yet complete. Only when GK110-based GeForce cards launch will we be able to compare. K20(X) exists, but there are of course no independent, comprehensive reviews out there.

Because nobody is going to review one card. Start with 100 K20Xs and we'll talk. :cool:

From what we've seen so far it's pretty obviously GCN.

I mean, just look back at previous compute architectures versus gaming ones. GT200b was 84% larger than RV770 and outperformed it by only 20% in gaming, and it still lost sometimes in compute (simpler tasks that played into AMD's higher theoretical throughput, or basically anything that needed double precision, since GT200 hardly had any).

Fermi was 65% larger and got what, 12-18% higher gaming performance than Cypress? And Cypress was still better at some high profile workloads like bitcoin mining.

Tahiti is 25% larger than GK104, and performs about 8-10% better in games, and unlike everything else mentioned so far GK104 doesn't even make an attempt at doing double precision, which is why a comparison against Pitcairn is even more unflattering. If that's not enough, Tahiti's compute advantage is greater than any of the previous ones Nvidia enjoyed. There is no stronghold for GK104 like bitcoin mining. It's just worse all around at compute, by a massive amount.

There's a chance that GK110 will paint a different picture, but based on what we have in front of us right now, AMD is far in the lead architecturally. If you think otherwise, then you must see Fermi and GT200 as absolute crap, since they both paid a drastically higher "compute tax" yet got far less out of it. Anyone voting for Kepler in this thread willing to make that admission? ;)

Actually it's 4-5% (680 vs 7970 GHz).

[Images: perfrel.gif relative-performance charts]


Well, Adaptive Vsync is not a feature, as it does not work, and there's nothing at all wrong with good old regular Vsync, as it was never broken, therefore relegating Adaptive Vsync to a marketing gimmick. CUDA cores is marketing speak for streaming processors, and PhysX is in like 18 games, and most are old and/or crappy games.

http://www.physxinfo.com/

Good old regular Vsync is worse.

However with VSync on, your FPS can often fall by up to 50%.
That's why I played with tearing until Adaptive Vsync came along.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
From what we've seen so far it's pretty obviously GCN.

I mean, just look back at previous compute architectures versus gaming ones. GT200b was 84% larger than RV770 and outperformed it by only 20% in gaming, and it still lost sometimes in compute (simpler tasks that played into AMD's higher theoretical throughput, or basically anything that needed double precision, since GT200 hardly had any).

Fermi was 65% larger and got what, 12-18% higher gaming performance than Cypress? And Cypress was still better at some high profile workloads like bitcoin mining.

Tahiti is 25% larger than GK104, and performs about 8-10% better in games, and unlike everything else mentioned so far GK104 doesn't even make an attempt at doing double precision, which is why a comparison against Pitcairn is even more unflattering. If that's not enough, Tahiti's compute advantage is greater than any of the previous ones Nvidia enjoyed. There is no stronghold for GK104 like bitcoin mining. It's just worse all around at compute, by a massive amount.

There's a chance that GK110 will paint a different picture, but based on what we have in front of us right now, AMD is far in the lead architecturally. If you think otherwise, then you must see Fermi and GT200 as absolute crap, since they both paid a drastically higher "compute tax" yet got far less out of it. Anyone voting for Kepler in this thread willing to make that admission? ;)

1) GK110 is already out there.

2) 40 nm vs 28 nm? Yeah, right.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Then prove me wrong please.

I can, but it will be boring anyway. So for starters, adaptive vsync does not cut your fps by half (and so on) when your fps is below your monitor's refresh rate. CUDA is NV's proprietary technology for GPU compute; it has got nothing to do with AMD's streaming processors. They coined the term after they introduced the unified device architecture in the 8xxx series. PhysX, I think we can agree on that.
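
For what it's worth, the halving behaviour being argued about comes from classic double-buffered vsync: if a frame takes even slightly longer than one refresh interval, the buffer swap waits for the next refresh, so the effective frame rate snaps down to refresh/2, refresh/3 and so on, while an adaptive scheme simply stops syncing below the refresh rate. Below is a minimal, simplified sketch of that behaviour, assuming a 60 Hz display; the function names and per-frame render times are made-up examples for illustration, not measurements of any specific driver.

```python
# Minimal sketch: effective frame rate with double-buffered vsync vs. an
# adaptive scheme. Assumes a 60 Hz display; frame times are hypothetical.
import math

REFRESH_HZ = 60
REFRESH_INTERVAL = 1.0 / REFRESH_HZ  # ~16.7 ms

def vsync_fps(render_time_s):
    """Classic double-buffered vsync: the buffer swap waits for the next
    refresh, so frame time rounds up to a whole number of refresh intervals."""
    intervals = math.ceil(render_time_s / REFRESH_INTERVAL)
    return REFRESH_HZ / intervals

def adaptive_vsync_fps(render_time_s):
    """Adaptive vsync: stay synced at or above the refresh rate, otherwise
    behave like vsync off (frames are shown as soon as they are ready)."""
    raw_fps = 1.0 / render_time_s
    return REFRESH_HZ if raw_fps >= REFRESH_HZ else raw_fps

for ms in (15.0, 17.0, 25.0, 34.0):  # hypothetical per-frame render times
    t = ms / 1000.0
    print(f"{ms:5.1f} ms/frame -> vsync: {vsync_fps(t):5.1f} fps, "
          f"adaptive: {adaptive_vsync_fps(t):5.1f} fps")
```

The 17 ms case is the telling one: with plain double-buffered vsync a frame that just misses 60 fps drops straight to 30 fps, whereas the adaptive approach keeps it at roughly 59 fps at the cost of possible tearing.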
 

headhumper

Banned
Dec 11, 2012
35
0
0
I can, but it will be boring anyway. So for starters, adaptive vsync does not cut your fps by half (and so on) when your fps is below your monitor's refresh rate.

Would you like me to upload a video to show how regular Vsync does not cut frames like Nvidia erroneously claims in order to pump up their marketing gimmick?
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Would you like me to upload a video to show how regular Vsync does not cut frames like Nvidia erroneously claims in order to pump up their marketing gimmick?
NV has nothing to do with that. Maybe you should understand some basics before posting.
 

ginfest

Golden Member
Feb 22, 2000
1,927
3
81
OT, but I wonder if there's any poll between ATI and Nvidia on this forum that ATI wouldn't win? This place is skewed big time toward big red ;)