Kepler vs GCN: Which is the better architecture

Page 3 - AnandTech Forums

Better Architecture?

  • Kepler

  • GCN


Results are only viewable after voting.

Siberian

Senior member
Jul 10, 2012
258
0
0
Midrange Kepler launched faster than high-end GCN. That's almost like being a generation ahead. Just compare the size of each chip to see.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
mm...good idea...let's compare some mythical, non-existent chip with AMD's freely available production chip...that'll work... :whistle:


Did you want to buy one? A K20?
http://www.tigerdirect.com/applicati...?EdpNo=7462665


http://www.theregister.co.uk/2012/11/12/nvidia_tesla_k20_k20x_gpu_coprocessors/
The GK104 versus the GK110

The GK104 weighs in at 3.54 billion transistors, and bundles 192 single-precision CUDA cores into what Nvidia calls a streaming multiprocessor extreme (SMX) unit.
The GK104 has eight SMX units for a total of 1,536 cores. Each SMX unit has 64KB of L1 cache, with 768KB of L2 cache added for the SMXes to share. Unlike the predecessor "Fermi" GPUs, the Kepler GK104 chip has 48KB of read-only cache memory tied to the texture units. On the Tesla K10 card, all of the 1,536 cores are fired up, and there are two GK104s on the PCI-Express card, each with 4GB of GDDR5 graphics memory and 160GB/sec of memory bandwidth off the card.
The Tesla K10 card has a piddling 190 gigaflops of double-precision math, but a whopping 4.58 teraflops of single-precision. For weather modeling, life sciences, seismic processing, signal processing, and other workloads that don't care a bit about DP floating point, this GPU coprocessor is exactly what the scientist ordered.
[Image: Nvidia Tesla K10/K20 feature comparison]
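The throughput figures quoted above can be sanity-checked from the core count alone. A rough sketch (the ~745MHz clock and GK104's 1/24-rate double precision are my assumptions, not stated in the quoted article):

```python
# Back-of-the-envelope check of the Tesla K10 numbers quoted above.
# Assumes a ~745 MHz core clock and 1/24-rate double precision on GK104;
# both are assumptions for illustration, not from the quoted article.

CORES_PER_GK104 = 1536   # 8 SMX units x 192 CUDA cores
GPUS_PER_CARD = 2        # the K10 carries two GK104s
CLOCK_GHZ = 0.745        # assumed core clock
FLOPS_PER_CORE = 2       # one fused multiply-add per cycle

sp_gflops = GPUS_PER_CARD * CORES_PER_GK104 * FLOPS_PER_CORE * CLOCK_GHZ
dp_gflops = sp_gflops / 24  # assumed DP rate: 1/24 of SP

print(f"single precision: {sp_gflops / 1000:.2f} TFLOPS")  # ~4.58
print(f"double precision: {dp_gflops:.0f} GFLOPS")         # ~191
```

The result lands on the article's 4.58 teraflops single-precision and roughly 190 gigaflops double-precision, which is where the "piddling DP" complaint comes from.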
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
mm...good idea...let's compare some mythical, non-existent chip with AMD's freely available production chip...that'll work... :whistle:

GK110 does exist, and it would be fair game to compare it to GCN's top dog Tahiti (especially considering that Tahiti is used in the best professional card AMD sells). This thread isn't just about what has been made available for desktop graphics cards; it's about the architecture in general.

The problem with GK110, though, is that hardly any benchmarks for it are available. So while it exists, that fact hardly helps in evaluating just how good the Kepler architecture is.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
GK110 does exist, and it would be fair game to compare it to GCN's top dog Tahiti (especially considering that Tahiti is used in the best professional card AMD sells). This thread isn't just about what has been made available for desktop graphics cards; it's about the architecture in general.

The problem with GK110, though, is that hardly any benchmarks for it are available. So while it exists, that fact hardly helps in evaluating just how good the Kepler architecture is.

Right now it's only in a Tesla card, so gaming benchmarks are out of the question. Still, it has pretty conservative clocks; that's quite normal for professional cards, but I would like to see if it handles the same clocks as GK104, given proper cooling of course. 732MHz at stock is pretty damn little. Maybe it just can't clock well, and that's why we'll never see it in a GeForce card, although nV greed seems like a more probable explanation.
BTW, where are the compute benchmarks? We should have seen tons of them by now.
http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last
Anand did a paper-launch article about it a month ago and still no benchmarks? If nV wasn't kind enough to send them a sample for evaluation, they should just buy one. Why didn't they?
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Right now it's only in a Tesla card, so gaming benchmarks are out of the question. Still, it has pretty conservative clocks; that's quite normal for professional cards, but I would like to see if it handles the same clocks as GK104, given proper cooling of course. 732MHz at stock is pretty damn little. Maybe it just can't clock well, and that's why we'll never see it in a GeForce card, although nV greed seems like a more probable explanation.
BTW, where are the compute benchmarks? We should have seen tons of them by now.

If they have enough to sell as a GeForce card, I'm fairly certain they will. After all, it's only the first chip that costs millions to make; after that, selling them for ~$600 each would make plenty of money, assuming decent yields and that they don't have to take fab resources that would be better used for higher-volume parts.

In the end, I think what will determine that is whether they can compete with AMD's top chip using a smaller chip. If AMD focuses on making a top consumer chip that's also good at compute, Nvidia might be able to do that.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
If they have enough to sell as a GeForce card, I'm fairly certain they will. After all, it's only the first chip that costs millions to make; after that, selling them for ~$600 each would make plenty of money, assuming decent yields and that they don't have to take fab resources that would be better used for higher-volume parts.

In the end, I think what will determine that is whether they can compete with AMD's top chip using a smaller chip. If AMD focuses on making a top consumer chip that's also good at compute, Nvidia might be able to do that.

Even if they could make enough of them to release as a GeForce card, I'm not so sure that they would. Even with slower but more profitable cards they still outsell AMD, and despite all that, NV's market share seems to be rising at AMD's expense. They don't need the fastest single-GPU card on the market to outsell the competitor. The GTX 680 is slower and more expensive to boot, yet it still outsells the 7970 GHz by a lot. For example, in my country the cheapest GTX 680 costs 25% more than a Gigabyte 7970 (also the cheapest 7970 I could find) clocked at 1GHz (I'm not sure if it's a GHz Edition or just a normal overclock, but it doesn't really matter). And guess what? NV sells more GTX 680s than AMD 7970s, and it's not some trivial difference; it sells way more.
 
Last edited:

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Kepler also won huge chunks of market share in the mobile market.


Chief Executive of AMD: We Are Not Interested in Low-Volume Customers.
But reducing manufacturing costs in many ways causes market share to decrease. Many criticized Nvidia Corp. for pumped-up OpEx due to implementation costs and other manufacturing-related charges that the company faced during the Kepler GPU family ramp-up. As time has shown, Nvidia is now the No. 1 supplier of notebook GPUs (based on data from Mercury Research provided by Nvidia) because of AMD's reluctance to help integrate its Mobility Radeon products based on the recent architecture.
 
Last edited:

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
It's interesting that this poll is closer between AMD and Nvidia when discussing the architecture, whereas the poll for GTX 680 vs Radeon 7970 GE is a landslide in favour of the 7970.

Nvidia did themselves a good turn with Kepler after the debacle of the GTX 480/470 dustbuster furnaces. I know I'm curious to see what a 500mm2+ Kepler die is like in terms of thermals. Pretty sure it will still be a roaster, but more in the vein of a 580 than a 480.

Kepler in gaming right now is impressive for its power efficiency but not really for performance. Sort of a flip-flop of what Nvidia has traditionally done: powerful performance and not-so-good thermals, efficiency, etc.

I really hope they decide to put out a decent enthusiast card with Kepler at some point. Huge die, unlocked voltage control, serious performance, etc. If the 780 as a 680 refresh is just another pretty tiny die with a nominal performance increase and more locked-down voltage, it will be really disappointing.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
Kepler. Just look at the new mobile GPUs to see the power efficiency. The GTX 680MX is as fast as the desktop GTX 580.

I can't comment on compute, as it doesn't matter to me. But even though the 7970 GE is clearly faster than the 680, I wonder how close it would be if the 680 had a 384-bit memory bus. Sure, it would raise the power consumption of the Kepler chip as well, but I wonder if it would still use less and be equal to or perhaps faster than the 7970 GE.
 
Last edited:

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
Honestly, who are we to actually discuss the merits of the architectures? As if the majority really know what any of it actually means.

We just sit back eating Cheetos reading what the smart people wrote, copying and pasting big words and appropriate terminology as we go. A few days later and a few beers deep, we can then be found at Best Buy in the video card aisle, telling people how we are members of an elite AMD-biased video forum, directing them away from that 5200 Ultra their unsuspecting fingers were on, and learning them about the promised land we call Newegg.

After a few years of repeating this behavior, we then earn the right to call anybody and everybody a Fanboi based on what video card they have, and throw stones at their mothers.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Kepler. Just look at the new mobile GPUs to see the power efficiency. The GTX 680MX is as fast as the desktop GTX 580.

I can't comment on compute, as it doesn't matter to me. But even though the 7970 GE is clearly faster than the 680, I wonder how close it would be if the 680 had a 384-bit memory bus. Sure, it would raise the power consumption of the Kepler chip as well, but I wonder if it would still use less and be equal to or perhaps faster than the 7970 GE.

Easy to check by matching the memory bandwidth of those cards. I don't know if a 294mm2 die can even have a 384-bit bus; if not, a 320-bit bus would be a good compromise, and then I wouldn't worry about RAM amount on multi-GPU cards. There was a test (Xbitlabs?) where someone lowered a 7970's memory clock so that its bandwidth matched what it would have with a 256-bit bus; from what I remember, it was about 13% slower. GK104 got a 256-bit bus because they didn't plan for this chip to be their flagship. They probably would have made more money even if their big chip had been released instead of GK104. The GTX 680 is a cheap card to build, and even though they have the slower card they still outsell AMD very significantly. High-end is not the only area where they have a slower card and still outsell AMD's competing product without problems. Such is the power of marketing and brand.
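The bandwidth gap being discussed works out like this; a quick sketch (the 6.0 Gbps and 5.5 Gbps effective memory data rates are the reference-card specs, and the 320-bit case is the hypothetical compromise mentioned above, not a real product):

```python
# Peak memory bandwidth = bus width in bytes x effective data rate.
def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_bits / 8 * data_rate_gbps

gtx680  = bandwidth_gb_s(256, 6.0)  # GTX 680 reference: 192 GB/s
hd7970  = bandwidth_gb_s(384, 5.5)  # HD 7970 reference: 264 GB/s
hypo320 = bandwidth_gb_s(320, 6.0)  # hypothetical 320-bit GK104: 240 GB/s

print(gtx680, hd7970, hypo320)

# The downclock test mentioned above cut the 7970's bandwidth to 256/384
# (two-thirds) of stock, and reportedly cost about 13% performance.
```

So a 320-bit GK104 would have landed within about 10% of the 7970's bandwidth, which is why it reads as a plausible compromise in the post above.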
 
Last edited:

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
Honestly, who are we to actually discuss the merits of the architectures? As if the majority really know what any of it actually means.

We just sit back eating Cheetos reading what the smart people wrote, copying and pasting big words and appropriate terminology as we go. A few days later and a few beers deep, we can then be found at Best Buy in the video card aisle, telling people how we are members of an elite AMD-biased video forum, directing them away from that 5200 Ultra their unsuspecting fingers were on, and learning them about the promised land we call Newegg.

After a few years of repeating this behavior, we then earn the right to call anybody and everybody a Fanboi based on what video card they have, and throw stones at their mothers.
Ehh, I don't bother with Best Buy, I hang around Fry's. And who needs copy and paste? I R smurt. :p

It's a pity anything beyond entry level would probably fry most OEM PSUs. I'd be glad to slap that 5200 Ultra out of the noob's hand and give them a GT 650 or Radeon 7750 instead.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Even if they could make enough of them to release as a GeForce card, I'm not so sure that they would. Even with slower but more profitable cards they still outsell AMD, and despite all that, NV's market share seems to be rising at AMD's expense. They don't need the fastest single-GPU card on the market to outsell the competitor. The GTX 680 is slower and more expensive to boot, yet it still outsells the 7970 GHz by a lot. For example, in my country the cheapest GTX 680 costs 25% more than a Gigabyte 7970 (also the cheapest 7970 I could find) clocked at 1GHz (I'm not sure if it's a GHz Edition or just a normal overclock, but it doesn't really matter). And guess what? NV sells more GTX 680s than AMD 7970s, and it's not some trivial difference; it sells way more.

A lot of people hold out for Nvidia's release because they believe Nvidia will release something faster than AMD. Most people believe that will be true next round as well. Let the 8970 be faster than the 780, and I think you'll see a lot of Nvidia customers jump ship.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
I hope I can post without voting :D Honestly, I would choose neither. Kepler, though efficient, is yet to replace my old Quadro 6000. The K5000 is fast, but it won't replace the 6000. GCN may be lucrative as an architecture, but AMD needs more than better hardware to compete with NV/Intel in the professional space. They need to bring their own software ecosystem.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
A lot of people hold out for Nvidia's release because they believe Nvidia will release something faster than AMD. Most people believe that will be true next round as well. Let the 8970 be faster than the 780, and I think you'll see a lot of Nvidia customers jump ship.

Maybe, but it's pretty irrelevant to the sales numbers, isn't it? This generation, waiting for the GTX 680 wasn't worth it, especially for us high-resolution monitor owners. It was about 5-7% faster at my resolution at launch, basically a tie. If someone was willing to OC, the 7970 was actually better. Besides, locked voltage on a $500 card? It won't fly with me. They probably have fewer RMAs, and users don't care and buy 680s anyway, so I don't expect it to ever change. I hope AMD won't follow suit. Yeah, it was both cheaper and faster, but only in the USA; in Europe 680s were quite a bit more expensive even at launch. I really don't know why that is. Someone likes high margins, but who? Internet retail shops operate on razor-thin margins, so it's not them. Someone should really look into this. In Europe, Nvidia cards are about 15-20% more expensive even when they cost the same in the USA. It's really fishy, and still they sell better.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
An interesting perspective:
“DirectX 11 by itself is not going to be the defining reason to buy a new GPU. It will be one of the reasons. This is why Microsoft is working with the industry to allow more freedom and more creativity in how you build content, which is always good, and the new features in DirectX 11 are going to allow people to do that. But that is no longer the only reason, we believe, consumers would want to invest in a GPU,” said Mike Hara, vice president of investor relations at Nvidia, at the Deutsche Bank Securities Technology Conference on Wednesday.

Nvidia believes that special-purpose software that relies on GPGPU technologies will drive people to upgrade their graphics processing units (GPUs), not advanced visual effects in future video games or increased raw performance of DirectX 11-compliant graphics processors.

NVIDIA - September 2009

It's unfortunate, IMO, that Nvidia has gimped their consumer cards. From a business perspective they are smart to do it: use the expensive dies for the professional market, and cut down cheap dies for the hapless end user. Nvidia is lucky that AMD is largely incompetent on the software side; otherwise AMD could release a killer app and make Nvidia users really feel left out, much more so than the potential of PhysX making Radeon owners feel they're missing out.

For that reason and a couple of others, I voted GCN, but Kepler is no slouch either. And before you get your pantyhose in a bunch, yes, I realize that Big K is an altogether different animal, but we are talking about the consumer space here for the most part.
 

omeds

Senior member
Dec 14, 2011
646
13
81
Nvidia is lucky that AMD is largely incompetent on the software side.

This is how I feel. I would have 7970s in my main rig if they could do what I want. Due to features and support, NV is the only choice for me atm.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
Software = applications in the context of my post, NOT drivers. Read my post again. Nvidia has had some really terrible driver issues of their own over the years.