The rumors about the product schedule are false -- AMD


boxleitnerb

Platinum Member
I was asking about independent numbers for the whole market, not some cherry-picked online listing.

I said and meant at *every* price point. That includes HD87xx and 88xx. If they let the 7000 series run out, people have to buy the new cards. It's normal that prices fall due to competition and then get raised to their original levels when new cards - be it refresh or new generation - are released.
 

RussianSensation

Elite Member
I don't have market share numbers. I don't have access to that or I would have linked it. Once Jon Peddie publishes it, we'll know. Again, people don't seem to understand market share well. If you have a growing market, you can still grow revenues and profits despite losing market share. Market share alone is not a good metric without looking at market share dollars. AMD could get 95% market share if they sold each GPU at $1-5 tomorrow.
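To put toy numbers on that (completely made up, just to illustrate the math):

```python
# Made-up numbers: losing market share in a growing market
# can still mean growing revenue.
market_y1 = 10_000_000             # discrete GPUs sold industry-wide, year 1
market_y2 = 14_000_000             # market grows 40% in year 2
share_y1, share_y2 = 0.40, 0.35    # AMD loses 5 points of share
asp = 150                          # average selling price, $

rev_y1 = market_y1 * share_y1 * asp    # $600M
rev_y2 = market_y2 * share_y2 * asp    # $735M
print(f"revenue change: {rev_y2 / rev_y1 - 1:+.1%}")   # +22.5%
```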

Also, AMD already stated that new HD7000 cards will be launching on the desktop in 2013 at the $100-500 price levels: a 384-shader card for the < $100 segment and some cards closing the price gaps. There is no reason to call those cards the HD8000 series, since even labeling GTX400 to 500 and HD5000 to 6000 as new series was already misleading on the GPU makers' part. Those were refreshes, and they called them next generation. They should go back to the practice of raising the 1st number ONLY when a real next generation launches. Kepler/HD7000 refreshes should all be called GTX600/HD7000, and the 700/8000 series should be Maxwell/Volcanic Islands. That's actually doing a service to consumers instead of brainwashing them with misleading marketing claims of "next generation" GPUs.

AMD never said they won't launch an HD7890 or HD7960, for example. They just said no new GCN 2.0 architecture in 2013 and nothing faster than the HD7970GE for single-chip cards.

If they let the 7000 series run out, people have to buy the new cards. It's normal that prices fall due to competition and then get raised to their original levels when new cards - be it refresh or new generation - are released.

GTX400 was hot and loud; GTX500 solved those problems. HD5000 ran into a VRAM bottleneck and had tessellation issues; HD6000 solved those problems. AMD is already in the lead with the HD7970GE and they are undercutting GTX680s. How about AMD works on re-writing the memory management driver and waits for NV to refresh the GTX600 series? Did you ever think of that? It's not AMD that needs to refresh cards but NV. Their entire desktop line-up is overpriced, overclocks worse, is slower at the same price levels, and has less impressive game bundles for most people. The Titan doesn't really count, since 10,000 units at $900 do not concern 99.5% of the market. NV has a trump card - its loyal fans will buy their cards for more $ even if they are slower - which means they have no pressure to refresh the GTX600 series either. Do we even have concrete evidence NV is launching GK114 in 2-3 months?
http://www.tomshardware.com/news/Radeon-GeForce-Delay-GPU-Next-Generation,20838.html

Notice how NV waited for AMD to launch the HD7000 series first because the GTX580 was in the lead? Why would AMD rush their HD8000 cards before fixing their drivers when, according to AMD's management, the HD7000 series had a record sales month in January?
 

boxleitnerb

Platinum Member
Yup, and yet Nvidia is selling more, probably even with AMD bundles in the picture. Nvidia has no problem right now. If it were to go on like this, it is AMD that would be forced into action. Unless they are okay with giving their cards away basically for free ;)

As for the names, I don't know what I like better. How would the customer know that the 7890 has perhaps a bit better perf/W than the 7870? And the gaps in the lineup aren't that large. It is cramped with all those GHz Editions already. Confuse the customers with a 7890 that is as fast as a 7950? Discontinue the 7950? Etc. I find it more "clean" and less confusing if a refresh uses new numbers.
 

RussianSensation

Elite Member
Yup, and yet Nvidia is selling more, probably even with AMD bundles in the picture. Nvidia has no problem right now. If it were to go on like this, it is AMD that would be forced into action. Unless they are okay with giving their cards away basically for free ;)

That goes back to everything people have been saying for the last 3+ years. AMD cannot win if its cards are faster, cannot win if its cards are cheaper, cannot win if its cards have better game bundles. NV has Apple-style consumers in terms of willingness to pay premiums and loyalty. AMD would almost have to launch cards 2x faster for 50% less with 10 free games, or work with developers so closely that most games simply perform faster and run better on their cards. Both of those scenarios are unrealistic, since AMD doesn't have the money to execute either one.

The goal is to keep AMD's graphics division working on new architectures so that the HSA plan can kick into action before the company dies. The execution of this plan is years behind now, since AMD has had this visionary plan since acquiring ATI. AMD doesn't really care about their graphics division on its own; it's about leveraging the graphics side so that execution on the SoC and APU side can save the company against Intel and the shifting consumer demographics in smartphones/tablets.

NV has always been a graphics company. AMD has always been a CPU company. People continue to compare them, but it's not the same thing - their strategies are different. The reason ATI was so competitive with NV is that it, too, was mainly a graphics company. AMD is not focused on graphics alone; it can't allocate 90% of its financial/engineering resources to GPUs like NV can. The fact that AMD's graphics division even manages to keep up with NV is incredible.

How would the customer know that the 7890 has perhaps a bit better perf/W than the 7870?

The naming of the cards tells the consumers nothing about performance/watt. You have to look that up anyway. Does NV's GT620 have better performance per watt than GT430?

Renaming refreshes by bumping up the last couple of digits worked great for many years: HD4870 to 4890, GTX280 to 285. All of those made perfect sense. Renaming GTX480 to 580 or HD5870 to 6970 made no sense - it was pure marketing, nothing to do with generations. If you've followed GPUs since the late 1990s or early 2000s, the current marketing games are much more confusing than in the past. The 9700Pro was 2x faster than the 8500, the X800XT was 2x faster than the 9700Pro/9800Pro, the X1800XT was at least 60% faster than the X800XT, etc. The GTX580/HD6970 were barely 15-20% faster than the 480/5870. Those cards should have been called GTX485 and HD5890.
 

RussianSensation

Elite Member
[journalist C]: "Hi, just to confirm, what you said means that the Radeon HD 7970 GHz Edition stays AMD's fastest single-GPU graphics card until the end of the year?"

Devon: "Yes."

[journalist F]: "Actually it's not on the little speech I'm afraid, but an earlier question that was answered, which stated that the 7970 would still be your most powerful single-GPU product until the end of the year, why is it so? Are there any technical reasons for that, because at that point you would have announced that part almost two years ago? Is it a technical issue, or is it simply because you don't feel the need to put something newer out there?"

Darren: "Definitely, I got a two-part answer for that one. We launched the 7970 in December of 2011, and then we followed up with the GHz Edition in July of 2012. So the GHz Edition has been out there for the better half of the year now, and that will continue to live on until the end of 2013. From a technical perspective, no, there's absolutely no technical reason; from a product perspective, we have the world's fastest GPU now, with the GHz Edition, and as Roy mentioned, the GK110 is coming, but is something NVIDIA is leveraging from a completely different space. It's a Tesla product, it's a workstation product, it really was never intended for the consumer market, and I think that way they're kinda shoe-horning that product into their GeForce stack. So, we took them by surprise by having the world's fastest GPU, and you see all these kinds of reactions from them right now, but we're very confident that the 7970 GHz Edition is the best GPU for enthusiast gamers out there. So this is purely for marketing performance reasons, nothing technical."


I was asking about independent numbers for the whole market, not some cherry-picked online listing.

Box, this is the point I was making. AMD is growing their sales for HD7000 cards.

Roy: "Can I jump in on that? The recent sales success of our "Never Settle Reloaded" bundle shows us the appetite of consumers for that product [7970 GHz Edition] is wonderful. I know this is a European call, but Newegg.com in the USA were sold out. We're seeing record sales throughout Europe. So as measured by people spending money, I'd say that we're in pretty good shape."

Roy: "&#8230;and Darren touched on it earlier, that the simple fact is that the 7000 series is still ramping. We're still seeing higher sales, growing sales on the 7000 series, so yeah, there's an appetite in the channel for product stability, and we're seeing that it actually accelerates the sales. You can argue that the channel cycles previously were a bit too forced."

[journalist H]: "While I really hear you on the price-performance side of the product, what about image and marketing? I mean, even if NVIDIA only pumps out a handful of GeForce Titans, they can still run around saying they have the fastest GPU for gamers. How are you going to compete with that?"

Roy: "I don't think we can comment until we've seen it. Based on what we know so far, I don' think that this (GeForce Titan) is a threat to our business."

Darren: "&#8230;and it's not like it's a surprise. You know, the GK110 has been known about for a long time. Our roadmap stayed, once we knew about it. We didn't need to react."
http://www.techpowerup.com/180345/C...adeon-Graphics-Teleconference-Transcript.html


----

My guess is AMD doesn't consider the GK110 a competitor, since a 10K-unit launch is too limited and the price is in the stratosphere for 99% of consumers interested in GPU upgrades. I don't agree with their logic, since the average Joe will see the Titan as the fastest single GPU and assume right away that any GTX600 > any HD7000. It's brilliant marketing by NV once again :) Also, if NV launches GK114/lower-priced GK110 parts in 2013, then AMD will have gotten their strategy wrong. I guess it's a risk they are taking.
 

dangerman1337

Senior member
I suppose the question we should be asking is whether AMD will even do a 28nm refresh like the 40nm generations (HD5000 > HD6000), or mostly skip it and focus their resources on Volcanic Islands (somewhat like Nvidia's 200 > 300 > 400). Skipping it would not surprise me, as AMD has limited resources and would rather stick with something very profitable than chase potentially meagre profit, or even a loss, on something marginal at best. Another factor is when it would even be possible to get a 20nm GPU similar in size to Tahiti/7900. That transcript seems to imply a skip, since the 7970 GHz Edition is said to stay on top until this year's end.
 

blackened23

Diamond Member
Yeah, I'd say that's a risky strategy.

While I agree that GK110 is for a different market segment - the card is a limited edition and extremely costly - AMD would have issues if Nvidia suddenly released the GK114 or another Kepler successor and AMD had no sufficient answer. I would hope they have a new high-end product in the pipeline, even if they're not ready to share details.

I think Titan is nothing for AMD to be concerned about, because most buyers are looking for a sweet-spot GPU in the $200-300 range. However, one would hope they have a contingency plan (with an applicable new high-end product) in case Nvidia ends up surprising them with a Kepler refresh.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Pity, I was really hoping for a good 8970 or two to come out soon. I would consider the GeForce Titan, but only if Nvidia stops the voltage-locking BS.
 

blackened23

Diamond Member
Pity, I was really hoping for a good 8970 or two to come out soon. I would consider the GeForce Titan, but only if Nvidia stops the voltage-locking BS.

Apparently there's some new trickery with regards to voltage on the Titan. Supposedly it can use a higher voltage with a new "temperature target" feature? I'm anxious to hear more about it.
 

RussianSensation

Elite Member
Yeah, I'd say that's a risky strategy.

Agreed. Some of the comments by AMD's new ex-NV employee are pretty out there. You would think AMD had a 40-50% faster flagship GPU, given the amount of smack Roy unleashed in the conference call:

"Devon: "&#8230;and frankly from a marketing perspective, you know, we're tied up with all the latest AAA titles at the moment, so how can NVIDIA compete? They're launching GeForce Titan as a reaction to us being signed up with all the best game developers right now. Also as a marketing point, I assume everyone's seen the game bundles that NVIDIA has launched recently, right? I thank them for launching their game bundle, because it goes to show have great "Gaming Evolved" (developer program) has become. Just look at the titles in the "Never Settle Reloaded" bundle versus their "Free to Play" bundle, and you'll see which company is more serious about gaming for desktop and PC users."

Roy (ex-NV employee): "We should make it clear that NVIDIA is a good company, and we respect their attempted transition into being a smartphone company."


I would prefer that management let their actions speak for them. If AMD's GPU management really wants a GPU war with NV, these guys might awaken the sleeping giant (NV pushing GK110 to lower price levels, GK114, etc.). They are talking way too much **** considering GK110 may be 40%+ faster than the HD7970GE at a 250W TDP. That could mean Maxwell has the potential to distance itself from AMD's 20nm GPUs even further in both performance and performance/watt.

Kepler is just Fermi re-balanced, but Maxwell is NV's brand-new GPU architecture, like GCN was for AMD. Can you imagine how impressive Maxwell will be if GK110, on the 2-3-year-old Kepler architecture (aka Fermi without hot clocks, plus minor fixes), is 40% faster than the GCN-based HD7970? Even if 20nm Volcanic Islands is 50% faster than the HD7970GE, that would barely take them 10% above Titan. If NV does a GTX580 --> Titan style transition with Maxwell, then the flagship Maxwell would end up 70-80% faster than Volcanic Islands. I don't understand why AMD's management is so confident, to be honest, unless they truly believe that GK110 was a desperate attempt by NV to use a Tesla card for PR reasons. Maybe Roy knows more about NV's Kepler strategy, since he did work for them when Kepler was under development.
 

f1sherman

Platinum Member
Roy (ex-NV employee): "We should make it clear that NVIDIA is a good company, and we respect their attempted transition into being a smartphone company."

There is truth to that, and I want to see NVIDIA involved in PC gaming much more than they currently are.
But Roy should be the last person to ridicule NV for being a smartphone company.

If NVIDIA can smash AMD on both desktop and laptops with both hands tied to Tegra and its growing expenditures, not even flinching at AMD's price cuts and bundled F2P games -
when OTOH AMD is giving the best they have, wrapping up all those nice game bundles for less money than NV,
gathering all those developers and showering them with presents, racing both their driver team and the driver itself almost to the breaking point with each release -

I mean, how fricking cool is NVIDIA if they can do that?
And that AMD is getting it from a "smartphone company"... lol, that's even worse. Why would you even mention that? Smartphones are making billions now, but not for you :rolleyes:
 

VulgarDisplay

Diamond Member
I really wonder what kind of performance increase AMD is getting from their memory management driver. If it's substantial, it could give them a perfectly valid reason for delaying their upcoming GPUs and riding out the 7900 series long term.

I said it in another thread and it may have been somewhat off topic, but I could really see this driver dropping at the same time as Titan reviews and spoiling the launch. Guess we will know very soon.
 

RussianSensation

Elite Member
I really wonder what kind of performance increase AMD is getting from their memory management driver. If it's substantial, it could give them a perfectly valid reason for delaying their upcoming GPUs and riding out the 7900 series long term.

I am curious to see how many more games will leverage DirectCompute in 2013, and how extensively. What is this new DX11 feature in Tomb Raider? If AMD can't outright beat NV in raw performance, they could leverage the superior compute performance of the GCN architecture and still be competitive. Remember how NV kept exploiting AMD's tessellation weakness for 2 straight years? If AMD works with developers more closely and turns things up to 11 like in Dirt Showdown (Forward+, global illumination), not even a GK110 would save NV in those titles.

"Devon: "Yeah I think you'll see some really exciting things coming out of the partnerships in the "Never Settle Reloaded" bundle. The Crysis 3 partnership is not just about that specific game, it's about the exciting things we're working on with Crytek that we'll be developing over the course of the first half of this year, Bioshock has already come out with some great press on how much they've invested on the PC side of the game with us, and Tomb Raider will ship with a new feature we've been working on with them. So it's about performance optimization, but also creating a great experience on the Radeon graphics, and kind of pushing industry forward in terms of new effects as well&#8230;""
 

boxleitnerb

Platinum Member
I think we've established by now that DirectCompute has nothing to do with Tahiti being so fast. Why do you keep ignoring this if you yourself admitted as much in my other thread? Titan has the same raw power as Tahiti, or more, so it should perform quite well in those titles.

Also, could you please elaborate on your optimism for Maxwell? From a usually well-informed source I've heard that the architectural step Kepler -> Maxwell will be larger than Fermi -> Kepler. So far so good, but how can we predict performance from this? GCN was completely new and couldn't really set itself apart from Kepler in terms of perf/W. A few percent here and there, a draw or a loss for Tahiti in gaming perf/W and HPC perf/W... certainly not the horror scenario for Nvidia that you envision for AMD when Maxwell comes around ;)
 

RussianSensation

Elite Member
I think we've established by now that DirectCompute has nothing to do with Tahiti being so fast. Why do you keep ignoring this if you yourself admitted as much in my other thread?

When did I agree with you on this point? This is news to me.

Are these fake?

The HD7970 GHz is beating the GTX680 by more than its theoretical GFLOPs or memory bandwidth advantage can ever explain - an 87% advantage in performance!!! Dirt Showdown makes the most use of compute of any game, since the entire engine (Forward+), the global illumination, and the contact-hardening shadows are all designed around compute shaders.

[chart: Dirt Showdown benchmark]


The HD7870 has less floating-point performance and much lower memory bandwidth than the GTX670, and gets creamed by the 670 in texture fill-rate, yet it still keeps up easily in a DirectCompute title.

[chart: Sniper Elite V2, 2560x1600 benchmark]


Self-explanatory. Metro 2033 is a notorious memory bandwidth hog and has run faster on AMD cards since forever. The only other titles where the HD7970 leads by huge margins all take advantage of compute units/shaders.
[chart: Metro 2033, HD7970 vs. GTX680]


Titan has the same raw power as Tahiti, or more, so it should perform quite well in those titles.

That comparison is not valid. Titan has way more power overall as a graphics card because it will have a ton more pixel and texture fill-rate too. Since there is no such thing as a 100% pure DirectCompute game, Tahiti XT runs into its bottlenecks (like ROPs). Kepler is deficient in DirectCompute on a per-mm2 basis; the reason Titan will keep up with the HD7970GE or beat it in those games is that it needs a 520mm2 die with a crapload more functional units to do it. If you made a 224 TMU, 48 ROP, 2688 SP HD7970 at the same clocks as the Titan, it would cream it in compute-heavy games.

Developers/programmers all say that GCN is superior for DirectCompute. Go read forums, ask around. The HD7870 keeping up with the GTX670 despite far fewer functional units and less memory bandwidth is evidence that GCN is superior in DirectCompute. Also, synthetic benchmarks show GCN beating the GTX600 series in compute as well. Same story with Fermi and tessellation.

Maxwell is the brand-new architecture from NV designed to excel in HPC. It should be superior to the GCN architecture in every way, being 2-3 years more modern.

The comparison to the Titan will prove that GCN is more advanced in compute on a per-mm2 basis, since the Titan will need a 520mm2 die and way more functional units just to beat the HD7970GE in some of those compute games.

Titan vs. HD7970GE

Pixel fill-rate = +25%
Texture fill-rate = +46%
GFLOPs = +10%
Same memory bandwidth
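Those percentages are just the paper specs run through the usual formulas. A quick sketch, assuming the rumored Titan clocks (876MHz boost, ~6GHz memory on a 384-bit bus), so treat the outputs as estimates:

```python
# Paper-spec math for Titan vs. HD7970 GHz Edition.
# Titan numbers are pre-launch rumors: SPs, TMUs, ROPs,
# core clock (GHz), memory clock (MT/s), bus width (bits).
titan  = dict(sp=2688, tmu=224, rop=48, clk=0.876, mem=6008, bus=384)
ghz_ed = dict(sp=2048, tmu=128, rop=32, clk=1.050, mem=6000, bus=384)

def derived(c):
    return {
        "GFLOPs":            2 * c["sp"] * c["clk"],     # 2 FLOPs per SP per clock
        "texture fill-rate": c["tmu"] * c["clk"],        # GTexels/s
        "pixel fill-rate":   c["rop"] * c["clk"],        # GPixels/s
        "bandwidth":         c["mem"] * c["bus"] / 8000, # GB/s
    }

t, a = derived(titan), derived(ghz_ed)
for k in t:
    print(f"{k}: {t[k] / a[k] - 1:+.0%}")
# GFLOPs ~+10%, texture +46%, pixel +25%, bandwidth ~0%
```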

You keep thinking that all those games are 100% GFLOPs/memory-bandwidth limited. They are not, just like no game is limited by tessellation alone. Kepler is just Fermi with no hot clocks and minor changes; it's no wonder it doesn't excel in compute - it was never designed to.

If you look at pure DirectCompute benchmarks, Tahiti XT's GFLOPs or memory bandwidth alone cannot explain these differences:
http://www.computerbase.de/artikel/grafikkarten/2013/test-17-grafikkarten-im-vergleich/8/

In ComputeMark, the HD7970 is 59% faster than the GTX680, yet it has only 17% more floating-point throughput and 38% more memory bandwidth than the 680.
 

boxleitnerb

Platinum Member
Me:
Yet I wonder, with all that being said - why is the 7870 LE not faster than the 670/680 if it is so much better at compute?
You:
My educated guess is most games are not mostly Compute Shader limited.

Think of it this way - for graphics we now have all these factors that could be limiting overall gaming performance of a GPU:

Pixel shading power
Texture shading power
Geometry shader/engine performance (tessellation)
Compute shader processing power (Compute shaders)
Memory bandwidth limitations
http://forums.anandtech.com/showpost.php?p=34597762&postcount=48

So essentially you concur that other factors (which I summarize with the term "raw power") may play a role, too. The 7870 LE has almost the same raw power as the 670/680 and performs the same or worse in all those titles except Dirt Showdown. What more proof do you need? And could we stop waving around a title that AMD was directly involved in? The 6970 is faster than the 580 in Dirt Showdown - would you have me believe that the full GF110 is worse than Cayman in DirectCompute, too? You simply cannot draw your conclusions from one title. It could be specific AMD optimizations, applied only in this title thanks to their cooperation. Other engines that use DirectCompute can/could show quite different results. Aside from that, there are other results, too, so it doesn't look quite as bad as you make it out to be:
http://www.computerbase.de/artikel/grafikkarten/2013/test-17-grafikkarten-im-vergleich/27/

That comparison is not valid. Titan has way more power overall as a graphics card because it will have a ton more pixel and texture fill-rate too. Since there is no such thing as a 100% pure DirectCompute game, Tahiti XT runs into its bottlenecks (like ROPs). Kepler is deficient in DirectCompute on a per-mm2 basis; the reason Titan will keep up with the HD7970GE or beat it in those games is that it needs a 520mm2 die with a crapload more functional units to do it. If you made a 224 TMU, 48 ROP, 2688 SP HD7970 at the same clocks as the Titan, it would cream it in compute-heavy games.

Maxwell is the brand-new architecture from NV designed to excel in HPC. It should be superior to the GCN architecture in every way, being 2-3 years more modern.

Yes, it is valid. First, with "raw power" I generally mean everything: SP GFLOPs, bandwidth, fill-rate. Second, you failed to prove that fill-rate is that important. You provided examples, I provided counter-examples. It's not conclusive, not by far. Read my post here if you haven't already:
http://forums.anandtech.com/showpost.php?p=34601291&postcount=53

Developers/programmers all say that GCN is superior for DirectCompute. Go read forums, ask around. The HD7870 keeping up with the GTX670 despite far fewer functional units and less memory bandwidth is evidence that GCN is superior in DirectCompute. Also, synthetic benchmarks show GCN beating the GTX600 series in compute as well. Same story with Fermi and tessellation.

1280 is "way less" than 1344. Got it. The 7870 can be close to the 670 because it has a very similar amount of SP GFLOPS (2560 for the 7870 and 2459 for the 670 without boost). Nothing to do with compute. On average, the 670 is 25% faster than the 7870. So much for "keeping up". These 25% fit nicely with the bandwidth advantage that the 670 has over the 7870, which is 25%.

Lastly, I have to quote you on this, because it is too funny:
If you made a 224 TMU, 48 ROP, 2688 SP HD7970 at the same clocks as the Titan, it would cream it in compute-heavy games.
Now you are referring to raw power when it suits you to explain a performance advantage? How about Tahiti's 2048 SPs vs GK104's 1536 SPs? Double standards. Besides, you don't know how such a GPU would perform. As I've said earlier, the fill-rate discussion is inconclusive. For all we know, Kepler has too much fill-rate and at least part of it goes unused. The same goes for Tahiti. Since TMUs are tied to the CUs/SMXs, they scale proportionally. That doesn't mean you need them.

Edit:
I forgot - I mentioned this in the Titan thread but got no reply yet. But it seems to be the center of our discussion, so I'll quote myself:

With all this talk about compute... how would one actually know how extensively a title uses compute operations, and how that affects performance on different architectures? Research is all fine and well - saying, for example, that Sleeping Dogs uses DirectCompute as part of its SSAA approach. But it is impossible for any of us to distinguish whether it is raw power or special compute mojo that actually makes the difference. We are not programmers, nor do we have the extensive knowledge that GPU architects do.

I find it daring to claim "title A is compute-heavy but title B is not/less so". How do you know? You cannot possibly know.
 

RussianSensation

Elite Member
Me:
You:
http://forums.anandtech.com/showpost.php?p=34597762&postcount=48

Ya, that only proves my point. I am saying that not all games are DirectCompute-limited, which is why you aren't going to see Tahiti XT destroying the GTX680 in every game. If they all were, it would be game over.

So essentially you concur that other factors (which I summarize with the term "raw power") may play a role, too. The 7870 LE has almost the same raw power as the 670/680 and performs the same or worse in all those titles except Dirt Showdown.

What is this "Raw" power? Are you talking about floating point / SP GLOPs? Shader performance? Everything combined?

Your argument was first that memory bandwidth was what was important in the HD7970 beating the GTX680 in compute. Sniper Elite V2 shows that's not true, since the HD7870 (non-LE) keeps up with the GTX670 and the HD7870 LE beats the GTX680, and neither card has more memory bandwidth.

Then your argument shifted to "raw" performance, and I have no idea what you even mean by that. We already know with 100% certainty that memory bandwidth is not the answer, since a 153GB/sec HD7870 keeps up with a 192GB/sec GTX670 in Sniper Elite V2 - if memory bandwidth were the limiting factor, the HD7870 could never match the GTX670. So let's look at GFLOPs, where the two cards are roughly similar.

The HD7970GE is winning by 42%, but its GFLOPs advantage over the 680 is only 32%. If GFLOPs were the limiting factor, the HD7970GE could never net a 42% advantage over the 680, since the most it could manage would be 32%.

[chart: Sniper Elite V2, 2560x1600 benchmark]


By process of deduction, the only explanation left is that GCN can perform compute calculations faster / do more of them per pass.
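Spelled out, the elimination logic looks like this (paper specs, assumed boost clocks, fps delta read off the chart above):

```python
# If one resource alone limited performance, a card's win could never
# exceed its advantage in that resource.
gflops_7970ge = 2 * 2048 * 1.050   # ~4301 GFLOPs
gflops_680    = 2 * 1536 * 1.058   # ~3250 GFLOPs (assumed avg boost clock)
gflops_edge   = gflops_7970ge / gflops_680 - 1   # ~ +32%

observed_win = 0.42   # HD7970GE over GTX680 in Sniper Elite V2, 2560x1600

# The win exceeds the GFLOPs edge, so GFLOPs alone can't be the limiter;
# the 7870-vs-670 result above already ruled out bandwidth.
assert observed_win > gflops_edge
```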

Lastly, I have to quote you on this, because it is too funny:
Now you are referring to raw power when it suits you to explain a performance advantage? How about Tahiti's 2048 SPs vs GK104's 1536 SPs? Double standards.

You are comparing a 520mm2 chip with way more functional units against a 365mm2 chip and claiming that it's the same thing in "raw" power? That only makes sense if games responded purely to floating-point performance. Of course we know they don't.

GTX580 3GB = 1.58 TFLOPs
GTX680 2GB = 3.25 TFLOPs

There isn't a single game on the planet where a GTX680 is 2x faster than the 580.

Looking back to Sniper Elite V2 again:

The GTX680 leads the 580 by just 28%, despite having 2.05x the floating-point performance. So why is the 680 barely faster than the 580? Because it's based on the same outdated Fermi architecture that's slow at DirectCompute/compute-shader operations. You keep using this term "raw" power and correlating floating-point performance with DirectCompute. They don't mean the same thing, and yet you continue to use these terms interchangeably over and over and over.

It's like you don't understand what a Compute Shader is vs. what a floating-point operation is. Just because shaders can perform many floating-point operations, it doesn't mean at all that they are fast at Compute Shader work. GTX680 vs. 580 is the perfect example of this. Since the Kepler GTX680 is just Fermi without hot clocks plus minor improvements, the 2.05x floating-point advantage it has over the GTX580 doesn't mean much in games that use DirectCompute shaders.

Aside from that, there are other results, too, so it doesn't look quite as bad as you make it out to be:
http://www.computerbase.de/artikel/grafikkarten/2013/test-17-grafikkarten-im-vergleich/27/

That review does not have global illumination enabled. You cannot get 58 fps with 4xMSAA at 1600P and 48 fps with 8xMSAA at 1600P in that game on a 7970GE if you enable all the Compute features. Therefore it's not stressing Kepler's compute capabilities / not allowing GCN to show its strength since they turned it off.
 

RussianSensation

Elite Member
Here is Dirt Showdown with DirectCompute Global Illumination ON at 1600P.

GTX680 vs. HD7870

GFLOPs +27%
Memory bandwidth +25%
Pixel fill-rate +5.8%
Texture fill-rate +69%

The GTX680 should win under your theory that floating point or memory bandwidth is what matters for compute. The HD7870 wins the minute you enable the most advanced DirectCompute feature shipped in any game so far - global illumination via compute shaders.

[chart: Dirt Showdown with global illumination, 2560x1600]


HD7870 is also a 212mm2 chip.

That means:

1) GCN is more efficient in compute than Kepler on a per-mm2 basis;
2) Floating point / GFLOPs and/or memory bandwidth alone do not tell us how fast a card is at DirectCompute calculations.

Not sure why you keep getting so upset about this. Most games are not DirectCompute limited yet. NV will fix this problem with Maxwell anyway by the time it might matter more.
 

boxleitnerb

Platinum Member
Ya, that only proves my point. I am saying that not all games are DirectCompute-limited, which is why you aren't going to see Tahiti XT destroying the GTX680 in every game. If they all were, it would be game over.

But again you neglect the resources both GPUs have to work with. I'm saying Tahiti is faster in those games because it has more SP GFLOPs and more bandwidth. If all games were to use DirectCompute, Tahiti would be able to bring that advantage to bear across the board. There is no contradiction here. What you're saying doesn't prove my explanation for Tahiti's performance advantage wrong.

What is this "Raw" power? Are you talking about floating point / SP GLOPs? Shader performance? Everything combined?

Your argument was first that memory bandwidth was what was important in HD7970 beating GTX680 in compute. Sniper Elite V2 shows it's not true since HD7870 (non-LE) keeps up with GTX670 and HD7870 LE beats GTX680 and neither card have more memory bandwidth.

Then your argument shifted to "raw" performance which I have no idea what you even mean by that. Since we already know with 100% certainty that memory bandwidth is not the answer since a 153GB/sec HD7870 keeps up with a 192GB/sec GTX670 in Sniper Elite V2, let's look at GFLOPs. If memory bandwidth was the limiting factor, the HD7870 could never match the GTX670 since they roughly have similar GLOPs.

I never shifted anything. Maybe I wasn't always clear, but what I mean by raw power is GFLOPs and bandwidth, which bottleneck performance more or less depending on the game/scene/setting.
I would suggest you stop thinking in absolute bottleneck terms and start thinking in relative ones. A bandwidth bottleneck doesn't mean 25% more bandwidth will give you 25% more fps; it can mean it gives you 10 or 15% more performance. So it might not be a complete bottleneck, but a partial one. This applies to all kinds of bottlenecks. It is not an absolute term, at least in my book.
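A crude toy model of what I mean (the 60/40 split is invented, just to show the shape of a partial bottleneck):

```python
# Toy frame-time model: part of each frame scales with shader
# throughput, part with memory bandwidth. Ratios are vs. stock.
def frame_time(core_ratio, bw_ratio, core_share=0.6, bw_share=0.4):
    return core_share / core_ratio + bw_share / bw_ratio

stock = frame_time(1.0, 1.0)
print(f"{stock / frame_time(1.0, 1.25) - 1:+.0%}")    # +25% bandwidth alone -> ~+9% fps
print(f"{stock / frame_time(1.25, 1.0) - 1:+.0%}")    # +25% core alone      -> ~+14% fps
print(f"{stock / frame_time(1.25, 1.25) - 1:+.0%}")   # both together        -> +25% fps
```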

The HD7970GE is winning by 42%, but its GFLOPs advantage over the 680 is only 32%. If GFLOPs were the limiting factor, the HD7970GE could never net a 42% advantage over the 680, since the most it could manage would be 32%.

[chart: Sniper Elite V2, 2560x1600 benchmark]


By process of deduction, the only explanation left is that GCN can perform compute calculations faster / do more of them per pass.

See my explanation above. And for example look at these OC benchmarks of a 680:

More here:
http://www.hardwareluxx.de/communit...icherbandbreitentest-934679.html#post19958238

As you can see, increasing one factor alone doesn't yield a proportional response. However, there is a synergy effect in basically all cases when GPU clock and memory clock are increased. You cannot separate the two from the benchmarks you posted.

You are comparing a 520mm2 chip with way more functional units against a 365mm2 chip and claiming that it's the same thing in "raw" power? That only makes sense if games responded purely to floating-point performance. Of course we know they don't.

GTX580 3GB = 1.58 TFLOPs
GTX680 2GB = 3.25 TFLOPs

There isn't a single game on the planet where a GTX680 is 2x faster than the 580.

Looking back to Sniper Elite V2 again:

The GTX680 leads the 580 by just 28%, despite having 2.05x the floating-point performance. So why is the 680 barely faster than the 580? Because it's based on the same outdated Fermi architecture that's slow at DirectCompute/compute-shader operations. You keep using this term "raw" power and correlating floating-point performance with DirectCompute. They don't mean the same thing, and yet you continue to use these terms interchangeably over and over and over.

It's like you don't understand what a Compute Shader is vs. what a floating-point operation is. Just because shaders can perform many floating-point operations, it doesn't mean at all that they are fast at Compute Shader work. GTX680 vs. 580 is the perfect example of this. Since the Kepler GTX680 is just Fermi without hot clocks plus minor improvements, the 2.05x floating-point advantage it has over the GTX580 doesn't mean much in games that use DirectCompute shaders.

I know there is a difference, but if the discrepancy between SP GFLOPs and compute shader performance were so large in games that use compute shaders extensively, the 7870 LE would destroy similarly specced cards like the 670 or 680. It doesn't.

That review does not have global illumination enabled. You cannot get 58 fps with 4xMSAA at 1600P and 48 fps with 8xMSAA at 1600P in that game on a 7970GE if you enable all the Compute features. Therefore it's not stressing Kepler's compute capabilities / not allowing GCN to show its strength since they turned it off.

Fair enough. But I'm getting tired of Dirt Showdown. IF more games where AMD was not so heavily involved show the same characteristic, you get a cookie. If they don't, I get one ;)

And btw again:
How do you know which games use compute shaders how extensively? Were you involved in programming them? I'm upset since you most likely don't have advanced knowledge about this but keep pretending otherwise and basing your arguments on it.

By what standard do you judge something to be "the most advanced DirectCompute graphical feature"? By performance? By looks? By guesswork? Have you seen the source code, and do you know which operations are used how often? Do you know if those are "standard" operations or if they are specifically tailored to run well on AMD hardware? How do you know there isn't something fishy going on? Could a little tweak (for example in regard to register number/size) boost Kepler performance significantly? How would you compare the compute usage of Dirt Showdown with other games like Metro 2033 or Sleeping Dogs, and by what standard?

I hope you get the point of this little riddler intermezzo. "I know that I know nothing" would be an appropriate quote here.
 

Keysplayr

Elite Member
I really wonder what kind of performance increase AMD is getting from their memory management driver. If it's substantial, it could give them a perfectly valid reason for delaying their upcoming GPUs and riding out the 7900 series long term.

I said it in another thread and it may have been somewhat off topic, but I could really see this driver dropping at the same time as Titan reviews and spoiling the launch. Guess we will know very soon.

I predict the re-write driver will push the 7970GE to an average of 0.5% faster overall than an overclocked (975-1000MHz) Titan. That's how incredible it is going to be, because they've (AMD) been mismanaging the memory subsystem on Tahiti since launch day and nobody realized it. Now that AMD has fired a large percentage of their staff, the programmers who are left had no choice but to finally pick up on the huge oversight.
 

Ibra

Member
Can't wait for massive AMD fanboys suicides. I hope RussianSensation will fall first. ():)

Not needed, please don't flamebait, and joking about suicide is NOT ok. Moderator Shmee
 
Last edited by a moderator:

raghu78

Diamond Member
I predict the re-write driver will push the 7970GE to an average of 0.5% faster overall than an overclocked (975-1000MHz) Titan. That's how incredible it is going to be, because they've (AMD) been mismanaging the memory subsystem on Tahiti since launch day and nobody realized it. Now that AMD has fired a large percentage of their staff, the programmers who are left had no choice but to finally pick up on the huge oversight.

Says the Nvidia troll :rolleyes:. Pathetic sarcasm to talk down AMD's efforts. Why don't you go somewhere else and troll?

Nvidia and AMD have good products and compete fiercely. Each has its advantages and disadvantages. It's the constant b****** on the web from trolls that is tiresome. Don't get into every AMD thread and spoil it.

If Nvidia refreshes their product stack and beats AMD across the board, good for you, because then you can keep praising your favourite company even more.

But AMD seems to have a strategy of improving driver performance, aggressive game bundles, and competitive pricing. Only time will tell how they fare in 2013. What remains to be seen is whether Nvidia goes aggressive and takes leadership across the product stack, or plays it safe, maintains the status quo, and enjoys the nice margins. Titan at USD 900 and the GTX 680 at USD 450 give them the best margins in their history.

Member callouts are NOT ok. Please refrain from using profanity in the technical forums. Moderator Shmee
 
Last edited by a moderator:

Keysplayr

Elite Member
Says the Nvidia troll :rolleyes:. Pathetic sarcasm to talk down AMD's efforts. Why don't you go somewhere else and troll?

Nvidia and AMD have good products and compete fiercely. Each has its advantages and disadvantages. It's the constant b****** on the web from trolls that is tiresome. Don't get into every AMD thread and spoil it.

Ah, then you should have no problem getting over to the NVIDIA Titan thread and telling all those AMD fans not to spoil it and to go post elsewhere. Or did you perform that impartial act already and, silly me, I missed it?
/thanks
 

Arzachel

Senior member
Ah, then you should have no problem getting over to the NVIDIA Titan thread and telling all those AMD fans not to spoil it and to go post elsewhere. Or did you perform that impartial act already and, silly me, I missed it?
/thanks

Oh, so "those other people did it first" means you should do it too? Well, aren't you a paragon of justice.

Hardware enthusiasts have a tendency to expect the unreasonable (look at all the pre-launch speculation), but even a low single-digit performance increase will chip away at Titan's value when it costs 2.5x more.