AMD goes Fermi


VulgarDisplay

Diamond Member
Apr 3, 2009
On average, drivers gain 9-10% performance increase, not 20%.

AMD = 9%
Nvidia = 10%

Overclocking is a must for this card. :thumbsup:

I'm still willing to bet they will get over 10% out of drivers for GCN, simply because the architecture is brand new. Fermi to Kepler won't see as much improvement, simply because the architectures are going to be similar.

Also I believe they have huge amounts of performance to gain fixing DX9 titles on GCN because the numbers for old dx9 games make less than no sense at the moment.
 

iCyborg

Golden Member
Aug 8, 2008
You are missing a key difference ==> HD5870 destroyed HD4870 in the most demanding games, often matching HD4870 x2 and GTX295. HD5870 had no problem doubling HD4870's performance in demanding games at the time.

I did a summary of GTX590 vs. HD7970 vs. GTX580 based on Anandtech's review. GTX590 is on average 32% faster than HD7970. HD5870 was nearly as fast as an HD4870 X2 in at least 30-40% of the benchmarks.
Your summary conveniently doesn't include any compute tests, where the 7970 mostly beats the 6990 and holds closer to a 70-80% advantage over the 6970, and GPGPU was the driving force behind the architectural changes.

Also, I looked at Anandtech's 5870 review, and this "5870 equal or better than the 4870 X2" situation pretty much happened only when CF didn't scale all that well, which seems to have happened more often back then. Crysis Warhead is the only game in common, and CF scaled 25% then (for 1920x1200; it did better at 2560x1600, at ~45%) versus 70% today (6990 vs 6970). After a couple of years of driver improvements for SLI/CF, I'd say it has become harder for single GPUs to compete against the X2 from the previous gen...

As for your compilation of unplayable games: in two of them GTX 580 is under 20fps, and since 45fps isn't playable, I guess 50+ is playable? Are you expecting 150% improvement for Kepler over GTX 580? Even for the other two games Kepler will need 100+% improvement to meet your "playable" rating.
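Rough math on the uplift needed, for anyone who wants to sanity-check it (a quick Python sketch; the helper function and the frame rates below are illustrative placeholders, not the exact review figures):

[CODE]
# Back-of-envelope: percentage uplift needed to reach a target frame rate.
# The frame rates below are illustrative placeholders, not exact review numbers.
def required_uplift(current_fps: float, target_fps: float) -> float:
    """Return the % improvement needed to go from current_fps to target_fps."""
    return (target_fps / current_fps - 1) * 100

print(required_uplift(20, 50))   # 150.0 -- under 20 fps up to a 50 fps target
print(required_uplift(25, 50))   # 100.0
print(required_uplift(45, 60))   # ~33.3 -- a 45 fps average up to a 60 fps average
[/CODE]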
 

RussianSensation

Elite Member
Sep 5, 2003
Also I believe they have huge amounts of performance to gain fixing DX9 titles on GCN because the numbers for old dx9 games make less than no sense at the moment.

Ya, probably in 2013 and beyond. 2012 is shaping up to be the year of console game engines. Q1 2012 is almost entirely console-engine games that can be maxed out on a $200 GTX560Ti/HD6950.
 

RussianSensation

Elite Member
Sep 5, 2003
Your summary conveniently doesn't include any compute tests, where the 7970 mostly beats the 6990 and holds closer to a 70-80% advantage over the 6970, and GPGPU was the driving force behind the architectural changes.

My summary took every single game benchmark from Anandtech's Review. I didn't include compute because I feel that's meaningless for consumers at the moment. I don't do any "professional compute tasks" on the desktop. Do you? If people cared about compute, not a single HD5xxx series would have sold in consumer space when Fermi was available for 2 years. Feel free to tell me which programs you use that take advantage of the added compute performance.

For professional applications (scientific, financial/banking sector, programming), compute matters. At the moment I fail to see how AES256, Mandelbrot, etc. performance advantage matters in the consumer space without any applications in Windows 7 that actually use AMD's huge compute advantage for common tasks.

Can I extract files faster with compute?
Can I run antivirus faster with compute?
Can I run Monte Carlo simulation faster with compute?

When we can use the GPU to accelerate common tasks, then I think it'll be a more exciting development for consumers.

Also, I looked at Anandtech's 5870 review, and this "5870 equal or better than the 4870 X2" situation pretty much happened only when CF didn't scale all that well, which seems to have happened more often back then.

"Using a 2560x1600 resolution, where multi-GPU technology works at its best, the single-GPU Radeon HD 5870 actually managed to outperform the Radeon HD 4870 X2 in 6 out of the 15 games tested, and in many of the cases where it was slower the margin was minimal." ~ TechSpot

6 of 15 is 40% of the time in their games. Most importantly, the HD5870 often delivered 60 fps in situations where the HD4870 would only be running 30-35 fps. But the HD7970 doesn't allow for that to happen in Crysis, Crysis 2, BF3, Metro 2033, Shogun 2, etc. It's never fast enough to actually make a huge difference imho; with overclocking, it gets there ;)

After a couple of years of driver improvements for SLI/CF, I'd say it has become harder for single GPUs to compete against the X2 from the previous gen...

I agree, but that doesn't mean I am going to give it a pass, esp. when AMD thinks it's impressive to beat GTX580 by just 25% on a new 28nm process, 14 months later.

As for your compilation of unplayable games: in two of them GTX 580 is under 20fps, and since 45fps isn't playable, I guess 50+ is playable?

For racing games, I want 45 fps minimum.
For FPS games, I want 60 fps average, with 40 fps minimums.

If I was ok with 20 fps minimums and 45 fps average, I'd either write for [H] or play my PS3 @ 30 fps without spending a dollar to upgrade PC hardware.

Are you expecting 150% improvement for Kepler over GTX 580?

150% more? No. But at least 50% more than the GTX580, not the 25% that the HD7970 delivered.

In 2 years' time I purchased HD4890 ($175), GTX470 ($190) and HD6950 @ 6970 ($230). Those 3 cards in total cost $595 without resale. Going from HD4890 --> HD6970 netted me 75-80% increase, at least. Since HD7970 is only 40% faster than my HD6970, my upgrade path involves paying $550 to just get 40% more performance? o_O I was hoping for a much higher performance increase from the factory, so that the HD7950 might have been a viable upgrade, but now it might only be 25-30% faster for $450 over my card.
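To put rough numbers on that upgrade path, here is a quick Python sketch using the estimates above as given (the poster's figures, not fresh benchmarks, so treat the result as approximate):

[CODE]
# Rough upgrade-path arithmetic using the figures quoted above
# (the poster's estimates, not measured benchmarks).
prices = {"HD4890": 175, "GTX470": 190, "HD6950 @ 6970": 230}
total_spent = sum(prices.values())        # $595 across the three cards

gain_4890_to_6970 = 1.775                 # ~75-80% faster than the HD4890
gain_6970_to_7970 = 1.40                  # HD7970 ~40% faster than the HD6970
gain_4890_to_7970 = gain_4890_to_6970 * gain_6970_to_7970

print(f"Total spent so far: ${total_spent}")           # $595
print(f"HD7970 vs HD4890: ~{gain_4890_to_7970:.2f}x")  # ~2.5x the HD4890
[/CODE]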

Even for the other two games Kepler will need 100+% improvement to meet your "playable" rating.

Agreed. There isn't a double standard on my part, trust me. If GTX680 is only 40% faster than GTX580, I won't be impressed either and will skip it too.
 

janas19

Platinum Member
Nov 10, 2011
the move to gpgpu or simd is more or less a mandated course given deferred rendering in games and other compute functions. neither amd nor nvidia has a choice in the matter, if they want to stay relevant.

watch the dice bf3 presentation and you can get an idea of what is going on. they are dividing the rendering pipeline into large sets of simpler functions calculated in the first pass (color, spec, surface normal, ambient occlusion) and then solving for multiple light sources later. the sheer number of conditionals and dependencies requires you to have powerful gpgpu-style schedulers.

as long as deferred rendering is in vogue for future game engines, older-style vliw plug-and-chug architectures are not an option to pursue. the added benefit of being able to do hpc compute and being able to synergize with cpu development is just gravy as far as amd's motivation to go gpgpu is concerned.
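For anyone unfamiliar with the term, the deferred approach described above roughly boils down to the two-pass structure below (a highly simplified Python sketch of the idea only; real engines such as Frostbite do this on the GPU with G-buffer render targets, not like this):

[CODE]
# Simplified sketch of deferred shading: the geometry pass writes per-pixel
# surface attributes once, and lighting is then resolved per pixel for many lights.
from dataclasses import dataclass

@dataclass
class GBufferTexel:
    albedo: tuple   # base color
    normal: tuple   # surface normal
    spec: float     # specular factor
    ao: float       # ambient occlusion

def geometry_pass(scene):
    """First pass: rasterize geometry once, storing attributes per pixel."""
    return [GBufferTexel(p["albedo"], p["normal"], p["spec"], p["ao"]) for p in scene]

def lighting_pass(gbuffer, lights):
    """Second pass: for each pixel, accumulate contributions from every light."""
    frame = []
    for texel in gbuffer:
        lit = 0.0
        for light in lights:
            n_dot_l = max(0.0, sum(a * b for a, b in zip(texel.normal, light["dir"])))
            lit += light["intensity"] * n_dot_l * (1.0 + texel.spec)
        frame.append(tuple(c * lit * texel.ao for c in texel.albedo))
    return frame

scene = [{"albedo": (0.8, 0.2, 0.2), "normal": (0, 0, 1), "spec": 0.5, "ao": 1.0}]
lights = [{"dir": (0, 0, 1), "intensity": 1.0}, {"dir": (0, 1, 0), "intensity": 0.3}]
print(lighting_pass(geometry_pass(scene), lights))
[/CODE]

The point is that the expensive per-light work happens in the second loop, decoupled from scene geometry, which is what pushes the scheduling and compute demands onto the GPU.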

That's good info about where gaming is going.
 

3DVagabond

Lifer
Aug 10, 2009
Their improvement is due to a massive shift from 40nm to 28nm. An architecture that sacrifices performance in one area (gaming) will still come out ahead with such a huge improvement.

If you look at the 33% shader increase (1536 -> 2048), it doesn't look like they sacrificed any performance. It appears that GCN is no better at gaming than VLIW4, except for geometry throughput (a side benefit of the improved compute performance), but it does improve compute performance.
 

iCyborg

Golden Member
Aug 8, 2008
My summary took every single game benchmark from Anandtech's Review. I didn't include compute because I feel that's meaningless for consumers at the moment. I don't do any "professional compute tasks" on the desktop. Do you? If people cared about compute, not a single HD5xxx series would have sold in consumer space when Fermi was available for 2 years. Feel free to tell me which programs you use that take advantage of the added compute performance.
When we can use the GPU to accelerate common tasks, then I think it'll be a more exciting development for consumers.
It's your right to ignore all the architecture improvements that you as a gamer don't find useful and label them meaningless, but what ultimately matters for AMD is the revenue/market success of the product, and looking at the overall picture I don't agree that they are insignificant. I also doubt that nVidia doesn't care about GCN's GPGPU improvements...
I mean the title of this thread is suggestive enough.

I agree, but that doesn't mean I am going to give it a pass, esp. when AMD thinks it's impressive to beat GTX580 by just 25% on a new 28nm process, 14 months later.
How much better did nVidia get 14 months later? :)

For racing games, I want 45 fps minimum.
For FPS games, I want 60 fps average, with 40 fps minimums.

If I was ok with 20 fps minimums and 45 fps average, I'd either write for [H] or play my PS3 @ 30 fps without spending a dollar to upgrade PC hardware.
I'm not judging your standards; I was just using the numbers to derive the conclusion that it's pretty much a given that Kepler will fail to make those games playable by the same standards.

150% more? No. But at least 50% more than the GTX580, not the 25% that the HD7970 delivered.

In 2 years' time I purchased HD4890 ($175), GTX470 ($190) and HD6950 @ 6970 ($230). Those 3 cards in total cost $595 without resale. Going from HD4890 --> HD6970 netted me 75-80% increase, at least. Since HD7970 is only 40% faster than my HD6970, my upgrade path involves paying $550 to just get 40% more performance? o_O I was hoping for a much higher performance increase from the factory, so that the HD7950 might have been a viable upgrade, but now it might only be 25-30% faster for $450 over my card.
I bought a 4870 3 years ago. And I'm still waiting for a card that *I* would find a meaningful upgrade. The GTX 580 is about the minimum performance-wise, but at $500 it's pricier than I'd care to pay, not to mention ~60% more than I paid for the 4870. The 7970 is a better value, but also more than I'm willing to pay. But I don't think of either of them as lackluster just because they aren't for me.

Agreed. There isn't a double standard on my part, trust me. If GTX680 is only 40% faster than GTX580, I won't be impressed either and will skip it too.
Fermi was hyped by some as the second coming of Jesus: a huge chip arriving 6 months later, so it had to rock, right? And it turned out to be ~10-15% faster than the 5870 and a lot hotter. I could be wrong of course, but there's a certain déjà vu feeling here. AMD can already easily come out with ~10% faster clocks and add another 5% from driver improvements, so Kepler will need to be ~60% faster just to get to the same position they were in with the 580 vs the 6970, and they will be late. I'll be very impressed if they can pull off a 70+% improvement in Q2. That 1024-core beast supposedly coming in Q4 will be arriving very close to SI's successor, methinks.
 

Flipped Gazelle

Diamond Member
Sep 5, 2004
When we can use the GPU to accelerate common tasks, then I think it'll be a more exciting development for consumers.

It's a chicken-or-egg situation: do programmers wait for widespread HW support, or does HW follow the programmers' lead? Only enthusiasts who have their noses firmly stuck in the present can fail to be excited by the future prospects of GPGPU... well, as exciting as the modern computing environment allows, anyway.

Edit: we've been having this problem for years with multi-core CPUs. I get a tad depressed when I think about how f**king plodding mainstream computing technology has been over the last decade.
 

RussianSensation

Elite Member
Sep 5, 2003
The 5870 doubled the SP count of the 4870. The 7970 adds 33% to the 6970. ~Doubling of performance isn't a reasonable expectation.

The Phenom II X4 was slower than the 1st generation of i7 CPUs. Bulldozer doubled the core count of the Phenom II X4, but still can't beat the 1st-generation i7s. Just because you achieved 40% more performance over your previous part (HD7970 vs. 6970) doesn't make it impressive in a competitive market with other players.

I never said HD7970 should double HD6970's performance, but 50-60% would have been nice. Also, relative to GTX580, it's only 25% faster after 14 months. I expected more -- here is why.

A) Comparing HD7970 only to the HD6970 misses the point since HD7970 is not a $370 card anymore, but a $550 card. Normally, we get more performance at the same price, or similar performance at a far lower price. In this case, what kind of a performance boost should be expected in the graphics card industry at the $500-550 level given the time frame? Do you think 25% is satisfactory in 14 months?

B) Would we have been satisfied if each previous generation was only 25% faster? So in that case, GTX480 only needed to be 25-30% faster than the faster of [GTX285/HD4890] and GTX280 only needed to be 25-30% faster than the faster of [8800GTX/HD3870]. I think the performance increase should be measured from a generational perspective vs. the fastest card from whichever camp, in which case 25% more misses that mark by miles. 25% is excellent for an evolutionary part, not for a new generation. Why is HD7970 exempt from historical standards of expected performance improvement vs. the previous generation high-end card (regardless if it's from NV or AMD)?

C) If for a moment we assume that 25% is satisfactory from the highest performing card released 14 months ago, then by those same standards, we should expect no more than a 25% increase 14 months from January 9, 2012 as well. Would you be happy if, by April 2013, the fastest card available at that time were only 25% faster than the HD7970? If we wouldn't be happy, then logically we should expect a far greater increase than 25% over the course of 14 months, or that performance level should be achieved in a much shorter period of time. I personally wouldn't be happy if by April 2013 the fastest single-GPU card on the market is only 25% faster than the 7970.

I would have been ecstatic if the HD7970 was $379, but it isn't.

There are some significant implications from this:

1) Normally, current high-end cards would now drop to $200-250 levels, as clearance. This is unlikely to happen now because HD7970 didn't beat GTX580 enough. Since AMD and NV gave up on improving performance at the sub-$100 level, that means for gamers on a budget, it's very difficult to increase their performance level by a lot without having to drop a lot of $$. Frustratingly enough, HD6950 2GB is still priced at $230-250, hardly an improvement considering it's been on the market for 12 months. That's not a healthy sign for a competitive industry.

2) If both AMD and NV go back to $500-600 price levels, then we are back to the previous decade of pricing strategy. I have no problem with that; however, in that era, the $500-600 price levels for a new generation were accompanied by at least a 50% performance boost over the previous fastest card from either camp.

If we as gamers become satisfied to pay $500-550 for just a 25% performance increase every 14-15 months, what kind of a message are we sending to AMD and NV? They'll start giving us 25% performance increases, instead of the 40-50% we usually expected from them during the transition from the previous high-end card from either camp to a brand new generation. Just my 2 cents.

** My post is from a gamer's point of view, not the AMD point of view. I already said that from AMD's perspective, the $550 is more than justified since it's 25% more performance for 10% higher price than the GTX580. But for us gamers, this is a huge step back in terms of expected performance increase from 1 generation to the next. ** If this is what the future holds, I am disappoint (but I don't believe it is). I think HD7970's $550 reign at the top will be short-lived, nothing like the GTX580's 14-month reign.
 

Vesku

Diamond Member
Aug 25, 2005
I think that's true, RS. Anyone on a 6000 or 500 series will probably need to wait for the 2nd spin/gen of 28nm if they want to see 60%+ more perf per dollar. That's happened before, but anyone still on a 4000/400 card will be tempted.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
The price tag of these cards is entirely up to Nvidia. If they lower the price on the GTX580, it will tempt a lot of people. Everyone seems to think that AMD is pricing this card at $550 as some sort of greed-fueled, anti-consumer, the-rich-get-richer scheme. It's not. They are just pricing the card where its performance allows it to sell relative to the competition.

If Nvidia drops prices to make the GTX580 more attractive and clear out stock for Kepler, AMD will have to lower 7970 prices to sell parts.
 

taltamir

Lifer
Mar 21, 2004
They had to do it sometime, just like Nvidia had to do it sometime. GPGPU is the future, and a lot of money can be made on it.

And a few more in that vein...

http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/2
In Q3’2011 NVIDIA’s Professional Solutions Business (Quadro + Tesla) had an operating income of 95M on 230M in revenue. Their (consumer) GPU business had an operating income of 146M, but on a much larger 644M in revenue.
GPGPU
95M operating income = 230M revenue - 135M expenses
(230M - 135M) / 135M * 100% = 70.37% profit
Gaming
146M operating income = 644M revenue - 498M expenses
(644M - 498M) / 498M * 100% = 29.32% profit
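Spelled out as a quick Python sketch (same Anandtech figures as above; note that these percentages are operating income relative to expenses, i.e. a markup-style ratio, rather than margin on revenue):

[CODE]
# Same arithmetic as above: operating income as a percentage of operating expenses,
# using the Q3'2011 NVIDIA figures quoted from Anandtech (millions of USD).
def income_over_expenses(revenue: float, expenses: float) -> float:
    return (revenue - expenses) / expenses * 100

quadro_tesla = income_over_expenses(230, 135)   # ~70.4%
consumer_gpu = income_over_expenses(644, 498)   # ~29.3%
print(f"GPGPU (Quadro + Tesla): {quadro_tesla:.2f}%")
print(f"Gaming (consumer GPU):  {consumer_gpu:.2f}%")
[/CODE]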

Now, while the profit percentage is undoubtedly higher on the GPGPU side, the total profit is much smaller. The notion that GPGPU is more important is simply false because the total profit from it is smaller. It might become more important if it maintains its high profit margins (unlikely now that there is actual competition) and if analysts are correct that it will overtake gaming some day (who knows). But at the moment it doesn't, didn't, and isn't.

I am also very curious about the development cost, as with such high incomes in both sectors I would imagine that each would warrant its own unique development. What they are doing is cutting costs by making a chip that is less good at both of the things it targets, but that saves the money of developing a unique chip for each sector.

They must have reasoned that:
A. Sacrificing some gaming performance would cause a smaller reduction in gaming income than the profits gained from GPGPU income.
B. Developing a unique device for each would cost more than the increase in sales in each field from the added performance.
C. Games and consumer software are going to, very very soon, render non-GPGPU-focused video cards obsolete (this has been nVidia's view from day one, and it never materialized).

PS. If someone can shed some light on the cost of developing a new architecture, I would very much like to hear it.
 

RussianSensation

Elite Member
Sep 5, 2003
If you look at the 33% shader increase (1536 -> 2048), it doesn't look like they sacrificed any performance. It appears that GCN is no better at gaming than VLIW4, except for geometry throughput (a side benefit of the improved compute performance), but it does improve compute performance.

Firstly, you can't just compare SPs without looking at GPU clock speeds. 925MHz x 2048 vs. 880MHz x 1536 = +40% shader throughput. I see the SP comparison without clocks all the time on our forums. It's simply illogical. What if one card had 1024 SPs @ 500MHz and another had 512 SPs @ 1000MHz? You have to consider clock speeds.

Secondly, to measure the efficiency of the architecture, you should look at transistor count and die size, not the number of SP units alone. SPs are just 1 of the 4 important facets of a videocard (ROPs, TMUs and memory bandwidth being the other 3). By narrowing it down to transistors, you start to see the real efficiency imo.

With 4.3 billion transistors, Tahiti XT boasts over 65% more transistors than the Radeon HD 6970, and yet is only 40% faster on average. Performance per transistor in games is actually worse (which is expected, since they had to add GPGPU functionality). Fermi took a performance hit from this earlier, too.
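To make both of those arithmetic points concrete, here is a quick Python sketch using the relative figures quoted above (the 40% and 65% numbers are review averages, so treat the results as approximate):

[CODE]
# Shader throughput scales with both SP count and clock speed, so compare the product.
def shader_throughput(sp_count: int, clock_mhz: int) -> int:
    return sp_count * clock_mhz

hd7970 = shader_throughput(2048, 925)   # Tahiti XT
hd6970 = shader_throughput(1536, 880)   # Cayman
print(f"Shader throughput, HD7970 vs HD6970: +{hd7970 / hd6970 - 1:.0%}")  # ~+40%

# The hypothetical above: raw SP counts are meaningless without clock speeds.
print(shader_throughput(1024, 500) == shader_throughput(512, 1000))        # True

# Performance per transistor, using the relative figures from the review.
transistor_ratio = 1.65    # Tahiti XT has ~65% more transistors than Cayman
performance_ratio = 1.40   # and is ~40% faster on average in games
print(f"Relative perf per transistor: {performance_ratio / transistor_ratio:.2f}x")  # ~0.85x
[/CODE]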

The reason we are getting higher performance per watt is the node shrink, not improved efficiency of GCN vs. VLIW-4 in games.

- 65% more transistors in a 6% smaller die, strictly because they went from 40nm to 28nm.
- The move from 40nm to 28nm allows for up to a 45-50% reduction in power consumption at the same transistor switching speeds, at least at GlobalFoundries. I am assuming it's similar for TSMC.

"28nm transistors offer up to 60% higher performance than 40nm at comparable leakage with up to 50% lower energy per switch and 50% lower static power." The real "secret sauce" behind the HD7970's 40% performance increase is not GCN, but the die shrink. The HD7970 is 40% faster than the HD6970 in spite of moving to GCN, not because of it. 28nm allowed AMD to fit GCN's features in without sacrificing gaming performance. So more or less, until games move to deferred rendering engines or engines that rely on compute (such as Civilization 5), GCN isn't any more efficient for games than VLIW-4 was.

That means that if the HD6970 were shrunk to 28nm and given 65% more transistors, it probably would also have achieved the same 40% performance boost, if not more, because it wouldn't have needed to spend die space on GPGPU units.
 

3DVagabond

Lifer
Aug 10, 2009
I never said HD7970 should double HD6970's performance, but 50-60% would have been nice. Also, relative to GTX580, it's only 25% faster after 14 months. I expected more -- here is why.

No, here is why. They had a transistor budget. They decided to spend a fair bit of it on something besides games.

A) Comparing HD7970 only to the HD6970 misses the point since HD7970 is not a $370 card anymore, but a $550 card. Normally, we get more performance at the same price, or similar performance at a far lower price. In this case, what kind of a performance boost should be expected in the graphics card industry at the $500-550 level given the timeframe? Do you think 25% is satisfactory in 14 months?

The card that it's competing against, and is beating, is a $500-$550 card. They are pricing it against its competition. While I'm no happier than you are about the price, it's what makes sense at the moment considering the card's position in the market, now and for the foreseeable future. It's going to sell at that price. There is no way to justify doing anything else.

B) Would we have been satisfied if each previous generation was only 25% faster? So in that case, GTX480 only needed to be 25-30% faster than GTX285/HD4890 and GTX285 only needed to be 25-30% faster than 8800GTX/HD3870. I think the performance increase should be measured from a generational perspective, in which case 25% more misses that mark by miles. Why is HD7970 exempt from historical standards of expected performance improvement vs. the previous generation high-end card (regardless if it's from NV or AMD)?

We'd all like to see more performance. The reason for its performance relative to the previous gen is a combination of things. First is the transistor budget and the need to do more than just play games better, which I mentioned earlier. I think, from a gaming standpoint, it's fast enough. Much like with CPUs, though, from a compute standpoint there's no such thing as fast enough. I think they made it as fast as they could on the compute side while improving the gaming performance as much as they deemed necessary. It's just the compromise required for the targets and goals they set.

C) If for a moment we assume that 25% is satisfactory after 14 months of waiting from the highest performing card 14 months ago, then by those standards, we should expect no more than a 25% increase 14 months from now as well. If we would be unhappy with the fastest card being only 25% faster than HD7970 by May 2013, then logically we should expect a far greater increase than 25% in 14 months, or that performance level achieved in a much shorter period of time.

You need to stop exaggerating the time scale. IIRC (and I'm not looking it up, just going from memory) the 580 was released last Dec and the 7970 will be out in Jan. That's 13 months. If the next faster card were released in May '13, that would be 16 months. Now, it's very likely the next faster card to be released will come sometime this year (who knows exactly when) from nVidia. Before May of next year AMD (and maybe even nVidia) will likely release a yet faster card than that. So that's 2 or maybe 3 faster cards. And that's not counting dual-GPU cards, because I assume we are talking single-GPU only.

I would have been ecstatic if the HD7970 was $379, but it isn't.

I think this is the real problem. You don't perceive this card as being a good value. I can actually see your point here. We are dealing with supply, which is apparently not good, and demand, which will likely outstrip supply. Add to that no competition, and we have a situation that doesn't bode well for consumers.

There are some significant implications from this:

1) Normally, current high-end cards would now drop to $200-250 levels, as clearance. This is unlikely to happen now because HD7970 didn't beat GTX580 enough. Since AMD and NV gave up on improving performance at the sub-$200 level, that means for gamers on a budget, it's very difficult to increase their performance level by a lot without having to drop a lot of $$.

I don't think it's because the 7970 didn't beat it by enough. I think it's because nVidia doesn't have a competitive card. If nVidia were releasing a card that was 20% faster than the 7970 at $500, you'd see the 7970 for $400 (or less). I also don't think AMD and nVidia gave up on improving performance <$200. The 6870 is pretty damned fast for its price. Definitely faster than anything else that's come before it at the same price. We'll have to wait and see how this gen turns out. If nVidia continues to not compete, or yields stay in the toilet, it might turn out bad.

2) If both AMD and NV go back to $500-600 price levels, then we are back to previous decade of pricing. I have no problem with that, however, in that era, the $500-600 price levels for new generations were accompanied by 50-100% performance boosts over the previous fastest card from either camp.

Do you honestly not see the combination of circumstances that has created this situation? Or the circumstances that caused lower pricing for the last couple of gens? AMD trying to gain market share, strong competition between the brands, etc., etc.

If we as gamers become satisfied to pay $500-550 for just a 25% performance increase every 14-15 months, what kind of a message are we sending to AMD and NV? They'll start giving us 25% performance increases, instead of the 40-50% we usually expected from them during the transition from the previous high-end card from either camp to a brand new generation. Just my 2 cents.

** My post is from a gamer's point of view, not the AMD point of view. I already said that from AMD's perspective, the $550 is more than justified since it's 25% more performance for 10% higher price than the GTX580. But for us gamers, this is a huge step back in terms of expected performance increase from 1 generation to the next. ** If this is what the future holds, I am disappoint.

It might or might not continue this way. It could all change in a few months, if nVidia gets competitive again. If they continue to release 6 months apart from each other, though, it might stay like this for a while.

Just for the record, I dispute how you are calculating the gen-to-gen improvement for SI. Debating it, though, isn't really relevant. Suffice it to say I don't think things are as bad as you are making them out to be.
 

RussianSensation

Elite Member
Sep 5, 2003
How much better did nVidia get 14 months later? :)

GTX580 launched just 8 months after GTX480 and added 15%. That was a refresh, not a new generation.

I guess we'll have to see how much NV adds over HD7970 in the next 14 months.

I'm not judging your standards; I was just using the numbers to derive the conclusion that it's pretty much a given that Kepler will fail to make those games playable by the same standards.

:D Agreed. This is why I am strongly contemplating skipping this entire generation. If you step back for a second from % increases and just look at the frames per second added, in some cases I am only seeing 12-14 fps added over my card. Now, 40% more sounds like a lot, but when you are going from 29 fps in Crysis 2 to 45 fps, that's still not what I would want.

I bought a 4870 3 years ago. And I'm still waiting for a card that *I* would find a meaningful upgrade. The GTX 580 is about the minimum performance-wise, but at $500 it's pricier than I'd care to pay.

I have a solution for this kind of thing: a backlog of less demanding Steam games. :cool:

AMD can already easily come out with ~10% faster clocks and add another 5% from driver improvements, so Kepler will need to be ~60% faster just to get to the same position they were in with the 580 vs the 6970, and they will be late. I'll be very impressed if they can pull off a 70+% improvement in Q2. That 1024-core beast supposedly coming in Q4 will be arriving very close to SI's successor, methinks.

There's a limit to how long one can wait to upgrade, because there will always be something better coming out in the tech world. For people who have been holding out on upgrading for a while, it doesn't make sense to wait for Kepler if they really need more performance. That includes a huge wave of HD4870/4890/GTX275/280/285 users, etc. These gamers will already be getting a 200-250% performance increase by going to an HD7950/7970. So waiting for Kepler for a possible further 25% increase (hypothetically) wouldn't really be worth it. It is this generation of gamers for whom the HD7xxx series is really a slam dunk.
 

PingviN

Golden Member
Nov 3, 2009
VLIW has served ATi/AMD well so far, but there is a shift towards GPGPU, and I guess they felt it was time to jump on that bandwagon now rather than with the next generation of GPUs. It would be interesting to see the performance gains from sticking with VLIW on 28nm instead of GCN, but there really aren't a lot of games where the HD7970 isn't enough for maximum settings at 2560x1600. I think the change is for the better, but it wasn't really necessary until now.
 

blastingcap

Diamond Member
Sep 16, 2010
http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review

So, I am looking at the Anandtech review of AMD's new graphics architecture and all I can see is Fermi.
It outright says that this is not as good for gaming, but better for compute. And so out of necessity they are going that way, providing an absolutely minimal boost in gaming performance over the current gen due to dumping VLIW4 for SIMD.

I have to say that was not at all what I expected. I thought we would see nVidia backpedal on Fermi rather than AMD embrace it.

Intel and AMD CPUs are rendering lower-end GPUs obsolete.

NVDA lost its chipset business after a long struggle with Intel.

NVDA needed to make up the revenue elsewhere. Mobile is fine, console GPUs are fine, and professional graphics is a cash cow (and may need some compute-related abilities), but HPC/supercomputing on GPUs is nascent and NVDA wants to own it just like it owns professional graphics at 85%+ market share. If AMD doesn't go compute, it will basically concede GPU-based HPC/supercomputing to NVDA. No way AMD can be dumb enough to allow that to happen to them again... all these years later and AMD is still unable to crack the professional graphics market due to NVDA's market share there and the reluctance of business users to switch brands to a semi-flaky rival.
 

Lonbjerg

Diamond Member
Dec 6, 2009
http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review

So, I am looking at the Anandtech review of AMD's new graphics architecture and all I can see is Fermi.
It outright says that this is not as good for gaming, but better for compute. And so out of necessity they are going that way, providing an absolutely minimal boost in gaming performance over the current gen due to dumping VLIW4 for SIMD.

I have to say that was not at all what I expected. I thought we would see nVidia backpedal on Fermi rather than AMD embrace it.

I am surprised that you are surprised by this.
GPGPU is here to stay and has become more and more important.

And as I have predicted before, we will see a change of stance on this forum.
The "argument" will not be that GPU physics is a turd anymore.
It will change to: NVIDIA's GPU physics is a turd...but AMD's is good.

Wait and see.

Another thing is to look at NVIDIA's margins in the HPC segment.
AMD really wants some of the green.
In the last few generations they tried to lower their prices to keep market share (and little did that help them), but that strategy hurt their bottom line.

Now they have new management and they need to make money...and the way to do that is to follow the course that NVIDIA has followed since the G80.

But I am glad that "perf/watt" will die off...and performance again will be the metric of choice.

That will stop a lot of irrelevant noise.
 

Lonbjerg

Diamond Member
Dec 6, 2009
I don't do any "professional compute tasks" on the desktop. Do you?

I use this on a near-daily basis:
http://www.vreveal.com/


GPGPU is not something just for professionals.

Just yesterday I cleaned up a 20-minute video of my daughter's Christmas end-of-term celebration at her school.

It took 10 minutes to clean that video up... I wouldn't like to think about how long it would have taken if I were using my CPU...
 

blastingcap

Diamond Member
Sep 16, 2010
I am surprised that you are surprised by this.
GPGPU is here to stay and has become more and more important.

And as I have predicted before, we will see a change of stance on this forum.
The "argument" will not be that GPU physics is a turd anymore.
It will change to: NVIDIA's GPU physics is a turd...but AMD's is good.

Wait and see.

Another thing is to look at NVIDIA's margins in the HPC segment.
AMD really wants some of the green.
In the last few generations they tried to lower their prices to keep market share (and little did that help them), but that strategy hurt their bottom line.

Now they have new management and they need to make money...and the way to do that is to follow the course that NVIDIA has followed since the G80.

But I am glad that "perf/watt" will die off...and performance again will be the metric of choice.

That will stop a lot of irrelevant noise.

Although AMD and NVDA both innovate, NVDA does a better job of actually executing its business plans. E.g., AMD's physics and tessellation efforts went nowhere and are still pretty much nowhere to be seen, whereas NVDA didn't get on the tessellation bandwagon for a long time, but when it did, it really pushed the boundary. Similarly, NVDA has actual implementations of CUDA and PhysX. AMD has... not much. NVDA pushed into mobile, but AMD sold its mobile division for cheap while watching the buyer get rich off it. NVDA's market share in professional graphics is basically monopoly-level. Etc.

And yeah, AMD's Eyefinity was a rare victory, but NVDA put together a temporary fix (dual-GPU cards, or SLI, to get Surround) while it worked on Kepler, which probably has hardware Eyefinity support like AMD's cards do.

There is one thing AMD got very right, though, and that is performance/watt. We are living in a world with ever-rising energy costs and ever-rising concerns over greenhouse gases and the environment. Thus it baffles me to hear you say that perf/watt doesn't matter, even in the non-mobile world. Intel knows this, which is why they are in a deathmatch with ARM's energy-sipping designs... it is only a matter of time before ARM gets big server contracts due to ARM-run server farms sipping less energy than x86-run server farms. Heck, even NVDA knows perf/watt matters--look at their PR slides and the slides at their HPC-oriented conferences. Wattage means more costs: to cool the chip, to quiet the cooling solution (if necessary), more PCB components, and of course more direct electricity costs. That is why NVDA respinned Fermi so quickly to get that power draw down.

P.S. GPU-based physics, including NVDA GPU PhysX, is still a "turd" as you put it, because nobody is adopting it who isn't being bribed by NVDA to do so. It will de-turdify if it ever gets into consoles, which many companies program for first these days, not PC.
 

AtenRa

Lifer
Feb 2, 2009
Compute and DX-11 tessellation require spending a lot of transistors in the front end in order to raise TLP/DLP and add more tessellation units. Those transistors consume die space; the effect is lower performance per die area and per watt.
Older DX-9 and VLIW-coded games will see a minimal performance increase over Cayman. In those games, performance per die area and performance-per-watt efficiency will go down.

Some examples,

CoD 4
http://tpucdn.com/reviews/AMD/HD_7970/images/cod4_1920_1200.gif

Skyrim
http://tpucdn.com/reviews/AMD/HD_7970/images/skyrim_1920_1200.gif


Home Front
http://www.kitguru.net/components/graphic-cards/zardon/amd-hd7970-graphics-card-review/10/

F1 2011
http://www.kitguru.net/wp-content/uploads/2011/12/f1-20117.png

FarCry 2
http://www.kitguru.net/wp-content/uploads/2011/12/far-cry21.png

The small-die strategy will not have the same effect this time in DX-11 games and GPGPU. VLIW-4/5 was more efficient in performance per die area in DX-9 games, and that was the reason the HD4870/90 and HD5870/6970 had better performance-per-die/watt efficiency against GT200 and GF100/110.

A 1024-CUDA-core Kepler GK100 could show a bigger performance gap over the HD7970 than the GTX580 had over the HD6970. Since GCN is a new architecture, there are no games coded for it as of yet, and it will take some time for drivers and game patches to raise performance in today's games.
 

Gloomy

Golden Member
Oct 12, 2010
And as I have predicted before, we will see a change of stance on this forum.
The "argument" will not be that GPU physics is a turd anymore.
It will change to: NVIDIA's GPU physics is a turd...but AMD's is good.

Wait and see.

Err, no. The "argument" is that GPU physics in its current form alienates consumers, and because of this, adoption isn't going to happen.

And it will remain so if AMD's take also alienates consumers and isn't widely adopted. This has very little to do with brand and a lot to do with the -fact- that developers aren't going to waste resources on features that only apply to a fraction of the market. (Remember that PhysX has a performance tax as well, so it's not only not applicable to AMD buyers, it's only being used by Nvidia users with hardware that can handle it!)
 

Lonbjerg

Diamond Member
Dec 6, 2009
Err, no. The "argument" is that GPU physics in its current form alienates consumers, and because of this, adoption isn't going to happen.

And it will remain so if AMD's take also alienates consumers and isn't widely adopted. This has very little to do with brand and a lot to do with the -fact- that developers aren't going to waste resources on features that only apply to a fraction of the market. (Remember that PhysX has a performance tax as well, so it's not only not applicable to AMD buyers, it's only being used by Nvidia users with hardware that can handle it!)

I have had hardware since 2006 that could run PhysX... at some point you cannot dial up the AA and AF any further... and just pushing more pixels of the same dead world isn't going to improve immersion...
 