AMD's Gaming Evolved snags Far Cry 3


sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Umm... exactly. Yet again, you continue to support my original point about your statement. I hope that makes it clearer.

How can they add more units when their architecture doesn't support more units? Magic? :hmm:
That's the reason why Tahiti and Pitcairn have the same configuration. AMD can add more SIMD clusters with TMUs and compute units, but at some point the performance benefit stops scaling with the number of units.

So, yes, higher clocks can overcome the difference in throughput, which they do.

Yes, because of the higher clock you don't need more geometry pipelines and units, but it will increase power consumption.

It is a problem, just like the lack of tessellation units was for the HD 5K series - which is why I offered that as a comparison from the start.

It's the reversal of the tessellation issue. AMD beat their chest for years, then nVidia stomped them with their implementation. Well, nVidia beat their chest for years, and now AMD is stomping them with their implementation.

It has nothing to do with that. Kepler is limited by its compute units (back-end). Using DirectCompute to accelerate the processing of effects will always benefit Kepler more than Tahiti, because Tahiti is limited by the front-end.
That's the reason why AMD is pushing more compute workload instead of less work and tessellation: tessellation would hurt them more than it hurts nVidia.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
How can they add more units when their architecture doesn't support more units? Magic? :hmm:
That's the reason why Tahiti and Pitcairn have the same configuration. AMD can add more SIMD clusters with TMUs and compute units, but at some point the performance benefit stops scaling with the number of units.

...Exactly. Fewer units == a problem in situations that demand more. I.e., tessellation issues during the HD 5K series, and now DirectCompute issues for Kepler.

That's why I said the two bolded items in your original post that I quoted were conflicting.

Yes, because of the higher clock you don't need more geometry pipelines and units, but it will increase power consumption.

Well, of course. But it seems you already knew this, and thus I don't get why you even presented it as a counter argument.

It has nothing to do with that. Kepler is limited by its compute units (back-end). Using DirectCompute to accelerate the processing of effects will always benefit Kepler more than Tahiti, because Tahiti is limited by the front-end.
That's the reason why AMD is pushing more compute workload instead of less work and tessellation: tessellation would hurt them more than it hurts nVidia.

And this circle just keeps going round and round. Roger, then AMD didn't have an issue with their lack of tessellation during the HD 5K series. There was no problem then either.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
In reality, every generation it seems that the GPU makers patch up what's missing. During the 6xxx series, AMD had horrible tessellation and were behind in compute. Now they excel in compute and have improved their tessellation. nVidia was horribly power inefficient and hot (even comparing Kepler to 460/560), but now they've got an architecture to improve that, too.

Once you start comparing the GK104 to the AMD card it was built to beat (i.e., Pitcairn), you realize that nVidia will be fine once GK110 is called out from the delay-induced shadows it was put into, because you see that both companies address their weaknesses with each gen. GK110 still gains ground on compute and improves upon tessellation while improving power efficiency over the Fermi 480/580.

But right now Compute is not being used a lot. Most gamers are still using cards from the 8800/280 series, so pushing the boundaries with console ports tends not to work out very well. Hence nVidia is taking its time getting GK110 to market, because it's not needed.

And when games actually start using extreme levels of Compute, they have GK110 already designed and ready for release. For now, the higher power efficiency of GK104 trumps rarely used DirectCompute. And given that AMD took six months to release a high-performance driver update that leveled the playing field, I think I'll give nVidia at least as long to match it before I call the game over and done with.

If anything is sad, it's that nVidia's mainstream part beat AMD's high end out of the gate. If AMD had the performance they now have with 12.7 and 12.8, then perhaps nVidia would have released the 680 series as the 660 it should always have been, and the pricing would have been lower from the get-go. That didn't happen, and so here we are. Mainstream is now $330-$430 for many gamers who used to spend only $200 on GPUs.

I'm sure nVidia and AMD are both very pleased, but I'm also sure it's probably inevitable as discrete GPUs become increasingly niche.

Ah... not sure what this had to do with my post, but anyway. The rumor that GK104 was supposed to be Nvidia's mainstream part and some phantom GK110 was to be the high-end enthusiast product is not the consumer's problem. It's not the developer's problem. It's not AMD's problem. It's Nvidia's problem. The reality is that the GTX 680 is Nvidia's best single-chip product. There's no point in talking as if that's not the case.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
I heard that Ubisoft Montreal will be using DirectCompute on GCN hardware to spawn enemies at checkpoints far faster than is possible on Kepler GPUs.

I truly hope they fixed that garbage after Far Cry 2. Ruined what could have been a pretty good experience.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It sounds so easy, but it's so hard to put more units into the chip. AMD doesn't have the architecture for it.

So you are saying AMD will not add any more shaders, textures, ROPs, or improve geometry processing/tessellation with the HD 8000 series? Not sure I understand what your statement even means. You can always add more units -- it's a matter of the process node, power consumption, and which units you want to add (i.e., what the priority is for a specific generation, SKU, and price level).

You are saying AMD engineers can't figure out how to add more geometry / tessellation units into a chip? Pretty amazing that you know this information for a fact.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
So you are saying AMD will not add any more shaders, textures, ROPs, or improve geometry processing/tessellation with the HD 8000 series? Not sure I understand what your statement even means. You can always add more units -- it's a matter of the process node, power consumption, and which units you want to add (i.e., what the priority is for a specific generation, SKU, and price level).

You are saying AMD engineers can't figure out how to add more geometry / tessellation units into a chip? Pretty amazing that you know this information for a fact.

I'd imagine it's probably as difficult as adding another memory controller to GK104. The thing is, the designs for what AMD has in store for Sea Islands were fleshed out some time ago. AMD's lineup has seen greater and more frequent architectural changes than Nvidia's since Nvidia introduced the G80. Can AMD keep up the engineering pace of these constant non-trivial architectural changes, or will they fall more in line with architectural tweaks a la Nvidia with GT200, Fermi, and probably Kepler?

AMD has already copied Nvidia's multi-purpose high-end GPU strategy, so I suspect that following more closely behind with tweaks rather than overhauls is the likely scenario. That's not to say that tweaks and fine-tuning can't result in substantial improvements; we need look no further than the performance and power-draw differences from GF100 to GF110 and GF104 to GF114.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
The number of GAMING EVOLVED titles in 2012 and heading into 2013 is quite significant: Alan Wake, Alan Wake's American Nightmare, Dirt Showdown, Nexuiz, Sniper Elite V2, Sleeping Dogs, Medal of Honor Warfighter, Far Cry 3, BioShock Infinite, Hitman Absolution, Tomb Raider. Most importantly, AMD needs to do a good job with excellent driver support for CF and Eyefinity out of the box on launch day with these upcoming titles.
 

zlatan

Senior member
Mar 15, 2011
580
291
136
There is no problem with DirectCompute and Kepler. It's quite the opposite:
In a lot of games with DirectCompute, nVidia is faster because of it. There are only a few examples which show the other side: Metro 2033 with DoF (bandwidth) or Dirt: Showdown with advanced lighting (fewer compute units).
There are two main differences between GCN and Kepler in terms of compute.
First, a Kepler multiprocessor has 0.33 B/FLOP of shared data bandwidth with 32-bit accesses. Under the same conditions, a GCN multiprocessor has 1 B/FLOP from LDS memory and an additional 0.5 B/FLOP from the L1 data cache. Also, GCN has 64+16 KB of memory to share data, while Kepler has 64 KB.
The second main difference is the cache system. AMD uses a very complex hierarchy tuned for compute. Nvidia uses a simpler and less efficient approach.
All in all, GCN is a compute monster. Kepler has no inherent problems with compute, but it will be much slower than GCN in complex compute shaders.
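For context, here is a rough back-of-the-envelope sketch of where those bytes-per-FLOP figures can come from, assuming the commonly cited per-clock widths (192 FP32 lanes per Kepler SMX, 64 per GCN CU, a 32-bank x 4-byte LDS/shared-memory path, and 64 B/clock of L1 bandwidth per CU) -- those widths are assumptions on my part, not something stated above:

$$\text{Kepler SMX:}\quad \frac{32 \times 4\ \text{B/clk}}{192 \times 2\ \text{FLOP/clk}} = \frac{128}{384} \approx 0.33\ \text{B/FLOP}$$

$$\text{GCN CU (LDS):}\quad \frac{32 \times 4\ \text{B/clk}}{64 \times 2\ \text{FLOP/clk}} = \frac{128}{128} = 1\ \text{B/FLOP}, \qquad \text{GCN CU (L1):}\quad \frac{64\ \text{B/clk}}{128\ \text{FLOP/clk}} = 0.5\ \text{B/FLOP}$$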
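And to make "complex compute shader" concrete, below is a minimal, hypothetical CUDA sketch of a tile-based light-culling pass, the general kind of compute-driven lighting workload mentioned above (e.g. Dirt: Showdown's advanced lighting). The kernel name, struct, and tile sizes are invented for illustration, and real games implement this in HLSL/DirectCompute rather than CUDA; the point is the heavy reuse of the shared-memory (LDS/groupshared) light list, which is exactly where on-chip bandwidth per FLOP matters.

```cuda
// Hypothetical illustration only: a simplified tile-based light-culling
// kernel of the sort used by compute-driven lighting passes. Names and
// sizes are invented; real engines do this in HLSL/DirectCompute.
#include <cuda_runtime.h>

struct Light { float x, y, z, radius; };

#define TILE 16              // one thread block shades a 16x16 pixel tile
#define MAX_TILE_LIGHTS 256

__global__ void shadeTiles(const Light* lights, int numLights,
                           float* out, int width, int height)
{
    // Shared memory (CUDA's analogue of GCN LDS / D3D groupshared) holds the
    // per-tile light list; every thread re-reads it, so on-chip bandwidth
    // per FLOP directly limits the inner shading loop.
    __shared__ int tileLights[MAX_TILE_LIGHTS];
    __shared__ int tileCount;

    const int px  = blockIdx.x * TILE + threadIdx.x;
    const int py  = blockIdx.y * TILE + threadIdx.y;
    const int tid = threadIdx.y * TILE + threadIdx.x;

    if (tid == 0) tileCount = 0;
    __syncthreads();

    // Cooperative cull: each thread tests a strided slice of the light list.
    // (A real pass would test each light volume against the tile frustum;
    //  a trivial radius test stands in for that here.)
    for (int i = tid; i < numLights; i += TILE * TILE) {
        if (lights[i].radius > 0.0f) {
            int slot = atomicAdd(&tileCount, 1);
            if (slot < MAX_TILE_LIGHTS) tileLights[slot] = i;
        }
    }
    __syncthreads();

    if (px >= width || py >= height) return;

    // Accumulate a placeholder shading term from the shared per-tile list.
    float accum = 0.0f;
    int count = min(tileCount, MAX_TILE_LIGHTS);
    for (int i = 0; i < count; ++i) {
        const Light& L = lights[tileLights[i]];
        float dx = L.x - px, dy = L.y - py;
        accum += L.radius / (1.0f + dx * dx + dy * dy);
    }
    out[py * width + px] = accum;
}
```

Each block builds its light list once and then every thread in the tile re-reads it per pixel, so the inner loop's throughput is bounded by shared-memory bandwidth rather than DRAM.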
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Well, here's my issue with the DirectCompute thing. When you use it and it runs better on AMD hardware, Nvidia has more market share, so the developer is alienating a huge percentage of users. Granted, many probably don't even know what DirectCompute is or what a game uses to render lighting, etc. However, if someone reads a review and sees "the game runs poorly on Nvidia hardware," they will probably not buy the game. Word travels fast, too.

So I think devs have to be careful about how they use certain features. It's one thing to be tossed money during development; it's another to sell your game when everyone knows it runs like garbage on anything except a specific set of hardware.
 

Plimogz

Senior member
Oct 3, 2009
678
0
71
It then becomes quite a feather in AMD's cap if the next-gen consoles are sporting GCN, no?
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
It then becomes quite a feather in AMD's cap if the next-gen consoles are sporting GCN, no?

No, because they are using something like a 6670, according to the rumors I saw. Not enough power to do anything close to what you'll see in PC games.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Well, here's my issue with the DirectCompute thing. When you use it and it runs better on AMD hardware, Nvidia has more market share, so the developer is alienating a huge percentage of users. Granted, many probably don't even know what DirectCompute is or what a game uses to render lighting, etc. However, if someone reads a review and sees "the game runs poorly on Nvidia hardware," they will probably not buy the game. Word travels fast, too.

So I think devs have to be careful about how they use certain features. It's one thing to be tossed money during development; it's another to sell your game when everyone knows it runs like garbage on anything except a specific set of hardware.

I suppose it could be a bit of a double-edged sword. It's better than the alternative, though, which is to not do it and continue to let your competitor do it. That is the worse of the two evils, IMO. I don't think it running better on AMD hardware is really going to hurt sales, although I see what you are getting at. It would matter to video card geeks. Regular gamers, not so much, I wouldn't imagine.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I'd imagine it's probably as difficult as adding another memory controller to GK104. The thing is, the designs for what AMD has in store for Sea Islands were fleshed out some time ago. AMD's lineup has seen greater and more frequent architectural changes than Nvidia's since Nvidia introduced the G80.

What? R600 -> RV970 is the same fundamental underlying VLIW architecture, with incremental improvements along the way. The first complete architectural redesign is GCN, this generation.

Can AMD keep up the engineering pace of these constant non-trivial architectural changes, or will they fall more in line with architectural tweaks a la Nvidia with GT200, Fermi, and probably Kepler?

The HD 8000 series is an incremental, not a fundamental, upgrade to GCN 1.0. Expect larger die sizes, perhaps slightly higher clocks, and more functional units.

AMD has already copied Nvidia's multi-purpose high-end GPU strategy, so I suspect that following more closely behind with tweaks rather than overhauls is the likely scenario. That's not to say that tweaks and fine-tuning can't result in substantial improvements; we need look no further than the performance and power-draw differences from GF100 to GF110 and GF104 to GF114.

AMD has just gone through its first major architectural overhaul since the 2900 XT six years ago. GCN is the foundation for at least the next 1-2 generations. It'll be improved much like NV has continuously improved its scalar architecture. I think the next major architecture from NV is actually Maxwell, from what I remember.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
No, because they are using something like a 6670, according to the rumors I saw. Not enough power to do anything close to what you'll see in PC games.

Ya, those rumors have since been called into question. Apparently both the Xbox 720 and PS4 may now be using some GCN HD 7000 series part. With the recent rumor that the PS4 will support 4K TV resolution, we have a strong hint that a modern-spec HD 7000 GPU is at least in the cards.

Well, here's my issue with the DirectCompute thing. When you use it and it runs better on AMD hardware, Nvidia has more market share, so the developer is alienating a huge percentage of users.

Welcome to PhysX, and why, if AMD pours $ into AMD Gaming Evolved, it's going to segregate our choice of GPU ownership even more based on the games we play. Now NV owners who kept saying how amazing PhysX is and how great it was that NV worked closely with developers (i.e., throwing $ at developers) are going to taste their own medicine. AMD Gaming Evolved vs. TWIMTBP (i.e., the marketing game of who throws more $ at developers) is not a great development for gamers, but AMD had to do it since for 3 generations NV users weren't switching. I guess AMD's management realized they can't play a fair and honest game and are now giving in to using the same tactics NV has used for years.

Then there is the rumor that AMD won the GPU selection for all three next-generation consoles. It sounds like AMD is going all-in on this game engine/developer optimization strategy. NV has very intelligent engineers; they fixed the FX 5900 series' DX9 performance problems without any trouble in GF6. If NV falls behind in DirectCompute, they'll address it shortly, maybe even starting with GK110.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Please keep in mind that AMD is rumored to have a clean sweep of the Wii/Sony/Xbox next-gen consoles. In fact, we already know the Wii U uses AMD 7xxx-series technology. This obviously does not help hardware PhysX market penetration. However, NV's architecture has historically been more flexible at more varied computational tasks, so don't count them out... I doubt they will fall behind AMD by much, if at all, even if games get coded in such a way as to favor GCN.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
Nvidia is free to support DirectCompute as robustly as they want. Unfortunately, PhysX is specifically implemented by Nvidia to try to force an artificial advantage into the marketplace, and in the process it has clearly instigated a war that is starting to divide PC gaming. The sad part is, there are even some reviewers who actually consider DirectCompute implementations proprietary, but don't say a word about PhysX, or even tout it as an advantage for Nvidia and sing the praises of Nvidia's developer relations. If you don't think so, see this post.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
@ RS
"Welcome to PhysX and why if AMD pours $ into AMD Gaming Evolved, it's going to segregate our the choice of GPU ownership even more based on the games we play. Now NV owners who kept saying how amazing PhysX is and how great it was that NV worked closely with developers (i.e., throwing $ at developers) are going to taste their own medicine. AMD Gaming Evolved vs. TWIMTPB (i.e., the marketing game of who throws more $ at developers) is not a great development for gamers but AMD had to do it since for 3 generations NV users weren't switching. I guess AMD's management realized they can't play a fair and honest game and are now giving in to using the same tactics NV has used for years. "
That's not how software engineering works at all. To gain better performance from your hardware, you must show the developers how to extract that performance in the first place; throwing money won't help with that. If someone has used Intel/NV/AMD compilers, they will understand that.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
That's not how software engineering works at all. To gain better performance from your hardware, you must show the developers how to extract that performance in the first place; throwing money won't help with that. If someone has used Intel/NV/AMD compilers, they will understand that.
This is how it really works: the GPU maker marches in and helps the dev get the most out of the hardware, and accidentally leaves some money bags behind. And by complete blind coincidence, those features and optimizations don't work on the competitor's hardware. :hmm:
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Nowadays, when PC game sales are dwindling, I don't see how throwing money is going to help either AMD or NV. I think they do an excellent job of marketing the games, though. AMD and NV are better known than most game devs, so if they advertise your game, it's great for you. I mean, AMD advertises SD and NV advertises BL2; what more can the devs hope for?
 

Jacky60

Golden Member
Jan 3, 2010
1,123
0
0
Nowadays, when PC game sales are dwindling, I don't see how throwing money is going to help either AMD or NV. I think they do an excellent job of marketing the games, though. AMD and NV are better known than most game devs, so if they advertise your game, it's great for you. I mean, AMD advertises SD and NV advertises BL2; what more can the devs hope for?

Throwing money around is THE most effective way of getting anything done in capitalism. The idea that throwing money at problems doesn't or can't help strikes me as bizarre in the extreme. With enough money you can get almost anything done by anybody anywhere.
 
Feb 19, 2009
10,457
10
76
Nowadays, when PC game sales are dwindling, I don't see how throwing money is going to help either AMD or NV.

Srsly, not this... I've been hearing the "PC gaming is dying" prediction for over a decade, and it's all rubbish.

I fully expect GK110 to be a complete beast in gaming and compute, so it should handle anything DX11 that AMD tries to throw at GE titles.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
But PC sales are nowhere near console sales, unless the game is made by Blizzard.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
If NV falls behind in DirectCompute, they'll address it shortly, maybe even starting with GK110.

Actually, it is AMD who is falling behind nVidia. Losing in 16 of 23 DX11 games is not really a sign of a great architecture for DX11.
 

Rikard

Senior member
Apr 25, 2012
428
0
0
This is how it really works: the GPU maker marches in and helps the dev get the most out of the hardware, and accidentally leaves some money bags behind. And by complete blind coincidence, those features and optimizations don't work on the competitor's hardware. :hmm:
This. I do not have experience with GPU coding, but I was invited to a free course with Intel regarding multithreading, vectorization, etc. Of course, they only show you how to do it with their own compiler... There are ways of throwing money around without any money being visible. This free-lunch marketing is quite common in business in general.