HD5870 vs GTX480 two years later.


AtenRa

Lifer
Feb 2, 2009
Have you ever wondered about the performance difference between these two cards in recent DX-11 games?

Well, while reading TechPowerUp's HD7950 3GB review, I noticed that both the HD5870 and the GTX480 were on the benchmark slides.

So, almost two years (22 months) after the GTX480's initial release in March 2010, we have data to compare them again, this time only in DX-11 titles.

AVP (avp_1920_1200.gif): GTX480 is 14% faster than HD5870

Batman: Arkham City (arkhamcity_1920_1200.gif): GTX480 is 110.5% faster than HD5870

BF3 (bf3_1920_1200.gif): GTX480 is 27.2% faster than HD5870

Battleforge (battleforge_1920_1200.gif): GTX480 is 20.6% faster than HD5870

Civ 5 (civ5_1920_1200.gif): GTX480 is 60.76% faster than HD5870

Crysis 2 (crysis2_1920_1200.gif): GTX480 is 30.10% faster than HD5870

DIRT 3 (dirt3_1920_1200.gif): GTX480 is 12.10% faster than HD5870

Dragon Age II (dragonage2_1920_1200.gif): GTX480 is 5.30% faster than HD5870

Metro 2033 (metro_2033_1920_1200.gif): GTX480 is 40% faster than HD5870

Stalker: COP (stalkercop_1920_1200.gif): GTX480 is 16% faster than HD5870

Shogun 2 (http://tpucdn.com/reviews/AMD/HD_7950/images/shogun2_1920_1200.gif): HD5870 is 16% faster than GTX480

On average, the GTX480 is 35.62% faster than the HD5870 across these DX-11 games at 1920x1200.
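For reference, here's a minimal Python sketch of how such an average can be worked out from the per-game deltas listed above. The exact method behind the 35.62% figure isn't spelled out (arithmetic vs geometric mean, and whether the Shogun 2 loss is counted as a negative), so treat this purely as an illustration:

```python
# Per-game performance deltas of the GTX480 relative to the HD5870 (percent),
# taken from the TechPowerUp HD7950 review charts listed above.
# Shogun 2 is entered as negative because the HD5870 leads there.
deltas = {
    "AVP": 14.0,
    "Batman: Arkham City": 110.5,
    "BF3": 27.2,
    "Battleforge": 20.6,
    "Civ 5": 60.76,
    "Crysis 2": 30.10,
    "DIRT 3": 12.10,
    "Dragon Age II": 5.30,
    "Metro 2033": 40.0,
    "Stalker: COP": 16.0,
    "Shogun 2": -16.0,
}

# Simple arithmetic mean of the listed deltas.
mean_delta = sum(deltas.values()) / len(deltas)
print(f"Arithmetic mean delta: {mean_delta:.2f}%")

# A geometric mean of the per-game speedup ratios is often a fairer way to
# summarise relative GPU performance across many titles.
ratios = [1 + d / 100 for d in deltas.values()]
geo = 1.0
for r in ratios:
    geo *= r
geo **= 1 / len(ratios)
print(f"Geometric mean speedup: {(geo - 1) * 100:.2f}%")
```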

It's clear that GF100 was designed for DX-11, and if both cards were launched today, the GTX480's higher power consumption would not be a huge issue given its large performance lead over the HD5870.

Unfortunately for NVIDIA, 2010 was not the year of DX-11 gaming. On the other hand, 2012 is the year of DX-11 games, and Kepler will be playing on more familiar ground than Fermi did in 2010.
 

BD231

Lifer
Feb 26, 2001
nVidia is quite literally the Intel of GPUs from a technological standpoint as far as I can tell; they just don't execute as well. You can't blame them, though, being stuck with the problematic fab tech we have these days. The work each individual shader can get done is pretty much double that of AMD's, which is staggering (although I don't know about the 79xx series). That's how AMD rolls though, MORE COARS. I have to wonder if that will bite them in the ass down the line, but I'm not familiar with GPU tech on that level.

I think the higher minimum frame rate argument is pretty telling, but AMD's execution has been pretty stellar as of late.
 

blackened23

Diamond Member
Jul 26, 2011
nVidia is quite literally the Intel of GPUs from a technological standpoint as far as I can tell; they just don't execute as well. You can't blame them, though, being stuck with the problematic fab tech we have these days. The work each individual shader can get done is pretty much double that of AMD's, which is staggering (although I don't know about the 79xx series).

Well, it's pretty obvious, since NV used shader hot-clocking and super large dies with the GTX 200 series and Fermi. Their shader clocks are double the raster clocks, while AMD chose not to do that and went for a more efficient design.

Just so you know, Kepler will not have hot-clocking; they too are going for an efficient design this time. Their old mantra was "a die as large as feasibly possible, with hot-clocked shaders to make that feasible." The side effect was terrible efficiency. They are taking the AMD approach with Kepler; fun times, huh?
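To put some rough numbers on the hot-clocking point, here's a small sketch using the commonly published reference specs for both cards; the clock and shader-count figures below come from the usual spec sheets, not from this thread, so treat them as approximate:

```python
# Rough theoretical peak single-precision throughput from reference specs.
# GTX 480 (GF100): 480 CUDA cores, ~700 MHz core/raster clock, ~1401 MHz hot-clocked shaders.
# HD 5870 (Cypress): 1600 VLIW5 stream processors at 850 MHz (no separate shader clock).
def peak_gflops(shaders, shader_clock_mhz, flops_per_clock=2):
    """FMA counts as 2 FLOPs per shader per clock."""
    return shaders * shader_clock_mhz * flops_per_clock / 1000.0

gtx480_core_mhz, gtx480_shader_mhz = 700, 1401
hd5870_mhz = 850

print("GTX 480 shader/raster clock ratio:", round(gtx480_shader_mhz / gtx480_core_mhz, 2))
print("GTX 480 peak:", round(peak_gflops(480, gtx480_shader_mhz)), "GFLOPS")
print("HD 5870 peak:", round(peak_gflops(1600, hd5870_mhz)), "GFLOPS")
# The HD 5870 has the higher theoretical number, but real-world utilisation of its
# VLIW5 units is much lower, which is why per-shader comparisons favour NVIDIA.
```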
 

monkeydelmagico

Diamond Member
Nov 16, 2011
Have you ever wondered about the performance difference between these two cards in recent DX-11 games?


On average, the GTX480 is 35.62% faster than the HD5870 across these DX-11 games at 1920x1200.

It's clear that GF100 was designed for DX-11, and if both cards were launched today, the GTX480's higher power consumption would not be a huge issue given its large performance lead over the HD5870.

Unfortunately for NVIDIA, 2010 was not the year of DX-11 gaming. On the other hand, 2012 is the year of DX-11 games, and Kepler will be playing on more familiar ground than Fermi did in 2010.


The GTX 480 was released about six months after the 5870. It should have been better on more levels than just DX11. It really got chewed up in the price wars, with the 5870 usually being a lot cheaper.
 

blckgrffn

Diamond Member
May 1, 2003
www.teamjuchems.com
There was quite a while there when you could get two 5870s for $400, or basically one GTX 480.

It would also be interesting to know how a 2GB 5870 would have fared in those tests, just academically.

The upside? The GTX 470/480 owners should feel even better now than at launch :)

Interesting look back though. It makes me glad that I only game @ 1080p (I resist buying a large monitor as I don't want to pay for the video cards to run with the settings I "need" :p )
 

AtenRa

Lifer
Feb 2, 2009
The GTX 480 was released about six months after the 5870. It should have been better on more levels than just DX11. It really got chewed up in the price wars, with the 5870 usually being a lot cheaper.

Would you like to see the HD6970 vs GTX480? Not only did Cayman launch 8-9 months later, it's still losing to the GTX480 in most of those DX-11 games. :whistle:
 

badb0y

Diamond Member
Feb 22, 2010
The problem was that back when these cards were relevant and current, the GTX was on average around 15% faster than an HD 5870, but it also brought more heat and noise into your system. Trying to predict where the performance difference will land in the future is not a gamble many people like to take, and that's why the GTX 480 was shunned at launch.

Looking back now, the GTX 480 is obviously much more future-proof than the HD 5870.
 

BD231

Lifer
Feb 26, 2001
Well, it's pretty obvious, since NV used shader hot-clocking and super large dies with the GTX 200 series and Fermi. Their shader clocks are double the raster clocks, while AMD chose not to do that and went for a more efficient design.

Just so you know, Kepler will not have hot-clocking; they too are going for an efficient design this time. Their old mantra was "a die as large as feasibly possible, with hot-clocked shaders to make that feasible." The side effect was terrible efficiency. They are taking the AMD approach with Kepler; fun times, huh?

The more you know.
 

badb0y

Diamond Member
Feb 22, 2010
Well, it's pretty obvious, since NV used shader hot-clocking and super large dies with the GTX 200 series and Fermi. Their shader clocks are double the raster clocks, while AMD chose not to do that and went for a more efficient design.

Just so you know, Kepler will not have hot-clocking; they too are going for an efficient design this time. Their old mantra was "a die as large as feasibly possible, with hot-clocked shaders to make that feasible." The side effect was terrible efficiency. They are taking the AMD approach with Kepler; fun times, huh?
Wasn't the hot clocking thing a rumor?
 

Arkadrel

Diamond Member
Oct 19, 2010
Like others have said... in DX9 titles the gap is smaller, and back then that was more or less every game.

1) I guess AMD went for a GPU die optimised more for DX9 than DX11.
It was a choice someone made, where profit vs die space came into it.

2) Size
5870 = 334mm^2
480 = 529mm^2 (~59% bigger)

3) power use (maximums):
5870 = 212watts
480 = 320watts (51% more)

It makes sense that the 480 is a bit faster, since it's ~59% bigger and uses ~51% more power.
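A quick sketch of the arithmetic behind those two percentages, using the die-size and peak-power figures quoted above:

```python
# Die size and peak power figures as quoted in the post above.
hd5870_mm2, gtx480_mm2 = 334, 529
hd5870_w, gtx480_w = 212, 320

size_ratio = gtx480_mm2 / hd5870_mm2    # ~1.58, i.e. roughly 58-59% larger die
power_ratio = gtx480_w / hd5870_w       # ~1.51, i.e. roughly 51% more peak power

print(f"GTX480 die is {(size_ratio - 1) * 100:.0f}% larger than the HD5870's")
print(f"GTX480 peak power is {(power_ratio - 1) * 100:.0f}% higher than the HD5870's")
```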

My theory is that AMD could make faster GPUs than Nvidia, but they choose not to.
 

Arkadrel

Diamond Member
Oct 19, 2010
Where did you see 320W in gaming for the GTX480 ???


@AtenRa, you're putting words in my mouth; I never said anything about gaming
(check my post above, look for "gaming" in it, it isn't there). I just mentioned power usage.
The 480 *can* reach 320+ watts of power use.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_480_Fermi/30.html

(chart: power_maximum.gif)



AMD did release a DX11 GPU on 40nm with a bigger die and 60 watts more peak power usage, and it about tied the GTX 480 that was released 8 months earlier :)
It was the 6970.
The 480 is ~36% bigger, and a tiny bit slower. That means AMD has a ~36% advantage in performance per die area.
*IF* AMD chose to make a 529mm^2 chip, it would be faster than the 480 by 36%+.

*IF* AMD had done a 40nm 529mm^2 chip, it would be something like ~18% faster than the 580 was (assuming the same performance per die area is kept).


Today the 7970 is something like 20% faster than the 580, and it's only a 352mm^2 chip (from wiki).
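Here's a minimal sketch of the linear-scaling assumption behind those *IF* numbers. The Cayman die size (~389mm^2) and the GTX580's ~15% lead over the GTX480 are assumed figures not given in the post, and performance obviously doesn't really scale linearly with die area, so this is only a back-of-the-envelope check:

```python
# Back-of-the-envelope scaling: assume performance scales linearly with die area
# on the same 40nm process (a very rough assumption).
gtx480_mm2 = 529
cayman_mm2 = 389                 # assumed HD6970 (Cayman) die size; 529/389 ~ 1.36
scale = gtx480_mm2 / cayman_mm2  # Cayman roughly ties the 480, so this is its perf/area edge

# A Cayman-style chip grown to 529mm^2 would then be ~36% faster than the GTX480...
print(f"Hypothetical 529mm^2 Cayman vs GTX480: +{(scale - 1) * 100:.0f}%")

# ...and, taking the GTX580 as roughly 15% faster than the GTX480 (assumed),
# that works out to roughly the ~18% lead over the 580 mentioned above.
gtx580_vs_480 = 1.15
print(f"Hypothetical 529mm^2 Cayman vs GTX580: +{(scale / gtx580_vs_480 - 1) * 100:.0f}%")
```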
 

bryanW1995

Lifer
May 22, 2007
nVidia is quite literally the Intel of GPUs from a technological standpoint as far as I can tell; they just don't execute as well. You can't blame them, though, being stuck with the problematic fab tech we have these days. The work each individual shader can get done is pretty much double that of AMD's, which is staggering (although I don't know about the 79xx series). That's how AMD rolls though, MORE COARS. I have to wonder if that will bite them in the ass down the line, but I'm not familiar with GPU tech on that level.

I think the higher minimum frame rate argument is pretty telling, but AMD's execution has been pretty stellar as of late.

They are fundamentally different. Intel fabs their own parts, and they're the best fab in the world, so they have a ridiculous competitive advantage. Nvidia uses the same fab as their competitors, and their competitors have had a ~ 6 month process advantage for as long as I can remember.
 

bryanW1995

Lifer
May 22, 2007
...except that it didn't actually work.

http://www.legitreviews.com/article/1264/1/

They were just very inefficient with their egg-frying; they could have easily gotten the tin foil onto the rest of the heatsink! If I keep my 480 long enough for it to drop down to ~$50 or so, I'll try this one out for posterity's sake. However, for now I'll take my superior gaming performance on a budget and thank my lucky stars that my card wasn't from that first run.
 

BD231

Lifer
Feb 26, 2001
They are fundamentally different. Intel fabs their own parts, and they're the best fab in the world, so they have a ridiculous competitive advantage. Nvidia uses the same fab as their competitors, and their competitors have had a ~ 6 month process advantage for as long as I can remember.

When I say the Intel of GPUs, I'm referring to the IPC advantage Intel has vs AMD on the CPU side. It kinda mimics nVidia's approach on the GPU side, in that they have the stronger per-core performance, that's all. I wasn't referring to Intel's success or total dominance, just strategy.

Wasn't that 6-month delay entirely because of Fermi? They fixed it in the end and got a generally faster product out of it for a few generations, no? I don't see the doom in a 6-month advantage when all AMD is doing with it is charging people heavily for it, then getting beaten by the competitor in single-GPU performance once that six months is up :p.

I think it was definitely a lead when they came out with the first DX11 GPU and all, but there's no way you can keep up that kind of intensity every product cycle.
 

moriz

Member
Mar 11, 2009
When I say the Intel of GPUs, I'm referring to the IPC advantage Intel has vs AMD on the CPU side. It kinda mimics nVidia's approach on the GPU side, in that they have the stronger per-core performance, that's all. I wasn't referring to Intel's success or total dominance, just strategy.

Wasn't that 6-month delay entirely because of Fermi? They fixed it in the end and got a generally faster product out of it for a few generations, no? I don't see the doom in a 6-month advantage when all AMD is doing with it is charging people heavily for it, then getting beaten by the competitor in single-GPU performance once that six months is up :p.

I think it was definitely a lead when they came out with the first DX11 GPU and all, but there's no way you can keep up that kind of intensity every product cycle.

Not quite. AMD's VLIW approach is better for doing raster graphics, while Nvidia's approach is better for general compute. For strictly graphics work, AMD's design ends up having a higher "IPC".

And six months makes a pretty large difference. Once again, AMD snagged all the early adopters, who are willing to spend big bucks on the latest and greatest; that's money Nvidia won't get.
 