GTX480 only 5% faster than a 5870?


bryanW1995

Lifer
May 22, 2007
11,144
32
91
Charlie has responded to the 448/512 thing.

http://www.semiaccurate.com/forums/showpost.php?p=28431&postcount=234

It's as I thought; the 480 is a 512 part but the top bin is still 448. Hence, no 480s at all in the initial allocation to AIBs.

Putting it all together, the only Fermi you'll be able to buy performs less than the 5870.

The way I read his earlier article, the top bin was only 448 SPs because it could be clocked higher, hence the term "top bin". The GTX 480 with 512 SPs (or CUDA cores, or whatever they're called now) isn't the "top bin" because it won't clock as high as the 448 SP unit.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
It seems that a number of you didn't bother to read the relevant parts of the HWC preview prior to posting about it. So, I'll break it down for you...

  • None of the benchmarks represented in the graphs were run by HWC; they were run by NVIDIA while HWC staff were allowed to watch.
  • The "simulated" benchmarks are of the 5870, not Fermi. Why NVIDIA has to run simulated benchmarks on a card they could have picked up at newegg for ~$400 is beyond me...

My take on this is that while you could pick on HWC for reporting direct NV marketing, you could also say that if NV is going to give HWC quasi hands-on access to Fermi, then as a hardware site HWC really should tell us about their experience. Of course, as with any other pre-release, you need to take these benchmarks with a grain of salt.
 

Apocalypse23

Golden Member
Jul 14, 2003
1,467
1
0
Here's another January 20th link from Legit Reviews:


Legitreview's Fermi preview Final Thoughts

"When it comes to performance, the GF100 is faster than a Radeon HD 5870 graphics card, which is a good thing. NVIDIA is late to the market so they really needed to take back the performance crown. If they were late and slower then it would have been a rough couple of years for engineers at NVIDIA. Still, NVIDIA has to get the GF100 cards shipping and in volume. TSMC has had a lot of issues with yields at the 40nm manufacturing process, but it appears to be better over the past six months. The next couple of months will be crucial for NVIDIA as the world is waiting for GF100 to be announced. Now, if only we knew what the real name of GF100 was and how much it will cost!

Legit Bottom Line: We got hands on game time with GF100 and it looks faster than the Radeon HD 5870! Get ready for some GPU battles in 2010 as AMD finally has something to worry about!"
 

blanketyblank

Golden Member
Jan 23, 2007
1,149
0
0
"Legit Bottom Line: We got hands on game time with GF100 and it looks faster than the Radeon HD 5870! Get ready for some GPU battles in 2010 as AMD finally has something to worry about!"

Even Charlie admits the GTX 480 is faster. The question is how much faster it is and how many they can actually ship. Far Cry 2 can be an uncharacteristic game, just like Last Remnant, which gives the GTX 260 way higher scores than a 5770 even though the two cards are neck and neck and trade blows in everything else.
http://www.tomshardware.com/charts/...compare,1687.html?prod[3201]=on&prod[3251]=on

Is the GTX 260 60% faster than a 5770? Hell no. In Last Remnant, however, it is.
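
If you want a quick sanity check for that kind of outlier, something like this works (the fps numbers below are made up purely for illustration, not real benchmark data):

```python
# Flag "uncharacteristic" games where one card's lead deviates wildly
# from its typical lead across the rest of the suite.
from statistics import geometric_mean

fps = {  # game: (GTX 260 fps, HD 5770 fps) -- hypothetical numbers
    "Game A": (52, 55),
    "Game B": (61, 58),
    "Game C": (47, 49),
    "Last Remnant": (96, 60),
}

ratios = {game: a / b for game, (a, b) in fps.items()}
typical = geometric_mean(ratios.values())  # robust "suite average" lead

for game, r in ratios.items():
    flag = "  <-- uncharacteristic" if abs(r / typical - 1) > 0.25 else ""
    print(f"{game}: {r:.2f}x (suite typical: {typical:.2f}x){flag}")
```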
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Didn't the original Phenom outperform C2D by up to 40% in "simulated benchmarks"?

No. "Barcelona", the server Opteron, beat the "Clovertown" Xeon in the memory-bandwidth-intensive SPECfp benchmark. Technically AMD wasn't wrong, but in practice, since that wasn't a common case, the advantage didn't show.

After all this delay, I can't see how Fermi won't have something cut down or turn out much worse than originally expected. They are delaying it because they can't fix whatever problems they are having.

-NV30 aka Geforce 5800
-ATI 2900XT
-Intel Prescott
-AMD Barcelona

Anyone?
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
He does seem to have contradicted himself by previously saying the top bin is 448 shaders yet the top card is 512.

Although, maybe the top bin is 448 and they're going to launch a 512 anyway... with consequently (almost) zero cards available to achieve it. They could launch a card with 2000MHz shaders, but if the top bin is only 1250MHz then you'd never see a retail card.
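
To put that bin logic in toy-model form (every yield and clock number here is invented, purely to show how a 512 SP SKU can exist on paper while effectively zero cards reach retail):

```python
# Toy model: a "top bin" is whatever spec enough dies actually meet.
# If almost no dies have all 512 SPs working AND hit the target clock,
# the shippable top bin is the 448 SP part.
import random

random.seed(42)

def fab_die():
    # assumed yield mix: most dies need SP clusters fused off
    working_sps = random.choices([512, 480, 448], weights=[1, 3, 6])[0]
    max_clock = random.gauss(1300, 80)  # MHz, assumed shader-clock spread
    return working_sps, max_clock

dies = [fab_die() for _ in range(10_000)]

sellable_512 = sum(sps == 512 and clk >= 1400 for sps, clk in dies)
sellable_448 = sum(sps >= 448 and clk >= 1250 for sps, clk in dies)

print(f"dies meeting a 512 SP / 1400MHz spec: {sellable_512} of 10000")
print(f"dies meeting a 448 SP / 1250MHz spec: {sellable_448} of 10000")
```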

It gets hard keeping track of lies... While I am guarded over benchmarks, I hardly think an established site like Hardware Canucks would be so lax with accurate benchies... I'd certainly believe them over SA and Char-lie!
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
While it isn't totally accurate to compare the results of Legit Reviews' Fermi preview against Anandtech's reviews, the numbers alone don't tell the whole story. AFAIK, the Fermi tests were done with Far Cry 2's built-in benchmark, which usually scores higher than real gameplay.

http://www.anandtech.com/video/showdoc.aspx?i=3746&p=3

Here you can see that the GTX 285 scored 45fps during gameplay, which is less than the 50s it averages in Legit Reviews' Far Cry 2 built-in benchmark at the same resolution and anti-aliasing/image settings. Plus, we don't know the setup used in the Legit review, but Anandtech's setup is very high-end so they can eliminate any CPU bottleneck from the results; I doubt nVidia or Legit Reviews would test Fermi with a dual-core CPU.

Seeing that the HD 5870 averages 61.8fps during real gameplay in Anandtech's review, we can normalize Fermi's Far Cry 2 built-in benchmark result by taking away, say, 10fps: it scored 84fps in Legit Reviews' run, so knocking 10fps off for estimated gameplay would put Fermi at an average around 74fps, which isn't bad, but not utterly fast. So it definitely can't touch the HD 5970.

10fps is a reasonable number from my point of view, because the difference between Far Cry 2 gameplay and the built-in benchmark on the same card is usually between 5fps and 10fps, sometimes more.
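
In plain numbers, the estimate works out like this (the 5-10fps offset is just my assumed built-in-benchmark inflation, nothing measured):

```python
# Back-of-envelope: subtract an assumed "built-in benchmark inflation"
# from Fermi's 84fps and compare against the HD 5870's 61.8fps
# real-gameplay average from Anandtech's review.
fermi_builtin = 84.0    # fps, Legit Reviews Far Cry 2 built-in benchmark
hd5870_gameplay = 61.8  # fps, Anandtech gameplay average

for offset in (5.0, 10.0):  # assumed benchmark-vs-gameplay gap in fps
    fermi_est = fermi_builtin - offset
    lead = (fermi_est / hd5870_gameplay - 1) * 100
    print(f"-{offset:.0f}fps -> Fermi ~{fermi_est:.0f}fps, ~{lead:.0f}% over the 5870")
```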
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
The way I read his earlier article, the top bin was only 448 SPs because it could be clocked higher, hence the term "top bin". The GTX 480 with 512 SPs (or CUDA cores, or whatever they're called now) isn't the "top bin" because it won't clock as high as the 448 SP unit.

I'm pretty sure he said that they needed to fuse off two units due to process errors during fabrication (yield), not because of clocks. Of course, if they are able to get fully functioning units, they may need to downclock them to keep power consumption in check.
 

AzN

Banned
Nov 26, 2001
4,112
2
0
NV and their ever-lowering texture-to-SP ratio. SPs are just power hogs and don't benefit games as much as fillrate does, at least today. The 8800 GT was going in the right direction, but they've really screwed themselves with CUDA. Nvidia just can't compete with ATI's SPs right now, at least until they shrink their clusters like ATI did.

Those SPs make for power hogs and lower clocks = Nvidia failing to wow the PC gamers.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
nVidia's approach is to make fat SPs that are very independent of each other, able to run scalar calculations no matter what else is going on; they're very encapsulated and have excellent thread management. ATi's approach is more software-based: lots of simple shaders that need a clever compiler and a command queue processor to feed such an ultra-wide architecture.

Considering that, ATi is doing a great job with their drivers given how much work they have to do; nVidia's approach is less dependent on optimizations and will have predictable performance. Both approaches are good, but ATi's maximizes usage per mm², being more efficient overall. That's why the much smaller HD 4870 performed about the same as the twice-as-big GTX 260 Core 216 while consuming only slightly more power, meaning ATi's smaller die is doing more work per mm².

Today's games aren't about texture fillrate; it's shading performance that matters, and the only way to improve that is adding more SPs and more optimizations. Even the HD 4870 with its reworked TMUs hit a filtering bottleneck in some scenarios, which is why filtering work got moved toward the SPs with the HD 5800 series.
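
Here's a toy sketch of the compiler-packing problem I mean (the dependency pattern is invented, and real shaders and the VLIW5 hardware are far more involved):

```python
# Scalar SPs issue one op per cycle regardless of dependencies; a
# VLIW5-style unit only reaches peak if the compiler can pack five
# independent ops into each bundle.

def vliw_cycles(ops, width=5):
    """Greedy packing: a dependent op (False) forces a new bundle."""
    cycles, slot = 0, width  # slot=width forces a bundle on the first op
    for independent in ops:
        if not independent or slot == width:
            cycles += 1
            slot = 0
        slot += 1
    return cycles

# True = independent of the previous op, False = dependent on it.
# A shader-like mix with frequent dependency breaks (invented):
workload = [True, True, False, True, False, True, True, True, False, True] * 100

bundles = vliw_cycles(workload)
utilization = len(workload) / (bundles * 5)
print(f"{len(workload)} ops in {bundles} VLIW5 bundles -> "
      f"{utilization:.0%} of peak slots used")
```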
 