GTX 470 pics *EDIT* possible benchmarks


Qbah

Diamond Member
Oct 18, 2005
3,754
10
81
If the GTX470 is slower than a HD5870, it's last gen all over again... Seriously, it's practically identical. But this time nVidia is very, very late. And this time I'm talking performance, not prices...

HD4850 > 9800GTX (HD5850 > GTX285)
HD4870 > GTX260 (HD5870 > GTX470)

AMD has a HD5970 that will most likely be faster than a GTX480 (and last time HD4870x2 > GTX285). And nVidia will probably release a dual-GPU card that will be a tad faster than a HD5970 (so GTX295 > HD4870x2).

Now, if the pricing follows the performance history of last gen, I'm all for it :p
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
If these benches are true, the GTX 400 series is a major disappointment, given the size of the chip and NV's strategy overall (not even mentioning them being late).

From a consumer's perspective, it was more or less expected, based on their strategy, that ATI would produce a mid-size GPU and offer high-end performance through a dual-GPU card. NV's strategy was to counter ATI by offering a single large monolithic core with superior performance.

Right now, my greatest concern for future NV graphics cards, based on these benchmarks, is efficiency. Since a larger core is significantly more costly to manufacture, a GTX470 with a nearly 500mm^2 die and a 320-bit memory bus (which is also more costly) has to be faster than ATI's 334 mm^2 5870. If it isn't, there is a definite problem with NV's current strategy/architecture (even if it matches the 5870's performance, that still makes the card very inefficient). Of course, in the context of performance, one cannot simply look at memory bandwidth. But in the context of manufacturing costs, having a 334 mm^2 card with a 256-bit memory bus easily match yours is a slap in the face.
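For a sense of scale, here is a rough back-of-the-envelope sketch of what that die-size gap could mean per wafer, using the standard dies-per-wafer approximation (illustrative numbers only, nothing official from NV or TSMC):

Code:
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Usable wafer area over die area, minus an edge-loss correction.
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for name, area in [("~500 mm^2 (GTX470-class)", 500),
                   ("334 mm^2 (5870)", 334)]:
    print(name, "->", dies_per_wafer(area), "candidate dies per 300mm wafer")

# Roughly 111 vs. 175 candidate dies: at a fixed cost per wafer, the bigger
# chip costs ~1.6x more per die before yield, and yield also drops as die
# area grows.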
 
Last edited:

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
If the GTX470 is slower than a HD5870, it's last gen all over again... Seriously, it's practically identical. But this time nVidia is very, very late. And this time I'm talking performance, not prices...

HD4850 > 9800GTX (HD5850 > GTX285)
HD4870 > GTX260 (HD5870 > GTX470)

AMD has a HD5970 that will most likely be faster than a GTX480 (and last time HD4870x2 > GTX285). And nVidia will probably release a dual-GPU card that will be a tad faster than a HD5970 (so GTX295 > HD4870x2).

Now, if the pricing follows the performance history of last gen, I'm all for it :p

I totally agree with you. It seems like GT200 vs. RV770 all over again, except this time it's nVidia, not ATi, that will hold the crown for power consumption, and if the GT200 was more expensive than ATi's counterpart, this time they will break the record. I also doubt that nVidia will do a dual Fermi for now; I think that will only be feasible on a smaller manufacturing process, or if they do a respin of the Fermi die with better yields and lower power consumption on the current 40nm process.

I just hope the HD 5800 series prices come down as low as the HD 4800 series did. I wouldn't mind buying an HD 5870 for $189.00 six months from now.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
If these benches are true, the GTX 400 series is a major disappointment, given the size of the chip and NV's strategy overall (not even mentioning them being late).

From a consumer's perspective, it was more or less expected, based on their strategy, that ATI would produce a mid-size GPU and offer high-end performance through a dual-GPU card. NV's strategy was to counter ATI by offering a single large monolithic core with superior performance.

Since a larger core is significantly more costly to manufacture, if a GTX470 with a nearly 500mm^2 die is unable to match a 334 mm^2 GPU from ATI, there is a definite problem with NV's current strategy. This also calls into question the efficiency (or lack thereof) of the GTX 400 series architecture. I hope these benches are way off.

Well, I'm going to bring up Charlie here, and probably get flamed for it, but meh.

According to one of his stories, Fermi wasn't meant to be a graphics card; there was supposed to be a separate GeForce chip that fell through for whatever reason, with Fermi aimed at HPC. Now, I'm sure there is no way to verify if this is just his imagination or 100% accurate, but given the likely cost of a Fermi chip and its graphics performance (assuming these numbers are true), you have to wonder. Maybe this chip is 'plan b' for graphics? Who knows...

But, based on these couple of tests (again, if the numbers are true) I would think that Fermi is a big disappointment... very late, and very similar performance. On top of it, we'll have to wait and see about availability, pricing, and power use. But if I were to take a guess at this point, based on what little info we have and rumors, I'd say that Fermi will be Nvidia's 2900XT.

I guess we'll have to wait a few more weeks before we know for certain though.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Slightly off topic, but I wonder what dual HD5770 would do vs HD58xx and GTX470/80 cards in terms of DX11.
Given that ATI has gone with a more fixed-function design for some DX11 parts (so it seems), Crossfire HD5770 might scale better when it comes to heavy use of tessellation, while the GTX470/80 might inherently offer more power through its scalable architecture. So for tessellation, 2xHD5770 might end up better than a single higher-end ATI card, given the nature of the design.

Maybe someone will make a dual 5770 GPU card.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Hot from CeBIT in Hannover, Germany:

http://translate.google.com/transla...rce-GTX-470-enthuellt-946411.html&sl=de&tl=en

heise.de is the news portal of a major German tech magazine publisher.
Great source, thanks! This seems to be in line with other numbers that have been showing up. If these results are accurate, the GTX470 better not be more than about $350 MSRP if NVIDIA wants it to sell. $300 would be a more competitive price and would put some pressure on AMD, but I'd imagine NVIDIA doesn't have the stock to play that game. These are very interesting developments nevertheless.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
Slightly off topic, but I wonder what dual HD5770 would do vs HD58xx and GTX470/80 cards in terms of DX11.
Given that ATI has gone with a more fixed-function design for some DX11 parts (so it seems), Crossfire HD5770 might scale better when it comes to heavy use of tessellation, while the GTX470/80 might inherently offer more power through its scalable architecture. So for tessellation, 2xHD5770 might end up better than a single higher-end ATI card, given the nature of the design.

Maybe someone will make a dual 5770 GPU card.

While that is somewhat true, tessellation adds triangles, loads of triangles, to a surface. But then you still have to apply all the other shit. I think a 5xxx-series tessellator unit can do 1 triangle per clock, if I'm not mistaken, so a 5xxx tessellator still tops out at around 8xx million triangles per second.

I don't think the bottleneck is the tessellator but the shaders. More triangles means more shit for the shaders to do.
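Just to put rough numbers on that, assuming the 1-triangle-per-clock figure is right (my assumption, so scale accordingly if it's off):

Code:
# Tessellator throughput if it emits 1 triangle per core clock.
core_clock_mhz = {"HD 5770": 850, "HD 5870": 850}

for card, mhz in core_clock_mhz.items():
    # At 1 triangle/clock, triangle rate equals clock rate.
    print(f"{card}: ~{mhz} million triangles/s from the tessellator")

# Two 5770s in CrossFire would nominally double that to ~1700M tris/s,
# but every extra triangle still has to be shaded, which is the
# bottleneck being argued here.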
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
Um, folks... are we "sure" that's a good score? The screenshot shows the GTX470 getting 13264 3DMarks in 3DMark 06 @ default settings on a stock E8600.

http://service.futuremark.com/compare?3dm06=12604682 - E6750 @ stock speed with a Radeon 4890 pulling 12685 in 3DMark 06. Can't say I'm impressed.

3DMark benchmarks are irrelevant and need to be treated as such by the community, which is why most people here are comparing it to the actual game benchmarks posted, where it is either equal to or slightly slower than a 5870.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
3DMark benchmarks are irrelevant and need to be treated as such by the community, which is why most people here are comparing it to the actual game benchmarks posted, where it is either equal to or slightly slower than a 5870.

In general, it seems to be a good deal slower according to those numbers.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,107
9,359
136
Why do people put so much emphasis on minimum frames? It's only a single data point along the entire continuum of a test. A card could average 40 FPS and really never drop below 35, but if there was some load lag as the benchmark fired up, it would return a min FPS value of 5 and make people think there are long stretches of gameplay chugging along at 5 FPS when it could simply be a blip. I mean, no one ever treats max FPS with any importance. Although a game averages 40 FPS, what if there are huge stretches of gameplay at 60?

Also, given what we know about the 480 and its 512 shaders, couldn't we extrapolate its performance figures assuming linear scaling? 14% more shaders still puts it roughly at 5870 performance levels, with a little more gravy thanks to higher clocks and some more bandwidth.
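A quick sketch of that extrapolation (the 448/512 shader counts are from the rumor mill; the clock bump is purely my guess):

Code:
# Naive linear scaling from the leaked GTX 470 numbers to a GTX 480.
gtx470_fps = 25.53        # GTX 470 avg, Crysis Warhead 4xMSAA (leaked)
shader_scale = 512 / 448  # ~1.14x more shaders
clock_scale = 1.05        # hypothetical ~5% higher clocks; a pure guess

estimate = gtx470_fps * shader_scale * clock_scale
print(f"GTX 480 estimate: ~{estimate:.1f} fps (the 5870 averaged 30.22)")

# Real scaling is sub-linear (bandwidth and triangle-setup limits), so
# treat this as a ceiling rather than a prediction.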
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Why do people put so much emphasis on minimum frames? It's only a single data point along the entire continuum of a test. A card could average 40 FPS and really never drop below 35, but if there was some load lag as the benchmark fired up, it would return a min FPS value of 5 and make people think there are long stretches of gameplay chugging along at 5 FPS when it could simply be a blip.

You've got a point, but that is why graphs showing frame rates over the course of a test (like what HardOCP does) are a great indication of overall performance.

If card A averages 50 fps but has minimums regularly in the 20's, and card B averages 45 fps but has minimums no lower than 30 fps, I'd rather have card B.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Well, I'm going to bring up Charlie here, and probably get flamed for it, but meh.

According to one of his stories, Fermi wasn't meant to be a graphics card; there was supposed to be a separate GeForce chip that fell through for whatever reason, with Fermi aimed at HPC. Now, I'm sure there is no way to verify if this is just his imagination or 100% accurate, but given the likely cost of a Fermi chip and its graphics performance (assuming these numbers are true), you have to wonder. Maybe this chip is 'plan b' for graphics? Who knows...

But, based on these couple of tests (again, if the numbers are true) I would think that Fermi is a big disappointment... very late, and very similar performance. On top of it, we'll have to wait and see about availability, pricing, and power use. But if I were to take a guess at this point, based on what little info we have and rumors, I'd say that Fermi will be Nvidia's 2900XT.

I guess we'll have to wait a few more weeks before we know for certain though.

I am starting to think that nvidia's one-GPU-for-all-markets strategy isn't going to work for them long term. I think they will need to whittle down or completely remove many of the compute-only functions in future consumer versions to save die space and get higher yields & clocks.
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
Minimum frame rates are the ONLY thing that matters. If one card is "blipping" at 5 fps then it's "blipping" at 5 fps... if the other card is not doing that, the game experience is better. That's why valid benchmarks are those which are run multiple times, on machines without other cruft firing up randomly.

Minimum frame rates happen when performance matters most, which is why I don't really care about averages (who cares if my 60Hz LCD is being fed 300 fps when there's nothing going on?) or maxes, only the minimums.

If these performance figures are anywhere near accurate, I can see why NV might want to EOL the first revision of this part before it is even released.
 

konakona

Diamond Member
May 6, 2004
6,285
1
0
does this mean we're gonna see more viral pushing of PhysX and TWIMTBP crap (since NV does not seem to have a commanding lead in performance despite all the delays, the uncertainty of adequate supply, and the higher projected price)? oh wait, there is tessellation...

I just hope this doesn't delay the release of ATi's cypress refresh, that's what I am more interested in now.
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
I'm looking forward to Fermi II and the Cypress refresh. Both should be in full swing in time for Black Friday 2010. Looks like we're in for a few months of sleepy, boring, and expensive GPU time until then.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I'm looking forward to Fermi II and the Cypress refresh. Both should be in full swing in time for Black Friday 2010. Looks like we're in for a few months of sleepy, boring, and expensive GPU time until then.

I'm with you on this. I think this fall will have a much more exciting and competitive GPU field. If it does not, then I may start believing all the nvidia-is-abandoning-pc-gaming naysayers.
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
Minimum frame rates are the ONLY thing that matters. If one card is "blipping" at 5 fps then it's "blipping" at 5 fps... if the other card is not doing that, the game experience is better. That's why valid benchmarks are those which are run multiple times, on machines without other cruft firing up randomly.

Minimum frame rates happen when performance matters most, which is why I don't really care about averages (who cares if my 60Hz LCD is being fed 300 fps when there's nothing going on?) or maxes, only the minimums.

If these performance figures are anywhere near accurate, I can see why NV might want to EOL the first revision of this part before it is even released.

That is not necessarily true. Minimum FPS happens wherever the run happened to hit its minimum. Unless the review shows you the FPS vs. time plot, there is no more reason to believe the minimum FPS happened during the demanding bits, or lasted more than one instant, than there is to believe it was the very first sample taken before the game even fully loaded.

Minimum FPS only means anything when accompanied by a plot, and highlighted on said plot. Otherwise I'd rather see the standard deviation instead. In the way most reviews are presented, the only value that is significant on its own is average FPS. Max/min need at the least their time location in the benchmark to mean anything, and even that is not sufficient, as a dip could still be due to many things. For instance, it is entirely possible that a driver bug causes a particular scene in a given game to dive to 5fps for one second. While that would be annoying and worth noting in a review, it is not telling of the overall general performance of a card.
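To illustrate why the bare minimum misleads on its own and what a spread statistic catches (toy numbers, obviously):

Code:
import statistics

# Card A is rock-steady except one loading blip; card B swings constantly.
card_a = [60] * 59 + [5]   # one-sample blip
card_b = [40, 80] * 30     # continuous fluctuation

for name, fps in [("A (steady + blip)", card_a), ("B (fluctuating)", card_b)]:
    print(f"card {name}: avg {statistics.mean(fps):.1f}, "
          f"min {min(fps)}, stdev {statistics.pstdev(fps):.1f}")

# The min says A (5 fps) is far worse than B (40 fps); the standard
# deviation (~7 vs. 20) says A is actually the steadier card.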
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Why do people put so much emphasis on minimum frames? It's only a single data point along the entire continuum of a test. A card could average 40 FPS and really never drop below 35, but if there was some load lag as the benchmark fired up, it would return a min FPS value of 5 and make people think there are long stretches of gameplay chugging along at 5 FPS when it could simply be a blip. I mean, no one ever treats max FPS with any importance. Although a game averages 40 FPS, what if there are huge stretches of gameplay at 60?

Also, given what we know about the 480 and its 512 shaders, couldn't we extrapolate its performance figures assuming linear scaling? 14% more shaders still puts it roughly at 5870 performance levels, with a little more gravy thanks to higher clocks and some more bandwidth.

If you consider FPS as just "data", then you're right, it is just one data point. The fact is that we're talking about performance though... If I see one card dip into the single digits and the other not, I'm definitely going to be more inclined to go with the one that offers higher mins. To me, minimums carry more weight than max or average. Granted, if the average is significantly lower, that will raise an eyebrow too.

I prefer it when reviewers offer us the FRAPS graph of the entire run (like HardOCP). This lets me actually see if the minimum is a freak occurrence or a regular event. Also, it doesn't really matter what causes the 'blip'. If one card regularly has a sudden drop in FPS and the other does not (even if its avg and max are higher), I'm probably going to lean towards the one without the blips.
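One way to make that freak-occurrence-versus-regular-event call from a raw FRAPS log instead of eyeballing the graph (just a sketch, assuming a plain list of per-second FPS samples):

Code:
def dip_report(fps_samples, threshold=30):
    # Count dips below the threshold and track the longest one.
    dips, run = [], 0
    for fps in fps_samples:
        if fps < threshold:
            run += 1
        elif run:
            dips.append(run)
            run = 0
    if run:
        dips.append(run)
    return len(dips), max(dips, default=0)

# A single one-sample dip is probably a blip; frequent or long dips point
# to a real performance problem at that threshold.
trace = [60] * 20 + [12] + [60] * 20 + [25, 22, 24] + [60] * 10
count, longest = dip_report(trace)
print(f"{count} dips below 30 fps, longest lasted {longest} samples")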
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
If you consider FPS as just "data", then you're right, it is just one data point. The fact is that we're talking about performance though... If I see one card dip into the single digits and the other not, I'm definitely going to be more inclined to go with the one that offers higher mins. To me, minimums carry more weight than max or average. Granted, if the average is significantly lower, that will raise an eyebrow too.

I prefer it when reviewers offer us the FRAPS graph of the entire run (like HardOCP). This lets me actually see if the minimum is a freak occurrence or a regular event. Also, it doesn't really matter what causes the 'blip'. If one card regularly has a sudden drop in FPS and the other does not (even if its avg and max are higher), I'm probably going to lean towards the one without the blips.

So do you have any particular example of a GPU that has good averages but then is horrible in minimums?
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
If you consider FPS as just "data", then you're right, it is just one data point. The fact is that we're talking about performance though... If I see one card dip into the single digits and the other not, I'm definitely going to be more inclined to go with the one that offers higher mins. To me, minimums carry more weight than max or average. Granted, if the average is significantly lower, that will raise an eyebrow too.

I prefer it when reviewers offer us the FRAPS graph of the entire run (like HardOCP). This lets me actually see if the minimum is a freak occurrence or a regular event. Also, it doesn't really matter what causes the 'blip'. If one card regularly has a sudden drop in FPS and the other does not (even if its avg and max are higher), I'm probably going to lean towards the one without the blips.

Well, what causes the blip is somewhat important. I want to know whether the minimum is due to some random occurrence, or if it is due to a bottleneck in some aspect of a card that is brought out in a given part of a game.

I'd much rather own a card that gives 60fps all the time, yet dips to 5fps for a single sample period, than a card that continually fluctuates between 80 and 40 fps. More information is never a bad thing.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
So do you have any particular example of a GPU that has good averages but then is horrible in minimums?

If accurate, the benches posted in this thread...

Crysis Warhead DX10 / 19x12 / Enthusiast / 4xMSAA (Max / Avg / Min FPS):
5870: 41.41 / 30.22 / 19.60
GTX470: 30.60 / 25.53 / 18.62

Same, but 8xMSAA:
5870: 37.87 / 23.44 / 4.96
GTX470: 29.21 / 23.03 / 15.63

Granted, it is debatable if these are accurate and we don't have a FRAPS graph, but given the limited data we have, the GTX 470 looks like the better card for Crysis, specifically with 8xMSAA. A dip into the teens is still pretty bad, but single digits is definitely a slide show.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
If accurate, the benches posted in this thread...



Granted, it is debatable if these are accurate and we don't have a FRAPS graph, but given the limited data we have, the GTX 470 looks like the better card for Crysis, specifically with 8xMSAA. A dip into the teens is still pretty bad, but single digits is definitely a slide show.

So based on a single bench you are declaring that the GTX 470 is a much stronger performer than a 5870...

What is the reason for the dip? Was it a single frame, or did it last for a period of time?