Nvidia accuses ATI/AMD of cheating - Benchmarks


HurleyBird

Platinum Member
Apr 22, 2003
2,812
1,550
136
What does this have to do with the subject? Me defending Nvidia? Why not? I think they have superior products, so why wouldn't I?
Being late to the game doesn't make the product inferior, just makes it late.

Absolutely false. Longevity is one of the most important measures of a video card, something that 9700pro and 8800GTX buyers will certainly attest to.

GTX 480 coming in six months late just takes that much time away from its longevity, and it DOES make it a worse product.

For a more extreme example, imagine GTX480 was further delayed, to the week before SI came out (and for the purpose of this example, let's say SI beats it and costs less). No one in their right mind would call this GTX 480 a good product, but if it had launched at the same time as the 5870, they probably would. The example is extreme to make the relationship obvious; our six-months-late 480 is a *much* better product than the hypothetical 480 coming out a week before SI hits, but by the same measure it is not as good a product as a hypothetical 480 that launched on time with or near the 5870.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
GTX 480 coming in six months late just takes that much time away from its longevity, and it DOES make it a worse product.

I think you're missing the bigger picture of what Keys is saying here.
What you're saying is "The sooner a card is released, the better"...
But it doesn't work that way... because releasing a product sooner means that you have less development time and/or need to work with older technology. Which is why, generally, the latest products are also the greatest.
Otherwise we'd already have GTX480s and HD5870s in the stone age, right?

Keys' point is more along the lines of: "Longevity isn't related to WHEN a product is released, but HOW GOOD that product is at release time".
Had the HD2900 been released before the 8800GTX, then the 8800GTX would STILL be the card with the best longevity, because it was just a better card all around. Not because it was released first, but because it was the best of its generation.

The GTX480 just doesn't look to be a product with great longevity, because although technically it is the fastest single GPU card, it is not the type of earth-shattering product that the 9700Pro was, or the 8800GTX.
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
...because releasing a product sooner means that you have less development time and/or need to work with older technology.

How can it follow from A's releasing a product earlier than B that A necessarily had less development time? (Aside: the logic being, necessarily (less development time) and/or (work with older technology).) Consider: A begins developing the product earlier than B and releases it earlier than B, so B rushes a product to market in order to compete with A's offering. B might have had less development time, because their product development cycle started later and they ended up rushing a product to market. I'm not sure how you can draw a conclusion regarding development time here unless you can prove the development time. Also, can you prove that less development time is necessarily always worse than more development time for a given product? It's not clear to me how the value judgement can be conclusive either way here; I'm curious to see what you think about it in more detail.

Since you provided a disjunction in your statement as well (the 'or'), entailing that releasing a product sooner can also mean that A would be working with older technology, how does this follow? What do you mean by 'older technology' here? I'm not asking this in a pointed way, but are you making a compact comment on, say, 'using an older process to get the card out faster' (such as the nm differences, implying dated technology), or are you making the trivial claim that when A's product hits the market earlier than B's product, A's product is necessarily older technology (having been released earlier)? "Older technology" seems to imply that the technology is somehow worse off in comparison to the newer technology, but in this generation that doesn't seem to be such a cut-and-dried issue. I hope this has been clear; I tried to be concise regarding my confusion.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
(just my personal opinion, not posting as a moderator and not trying to set forum policy)

Eventually I hope Video splits into 4 categories: Nvidia, AMD, Other, Industry.

Go to a professional venue, a trade show or a technology conference, and you don't see the hosts creating artificial barriers of separation to keep ill-mannered individuals from crapping all over themselves and the venue.

Nvidia employees get along just fine with AMD employees in those professional settings.

This is a technical forum, those people who can conduct themselves with civility and decorum deserve a technical forum that respects the venue rather than a forum that coddles the disrespectful and shelters them from one another like barnyard animals that will fight unless a fence is erected.

You know what they do in a technical conference when a member can't behave themselves? They don't cater to the behavior by setting up a fence and saying "AMD people on that side, NV folks on this side"...

Rather, they take out the trash and keep the place cleaned up for ALL the rest of the folks that can behave and do want to openly interact with the larger community as a whole.

I find it interesting that we don't have near the magnitude of these issues in the other technical forums...CPU, PSU, mobo, memory, cases...if the same cross-section of demographics can constructively interact in those technical forums then there is no excuse for us to not expect the same of our VC&G forum.

We just need to raise the bar, take out some trash, and let the other 99% of the community enjoy the resulting higher quality of signal:noise, same as they do in CPU, etc.

(just my personal opinion, not posting as a moderator and not trying to set forum policy)
 

brybir

Senior member
Jun 18, 2009
241
0
0
(just my personal opinion, not posting as a moderator and not trying to set forum policy)



Go to a professional venue, a trade show or a technology conference, and you don't see the hosts creating artificial barriers of separation to keep ill-mannered individuals from crapping all over themselves and the venue.

Nvidia employees get along just fine with AMD employees in those professional settings.

This is a technical forum, those people who can conduct themselves with civility and decorum deserve a technical forum that respects the venue rather than a forum that coddles the disrespectful and shelters them from one another like barnyard animals that will fight unless a fence is erected.

You know what they do in a technical conference when a member can't behave themselves? They don't cater to the behavior by setting up a fence and saying "AMD people on that side, NV folks on this side"...

Rather, they take out the trash and keep the place cleaned up for ALL the rest of the folks that can behave and do want to openly interact with the larger community as a whole.

I find it interesting that we don't have near the magnitude of these issues in the other technical forums...CPU, PSU, mobo, memory, cases...if the same cross-section of demographics can constructively interact in those technical forums then there is no excuse for us to not expect the same of our VC&G forum.

We just need to raise the bar, take out some trash, and let the other 99% of the community enjoy the resulting higher quality of signal:noise, same as they do in CPU, etc.

(just my personal opinion, not posting as a moderator and not trying to set forum policy)

How do I sign up for your newsletter?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
How can it follow from A's releasing a product earlier than B that A necessarily had less development time? (Aside: the logic being, necessarily (less development time) and/or (work with older technology).) Consider: A begins developing the product earlier than B and releases it earlier than B, so B rushes a product to market in order to compete with A's offering. B might have had less development time, because their product development cycle started later and they ended up rushing a product to market. I'm not sure how you can draw a conclusion regarding development time here unless you can prove the development time. Also, can you prove that less development time is necessarily always worse than more development time for a given product? It's not clear to me how the value judgement can be conclusive either way here; I'm curious to see what you think about it in more detail.

Since you provided a disjunction in your statement as well (the 'or'), entailing that releasing a product sooner can also mean that A would be working with older technology, how does this follow? What do you mean by 'older technology' here? I'm not asking this in a pointed way, but are you making a compact comment on, say, 'using an older process to get the card out faster' (such as the nm differences, implying dated technology), or are you making the trivial claim that when A's product hits the market earlier than B's product, A's product is necessarily older technology (having been released earlier)? "Older technology" seems to imply that the technology is somehow worse off in comparison to the newer technology, but in this generation that doesn't seem to be such a cut-and-dried issue. I hope this has been clear; I tried to be concise regarding my confusion.

Very simply: if product A started development earlier than product B, then product A started with older technology/less knowledge/experience etc.
nVidia's product cycle even seems to revolve around this very fact: GF104 contains some architectural features which are not in GF100, such as the superscalar execution unit.
So GF100 was released sooner and also started development sooner, and as a result it did not make use of this new technology, because it had not been developed yet.

Or in other words: GF100 was released when it was, because that's how long it took to develop. If they wanted to release it sooner, it would probably not be as good as it is today... Or the other way around, they could have opted to include superscalar execution, but then it would not have been released before GF104 was.

Which brings us back to the balancing act... How much do you want to put into your chip design, how much will this delay development, and how much performance will you gain by this?
Or, in layman's terms: a chip will not necessarily be better when it is released sooner, and vice versa, delaying a chip will not necessarily lead to better performance.
 

brybir

Senior member
Jun 18, 2009
241
0
0
I think you're missing the bigger picture of what Keys is saying here.
What you're saying is "The sooner a card is released, the better"...
But it doesn't work that way... because releasing a product sooner means that you have less development time and/or need to work with older technology. Which is why, generally, the latest products are also the greatest.
Otherwise we'd already have GTX480s and HD5870s in the stone age, right?

Keys' point is more along the lines of: "Longevity isn't related to WHEN a product is released, but HOW GOOD that product is at release time".
Had the HD2900 been released before the 8800GTX, then the 8800GTX would STILL be the card with the best longevity, because it was just a better card all around. Not because it was released first, but because it was the best of its generation.

The GTX480 just doesn't look to be a product with great longevity, because although technically it is the fastest single GPU card, it is not the type of earth-shattering product that the 9700Pro was, or the 8800GTX.

I think as with all posts here, there is some merit to your point of view and to the poster you responded to but as usual it is all relative.


If you are in the market for the best of the best at all times, buying a card that is 6 months late while knowing the next gen from the other company is 6 months away means you only get to be on top for 6 months, and then you have to sell and start over. Then again, if you know what you buy is roughly on top and will stay that way for a year, you perceive that you are getting a better value.

This thought process then trickles down to most other cards, in that no one wants to buy a midrange card and find that 6 months later their midrange is now low end and the new midrange is 50% faster for the same cost. Many people perceive this as a loss in "value" to them, similar to how people hate when they buy a car and then a year later the redesign is amazing and a much better car for the same price.

I think it's just human nature, and I don't think it really speaks to the technical merit of a card, but more to the mentality of people when making purchases.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
Or in other words: GF100 was released when it was, because that's how long it took to develop. If they wanted to release it sooner, it would probably not be as good as it is today... Or the other way around, they could have opted to include superscalar execution, but then it would not have been released before GF104 was.

Are you saying it was the 3 months between GF100 release and GF104 release that allowed NVIDIA to include superscalar execution on GF104 vs GF100?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Are you saying it was the 3 months between GF100 release and GF104 release that allowed NVIDIA to include superscalar execution on GF104 vs GF100?

Yes and no.
GF104 builds on GF100. By developing GF104 later, they can build on that technology.
If they wanted to do everything in one go, it might have taken longer.
But there's no way of telling how much of the development was shared, and how much longer it would have taken if they had tried to do everything in one product (the added complexity may blow up in your face exponentially, also making validation a lot more difficult, requiring more respins etc... pretty much what happened with AMD's Barcelona).
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
Or, in layman's terms: a chip will not necessarily be better when it is released sooner, and vice versa, delaying a chip will not necessarily lead to better performance.

Right, that was the implication in my post as I'm a layman when it comes to chip design and product cycles (i.e., I'm not in the know). Thanks. I thought I saw a necessary claim in the bit I quoted from your previous post, but it seems you were merely making the contextual one.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,812
1,550
136
Keys' point is more along the lines of: "Longevity isn't related to WHEN a product is released, but HOW GOOD that product is at release time"

Obviously, it's related to both. If you keep performance constant and move the release date around, you'll have more longevity the earlier the product is released. Likewise, if you keep the release date constant and move performance around, the higher the performance, the longer the longevity will be.

Obviously, these two variables do not exist in a vacuum; generally, the later the release date, the higher the performance. But in the case of GF100 we're looking at more of a product that was delayed due to execution, rather than a product that was pushed back so technology could catch up with the design. GF100 is on the same process as Cypress, yet the underlying tech is inferior in perf/watt and perf/mm2 to ATI's tech. Obviously DX11 performance makes things somewhat better, but that seems like it's more due to AMD skimping on the DX11 parts of Cypress than any kind of architectural advantage.
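The two-variable point above can be sketched with a toy model (every number here is hypothetical, purely for illustration: a made-up "demand" score for games that grows a few percent per month, and a made-up performance score for the card):

```python
# Toy model: longevity depends on BOTH release date and performance.
# "Demand" is a hypothetical score for what games require, growing ~3%/month.
# A card stays "adequate" (has longevity) while its performance >= demand.

def longevity_months(release_month, performance, demand_start=100.0,
                     demand_growth=1.03):
    """Return how many months the card stays adequate after its release."""
    months = 0
    # Demand at the moment the card actually ships:
    demand = demand_start * (demand_growth ** release_month)
    while performance >= demand:
        months += 1
        demand *= demand_growth
    return months

# Same performance, different release dates:
on_time = longevity_months(release_month=0, performance=200.0)
six_months_late = longevity_months(release_month=6, performance=200.0)
print(on_time, six_months_late)  # the late card loses exactly the 6 months
```

With these made-up numbers, shipping six months late costs exactly six months of useful life at equal performance, while raising `performance` at a fixed release date stretches the lifetime instead, which is the "related to both" claim in miniature.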
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I think it's just human nature, and I don't think it really speaks to the technical merit of a card, but more to the mentality of people when making purchases.

I don't think that has anything to do with it, to be honest.
9700Pro and 8800GTX were 'special' in that they were a huge leap in performance and features from everything that went before them, which resulted in their owners being able to play the latest games for 3-4 years with pretty good performance and image quality.
That's why these cards had great longevity.

The move from DX10 to DX11 was not that spectacular, because neither performance nor features made such a dramatic leap as 9700Pro/8800GTX did.
So there hasn't been any revolutionary or earth-shattering DX11 card so far... it's more of an evolutionary step.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,812
1,550
136
GF104 contains some architectural features which are not in GF100, such as the superscalar execution unit.
So GF100 was released sooner and also started development sooner, and as a result it did not make use of this new technology, because it had not been developed yet.

And GF100 has some architectural features which are not in GF104. I believe the majority of the changes are because they are designed for different markets. GF100 the GPGPU is much more impressive than GF100 the gaming GPU, and vice versa with GF104.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Right, that was the implication in my post as I'm a layman when it comes to chip design and product cycles (i.e., I'm not in the know). Thanks. I thought I saw a necessary claim in the bit I quoted from your previous post, but it seems you were merely making the contextual one.

Depends on how you look at it.
Say team A and team B will both have access to the exact same technology, and will get the exact same resources to build a chip during the exact same time period.
It is pretty obvious that team A and B will not come up with the exact same chip, right?
One of them may finish a bit ahead of time, another may not make the deadline... And most probably one chip will turn out to be more efficient/faster/whatnot.

So in that sense, no, it's pretty much impossible to say how long development of a certain chip will take.

But if you look at it from the other way:
If you alter the 'rules' from the above experiment a bit... give them access to more technology, give them more time, etc... Then they may be able to improve on their chip design... but it will take longer.
So it's pretty obvious that the same team will require more time to improve their chip... even if that improvement is 'implicit' in the development process itself (by having more resources etc).
They MUST have more time to implement more features/optimize certain parts more etc. That is a necessity.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
And GF100 has some architectural features which are not in GF104. I believe the majority of the changes are because they are designed for different markets. GF100 the GPGPU is much more impressive than GF100 the gaming GPU, and vice versa with GF104.

I would disagree there... but that comes down to the definition of what an architecture is.
I find your argument similar to "A Pentium II has some architectural features which are not in the Celeron (L2 cache)".
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
But the GTS 450 and GTX 460 aren't anything special in DX11 games, from what I'm seeing. Or, in other words, those cards don't seem to leapfrog in performance past their price bracket (like the GTX 470 does to the HD 5870 in some cases).

http://www.hardwareoverclock.com/POV_TGT_GeForce_GTX_460_BEAST-6.htm

Just Cause 2
Alien vs. Predator
STALKER: CoP
Dirt 2

That's not to say that 5850 is not a better card than the 460 is overall. But HD6000 will have to improve DX11 performance, while Fermi 2 has to focus on power consumption more than anything.
 


Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
But HD6000 will have to improve DX11 performance, while Fermi 2 has to focus on power consumption more than anything.

I think Fermi II will have a whole slew of challenges for it. Better power consumption, better performance and a good amount of luck that AMD is not bringing out the 7 series on 28nm around the same time Fermi II is coming out.

Sure, we have nothing concrete about exact performance numbers for AMD's 6 series single-GPU flagship, but I think we can agree AMD is not going to bother bringing out a brand new series just to get laughed at when it is not at least 25% faster than a 5870 (i.e. faster than a 480).

The comments reviewers will drop about the 6870 when it trumps GTX 480 performance, and especially if it manages to do it using less power and putting out less heat, are going to put it in a very good light.

NV is going to need to not only catch up and exceed that performance but also do it with less heat output.

Is it almost a given at this point that Fermi II will not come out until 28nm? Given we've heard nothing about it, and not even a mumble about refreshes of current Fermi, probably so.
 

Pantalaimon

Senior member
Feb 6, 2006
341
40
91
So, don't make your posts about me. Make them about the subject.

Unfortunately it doesn't work like that the moment you chose to become NVIDIA's focus group member. Like it or not, people who don't like NVIDIA will pour their dislike of the company onto you, because you are now the manifestation of that company in this forum.

And like it or not, this particular moderator is happy to hit the infraction button when people derail threads by bringing up focus group bias & addressing that in the thread instead of the topic at hand. The fact that he's a focus group member has been disclosed for years in his signature. Anyone is free to take that into account when determining the motivation for any particular post. -Admin DrPizza
 
Last edited by a moderator:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
So there hasn't been any revolutionary or earth-shattering DX11 card so far... it's more of an evolutionary step.

That's partially because the GTX275/280/285 and 4890 were such great cards. The 9700Pro and 8800GTX replaced mediocre cards in the first place: the 8500 (worse than the GeForce 3) and the 7900 series (worse than the X1900 series). It's also related to the fact that 99% of games are dictated by the console market, which is primarily DX10 driven and uses ancient hardware at this time (2006 Xbox 360).

It's impossible to have revolutionary DX11 hardware without revolutionary DX11 games (built from the ground up) to take advantage of the capabilities. Plus, the jump from DX8.1 to 9.0 was the biggest in the last 8 years in terms of visuals (i.e., 8500 --> 9700). Windows XP and Vista also played roles in slowing down DX10/11 adoption. Developers had little incentive to push DX10/11 features for such a small fraction of users. IMO, DX10 probably will never take off and will be skipped entirely in favor of DX11 in the next 1-2 years.

Again, this just reinforces the view that 2nd or 3rd generation of DX (version) hardware is really necessary to take advantage of that version of DX. It wasn't until X800/1800 series that DX9 games became truly great.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
While I think it's fine for a company to instruct a reviewer on how to get the best performance from their product, I have a problem with a company telling a reviewer what settings to use in their competitor's products. I think both company's recommendations should be followed for their own products. They should have no say though in the way the competitor's products are handled.
 

dust

Golden Member
Oct 13, 2008
1,328
2
71
Depends on how you look at it.
Say team A and team B will both have access to the exact same technology, and will get the exact same resources to build a chip during the exact same time period.
It is pretty obvious that team A and B will not come up with the exact same chip, right?
One of them may finish a bit ahead of time, another may not make the deadline... And most probably one chip will turn out to be more efficient/faster/whatnot.

So in that sense, no, it's pretty much impossible to say how long development of a certain chip will take.

But if you look at it from the other way:
If you alter the 'rules' from the above experiment a bit... give them access to more technology, give them more time, etc... Then they may be able to improve on their chip design... but it will take longer.
So it's pretty obvious that the same team will require more time to improve their chip... even if that improvement is 'implicit' in the development process itself (by having more resources etc).
They MUST have more time to implement more features/optimize certain parts more etc. That is a necessity.


While this makes sense, you have to consider the recent history as well. The way I remember it (I could be wrong), Nvidia was preparing the GTX 3xx line-up against the 5xxx series; however, when the numbers started showing up on sites they freaked out, canceling their plan altogether and jumping to the 4xx series instead.

Seems to me they ran a marathon just to be able to dump GF100 asap and finally have an answer. They wanted the crown and the 3xx series wasn't cutting it. If you look at the 460 now, it's probably what GF100 should have been in terms of performance/power/heat.

Now if you give any company enough time they would most certainly pull a better performing unit than their competition, but of course that time can only be lost time for sales.

As others said before, if Fermi had launched at most two months after the 5xxx series it would have been a good year for all of us; prices would probably have come down by now on both sides.

If I am wrong in my statement please feel free to correct me.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
While this makes sense, you have to consider the recent history as well. The way I remember it (I could be wrong), Nvidia was preparing the GTX 3xx line-up against the 5xxx series; however, when the numbers started showing up on sites they freaked out, canceling their plan altogether and jumping to the 4xx series instead.

Seems to me they ran a marathon just to be able to dump GF100 asap and finally have an answer. They wanted the crown and the 3xx series wasn't cutting it. If you look at the 460 now, it's probably what GF100 should have been in terms of performance/power/heat.

Now if you give any company enough time they would most certainly pull a better performing unit than their competition, but of course that time can only be lost time for sales.

As others said before, if Fermi had launched at most two months after the 5xxx series it would have been a good year for all of us; prices would probably have come down by now on both sides.

If I am wrong in my statement please feel free to correct me.

I'm not sure about the 3 series business; I don't think there was a plan at all for that. I believe they just went straight to the 4xx series as a choice.

That said, GTX 480/470 were delayed and behind schedule, this is pretty well known and the information is available. They didn't plan to release it when they did, it took longer than they wanted to.

GTX 480 is a neutered chip and there is still not a glimmer of us ever seeing a full GF100 core.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Note that I’m not condoning this behavior in any way, but the Far Cry 1 findings are flat-out wrong and have nothing to do with demoted render targets. This is especially true given the game doesn’t run with HDR by default, and their screenshots don’t show HDR in effect either.

In this case the water is degraded because the game occasionally resets its water setting to “low” or “custom” after a change is made to the GPU system. I’ve seen this many times on both ATi and nVidia hardware (with the same jagged edges), and changing the water setting back to “ultra” resolves the issue.

It’s also interesting that nVidia’s reviewer guide mentions Far Cry 1 and Serious Sam 2 because to my knowledge I’m the only person that still actively benchmarks both games. In fact I tested them in my recent GTX480 performance review.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
It's impossible to have revolutionary DX11 hardware without revolutionary DX11 games (built from the ground up) to take advantage of the capabilities.

That is only part of the requirement though.
As you said, the previous DX10 cards were not all that bad compared to the DX11 generation.
As a result, if you DID try to make a 'revolutionary' DX11 game, you'd probably run into the problem that the current breed of DX11 hardware would not be fast enough for it.
After all, the point about bringing up the 9700Pro and 8800GTX in the first place was that they could run the latest DX9/DX10 titles just fine, even years after their introduction (yes, second generation was better, but the first generation still held its own just fine).
I don't see that happening with the DX11 generation (I also don't see the second generation of DX11 hardware being that much better, or making DX11 games 'more mature').