ATI cheating in AF?


CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
Originally posted by: jim1976
Originally posted by: CaiNaM
http://www.techreport.com/etc/2004q...ng/index.x?pg=1

"What to make of all this?

Whatever the merits of ATI's adaptive trilinear filtering algorithm, ATI appears to have intentionally deceived members of the press, and by extension, the public, by claiming to use "full" trilinear filtering "all of the time" and recommending the use of colored mip map tools in order to verify this claim. Encouraging reviewers to make comparisons to NVIDIA products with NVIDIA's similar trilinear optimizations turned off compounded the offense. Any points ATI has scored on NVIDIA over the past couple of years as NVIDIA has been caught in driver "optimizations" and the like are, in my book, wiped out."

A cheat is always a cheat. And the fact is ATI cheated. But I don't see ATI's cheating as anywhere near Nvidia's cheating, IMO.

well, cheating is cheating imo; the intent is the same - deception. ati is no better.

still, in the great scheme of things, this stuff is always blown out of proportion, and in time it will be straightened out. i still like my x800, and i'm certainly not gonna lose sleep over it.
 

Dean

Platinum Member
Oct 10, 1999
2,757
0
76
They did not supply one screenshot to back up their accusations. What kind of journalism is techreport practicing now?
 

ChkSix

Member
May 5, 2004
192
0
0
Exactly, there is no such thing as 'different' levels of cheating that should be applied to separate companies. You can't say, well, Nvidia did 'X' so they cheated, but ATi did 'Y' which isn't cheating. No, I don't think so, and that in itself is a fanboy-based comment. Cheating is cheating, and you can slice it any way you like and still conclude with cheating in the end.

Now, what makes it cheating is that it was hidden from the end user, and there is no option to turn it on or off altogether. What makes it even worse is when these cards are given to reviewers for benchmarking purposes against another video card from, say, Nvidia: in ATi's review guidelines, the .pdf explicitly states that the competitor's card must be set to "Full Trilinear" for comparison. I am sure that Anandtech can verify this for you if you think I am lying. Do you think the end results using this method and these guidelines are a fair comparison, or yield accurate results? Not on this earth. Nvidia suffered for this kind of practice, and rightfully so.

Another thing that makes it worse is the fact that this optimization works through hardware regardless of what is set in the Control Panel. In other words, on the ATi cards there is currently no way to force "Full Trilinear" at all, but in testing up until now that's what everyone thought it was producing. And when a few people suggested to ATi that they include the option for it, much like Nvidia had to when it was uncovered on their end, it seemed to me like ATi was beating around the bush with comments like "we like to keep the CP as simple as possible" or "if enough want it we'll review the idea and see if it will be a worthwhile addition".

Well, excuse me, but you owe it to your consumers, who keep you alive by buying your products, to do whatever it takes to play on the same field with cards from other competing companies, so that we can make the right choices when purchasing new hardware. It isn't rocket science to add a few lines of code for a few option boxes, or to include an 'advanced options' tab in the CP that allows an advanced user or reviewer to select what he does and doesn't want.

I don't care if it makes a difference in games or not. The bottom line, and what should matter, is the deceit. You are deceived into believing (and then consequently buying) that a card is capable of running something that, in reality, it may do a lot worse in. So what if the games look the same? If the initial benchmark results from reviewers (which many point to as proof of a better product and base a decision on) would have been different with ATi not running the optimizations, would the decision affecting your purchase change if the numbers were lower? Of course it would, unless you're completely stupid and want to drop money on a card that performs worse at the same levels of detail just out of brand loyalty.

As for drivers, how far do you think ATi can take the R420 with the Catalysts? Not much further, I can guarantee, because they are already mature for this line of cards. Granted, the R420 isn't a 9800XT or a 9700 Pro, but its architecture is very similar and basically the same. In that respect, I do not think you will see much more of an FPS increase from future Cat releases without the use of optimizations, 3Dc compression support aside. That doesn't mean it isn't fast, because it is a very fast card.

On the other hand, take Nvidia. Their drivers alone matured so much during the 5800-5950 line that Carmack himself ditched the separate instructions for NV3x in Doom 3 and went with the normal path, because the driver maturity was that strong over time. The drivers alone also helped the NV3x line with its performance and shader issues, going from doing horribly to doing pretty decently against the Radeons. I have firsthand knowledge of that, owning both a 5900U and a 9800 Pro. Optimizations and the idiotic methods employed in cheating aside, you have to give Nvidia credit for its outstanding driver support, which kept the company and the NV3x from sinking entirely like the Titanic.

In the end, it brings me full circle to the 6800. If it is performing neck and neck with the R420 now, and the drivers are in their infancy considering the new architecture, what is stopping it from completely dominating the scene in 6 months? Considering that what was uncovered about ATi (its optimizations) explains its neck-and-neck performance with the NV40, I think nothing is stopping it from killing the competition, or from adding optimizations of its own to kill them with. Not only is it a technologically superior architecture over the R420, but it has the legroom and the support to grow way beyond what we have seen in its first showing. And if you think that's extremely hard to believe or swallow, again I turn to the NV3x line for ample proof of what constantly evolving drivers alone can do for a product.

Do yourself a favor (and I do not mean anyone in particular), and be an educated consumer. Do not go for one card because of a brand, because in the end, they simply don't give a sh|t about you. Business is about making themselves money in any way they can, and that includes lying. Do the research, follow the forums and news closely and with an open mind, and base a decision on that knowledge gained. Sitting around and saying "so what" and "it doesn't matter" is extremely ignorant (again, no one in particular). You should all care completely, because it is your money they are getting, and possibly for an inferior product (and I don't mean any product in particular), which is only making them richer or funding their next project, which by the time it is released will cost you another half a grand.

One last thing: ATi must have been feeling some sort of threat to incorporate optimizations that run via hardware and software, activate them on the R420 line, and then keep them from the consumer and reviewer. No one would do anything like this if they were confident that their product was superior to the competition. It is a sign of desperation as well as uncertainty, and Nvidia was a good teacher of that with its NV3x and the nonsense they pulled. Even though they condemned such mischievous behavior, it seems like ATi has turned into a pretty decent student themselves, and avoided the spotlight (hence no one noticed) because everyone was all too focused on Nvidia and its unethical ways.

As Albert Einstein once said, "Great spirits have always encountered violent opposition from mediocre minds."
 

Shamrock

Golden Member
Oct 11, 1999
1,441
567
136
If you cheat at a basketball game and win 65-56, then cheat at the next game and win 80-79... which is the worse cheat? Neither; the result is identical: you won by cheating!

And Ackmed, I do not see how you can look at that picture in CoD and say the IQ isn't degraded; it's OBVIOUS, and that is the first place your eyes go. The link you posted to Beyond3D's post from Dave Baumann... I guessed it in under 2 seconds, and no, I didn't go to page 2 to find the difference. I looked at about 1.5 inches from the bottom and over about an inch; you can CLEARLY see that the right is full Tri. And to be honest... straight lines aren't the best example for AF. Make it a scene like a train going down tracks with foliage on the sides.
 

Shamrock

Golden Member
Oct 11, 1999
1,441
567
136
Originally posted by: TStep
Without starting Jihad again, can someone answer the following pertaining to the two image comparisons shown previously in this thread:

1) Why is there less detail regarding the leaves on the trees in the distance with the NV40? Does this have to do with trilinear filtering?

2) Why is the foreground on the R420 sharper than on the NV40? I had thought the trilinear filtering was only for smoothing distant objects in the screenshot. It is really noticeable in the surface details on the side of the fence post in the lower right corner of the screen shot.

3) It seems to me that the R420 is more detailed overall and the NV40 is slightly blurred over the whole screen, not just in the distance. What causes this, or is this each manufacturer's interpretation of the game? If the NV40 has a blurrier foreground to begin with, will it not transition into the background less sharply?

Matthias99 answered, but I am still unclear.



1. My opinion is that the NV40 isn't losing the trees at all; it is a screenshot captured from a game with swaying trees. The leaves on those trees may have been blowing, and the capture caught them as they were being turned over. If you want to get technical, look at the compass on the bottom left: the 2 green arrows are also not in the exact same spot.

2. AF is rendered across your whole screen, not just the background or foreground (depending on how much AF you apply, such as 16x? Correct me if I'm wrong, I am guessing). And since ATI doesn't "appear" to be doing full trilinear in the screenshots, it will be "snappier" or clearer, while NV is doing the full trilinear and blurring the image transitions to look more lifelike. Which brings me to #3.

3. NV's shot looks more blurred because that's how it is in real life. Go down a country road and stare at a farm or something; the haze will make things look blurry. ATI's shot is too "crisp". For instance, look at the fence. The wire on the fence in the NV shot tends to disappear, as it will in real life, because your eyes simply can't see a 2mm wire at 200 yards as clearly as at 1 yard. In this shot the ATI fence wire is still seen more clearly, and to me, it looks choppy.

This is clearly my opinion, and not fact....take it as you will.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: ChkSix
On the other hand, take Nvidia. Their drivers alone matured so much during the 5800-5950 line that Carmack himself ditched the separate instructions for NV3x in Doom 3 and went with the normal path, because the driver maturity was that strong over time. The drivers alone also helped the NV3x line with its performance and shader issues, going from doing horribly to doing pretty decently against the Radeons. I have firsthand knowledge of that, owning both a 5900U and a 9800 Pro. Optimizations and the idiotic methods employed in cheating aside, you have to give Nvidia credit for its outstanding driver support, which kept the company and the NV3x from sinking entirely like the Titanic.

This is misleading at best. The Doom3 engine no longer has a separate codepath for the NV30 -- this is true. However, it's still running mixed-mode code -- it's just that NVIDIA *finally* added the ability to run shaders dynamically at either FP16 or FP32 precision without needing a separate codepath. NVIDIA didn't magically make NV30's FP32 faster; they just simplified doing mixed FP16/FP32 code. There's a big difference there.
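For readers not following the shader talk, here is a rough sketch, in plain C, of what "mixed FP16/FP32 in one codepath" means. This is illustrative only, not Doom 3 or NVIDIA driver code; half_t and lit_intensity are made-up stand-ins (real shader code of the era would use HLSL's half type or an assembly partial-precision hint).

#include <math.h>

/* Illustrative only: 'half_t' stands in for a 16-bit float; plain C has no
 * such type, so it is just an alias here. */
typedef float half_t;

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* One codepath, two precisions: the diffuse term stays at full FP32, while
 * the specular color term is allowed to drop to FP16-style precision. That
 * kind of per-term mixing is what got easier; FP32 itself did not get faster. */
static float lit_intensity(vec3 n, vec3 l, vec3 h, float spec_power)
{
    float  diffuse  = fmaxf(dot3(n, l), 0.0f);          /* keep full precision  */
    half_t specular = (half_t)fmaxf(dot3(n, h), 0.0f);  /* partial precision OK */
    return diffuse + (float)powf((float)specular, spec_power);
}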

In the end, it brings me full circle to the 6800. If it is performing neck and neck with the R420 now, and the drivers are in their infancy considering the new architecture, what is stopping it from completely dominating the scene in 6 months? Considering that what was uncovered about ATi (its optimizations) explains its neck-and-neck performance with the NV40, I think nothing is stopping it from killing the competition, or from adding optimizations of its own to kill them with. Not only is it a technologically superior architecture over the R420, but it has the legroom and the support to grow way beyond what we have seen in its first showing. And if you think that's extremely hard to believe or swallow, again I turn to the NV3x line for ample proof of what constantly evolving drivers alone can do for a product.

Will they see improvements? Sure. But don't delude yourself into thinking they can bypass the limitations and restrictions of their hardware. The NV3X cards *still* suck at full-precision SM2.0 compared to the R3XX cards, and no amount of driver tweaking is going to fix that.

Do yourself a favor (and I do not mean anyone in particular), and be an educated consumer.

I absolutely agree.

One last thing: ATi must have been feeling some sort of threat to incorporate optimizations that run via hardware and software, activate them on the R420 line, and then keep them from the consumer and reviewer. No one would do anything like this if they were confident that their product was superior to the competition. It is a sign of desperation as well as uncertainty...

Of course they were 'uncertain' about how their card would stand up, especially considering that, until just a week or two before they released the X800Pro, they probably had *no idea* how the 6800 would really perform. You don't think NVIDIA's sudden announcement of the 6800 Ultra 'Extreme' (which now looks like it may be a very expensive and very hard-to-find card) was a sign of desperation and uncertainty as well?

Now, it does raise the question of why ATI didn't *tell* anyone about this (considering they've apparently been using it for a YEAR on the 9600s)... not the brightest move if you ask me, but not exactly 'desperation'. Clearly they were happy with how it worked on the 9600, and decided to put it on the R420 as well. I just wish they'd said it up front...
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Shamrock
And Ackmed, I do not see how you can look at that picture in CoD and say the IQ isn't degraded; it's OBVIOUS, and that is the first place your eyes go.

The suggestion has been made that the R420 screenshots are actually using straight bilinear filtering (ie, not their adaptive trilinear at all). Considering that nobody else with an R420 has been able to produce screenshots like this, and ATI has said they can't reproduce it either, I'm inclined to at least delay judgement until something more definitive is available.

The link you posted to Beyond3D's post from Dave Baumann... I guessed it in under 2 seconds, and no, I didn't go to page 2 to find the difference. I looked at about 1.5 inches from the bottom and over about an inch; you can CLEARLY see that the right is full Tri. And to be honest... straight lines aren't the best example for AF. Make it a scene like a train going down tracks with foliage on the sides.

I honestly see no difference there, and neither do 47% of the folks on Beyond3D. If you can tell at a glance which is trilinear, and it's CLEAR to you -- well, either you have better eyes or a better monitor than I. :p
 

Shamrock

Golden Member
Oct 11, 1999
1,441
567
136
I am just going by the screenshot, not what 100% of R420's can do.

I have glasses that give me 20/20 vision :p The right just looks better to me; it looks more transitional, not too crisp and jagged (like the left). Or it MAY be that I have an NV vid card? HAHAHAHA (just a joke, don't hang me!)
 

ChkSix

Member
May 5, 2004
192
0
0
Will they see improvements? Sure. But don't delude yourself into thinking they can bypass the limitations and restrictions of their hardware. The NV3X cards *still* suck at full-precision SM2.0 compared to the R3XX cards, and no amount of driver tweaking is going to fix that.

I'm not deluding myself. I can tell you that the limitations in hardware lean more on the R420 than on the NV40, especially knowing that the R420 is built on the R300 and its drivers are already very mature. Wouldn't you agree? Sure, there might be limitations on both; however, I think that even in early testing with beta drivers, and considering the new architecture, Nvidia proved that the GeForce 6800 can already run neck and neck with the R420 across the board and in almost every test. I think that message will change in the coming months.

This is misleading at best. The Doom3 engine no longer has a separate codepath for the NV30 -- this is true. However, it's still running mixed-mode code -- it's just that NVIDIA *finally* added the ability to run shaders dynamically at either FP16 or FP32 precision without needing a separate codepath. NVIDIA didn't magically make NV30's FP32 faster; they just simplified doing mixed FP16/FP32 code. There's a big difference there.

I meant the separate codepath, and I should have elaborated more on the mixed-mode rendering. However, the codepath alone is enough to show you the strides and work Nvidia put into its driver support.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,004
126
So you're saying there's some big difference in ATI and nVidia's filtering based on that marketing spin posted above?
Marketing spin? What are you talking about?

There are two different methods here. One is adaptive trilinear which both vendors do and the other is brilinear which only nVidia does (and ATi too if the app doesn't request AF, otherwise not).
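For readers who want the distinction spelled out, here is a toy CPU-side sketch of the two filtering modes; it is a simplified model, not either vendor's hardware, and sample_mip() is a hypothetical bilinear lookup invented for illustration. ATI's adaptive scheme (not shown) essentially decides per texture how much of the full blend is actually needed.

#include <math.h>

typedef struct { float r, g, b; } color;

/* Hypothetical: a bilinear fetch from one mip level at (u, v). */
extern color sample_mip(int level, float u, float v);

static color mix_color(color a, color b, float t)
{
    color c = { a.r + (b.r - a.r) * t,
                a.g + (b.g - a.g) * t,
                a.b + (b.b - a.b) * t };
    return c;
}

/* Full trilinear: always blend the two adjacent mip levels by the
 * fractional LOD. */
color trilinear(float lod, float u, float v)
{
    int   base = (int)floorf(lod);
    float frac = lod - (float)base;
    return mix_color(sample_mip(base, u, v), sample_mip(base + 1, u, v), frac);
}

/* "Brilinear": plain bilinear over most of the range, blending only in a
 * narrow band (e.g. band = 0.15) around the point where the mip level would
 * switch. Cheaper, but the shortcut is what can show up as banding. */
color brilinear(float lod, float u, float v, float band)
{
    int   base = (int)floorf(lod);
    float frac = lod - (float)base;
    float t;
    if      (frac < 0.5f - band) t = 0.0f;   /* lower mip only */
    else if (frac > 0.5f + band) t = 1.0f;   /* upper mip only */
    else    t = (frac - (0.5f - band)) / (2.0f * band);
    return mix_color(sample_mip(base, u, v), sample_mip(base + 1, u, v), t);
}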

I consider brilinear cheating.
I do not consider adaptive trilinear cheating, especially since it doesn't degrade quality.
And I do not care which vendor the above points apply to.

That you believe them when they say "it's true trilinear, no image degradation" even after seeing the articles showing image degradation?
The only degradation I've seen is in massively magnified images - when placed on top of each other - showing a few pixels out of place.

Weren't you the one saying that you don't stop to look at the screen when gaming, yet now you're happy to use magnified images showing four pixels out of place as evidence? Who's changing their tune now, huh?

and clock for clock 9800 and X800 are roughly the same then isn't $400-$500 a bit steep for nothing more than a driver upgrade?
The level of naivety in that post is simply stunning. Not only is the adaptive trilinear done in hardware but it's probably not providing a performance gain of much more than a few percent.

Of course the extra eight pipes, the superior core and memory clocks, the 3Dc compression, etc. all mean nothing, because all that matters is that four pixels are out of place in magnified images. :roll:

Thus, it would be interesting to see if it would be possible to get a 9800 Pro (save $200), apply the software optimizations, and get near X800 performance.
Please tell me you're joking. If you aren't then you should be worried about bigger things than ATi's trilinear.

I always knew that the ATi IQ was far worse than nVidia IQ.
Of course you did. :roll:
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: ChkSix
Will they see improvements? Sure. But don't delude yourself into thinking they can bypass the limitations and restrictions of their hardware. The NV3X cards *still* suck at full-precision SM2.0 compared to the R3XX cards, and no amount of driver tweaking is going to fix that.

I'm not deluding myself. I can tell you that the limitations in hardware lean more on the R420 than on the NV40, especially knowing that the R420 is built on the R300 and its drivers are already very mature. Wouldn't you agree? Sure, there might be limitations on both; however, I think that even in early testing with beta drivers, and considering the new architecture, Nvidia proved that the GeForce 6800 can already run neck and neck with the R420 across the board and in almost every test. I think that message will change in the coming months.

I think NVIDIA has a solid architecture from what I've seen, but as of yet we don't really know all that much about its strengths or weaknesses. If its overclocking limitations are as extreme as they sound (NVIDIA being unable to even hit 450Mhz with their *good* 6800U chips on air), that could be a killer flaw. They also seem particularly weak in shader-heavy games like Far Cry (which does not bode well if many games will be using pixel shaders in the next year). NVIDIA went heavy on the features this time around, and it could hurt them badly if they can't get a noticeable performance edge out of SM3.0, or generate some sales with their video encoder chip. I'd love for them to pull it off (competition is a good thing), but IMHO they still have an uphill battle.

I'm a professional software engineer. Given the choice between an architecture that is proven, has stable drivers (which could still improve), and currently looks to have the performance edge, and one that is new, has what looks like flaky beta drivers (which will undoubtedly improve, but when and by how much nobody knows) -- but that may well have more potential -- I'm gonna go with the proven one if I have to choose. But right now it's still too early to tell which will come out on top in the long run. In fact, this round could end up being truly 'won' by the refresh parts in 6-12 months!

This is misleading at best. The Doom3 engine no longer has a separate codepath for the NV30 -- this is true. However, it's still running mixed-mode code -- it's just that NVIDIA *finally* added the ability to run shaders dynamically at either FP16 or FP32 precision without needing a separate codepath. NVIDIA didn't magically make NV30's FP32 faster; they just simplified doing mixed FP16/FP32 code. There's a big difference there.

I meant the separate codepath, and I should have elaborated more on the mixed-mode rendering. However, the codepath alone is enough to show you the strides and work Nvidia put into its driver support.

What 'strides'? They recently added a feature they should have had at launch if they actually intended for developers to write meaningful amounts of mixed-mode code. I haven't read anything indicating they made noticeable shader performance improvements. I mean, it's great that they're improving things, but it's not going to help all that much if the NV30's shader performance is still bad.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: Dean
They did not supply one screenshot to backup their accusations. What kind of journalism is techreport practicing now?
Did you read the entire article? The argument they are making has nothing to do with the resulting IQ, but rather with the fact (as they present it) that ATi knowingly encouraged reviewers to disable nVidia's optimizations while running their own. Essentially, ATi appears to have been stacking the deck in their favor. I would like to see if other review sites can confirm/deny that the benchmarking guidelines given by ATi were as techreport claims. Anandtech... Hint... Hint...

Obviously, the overall IQ does matter to the end user, but a hardware comparison should be done in a controlled, consistent environment, which does not appear to have been possible based on the information given at techreport.
 

ChkSix

Member
May 5, 2004
192
0
0
Originally posted by: Matthias99
Originally posted by: ChkSix
Will they see improvements? Sure. But don't delude yourself into thinking they can bypass the limitations and restrictions of their hardware. The NV3X cards *still* suck at full-precision SM2.0 compared to the R3XX cards, and no amount of driver tweaking is going to fix that.

I'm not deluding myself. I can tell you that the limitations in hardware lean more on the R420 than on the NV40, especially knowing that the R420 is built on the R300 and its drivers are already very mature. Wouldn't you agree? Sure, there might be limitations on both; however, I think that even in early testing with beta drivers, and considering the new architecture, Nvidia proved that the GeForce 6800 can already run neck and neck with the R420 across the board and in almost every test. I think that message will change in the coming months.

I think NVIDIA has a solid architecture from what I've seen, but as of yet we don't really know all that much about its strengths or weaknesses. If its overclocking limitations are as extreme as they sound (NVIDIA being unable to even hit 450Mhz with their *good* 6800U chips on air), that could be a killer flaw. They also seem particularly weak in shader-heavy games like Far Cry (which does not bode well if many games will be using pixel shaders in the next year). NVIDIA went heavy on the features this time around, and it could hurt them badly if they can't get a noticeable performance edge out of SM3.0, or generate some sales with their video encoder chip. I'd love for them to pull it off (competition is a good thing), but IMHO they still have an uphill battle.

I'm a professional software engineer. Given the choice between an architecture that is proven, has stable drivers (which could still improve), and currently looks to have the performance edge, and one that is new, has what looks like flaky beta drivers (which will undoubtedly improve, but when and by how much nobody knows) -- but that may well have more potential -- I'm gonna go with the proven one if I have to choose. But right now it's still too early to tell which will come out on top in the long run. In fact, this round could end up being truly 'won' by the refresh parts in 6-12 months!

This is misleading at best. The Doom3 engine no longer has a separate codepath for the NV30 -- this is true. However, it's still running mixed-mode code -- it's just that NVIDIA *finally* added the ability to run shaders dynamically at either FP16 or FP32 precision without needing a separate codepath. NVIDIA didn't magically make NV30's FP32 faster; they just simplified doing mixed FP16/FP32 code. There's a big difference there.

I meant the separate codepath, and I should have elaborated more on the mixed-mode rendering. However, the codepath alone is enough to show you the strides and work Nvidia put into its driver support.

What 'strides'? They recently added a feature they should have had at launch if they actually intended for developers to write meaningful amounts of mixed-mode code. I haven't read anything indicating they made noticeable shader performance improvements. I mean, it's great that they're improving things, but it's not going to help all that much if the NV30's shader performance is still bad.


Well Matthias, I guess it all comes down to the end user and their preference. Yes, the Cats are very mature, which means they are proven and don't have much further to go without optimizations. If you're already squeezing as much as you can out of the code, what additions can be made to further enhance performance if you're hardware-limited? And believe it or not, sooner or later ATi will have to rebuild its hardware completely, which has yet to happen in 2 1/2 years now. And when it finally gets to that point, they will face the same uncertainties that Nvidia is facing now, only by then Nvidia may be working towards another brand new design.

In that respect, my eye is a little squinted towards ATi. Knowing the company for so long (I am 31 now), I know what they are capable of and what they aren't. This might seem ignorant, however a company that so far has delivered one hit (the R300, and now the R420... same thing, basically) in all its years of producing horrible products still has to prove, to me at least, that it can do it again by making something entirely new from the ground up. When they do it and succeed, I might be a little less suspicious of them. Yes, I know... why do it if what you have now works. But then again, why do optimizations and not tell anyone about them if what you have now can go head to head at the same settings without fear of losing? Their history keeps me wondering, although the 9700 onward has been phenomenal.

Their optimizations proved to me the same thing Nvidia proved to me when they did it: they were desperate. If they weren't, there would be no reason for it. And you can tell me anything, but cheating in any aspect of life is a sign of desperation; whether it comes from Nvidia or from ATi does not matter. The concept does. It is a feeble attempt to get ahead, hoping both the competition and the people that judge you won't catch you in the act, all while painting a false picture of your true self.

I should add that the only reason why I would choose Nvidia here over ATi is architecture. Unless the ATi offering was knocking the daylights out of Nvidia's offering across the board (which isn't the case; they are basically even throughout the spectrum), I cannot see spending 500 dollars on something that is roughly two years old with enhancements such as additional pipelines and speed increases on its core. I love bleeding-edge technology, and I am aware that there will always be problems to iron out with everything brand spanking new. But that's how it works in our hobby/profession. Technology should always progress forward, even if the designs are wrong and don't work like they should the first time around, because you can always learn from the mistakes of something new and perfect them, which then leads you to another bold push in another direction. You can't learn anything new if you just tinker with the same thing over and over and over again, already knowing it inside and out, backwards and forwards.

Turbine engines would never have come about if the prop hadn't been completely perfected and someone hadn't said, OK, out with the old, let's try something bold and new and walk down a path never taken before by anyone. The same holds true for the new scramjet and hypersonic engines being developed now that the turbine has reached a point of perfection. Quantum mechanics and the uncertainty principle came at the turn of the century, which led to microprocessors and the computer revolution. If no one had the courage to push the envelope to its breaking point, we would all still be using punch cards or the basic Chinese computer (forget the name) to do our mathematical computing for us, instead of desktops, laptops, pocket PCs and the like.

Progression can only come if one is willing to take a chance and dive into the unknown. And this time around, ATi disappointed me by not doing that and instead prolonging a seriously old design, that is, if one thinks of it in terms of technology.
 

Ackmed

Diamond Member
Oct 1, 2003
8,499
560
126
Why do some people still call it a cheat, when the IQ is the same? I have yet to see that it degrades the IQ, so to me it's not a cheat. If someone can link a pic of it degrading IQ... then I would tend to agree. Misleading? Yes. A cheat? Not to me.

You do have to wonder why they would not put it in the release notes, or say something about it. People always find this stuff out nowadays, and it will always turn bad for whoever it is. It did take about a year for anyone to find this, which also makes it confusing to me that people call it a cheat.

Whoever said there is no such thing as bad press... was wrong.
 

ChkSix

Member
May 5, 2004
192
0
0
I think that is the only thing that makes it a cheat: the fact that they did not disclose it to the review sites, or anyone else for that matter, yet wanted the competitor's card to run at "full trilinear" according to their reviewers' 'guidelines'.

I am sure that if they had admitted this beforehand, no one would have said a thing about it. I know I wouldn't have cared. But they put themselves in the same position that Nvidia did, by not disclosing information that very well could have made a difference in the benchmarking and end results, even if those differences were slight.

When Nvidia did it, I was like wtf!, why try to take a sneaky advantage? When ATi did it, my feelings were the same.
 

Ackmed

Diamond Member
Oct 1, 2003
8,499
560
126
But ask yourself this: it's been doing that for over a year. And ATi's AF is usually regarded as better, so how can it be cheating if it looks better or the same? Do you think it's even in the same league as NV, when their 61.11 drivers turn off AA at 2x and 4x in Far Cry when it's selected in the CP? It wasn't in the readme, or anywhere else either. They rushed the drivers out just in time for the X800 Pro reviews... with this "bug" in them, giving them a huge advantage in the game that ATi beats them the worst in.

And some interesting parts of the ATi interview:

Q. Why did ATI say to the general public that they were using trilinear by default, when in fact it was something else? (quality is ok, i agree, but you did deceive, by claiming it to be a trilinear)

A. We understand that there was confusion due to the recent reports otherwise. We provide trilinear filtering in all cases where trilinear filtering was asked for. As has been demonstrated many times by several people - almost every hardware has a different implementation of lod calculation and filtering calculations. If we start calling all the existing filtering implementations with different names - we will end up with many names for trilinear.

It's up to each person to believe what they say. How they have explained it in the interview makes sense to me. If only they had put it in the readme a year ago when they started it, I think it wouldn't have been a big deal at all. But like most people, I feel deceived. I still don't feel cheated, though.
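As an aside, for anyone wondering what "LOD calculation" means in ATi's answer above: the sketch below is roughly the textbook mip-level selection formula (close to what the OpenGL spec describes). Real hardware approximates the derivatives and square roots in vendor-specific ways, which is the variation ATi is pointing at; this is illustrative C, not anyone's actual implementation.

#include <math.h>

/* Textbook-style mip LOD selection. The inputs are the texture-coordinate
 * derivatives across one pixel, measured in texels (du/dx, dv/dx, du/dy, dv/dy). */
float mip_lod(float dudx, float dvdx, float dudy, float dvdy)
{
    float len_x = sqrtf(dudx * dudx + dvdx * dvdx);  /* texel footprint along x */
    float len_y = sqrtf(dudy * dudy + dvdy * dvdy);  /* texel footprint along y */
    float rho   = fmaxf(len_x, len_y);               /* worst-case minification */
    return log2f(rho);   /* integer part = mip level, fraction = blend weight */
}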
 

Ackmed

Diamond Member
Oct 1, 2003
8,499
560
126
I have read that, and it does make me ponder things differently. We just have different opinions, which is fine with me. I don't have any problem with that. At least we've been discussing things... without acting like children.

I wasn't going to post this, because I won't be able to order it till tomorrow evening... but here is an X800XT-PE for $459.51 at CDW. Someone ordered one... and got a confirmation email. I plan to order one tomorrow. It says in stock, and it usually ships in 4-6 days. REALLY hard to beat that... especially since it's still cheaper than 9800XTs and 5950Us.

http://www.cdw.com/shop/products/default.aspx?edc=646747
 

ChkSix

Member
May 5, 2004
192
0
0
That's an awesome price. I would leap on it in a second if I was planning on buying the X800XT.

I agree bro. I just love sharing information, even if we don't agree at times. We both win through differences of opinion and ideas in the long run.

Cheers,
:beer:

Mike
 

Dean

Platinum Member
Oct 10, 1999
2,757
0
76
Here is a quote from a game programmer (davepermen, who hangs out at Beyond3D):

and give me any reason to see why this should be called a cheat. it is trilinear where trilinear is needed. it is quality based filtering where quality based filtering is what is needed. it is bilinear only where bilinear is what is needed. etc..

do you program? i'm curious. as a programmer, knowing the side of the graphics-developers, i can certainly say, in my point of view, NO customer should complain this time. brilinear was another issue. quality degradation IS an issue. this is not quality degradation. not by any chance (except bug in driver, wich is unlikely.. because the bug-catcher simply does trilinear.. )
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
NVidia's brilinear has quality degradation; ATI is not doing brilinear.

I notice nobody has taken me up on the IQ comparison. :/