
Anand's 9800XT and FX5950 review, part 2

Page 8

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Given that a majority of these "bugs" seem to only affect games/apps used in benchmarks, and with their documented 3DMark cheats, then yeap. That's the hole NV dug for themselves when they admittedly overstepped the optimization bounds. Remember how each ATI driver was scrubbed for a time after the Q3 fiasco? I am not saying it's right. But that's the reaction here, at nVNews, 3DGPU, Rage3D, TR, FS and other sites.


You seemed to forget ATI was caught cheating in the exact same test.

Should I consider the flickering in Tomb Raider cheating? How about the total lack of fog in Combat Mission? The Fog issue in Combat Mission has been known for years.





 

jbirney

Member
Jul 24, 2000
188
0
0
Originally posted by: Genx87

You seemed to forget ATI was caught cheating in the exact same test.

Should I consider the flickering in Tomb Raider cheating? How about the total lack of fog in Combat Mission? The Fog issue in Combat Mission has been known for years.

I did not see ATI cheating in UT2k3. B3D did not seem to think they cheated back when they did their TR:AOD benchmarks. ATI has not been caught lowering IQ in 3DMark, etc. I am not saying ATI is 100% in the clear. We have Q3 in the past. They did optimize for a synthetic benchmark, which in my book is cheating. But there is a night and day difference between the two companies atm with regards to driver optimizations....
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I did not see ATI cheating in UT2k3. B3D did not seem to think they cheated back when they did their TR:AOD benchmarks. ATI has not been caught lowering IQ in 3DMark, etc. I am not saying ATI is 100% in the clear. We have Q3 in the past. They did optimize for a synthetic benchmark, which in my book is cheating. But there is a night and day difference between the two companies atm with regards to driver optimizations....


So you do think flickering equals cheating? And yes, ATI was caught with lower IQ in 3DMark's game test 4.

The way you are sounding, it isn't so night and day at all. You treat any kind of driver rendering issue as a cheat. Flickering and not rendering fog == cheating in your book, correct? So that means ATI is cheating in both Combat Mission and Tomb Raider. At least according to you...................
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
So Valve isn't trying to license this engine...? I was under the impression that they were, but if you say they aren't then I guess that would make sense. They don't care about engine licensees at all, but if someone happens to be willing to work with what they have, they can license it; I guess if that's what's going on it makes sense. So Gabe isn't going after Sweeney and Carmack.

Eh?

Ben, when they're talking about a different path they are not actually talking about something that is generic to the engine, but basically a rewrite of all the shader code that is utilised. This will only be of benefit to other licensees if they use the same shaders that Valve do, which is highly unlikely since they are specific to Valve's game (and essentially 'outside' of the engine). The issue is that smaller developers won't necessarily have the time to write two sets of shaders for their games and will probably stick to the base HLSL route when writing theirs, which will show a large performance difference compared to the path utilised by the FX, which has completely rewritten shaders optimised specifically for the FX.

IMO, the message Valve was trying to convey was twofold:
To consumers - If you have an FX board and start downloading addons via Steam, don't be surprised if they perform much worse, as they are most likely to be straight, high-precision DX9.
To Developers - if you start using our engine and see a massive performance disparity between ATI and GeForce FX, don't look to our engine, look to NVIDIA.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Dave-

Ben, when they're talking about a different path they are not actually talking about something that is generic to the engine, but basically a rewrite of all the shader code that is utilised. This will only be of benefit to other licensees if they use the same shaders that Valve do, which is highly unlikely since they are specific to Valve's game (and essentially 'outside' of the engine).

You were talking about there being no good reason for Valve to take the time to make the most optimized path for DX9 to run on FX boards that they could. The only way that is truly viable is if they have no intention of trying to license this engine the way the D3 and U2 engines are licensed. It would be one thing if all hardware was fast on it already; it's something else entirely when you have major performance issues with the market leader of gaming hardware.

The issue is that smaller developers won't necessarily have the time to write two sets of shaders for their games and will probably stick to the base HLSL route when writing theirs, which will show a large performance difference compared to the path utilised by the FX, which has completely rewritten shaders optimised specifically for the FX.

And the FX5200 has outsold all of the boards that perform well using that engine combined. If Core attempted to license the TRAoD engine most dev houses wouldn't even consider it a vague option; it is far too slow on everything while lacking anything close to the visuals that the major licensed engines have. With the HL2 engine not being fully optimized for the FX series, Valve has built an engine that will only work well on hardware from a company that has 20% market share in total, and even the majority of that won't run the engine properly. When the other major engines perform well on everyone's hardware, that isn't good business sense.

To Developers - if you start using our engine and see a massive performance disparity between ATI and GeForce FX, don't look to our engine, look to NVIDIA.

Because it is good to avoid optimizations that we know can help on the performance front but just don't feel are worth the time to implement?

JB-

Given that a majority of these "bugs" seem to only affect games/apps used in benchmarks, and with their documented 3DMark cheats, then yeap.

Like Halo months before its release? Or bugs that showed up in TRAoD with drivers that were available before it had a benchmark? Or Homeworld2, where there isn't a built-in bench? The bugs end up fixed and performance is the same or better. If they are spending their time coding cheats when they can manage to get the same or better performance rendering the image the right way, they are throwing away a lot of money.

Remember how each ATI driver was scrubbed for a time after the Q3 fiasco?

Which I wasn't a supporter of. If you want to scrub for crash/hard-lock bugs, go for it.

Cat 3.8 fixed that issue and my son and I have been enjoying playing MP for the last 4 days with no lock ups on his 8500.

That is something I consider a major driver bug, a position I have held for a very long time. I don't shut my primary rig down; I like to measure uptime in months.

However that little SOB is much better at the Banshee than I am...oh well. BTW, there is a command line switch to get AA to work in Halo on the R9700. It disables the camouflage effect.

Not ff, is it? If all you lose is the camo effect, that wouldn't be too bad (what is the command line switch?).

Yes, you say you lumped all of the 3DMark stuff into one, but that's where most of the cheats come in.

And they cheated in 3DMark2K3; do you see me quoting performance numbers from it?

Then what about the total lack of trilinear filtering in any D3D app/game? That's not a bug. That's there only to increase FPS scores and is a hack/cheat.

I think it is a hack, not a cheat.

And I know you're going to say that ATI's AF is the same thing. No, it's not. It was a design trade-off ATI's engineers made during the time they were working on the R300 core. We have known about this limitation from day one. The trilinear cheat is much different.

How is it different? Because it is not possible to work around it?

Sorry, as BFG said, that's your issue. Just because you do not believe him does not make it false.

He offered no proof. If I say ATi is cheating on everything, committed murder, stole the designs for the R3x0 and planted a biological weapon at MS's headquarters to get their way with DX and the XB2 contract, would it be your issue if you didn't believe it? He made the accusation, he needs to offer proof.

However he is the one that's in the best position to make that call. Not you nor I.

Show me Carmack or Sweeney backing him up.

Besides, did you know that Activision was paid $4 million by NV for Doom3 rights?

We all knew that the NV3X was going to be fast at Doom3; it's obvious from their architecture and the fact that Carmack has been telling everyone openly exactly what the game was going to need to run fast. The difference between John and Gabe is that John was on record extensively long before any promotional money was handed over.

Again, that remains to be seen, and with no public drivers we can take that claim with a truckload of salt. So far AT is not a reliable source to do IQ tests anymore.

Anymore? You want me to dig up quotes of Anand saying that ATi's performance-mode AF is nigh identical to their quality mode? This site has never exhibited the ability to tell the difference between bilinear and trilinear in any reasonable fashion.

NV has never been the leader which just lost the lead (yes, they have been in the leader role, but not the just-lost-the-lead role).

The original Radeon and the R8500 don't jibe with that statement.

They lost the Xbox2 contract to ATI and suffered massive losses due to the delay of the FX line.

They have turned more profit than ATi since the launch of the R9700. Their market share is also up since that point (not at ATi's expense, but it is up).

Then when the FX line shipped it showed massive issues with DX9 code.

With PS 2.0; there is more to DX9 than pixel shaders.

NV went on a campaign to discredit 3DMark, dropped out of the 3DMark beta program,

They dropped out of the beta program prior to the launch of the NV30 and prior to the launch of 3DMark2K3.

Then you have NV saying it over-optimized (not driver/compiler bugs) and admitting to overzealous optimizations.

They got nailed and admitted it. Compare that to Aquamark, where some bugs show up in one of the beta builds and then vanish in another; is that supposed to be the same thing?

All of a sudden drivers removing the option to do trilinear in D3D (which breaks their own optimization guidelines)

Isn't it against ATi's guidelines to post drivers that reboot or hardlock systems, or is that considered a feature? :)

the fact that custom time demos show no difference on ATI scores yet large drops on NV scores

I tried locating it, where is it on FS? I can't comment on that as I have no idea what you are talking about.

That tells me (my theory) that NV is still scared of being in 2nd place to ATI

Right when they manage to actually get into first place for the first time (they finally passed Intel based on the last Mercury numbers I saw). In terms of the market, they are still far ahead of ATi. In terms of performance they have trailed ATi multiple times in the last few years.

Your theory seems to be based off new driver bugs that only crop up in 3D benchmark games/applications, from a driver team that is gold

And oddly enough those issues have cleared up. You may think writing a compiler is easy, I sure as hell don't.

Then all of Valve's claims.

Claims, that's what they are.

has many remarkable timings of driver releases which disable anti-cheat programs the moment they occur

I don't think they are good enough to do it.

that has a major developer (Valve) conspiring to hurt NV, which in turn will hurt Valve's sales as NV fans have already said they won't buy HL2 due to the poor performance on FX cards

One that was paid millions of dollars, refused to allow beta drivers to bench a beta game, and stated they did not optimize as much as they could have (these are all based on their own statements, no conspiracy talk here).

Again, by Occam's razor, which you quoted, given two outcomes the less complicated one is more often correct. Pretty easy to see one is much less complex than the other.

Only if you consider that when something bad happens there must be a conspiracy somewhere. I've never seen a conspiracy theory that I put any faith in, this one included.
 

jbirney

Member
Jul 24, 2000
188
0
0
I think it is a hack, not a cheat.

Sorry, but I don't agree. They had trilinear before. They removed it to inflate FPS scores. That, in my book, is a cheat.

How is it different? Because it is not possible to work around it?

It's a hardware limitation that was a design trade-off and has been inherent since AT's first preview of the R300 back in August of last year. The trilinear thing was working just fine on the FX up until the last set of drivers. A hardware limitation is not even in the same ballpark as this "hack" as you call it.

He made the accusation, he needs to offer proof.

You said he is more or less lying and you have no proof to back it up. Like I said, he has more access to drivers. He knows more about what's going on than you or I do. He is in the best position to make the call. Was he correct? I don't know. But given NV's craptacular past 5 months of cheats/hacks, sorry, I am going to have to give a bit more consideration to the idea that Gabe was probably closer than you were.


We all knew that the NV3X was going to be fast at Doom3; it's obvious from their architecture and the fact that Carmack has been telling everyone openly exactly what the game was going to need to run fast. The difference between John and Gabe is that John was on record extensively long before any promotional money was handed over.

Do you know that for a fact? Why did JohnC feel he had to add a custom code path for the NV30? Why not just use the common ARB path that ATI uses? Yeap, we know the 3X is faster as it's running at lower precision.


I tried locating it, where is it on FS? I can't comment on that as I have no idea what you are talking about.
http://firingsquad.gamers.com/hardw...erf/default.asp

It's all about the custom demos

If you recall our MSI GeForce FX5900-TD128 and e-GeForce FX 5900 Ultra reviews, you'll remember that we used custom demos for games like Quake 3 and Splinter Cell for benchmarking purposes. By using custom demos, we ensured that our test results were indicative of real game play performance, rather than the performance of a stock timedemo. In what came as a huge surprise to us, ATI's RADEON cards came out on top in Quake 3 once custom timedemos were used, completely contradicting the results we obtained with Quake 3's stock timedemos a month earlier on our GeForce FX 5900 Ultra preview. If anything, this has empowered us to implement custom timedemos in all of our articles going forward.

Digit-Life reported a similar finding, as did Guru3D.
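
(As an aside, this is roughly what sites mean by a "custom timedemo" run: record your own demo in-game with /record, replay it with timedemo enabled, and read the fps line from the console log. A minimal Python sketch for illustration only - the executable path, demo name and exact log-line format are assumptions, not anything FiringSquad published:)

import re
import subprocess

# Replay a previously recorded demo ("mydemo") with timedemo mode and console
# logging enabled, then pull the reported fps out of the log. Paths, the demo
# name and the log format are placeholder assumptions.
subprocess.run([
    "quake3.exe",
    "+set", "timedemo", "1",
    "+set", "logfile", "1",
    "+demo", "mydemo",
])

with open("baseq3/qconsole.log") as log:
    for line in log:
        match = re.search(r"([\d.]+)\s+fps", line)
        if match:
            print("custom timedemo result:", match.group(1), "fps")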


And oddly enough those issues have cleared up. You may think writing a compiler is easy, I sure as hell don't.

Oddly enough we don't have public drivers or any analysis done by a credible site to back that up :)


Claims, that's what they are.

If you look at their slides, all of their claims have been documented by other sites, except for the video screenshot stuff.


I don't think they are good enough to do it.

According to Unwinder, the guy that writes the tool, that's what he saw. And I put more stock in him atm.

One that was paid millions of dollars, refused to allow beta drivers to bench a beta game, and stated they did not optimize as much as they could have (these are all based on their own statements, no conspiracy talk here).

You mean those same beta drivers that showed image corruption in 3DGPU's AQ3 test? Sorry, but DriverHeaven and other sites looked at those drivers and found many issues. Again, why allow non-public drivers to be used in benchmarks? This is no different than what NV did with those Doom3 tests. Except that HL2 was closer to being done.

Only if you consider that when something bad happens there must be a conspiracy somewhere. I've never seen a conspiracy theory that I put any faith in, this one included.

That's the point. If it was just ONE thing then it's no big deal. It's not one thing. It's the same thing over and over: lower IQ for faster speed. No thanks.


I know we will always disagree on things. You, from your many posts here, have gone out of your way to again make excuses for NV when they have clearly been caught time and time again lowering IQ to gain speed in BENCHMARK GAMES only. I give up. But I will leave this thread with this info so you can enjoy AA in HALO:


Not ff, is it? If all you lose is the camo effect, that wouldn't be too bad (what is the command line switch?).

The forced pixel shader from the tweaker doesn't work with the final version of the game (it results in a corrupted-files error message), but there's a simple workaround explained in the readme file of the game. First, my tests show that there's no performance improvement from forcing 1.4, so use 1.1 instead. To do so, just use a shortcut to the game executable and add " -use11" at the end of the target line. It should look like this: "c:\path\to\halo.exe -use11".

Also, I see some people benchmarking with FSAA enabled. Halo officially doesn't support FSAA, but there's a workaround for this that will in turn disable the camouflage effect and maybe one or two other little things. I personally prefer having FSAA as it's an effect present all the time and not only a little 20 seconds here and there...

To enable FSAA, run a "halo.exe -timedemo" shortcut; you will get a timedemo.txt in your Halo install directory. Open this and you'll see a line like this: "2100MHz, 1024MB, 128M ATI Radeon 9700 PRO (DeviceID=0x4e44) Driver=6.14.10.6378 Shader=1.4"

Find your DeviceID, in this case 0x4e44, and look for your specific device number in your config.txt in the Halo install directory. Under your DeviceID, add the following lines:

DisableRenderTargets
Break

It should look like this:

0x4e44 = "Radeon 9700 PRO"
DisableRenderTargets
Break
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
They cheated in 3DMark2K3, the reason I didn't address in particular the rest of the accusations is that you have a bunch of them lumped together in a take it or leave it fashion.
You did address them by singling out only the static clip planes but claiming the rest were bugs. So let me ask you again: do you claim shader substitution - along with the rest of Futuremark's findings - are bugs or cheats?

You claim nV uses shader replacement all over the place, that I take issue with.
Well they did it in 3DMark so there's a high chance they're doing it in most games, especially since they went to great lengths to hide it. Also we're seeing all manner of cheats all over the place, in a wide range of popular games that just happen to be benchmarked.

You want to bash nVidia on that one you better bash ATi too.
I openly admitted that ATi had done something similar (they didn't substitute, they optimised but relied on application detection for it to work).

Then I also pointed out that they freely admitted it and removed it. nVidia never admitted anything and released drivers that stopped anyone else from ever finding out whether they were cheating.

Like I said before, two different attitudes - one not the attitude of a vendor who is afflicted by genuine bugs and has nothing to hide.

You familiar with Linux at all?
Yes, I'm also familiar with the price of fish.

Now if nVidia had always been open with their driver architecture I would say that this line of thought was a stretch, but they have always been very protective of their drivers much to the dismay of the Linux community.
Right, and the "protection" just happened to come straight after the 3DMark cheats were exposed and Unwinder's publicily available RivaTuner defeated nVidia's cheating and you continue to insist that this action was nVidia's plan all along? That's really reaching and it's about as likely as the cheats being genuine bugs (ie, they just happen to occur when the game's string is "UT2003.exe" and nothing else).

6:1 vs 4:1 compression, of course memory usage changed.
You're quite right in this respect and I forgot about them when I made the post.

However I read your interpolation explanation but I still fail to see what possible gain a vendor has from storing 32 bits' worth of data and then, after using up the required resources to do so, suddenly choosing to do a crappy interpolation which AFAICT gains them absolutely nothing except horrendous image quality. Why do something like that? And why do it for three generations' worth of hardware?

No, I'm convinced there's something more going on here.
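
(For anyone trying to follow the compression tangent, here is a rough Python sketch of what decoding a DXT1/S3TC block involves. It is purely illustrative - not any vendor's actual decompression path - but it shows where the endpoint expansion and interpolation steps sit, which is the precision question being argued about:)

import struct

def expand565(c):
    # Unpack a 16-bit RGB565 endpoint into 8-bit-per-channel RGB.
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_dxt1_block(block8):
    # One 4x4 block is 8 bytes: two RGB565 endpoints plus 32 bits of 2-bit
    # indices (hence roughly 6:1 against 24-bit RGB; DXT3 adds 8 more bytes of
    # explicit alpha, hence 4:1). The two middle palette entries are
    # interpolated from the endpoints - the precision used for that expansion
    # and interpolation is where the visible quality differences come from.
    c0, c1, indices = struct.unpack("<HHI", block8)
    e0, e1 = expand565(c0), expand565(c1)
    if c0 > c1:
        palette = [
            e0,
            e1,
            tuple((2 * a + b) // 3 for a, b in zip(e0, e1)),  # 2/3 c0 + 1/3 c1
            tuple((a + 2 * b) // 3 for a, b in zip(e0, e1)),  # 1/3 c0 + 2/3 c1
        ]
    else:
        palette = [
            e0,
            e1,
            tuple((a + b) // 2 for a, b in zip(e0, e1)),      # midpoint
            (0, 0, 0),                                        # "transparent" colour slot
        ]
    return [palette[(indices >> (2 * i)) & 0x3] for i in range(16)]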

On one side, on the other side the other vendor didn't support S3TC 3/4/5 which sucked ass compared to theirs.
I wasn't aware of this on the R100 but until I do some more research I'll take your word for it. However you are missing the big picture which is simply the fact that nVidia needed to use DXT3 in order to achieve comparable image quality to ATi's DXT1. nVidia's DXT1 was so bad that it was unusable but the benchmark graphs with it enabled sure didn't tell anyone that.

Was it more unfair than the V5's inability to do trilinear filtering?
Yes, because the difference between approximated trilinear and true trilinear is nowhere near as big or as blatant as the difference between nVidia's DXT1 versus ATi's DXT1.

Was it unfair to pit the V5 against the R100 and NV1X when the V5 had superior FSAA?
No, as long as clear image quality screenshots were posted. But most sites had the "run-and-gun" approach and all people ever saw was the framerate graphs.

What about the V3 not being able to run the highest quality textures?
Yes, that was definitely an issue, as was 16 bit colour. Fortunately the Voodoo3 had been dropped from the benchmarking industry by that time.

It was a factor that should have been and was looked at by numerous sites.
You're quite right and I completely agree with you that it extended to all vendors, not just nVidia. However nVidia's DXT1 issue was the most blatant one because it wasn't even in the same ballpark as the image quality offered by the competitors.

It was as much a cheat for nVidia as the R300 core boards not supporting PS 3.0 is.
No, it's about as similar as a car is to chocolate. Not supporting a feature by design is a totally different problem to supporting a common feature in a faulty fashion for three generations of hardware.

I have two games that it impacted, one of them I fixed myself.
So explain how it worked on your Ti4600 then. Also I guess the Ti4600 must be "cheating" for not supporting PS 2.0, right? And the NV1x is "cheating" for not supporting any version of PS, right?

It's as much a cheat as what nV has been experiencing in most of the recent benches.
No it isn't.

The difference between a hack and a cheat is they aren't rendering anything 'wrong' per se, they are simply reducing the workload to get a comparable (although always with lower IQ) effect.
Except the definition of a cheat is not simply rendering inaccuracy. The issue in question requires both application detection and prior knowledge in the driver for nVidia to be able to pull it off and for that reason, it's a cheat.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
Unbelievable to me that this much effort goes into debating differences in video cards that people who didn't know what to look for wouldn't likely see......

 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Rollo
Unbelievable to me that this much effort goes into debating differences in video cards that people who didn't know what to look for wouldn't likely see......

Rollo, you're one of the number one debaters and instigators here in Video (along with myself at times), so saying something as banal as this is both pointless and hypocritical.

Please keep the comments to constructive debate and not accusations or personal attacks. Not necessarily you jiffy, but do be more polite. :)

AnandTech Moderator
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
JB-

Sorry, but I don't agree. They had trilinear before. They removed it to inflate FPS scores. That, in my book, is a cheat.

So ATi's hardware engineers don't know how to do true anisotropic? Why? Is it too complicated for them? What about PVR's box filter? Are you now switching your former position and stating that Kristof and crew are also into cheating? Yes, I'm trying to put you on the spot here, since you either now have to say that PowerVR is also cheating and has been for years or acknowledge a difference between them. I did not slam PVR back then for cheating. My position has remained constant; yours is the one that had to change.

A hardware limitation is not even in the same ballpark as this "hack" as you call it.

What about PVR?

You said he is more or less lying and you have no proof to back it up.

And he ran off at the mouth for some time about how they bent over backwards for the NV3X architecture while they didn't even compile it for optimal performance on their default code path. Do I think he is in any way, shape or form impartial? Not even close.

Do you know that for a fact?

Carmack's .plan updates are public record, everyone knew what he was doing with Doom3.

Why did JohnC feel he had to add a custom code path for the NV30?

Because it mapped to what he wanted to do in Doom3. Carmack's requests for exactly what he wanted in terms of features were things that were public knowledge back in the early NV2X days. nVidia gave him custom extensions that allowed the specific features he wanted to perform well on their boards. When ATi releases some of their own extensions that improve performance and Carmack doesn't use them then you can certainly make the argument he is as biased as Newell.

Why not just use the common ARB path that ATI uses?

ATi is running in reduced quality versus nVidia if you want to take that discussion to the logical conclusion. Carmack went on record years ago stating he needed 64bit color for his next engine; ATi didn't offer it, nVidia did. He has gone on record stating there is no discernible quality difference going beyond FP16, so why would you? If you state that it is better to run at higher precision no matter what (which is a valid line, to be sure) then you must acknowledge that nVidia is running at higher precision than ATi using ARB2. And in comparison to HL2, I'm not suggesting that Valve use anything nV specific, but MS's provided product.

You mean those same beta drivers that showed image corruption in 3DGPU's AQ3 test? Sorry, but DriverHeaven and other sites looked at those drivers and found many issues.

No, I meant the 52.14s.

This is no different than what NV did with those Doom3 tests. Except that HL2 was closer to being done.

If id had forced reviewers to use the most recently released drivers, then the 5200Ultra and 9800Pro would have been quite close in the bench.

You, from your many posts here, have gone out of your way to again make excuses for NV when they have clearly been caught time and time again lowering IQ to gain speed in BENCHMARK GAMES only.

If you will state that PowerVR has been cheating for years and ATi is cheating in TRAoD, then I would say you have a valid argument. Otherwise, you're shifting your standards to accommodate how you want to see things.

BTW- Thanks for the Halo info :)

BFG-

So let me ask you again: do you claim shader substitution - along with the rest of Futuremark's findings - are bugs or cheats?

They were cheating in 3DMark2K3; I've said it numerous times.

Well they did it in 3DMark so there's a high chance they're doing it in most games, especially since they went to great lengths to hide it.

Why do you make that leap?

Also we're seeing all manner of cheats all over the place, in a wide range of popular games that just happen to be benchmarked.

List some. List the exact title, exactly what the cheat is and what it causes. I'll run through the bugs in the Cats and see how many comparable issues I can come up with for you to put against your list, so you know in advance. You list something that is without a doubt a cheat and I'll gladly say they cheated. So far I've seen a bunch of jump-to-conclusions BS that has left sites backpedaling or ignoring it when the issues are fixed and performance is up even more.

nVidia never admitted anything

That would be a very d@mning point if it were true. They did admit it, and they followed that up by posting their new guidelines for drivers (I'm not saying they follow these, but it was posted when they admitted to the cheating).

Right, and the "protection" just happened to come straight after the 3DMark cheats were exposed and Unwinder's publicily available RivaTuner defeated nVidia's cheating and you continue to insist that this action was nVidia's plan all along?

You honestly think they could come up with a scheme that the community couldn't hack through inside of a few days? If they could, the entertainment industry would gladly pay nVidia's driver team millions and millions of dollars to sort out their piracy issue.

However you are missing the big picture which is simply the fact that nVidia needed to use DXT3 in order to achieve comparable image quality to ATi's DXT1.

Not quite. nVidia's DXTC3 was superior to ATi's DXTC1 in terms of image quality. Don't take my word for it, Oldfart already mentioned it in this thread.

However I read your interpolation explanation but I still fail to see what possible gain a vendor has from storing 32 bits' worth of data and then, after using up the required resources to do so, suddenly choosing to do a crappy interpolation which AFAICT gains them absolutely nothing except horrendous image quality. Why do something like that? And why do it for three generations' worth of hardware?

That was what S3 did, and they created the standard. You said it yourself, it was only a slight performance decrease using '3' despite using 32bit interpolation and having lower levels of compression. They followed the creator of the standard. They didn't exceed it. Why keep it the same? How many games does it impact? Would they have been better served reducing the amount of time they spent on another aspect of their hardware to solve an issue that affects a very limited number of games, one of them never in an official capacity? They knew they had an issue; they implemented a switch in the registry to force the use of S3TC3 for those that wanted to (unfortunately that would not work for UT as the textures were all precompressed, which was not the case with the other titles that compressed at run time).

nVidia's DXT1 was so bad that it was unusable but the benchmark graphs with it enabled sure didn't tell anyone that.

Same with S3's, why aren't you bashing them about it?

No, as long as clear image quality screenshots were posted. But most sites had the "run-and-gun" approach and all people ever saw was the framerate graphs.

That sure as hell wasn't the case at the Basement.

However nVidia's DXT1 issue was the most blatant one because it wasn't even in the same ballpark as the image quality offered by the competitors.

Again, what about S3?

Not supporting a feature by design is a totally different problem to supporting a common feature in a faulty fashion for three generations of hardware.

No, no, no. S3 created the feature, nVidia followed their implementation to the letter including all formats. Your issue is that they did not exceed the specification. Using that same logic, the R9700Pro has faulty PS2.0 support. They can't support 1,000 instructions in a single PS, nor can they run FP32. If your line is that following a spec exactly is wrong, then ATi is wrong with their PS 2.0 support.

So explain how it worked on your Ti4600 then. Also I guess the Ti4600 must be "cheating" for not supporting PS 2.0, right? And the NV1x is "cheating" for not supporting any version of PS, right?

If I applied your logic then absolutely. I don't apply your logic however.

The issue in question requires both application detection and prior knowledge in the driver for nVidia to be able to pull it off and for that reason, it's a cheat.

The brilinear? They aren't using app detection for that, they do it for all D3D games. As far as using app detection for optimizations, PowerVR does this an incredible amount. Are they cheating in d@mn near every game you have heard of?
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
You were talking about there being no good reason for Valve to take the time to make the most optimized path for DX9 to run on FX boards that they could. The only way that is truly viable is if they have no intention of trying to license this engine the way the D3 and U2 engines are licensed. It would be one thing if all hardware was fast on it already; it's something else entirely when you have major performance issues with the market leader of gaming hardware.

What did you not get? This is not an "engine" issue but a shader issue, many of which will be specific to the title in question. Epic face the same issue if other licensees want to use their own shaders in their game, and id as well if they want to use something other than the unified lighting model.

And the FX5200 has outsold all of the boards that perform well using that engine combined. If Core attempted to license the TRAoD engine most dev houses wouldn't even consider it a vague option; it is far too slow on everything while lacking anything close to the visuals that the major licensed engines have. With the HL2 engine not being fully optimized for the FX series, Valve has built an engine that will only work well on hardware from a company that has 20% market share in total, and even the majority of that won't run the engine properly. When the other major engines perform well on everyone's hardware, that isn't good business sense.

Again, the issue is not engine specific but shader specific. As for the FX5200, HL2 predominantly treats it as DX8 so far, and TR:AOD treats it as fully DX8.

So ATi's hardware engineers don't know how to do true anisotropic?

Where is this definition of "True" anisotropic filtering Ben? You keep stating it as though there must be one...
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
This is not an "engine" issue but a shader issue, many of which will be specific to the title in question.

To Developers - if you start using our engine and see a massive performance disparity between ATI and GeForce FX, don't look to our engine, look to NVIDIA.

This is of course the reason why Valve did not use the optimized compiler and demanded that only a certain driver set be used, including drivers they had not tested yet. Because they wanted to send developers the message that nVidia has horrendous shader performance, as long as you set the test up just right?

By your account Valve was trying to send a message, you have expanded on that and said the message was based on their shaders, and you have already stated that there would be no point in them optimizing their shader code for the pure DX9 path. Based on everything you have collectively stated about the bench, you seem to be supporting just what I was saying.

Epic face the same issue if other licensees want to use their own shaders in their game, and id as well if they want to use something other than the unified lighting model.

Why would anyone want to use the D3 engine and ignore the ULM? Besides that though, I don't think Epic is going to tell their licensees not to use MS-provided optimizations, nor do I think they will insist that their licensees use older drivers because the new ones have some bugs in them. If, as you stated, Valve was trying to send a message, it seems rather convenient that it ended up being so much in favor of the company that was paying them massive promotional fees. If Sweeney came out with a UT2K4 bench that was compiled in such a way that it couldn't map to the R300 core boards properly, what would you think about that?

Where is this definition of "True" anisotropic filtering Ben?

Non adaptive (in terms of shutting off when they deem you won't notice). Looking at nV's current brilinear, they are obviously sampling from multiple mipmaps, sometimes. It is a hack and not true trilinear. The FX doesn't have true AF either (ignoring current drivers and looking at the older ones; the situation is now obviously worse).
 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
Not quite. nVidia's DXTC3 was superior to ATi's DXTC1 in terms of image quality. Don't take my word for it, Oldfart already mentioned it in this thread
Not really. I said DXTC3 looks better than DXTC1 because it uses a lower level of compression. It is a different spec. Why should anyone be impressed by that? It's like saying Ford's 4-cylinder car is faster than Chevy's 4-cylinder car, but wait! Substitute a V6 in Chevy's car and it is faster than Ford's 4-cylinder!
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
BTW GenX, the fog issue is still there in the 52 series in RTCW

http://www.digit-life.com/articles2/radeon/asus-ati.html#p3

So now we know they didn't fix anything and the framerate is still bloated


I read that part and I didn't notice them saying anything except that there are "bugs". BTW, I noticed that in RightMark ATI has an issue. Are they cheating?

And this is what your article said about Aquamark.

"Some testers mentioned flaws in quality in this test with the first versions of NVIDIA's drivers 5x.xx. This time the rendering quality looks equal. I do not except any optimizations or changes in the driver operation, but from the user's point of view there is no difference in quality.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
Why would anyone want to use the D3 engine and ignore the ULM?

That's a shader integral to the engine - I'm not saying it would be ignored, I'm saying that the optimisations for that shader would transport from title to title, as this is really the major selling point of the engine (even though its application will be limited).

If, as you stated, Valve was trying to send a message, it seems rather convenient that it ended up being so much in favor of the company that was paying them massive promotional fees. If Sweeney came out with a UT2K4 bench that was compiled in such a way that it couldn't map to the R300 core boards properly, what would you think about that?

Ben, up until recently there was only one way of compiling HLSL. However, I have had confirmation of what I said earlier - the way that MS have implemented the GF compiler would essentially mean that two sets of HLSL would need to be supported, thus bringing up the issues of support for smaller developers.
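
(One way to read that: the same HLSL source gets compiled separately against the generic DX9 pixel-shader profile and against the GeForce FX-oriented profile, and both outputs then have to be carried and tested. A minimal sketch only - it assumes the DX9 SDK's fxc.exe is on the PATH, and "water.hlsl" with entry point "main" is a hypothetical stand-in, not anything from Valve's code:)

import subprocess

# Compile the same shader source for two different profiles; a licensee who
# wants the FX to run well would have to build, test and ship both outputs.
for profile in ("ps_2_0", "ps_2_a"):
    subprocess.run(
        ["fxc", "/T", profile, "/E", "main", "/Fo", f"water_{profile}.fxo", "water.hlsl"],
        check=True,  # raise if the compile fails
    )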

Non adaptive (in terms of shutting off when they deem you won't notice).

No, I asked for the specification, not your interpretation of the specification. Please point it out to me.

However, if I were to apply my interpretation of the specification to AF then I would say that "Adaptive" is very much part of the specification. Forget the angle rotations issue; this is an implementation-specific issue and not really what is meant by "adaptive". It is meant to represent only taking the number of samples that are required dependent on the angle of elevation up the X axis - this is why it is termed "Maximum Anisotropy". The floor in an FPS title usually requires a high degree of filtering because of the angle it's displayed at, however if you are looking at a wall that is parallel with the viewport there is hardly any need to take 8X or 16X samples here.
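
(To put rough numbers on that last point: "maximum anisotropy" is normally understood to mean the sample count adapts to how stretched each pixel's texture footprint is, up to the selected cap. A toy Python sketch, illustrative only and not any vendor's actual algorithm:)

import math

def aniso_samples(dudx, dvdx, dudy, dvdy, max_aniso=8):
    # Footprint lengths along the two screen axes, from the texture-coordinate
    # derivatives; the ratio of major to minor axis says how anisotropic the
    # footprint is, and the sample count is capped at max_aniso.
    len_x = math.hypot(dudx, dvdx)
    len_y = math.hypot(dudy, dvdy)
    major = max(len_x, len_y)
    minor = max(min(len_x, len_y), 1e-9)
    return max(1, min(max_aniso, round(major / minor)))

# A floor seen at a grazing angle: long, thin footprint -> many samples.
print(aniso_samples(0.05, 0.0, 0.0, 0.9))   # prints 8 (clamped to the cap)
# A wall parallel to the viewport: square footprint -> a single sample.
print(aniso_samples(0.1, 0.0, 0.0, 0.1))    # prints 1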
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
^

Sorry about the earlier post - I guess I overstepped my bounds. Just trying to steer the thread back in the direction of video and not pointless throwaway lines (although I seem to have succumbed to my own ambition ;) ).

I'll try to keep it more civil and constructive.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Oldfart-

However you are missing the big picture which is simply the fact that nVidia needed to use DXT3 in order to achieve comparable image quality to ATi's DXT1.

Not quite. nVidia's DXTC3 was superior to ATi's DXTC1 in terms of image quality.

Not really. I said DXTC3 looks better than DXTC1 because it uses a lower level of compression.

How is it 'not really'? If I were to say that ATi needed to use 32bit color to achieve comparable image quality to 3dfx's 16bit color back in the Quake3 days how would you respond to that? ATi's 32bit color was superior in IQ to 3dfx's 16bit, nVidia's DXTC3 was superior in image quality to ATi's DXTC1, we have already covered that it is doing different things. Should we ignore ATi's DXTC1 because they did not follow the specifications that the creator of the standard used? Of course not.

Dave-

Ben, up until recently there was only one way of compiling HLSL. However, I have had confirmation of what I said earlier - the way that MS have implemented the GF compiler would essentially mean that two sets of HLSL would need to be supported, thus bringing up the issues of support for smaller developers.

And looking at it from a purely business standpoint, you would want to support the larger market first.

No, I asked for the specification, not your interpretation of the specification. Please point it out to me.

What I'll point out to you is what you asked-

Where is this definition of "True" anisotropic filtering Ben?

You said nothing about the specification for it; you asked for the definition. I could have posted the actual non-square-sampling definition, and by that definition the adaptive methods are not fully anisotropic either.

The floor in an FPS title usually requires a high degree of filtering because of the angle it's displayed at, however if you are looking at a wall that is parallel with the viewport there is hardly any need to take 8X or 16X samples here.

And then you turn 15 degrees. Adaptive may sound nice in theory, and it may even look good to theorists who don't spend much time gaming too.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Originally posted by: Genx87
And yes, ATI was caught with lower IQ in 3DMark's game test 4.
I must have missed that. Link, please?

And this is what your article said about Aquamark.

"Some testers mentioned flaws in quality in this test with the first versions of NVIDIA's drivers 5x.xx. This time the rendering quality looks equal. I do not except any optimizations or changes in the driver operation, but from the user's point of view there is no difference in quality.
If you actually look at those AM3 screens D-L provided, you'll see nV shows multiple instances of AF inferior to ATi (particularly in the distance, but most noticeable on the shot with the green buggy in the foreground), and D-L was remiss not to note them.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
ATI implicated in Game Test 4

It doesn't surprise me that this has been drowned out by the fanATIcs.

If you actually look at those AM3 screens D-L provided, you'll see nV shows multiple instances of AF inferior to ATi (particularly in the distance, but most noticeable on the shot with the green buggy in the foreground), and D-L was remiss not to note them.

So you are saying that in the link you provided, the reviewers are wrong? Let me get this correct...............................

And if you really think ATI's AF is better than Nvidia's, oh boy.....................

Key here for you fanATIcs: 90, 0, 45, 22.5.

 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
Ben
If I were to say that ATi needed to use 32bit color to achieve comparable image quality to 3dfx's 16bit color back in the Quake3 days how would you respond to that?
I'd say ATi's 16 bit color implementation was flawed and they should not have to run in 32 bit color to match 3DFX's 16 bit color. Just like nVidia's flawed DXTC1. They shouldn't need to run in a higher mode just to work around a poor design.
ATi's 32bit color was superior in IQ to 3dfx's 16bit, nVidia's DXTC3 was superior in image quality to ATi's DXTC1,
That's right. But you are glossing over the actual issue. nVidia's DXTC1 was inferior to ATi's DXTC1. This is the spec used in the game, not DXTC3. How about comparing apples to apples instead of apples to oranges? I've never heard of any game using DXTC3. Every one I have seen uses DXTC1.
Should we ignore ATi's DXTC1 because they did not follow the specifications that the creator of the standard used? Of course not
No, we should not. ATi had a superior implementation of DXTC1. nVidia's looked like cr@p. So much so that people had to come up with a hack to force DXTC3 instead of DXTC1. Again, it doesn't even work in all games.

You take a well-known nVidia deficiency and spin it into a benefit, and at the same time criticize ATi, who did not have the issue. This post is a perfect example of why you are criticized for being so heavily biased towards nVidia. I don't know if ATi followed the spec or not, but it is obvious their implementation of it blows nVidia's out of the water. If they went above and beyond the actual spec to make it look good instead of doing the minimum, good for them.

BTW, I did a Google search for DXTC1 and DXTC3 specs and could not find anything good on it. A link to what the DXTC1 spec actually is would be interesting to read.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I'd say ATi's 16 bit color implementation was flawed and they should not have to run in 32 bit color to match 3DFX's 16 bit color. Just like nVidia's flawed DXTC1. They shouldn't need to run in a higher mode just to work around a poor design.

So the R300 and FX should never be compared to the NV2X line in terms of AF then. Using your logic on this one.

That's right. But you are glossing over the actual issue. nVidia's DXTC1 was inferior to ATi's DXTC1.

Since it was running in OpenGL, it was actually S3TC. Why do you fail to criticize them? S3, the creator of the format, had the exact same issues as nVidia. 3dfx didn't support it at all (they didn't have a license for S3TC) and remapped it. So you have the creator of the standard doing it one way, the enthusiast market leader doing it that same way, another company changing the whole thing around and using a proprietary method, and then ATi's modified way of doing things. You can whine about one despite it being exactly the same as what the creator of the standard did.

How about comparing apples to apples instead of apples to oranges?

ATi's S3TC1 looked better; when have I said anything otherwise? BFG said that in order to make them look COMPARABLE you had to run nV in S3TC3; I said that wasn't quite true as nV's S3TC3 looked better than ATi's S3TC1. All of these are facts that you yourself have stated, but when I say them they somehow are biased nVidia propaganda.....

So much so that people had to come up with a hack to force DXTC3 instead of DXTC1.

What about S3?

You take a well-known nVidia deficiency and spin it into a benefit, and at the same time criticize ATi, who did not have the issue.

How the hell is it spinning to say something you yourself already stated? I did not criticize ATi about their S3TC1 implementation; I simply said it did not follow the spec. In fact, I quite clearly stated they EXCEEDED IT.

This post is a perfect example of why you are criticized for being so heavily biased towards nVidia. I don't know if ATi followed the spec or not, but it is obvious their implementation of it blows nVidia's out of the water.

No, actually it is a perfect example of how biased you are against them. How many times talking about this issue have I asked "What about S3?" Why can't you answer that? You seem to see everything as ATi vs nVidia.

I did a Google search for DXTC1 and DXTC3 specs and could not find anything good on it.

Look for S3TC. S3 created the standard; DXTC is merely a license of it. You can start off looking at Sharky's; he has an article about the issue from way back (when it first appeared on S3 hardware). S3 is a graphics card company, by the way, not sure if you have heard of them or not.
 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
I'm not biased at all. I've owned 5X more nVidia cards than ATi and will gladly buy another nVidia card if they have the best offering. I'm not putting them or anyone else in a 5-year penalty box because I don't like what they are doing at the moment or in the past. I keep my options open.

And yeah, I've heard of and have owned S3 cards, as well as Trident, Matrox, 3dfx, Rendition, ATi, and nVidia. Currently, in the 3 PCs in this house I have one each of 3dfx, nVidia, and ATi. What about S3? This thread was about ATi and nVidia since these are the cards being discussed and are the ones people actually buy. S3 had a special set of Q3 maps that ran with S3TC. I never had one of the Savage line of cards and don't know how they looked in Q3 and UT (Metal API). I never saw any discussion of them when this was a hot topic. If you say they were also 16-bit, I believe you. If so, they also would have the same issue as nV.

The question that started this whole S3TC/DXTC thing was not mine, but I'll quote it.
Here's a question for you - even today will you finally admit that nVidia's 16 bit S3TC/DXT1 sucks ass? Or will you instead continue to claim that because the number of bits wasn't strictly defined in the spec, nVidia have done nothing wrong?
I looked around SE and didn't find the spec. Is DXTC1 strictly defined as 16-bit, or not? Is 16-bit a minimum spec? Is it open to 32-bit? Did MS change anything from the S3TC spec vs DXTC?

If anyone has the actual spec, or a link to it, please post it.

My point is yeah, DXTC3 looks better than DXTC1, as well it should. It is 4:1 vs 6:1 compression. It is also a hack that doesn't work in all games, and is slower than DXTC1.

I'm done with the S3TC discussion. It was there in GF/2/3 cards and was corrected in GF4 and up (if it was not a "problem", why was it fixed in GF4 cards?). It's an old issue that no longer applies unless you still own a GF/2/3 card (actually I do).

My apologies to the forum members for hijacking this thread. It's not about S3TC/DXTC.
 

Tom

Lifer
Oct 9, 1999
13,293
1
76
Is my understanding correct that currently ATI cards support DX9 games essentially the way Microsoft intended DirectX to work as a universal standard, and that the latest Nvidia cards require specific drivers from Nvidia which sort of adapt their card to DX9 games in a way that is game-specific?

If that is the case, and please correct me if I'm wrong, an issue that would concern me is what happens when DX9 games become commonplace and, in the case of some titles that aren't big sellers, Nvidia decides not to write the special code to get the best performance out of their card for that specific game?

If that happens, wouldn't the method ATI uses for supporting DX9 in a more generic way work better for the games that aren't mainstream?