
Anand's 9800XT and FX5950 review, part 2

Page 6

oldfart

Lifer
Dec 2, 1999
10,207
0
0
From this we can tell that the default mode with ATI's DirectX9 parts (Radeon 9500 and above) will be the full precision, HLSL mode with full HDR effects enabled by default. It seems as though Valve are still working hard with NVIDIA to enable all the effects, and it would seem that they are trying to get a solution for HDR enabled, but this appears to be dependent on NVIDIA enabling some output buffers.

Also, from this it would appear that there are two "Mixed Modes": one for GeForce FX 5200 and 5600, which utilises PS1.4 as well as PS2.0, while the Mixed Mode for GeForce FX 5900 uses full DirectX9 shaders, albeit with the majority in partial precision mode. It would seem from Brian and Gary's comments that, should NVIDIA enable the buffer support, GeForce FX 5900 may eventually be able to support some form of HDR rendering, but the FX 5200 and 5600 will probably miss out on it by default due to their reliance on PS1.4 shaders.
Don't see any mention of any wrongdoing here. Don't see any "admission of rigging", nor do I see any rigging even implied. Where is the

"The quotes I pulled were from Valve. They rigged the test and admitted to it(although not in so many words)

Valve has admitted what they did. With the source code out, they couldn't be expected to hide it any longer"

info?

Sorry Ben, don't take this wrong, but I have never come across a more nVidia-biased person on the net than yourself. You are very knowledgeable, but because of that, your posts must be taken with a huge pile of salt.

BFG put it well
You certainly know your stuff but you also have an incredible nVidia bias to the point where they absolutely can't do anything wrong. nVidia's cards could explode and cause fiery deaths and you'd still find some way to justify it.
LOL :) I got a kick out of that one!

As far as conspiracy theories go, you have your fair share. ATI, Valve, and Microsoft are in on it, as is Anandtech for the mysteriously doctored screenshots in this article.



 

reever

Senior member
Oct 4, 2003
451
0
0
And ATi replaced the leaf shader. They hoped their loyalists wouldn't notice or would remain quiet, and for the most part they did.

And so did Nvidia. I see no problem in testing benchmarks with replaced shaders just as long as both cards are doing the same thing; it's when one of them is doing less work and the other one is doing the full work that it becomes a problem.

As far as AF, I've never seen an ATi card with any driver that did it properly; why don't you point one out?

Point out a driver in the Det 50 series that actually performs true trilinear filtering on all texture stages like the 44.03 drivers. You can also show me where you can activate the real filtering in the driver control panel or application without having to use an anti-detector script all of the time.

http://www.3dcenter.org/artikel/detonator_52.14/index3_e.php
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Don't see any mention of any wrongdoing here. Don't see any "admission of rigging", nor do I see any rigging even implied.

I guess if you asked about HL3 you wouldn't get any information on how they rigged up the public bench they released either. HDR was not part of the public bench, that was explicitly talked about on pretty much every site I can recall. I pulled the quote that was relevant.

Sorry Ben, don't take this wrong, but I have never come across a more nVidia-biased person on the net than yourself.

Really? Last board I spent money on was ATi. Could you please explain the logic in the above statement? Have you ever seen me attempt any defense of any 'wrongdoing' by nV that was factual? You ever see me try to say that they were doing anything but cheating with their clip planes in 3DM2K3? Would you like me to dig up quotes slamming the AF on the FX line? Would you care to dig up quotes of me criticizing the edge AA on any of their current parts? Why don't you show me statements I have made about the companies involved that are not fair, either undue praise for nV or undue criticism of ATi.

Your issue seems to be that I fail to heap undying praise unto the divine Essence that is ATi in all Their splendor. Sorry, but I call them like I see them.

As far as conspiracy theories go, you have your fair share. ATI, Valve, and Microsoft are in on it

Is this for real? ATi paid Valve millions of dollars. That is a point of fact in public record. That can not be argued. There is no debating it. It is done.

Microsoft released a compiler that was optimized for the GeForce FX. Again, that is public record. It can not be argued.

Valve had that compiler. They said they did. They said they used said compiler for the 'mixed mode' for the nV parts. The question then is why did they not use the optimized DX9 code for the nV parts? Did the fact that ATi paid them millions of dollars have something to do with it? I would tend to believe so. Did I at any point say or imply that ATi paid them money for that purpose? No.

Microsoft released a compiler for the explicit reason of improving the performance on FX cards, as I have already stated. How in the world can you read that as including MS in some conspiracy? They did something to HELP nVidia's parts; VALVE failed to use that in the public bench they released to show HL2's performance for the pure DX9 mode.

as is Anandtech for the mysteriously doctored screenshots in this article.

The screenshots were very clearly messed up, even some of the fanatics had an easy time figuring that one out. Look at THUGSROOK's screenshot.

Reever-

And so did Nvidia. I see no problem in testing benchmarks with replaced shaders just as long as both cards are doing the same thing; it's when one of them is doing less work and the other one is doing the full work that it becomes a problem.

I see a problem with it. You can easily end up with one-upmanship from the companies if that type of behaviour is allowed.

Point out a driver in the Det 50 series that actually performs true trilinear filtering on all texture stages like the 44.03 drivers. You can also show me where you can activate the real filtering in the driver control panel or application without having to use an anti-detector script all of the time.

I'm not a fan of the FX's texture filtering, never have been, and I've stated it before. I prefer proper AF; the NV2X core boards are still the best in that regard, hands down, to date in the consumer market (too bad they are so slow). What it comes down to between the current boards is what are acceptable levels of quality given their optimization trade-offs, at least that is what it should be. Instead you have a group of fanatics that are willing to embrace anything ATi does and denounce anything nVidia does. Show me a good discussion about the relative strengths of filtering optimizations on this board. There aren't any. The fanatics would turn it into a flame fest and try to deny several facts about the situation (as an example, refusing to believe that ATi's performance mode AF is bilinear filtering).
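For readers who want the mechanics behind the "trilinear on all texture stages" argument: under Direct3D 9 an application asks for filtering per stage through SetSamplerState, and the dispute is over whether drivers silently downgrade what was requested. A minimal sketch, assuming an already-created IDirect3DDevice9* (the function name and the eight-stage loop are just illustrative):

#include <d3d9.h>

// Request anisotropic minification with trilinear mip blending on every stage.
void RequestTrilinearAF(IDirect3DDevice9* device, DWORD maxAniso)
{
    for (DWORD stage = 0; stage < 8; ++stage)
    {
        device->SetSamplerState(stage, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
        device->SetSamplerState(stage, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
        // D3DTEXF_LINEAR on the mip filter is what makes the result trilinear;
        // quietly dropping stages 1-7 to D3DTEXF_POINT (i.e. bilinear) is the
        // kind of driver-side optimization being argued about in this thread.
        device->SetSamplerState(stage, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(stage, D3DSAMP_MAXANISOTROPY, maxAniso);
    }
}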
 

reever

Senior member
Oct 4, 2003
451
0
0
(as an example, refusing to believe that ATi's performance mode AF is bilinear filtering).

Kinda hard to do that when ATI explains all of its filtering methods in tech docs, but then again there have been no arguments on this board about filtering anyway, so that assumption goes out the window.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Ben, how would using the latest MS HLSL compiler for the DX9 modes have helped nV if it barely helped for the nV-specific mixed mode? With no PP hints or changing code to capitalize on the FX's strengths, what makes you think using the latest compiler would have helped nV in any significant way on the pure/full DX9 path? I really don't think it would have made a large difference.

I guess we'll see what difference a later compiler will make when Valve actually releases the HL2 benchmark, but I really doubt it will make a great one.
 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
Your issue seems to be that I fail to heap undying praise unto the divine Essence that is ATi in all Their splendor. Sorry, but I call them like I see them.
Not at all. It's the opposite. You nitpick and go to great lengths to point out every flaw from one company and ignore, justify, or put blame where it doesn't belong on the other. Very one-sided.

I call them like I see them as well. I don't have my head in the sand and ignore plain facts. I've owned far more nVidia cards than I have ATi (like 5:1). Except for one that had unacceptable 2D, I liked every one of them and will be happy to buy another one. When nVidia makes a better card, that is what I will buy. I'm not going to put them in some cockamamie "5 year penalty box". A lot can change in 6 months or 1 year, never mind 5. Your strong nVidia bias is well known, not just here, but on other forums as well.

Your accusation of Anandtech doctoring the shots in this review is a perfect example of ridiculous bias. Resorting to conspiracy theories is really reaching. What is their motive for doing so?

Here is the thread at Beyond3D discussing the article where Valve "admitted to rigging the test", in your words. Beyond3D is a site that has many people who know quite a bit about the 3D business. Not one single comment about Valve rigging anything, or doing anything improper. None. Why is that? I would think Valve rigging the test to make nVidia look bad would certainly be a topic of conversation if there was any inkling that that was what was going on. Are they in on the big anti nVidia conspiracy too?


From what I have read, nVidia's new drivers improve HL2 performance dramatically. Valve didn't change their code. nVidia improved the drivers. That shows where the blame for poor performance in that particular game lies. Just like all the other games that will get a large increase from the new 52.xx drivers (this review).

Or wait. Maybe all those other game companies are in on the anti nVidia conspiracy too!
 

Tom

Lifer
Oct 9, 1999
13,293
1
76
Is my understanding correct that currently ATI cards support DX9 games essentially the way Microsoft intended DirectX to work as a universal standard, and that the latest Nvidia cards require specific drivers from Nvidia which sort of adapt their card to DX9 games in a way that is game-specific?

If that is the case, and please correct me if I'm wrong, an issue that would concern me is what happens when DX9 games become commonplace and, in the case of some titles that aren't big sellers, Nvidia decides not to write the special code to get the best performance out of their card for that specific game?

If that happens, wouldn't the method ATI uses for supporting DX9 in a more generic way work better for the games that aren't mainstream?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Pete-

Ben, how would using the latest MS HLSL compiler for the DX9 modes have helped nV if it barely helped for the nV-specific mixed mode?

Close to a 50% difference. How much the compiler would have helped, I'm not sure, but can you think of one good reason not to use it? Do you not find it extremely odd that they would not use the optimal compiler for nV, particularly given all their talk about how long they had to optimize for it?

Not at all. It's the opposite. You nitpick and go to great lengths to point out every flaw from one company and ignore, justify, or put blame where it doesn't belong on the other.

Give examples to back up your claims. I have taken issue with ATi's drivers, their rolling lines problem, and their texture filtering. That is pointing out every flaw they have...?

For nV I have stated that most of the issues people are calling cheats are not. Take for instance the 'cheating' in Halo. After all the BS their bugs are fixed and performance is up considerably.

Your accusation of Anandtech doctoring the shots in this review is a perfect example of ridiculous bias.

Real impartial. I have explicitly stated multiple times that I in no way think they tried to doctor screenshots for one company or another. You think I'm the only one that noticed-

"Anand has his head in his anus" The initial posts mentions issues with the screenshots also.

Not one single comment about Valve rigging anything, or doing anything improper. None. Why is that?

Mention a fault of ATi on that board and you will get flamed fairly heavily the overwhelming majority of the time. Read through the thread I linked, Anand is on nVidia's payroll according to a great deal of the people at B3D.

Are they in on the big anti nVidia conspiracy too?

Hardware T&L is useless and developers are never going to use it. We don't need 32bit color. Anisotropic filtering is overrated. T-Buffer effects are going to take the industry by storm. 3dfx is in excellent financial shape and will be the clear market leader when Sage and Fear hit. Beyond3D's forums have a great track record for impartiality ;)

From what I have read, nVidia's new drivers improve HL2 performance dramatically. Valve didn't change their code. nVidia improved the drivers.

In mixed mode, from what I have seen. Without the DX9 code compiled with the FX-optimized compiler we will not see the best performance that can reasonably be done. Without using the FX compiler they amplified the difference between mixed mode and pure DX9; find someone who knows what they are talking about who disagrees with that.
 

reever

Senior member
Oct 4, 2003
451
0
0
Beyond3D's forums have a great track record for impartiality

Always judge a website by the people on the forums who aren't affiliated in any way with the site, that's my rule.
 

Nebor

Lifer
Jun 24, 2003
29,582
12
76
Originally posted by: reever
Beyond3D's forums have a great track record for impartiality

Always judge a website by the people on the forums who aren't affiliated in any way with the site, that's my rule.

Where did he judge the website? He only judged the forums themselves. Yeah, I'm here, run away!
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
Originally posted by: BenSkywalker
Do you not find it extremely odd that they would not use the optimal compiler for nV, particularly given all their talk about how long they had to optimize for it?

But they are. They are using it for the path that the GeForce FX's are actually going to use.

However, I don't quite think you are getting the point of the HLSL compiler updates. The HLSL compiler already contained a profile for PS_2_0; AFAIK what has been added is a PS_2_x profile - i.e. a profile for the extended shader model that currently only maps to the FX series. The base DX9 path can't use this because it might (should there be long enough shaders) not work for other DX9 hardware that is around or coming up.

But, hey, what do I know, I'm just a biased fanboy
rolleye.gif
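For what the PS_2_0 versus PS_2_x distinction looks like in practice: the compiler target is just a profile string handed to the D3DX HLSL compiler. A rough sketch, assuming an SDK new enough to know the FX-oriented ps_2_a target (the file name "water.hlsl" and entry point "main" are placeholders, not anything from Valve's code):

#include <d3dx9.h>  // D3DX HLSL compiler; link against d3dx9.lib

HRESULT CompileBothWays()
{
    LPD3DXBUFFER code = NULL, errors = NULL;

    // Baseline profile: anything claiming PS2.0 support must run the result.
    HRESULT hr = D3DXCompileShaderFromFile("water.hlsl", NULL, NULL, "main",
                                           "ps_2_0", 0, &code, &errors, NULL);
    if (code)   { code->Release();   code = NULL; }
    if (errors) { errors->Release(); errors = NULL; }

    // Extended PS_2_x profile: the compiler can reorder and interleave
    // instructions for the FX's register limits, but the output may exceed
    // what plain PS2.0 hardware can execute, which is the point made above
    // about the base DX9 path.
    hr = D3DXCompileShaderFromFile("water.hlsl", NULL, NULL, "main",
                                   "ps_2_a", 0, &code, &errors, NULL);
    if (code)   { code->Release();   code = NULL; }
    if (errors) { errors->Release(); errors = NULL; }

    return hr;
}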
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Try to enable AA on your R9700Pro running Halo. Is that a cheat?
I don't have Halo.

There are cases where AA may not work correctly due to driver bugs or game issues, why you think they are cheats is beyond me.
Because these "bugs" magically popped up after the NV30 was released and magically only seemed to raise benchmark scores.
Then the anti-cheat programs came along and temporarily stopped them.
Then out pops nVidia's new magic anti-anti-cheat drivers.
So people start using screenshots to compare differences.
Then out pops nVidia's screenshot enhancing drivers.

If you think those are driver bugs then you need to learn the definition of a bug.

As far as AF, I've never seen an ATi card with any driver that did it properly,
rolleye.gif


If you take issue with optimizations, then you need to do so in an honest fashion and state that ATi does not and never has supported AF, or take the position that you don't like particular optimizations.
That's really, really reaching. Show me in the OpenGL or DirectX specification that states that AF must be performed in exactly the same way as NV2x boards do it.

Now show me a spec that allows nVidia to stop doing trilinear AF despite the user selecting the option in the control panel.

I don't believe Gabe in any way shape or form on that. Not even close.
I'm afraid that's not my problem. You're making these outlandish claims and therefore the burden of proof is with you. I've got almost every single reviewer and tech website illustrating nVidia's cheats (including FutureMark themselves) but not only do you deny these claims, you also then turn around and claim that ATi is the one that is cheating.

I'm sorry, but such behaviour is simply nothing more than zealotry.

The static clip planes I've already said was a cheat, the rest absolutely.
Riiight, then I guess all the cheats that FutureMark listed were simply figments of their imagination. The lack of Z clearing which just happened to work during benchmark runs but nowhere else? The complete shader substitution which just happened to accidentally pop up only during 3DMark test runs? And the fact that when the anti-cheat programs arrived these - as you call them - genuine optimisations suddenly completely failed?

You realize that most of your ranting about nV cheats and changing shader code has vanished with the latest drivers and they are faster than before?
Hang on, I thought you claimed that all vendors performed shader substitution and that it was nothing unusual? So how could they "vanish" from nVidia's drivers?

Perhaps you think I don't know the difference between substitution and optimisation, and if you do, you're dead wrong. What nVidia did was neither normal nor an optimisation.

it's a safe bet that they were bugs as had been stated numerous times.
Even if they were bugs why aren't you ripping into nVidia like you rip into ATi whenever you find a single pixel out of place? If your claims are to be believed then we've seen an absolutely horrendous amount of nVidia bugs in the last twelve months, yet I've never seen a peep out of you about it. In fact you're now using nVidia's bugs as defence to further prove how good nVidia is!

The quotes I pulled were from Valve. They rigged the test and admitted to it (although not in so many words).
They didn't rig anything. If mixed mode with Microsoft's compiler can't match ATi's full precision then full mode with Microsoft's compiler certainly won't either. Stop this ridiculous nonsense and just admit for once that nVidia hardware is inferior to ATi's hardware.

Valve has admitted what they did.
They did no such thing.

Use the 2.7Cats and run the Nature test, pay close attention to the leaves.
Catalyst 2.7 does not exist.

The driver size is reduced vs. the earlier versions that had the anti optimizations.
The initial size of the file is irrelevant to how well it compresses.

nVidia is in the midst of some giant conspiracy to you. Apply Occam's razor to your logic and see what you think.
The irony is simply killing me. What possible purpose did nVidia have to release anti-anti-cheat drivers if they weren't actually cheating? And why is it that ATi neither has such "counter-measures", nor do the same anti-cheat programs have any effect on them?

Really? Last board I spent money on was ATi. Could you please explain the logic in the above statement?
You rip into ATi at any opportunity and then with the same breath proclaim nVidia's superiority, often with 180 degree turns as it suits you.

You rip into ATi for every driver bug yet are now using nVidia driver bugs to justify that they're superior.

When screenshots are posted showing nVidia superior, you rip into ATi about every single pixel that's out of place and slam them for everything under the sun. Then when screenshots are posted of ATi being superior, you then start conspiracy theories about how the images have been doctored and that someone is on ATi's payroll.

In this very thread you've defended nVidia on the grounds that ATi's R3xx has been around longer and that more developers have had time to work with it rather than NV3x boards. Yet when the situation was reversed during the R1xx/R2xx days you ripped into ATi at any opportunity you had and never once used such reasoning to defend them.

I'm not talking about a one off in this thread; this is a trend that I've seen you do for the last three years that I've posted here. No matter what the situation, no matter what the evidence, you always find some way to put nVidia up on a pedestal and slam the competitors.

Here's a question for you - even today will you finally admit that nVidia's 16 bit S3TC/DXT1 sucks ass? Or will you instead continue to claim that because the number of bits wasn't strictly defined in the spec, nVidia have done nothing wrong?
 
Apr 17, 2003
37,622
0
76
As far as AF, I've never seen an ATi card with any driver that did it properly

The most comprehensive IQ review I have ever read was done by FiringSquad, who came to the conclusion that AA goes to ATI and AF is a dead tie.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
That's really, really reaching. Show me in the OpenGL or DirectX specification that states that AF must be performed in exactly the same way as NV2x boards do it.

Ironically, 3Dlabs (the inventors of OpenGL, more or less) P10's anisotropic filtering is the same as R200's filtering (albeit with trilinear).
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Hey Dave, what are your thoughts on Ben's interpretation of the news you posted?

Do you think Valve really fudged the numbers?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Dave-

But they are. They are using it for the path that the GeForce FX's are actually going to use.

Why? Because Valve is going to force them to?

The HLSL compiler already contained a profile for PS_2_0; AFAIK what has been added is a PS_2_x profile - i.e. a profile for the extended shader model that currently only maps to the FX series. The base DX9 path can't use this because it might (should there be long enough shaders) not work for other DX9 hardware that is around or coming up.

So you are saying that there would not be a performance improvement?

But, hey, what do I know, I'm just a biased fanboy

Why don't I quote you-

Were the Shaders in the benchmark compiled with the latest version of HLSL (i.e. the version that orders the assembly for a more efficient use of the FX's register limitations)?

Why did you ask this question since you are now implying that it doesn't make a difference?

Ironically, 3Dlabs (the inventors of OpenGL, more or less)

What is SGI to OpenGL then?

BFG-

Then out pops nVidia's screenshot enhancing drivers.

If we pile all the evidence of this together we have nothing, nothing except comments from a man we know was paid millions of dollars by ATi.

I've got almost every single reviewer and tech website illustrating nVidia's cheats (including FutureMark themselves) but not only do you deny these claims, you also then turn around and claim that ATi is the one that is cheating.

You have listed one. I have not argued that one bench cheat. You have ranted on about cheats in numerous other benches, show any evidence of it at all.

Hang on, I thought you claimed that all vendors performed shader substitution and that it was nothing unusual? So how could they "vanish" from nVidia's drivers?

The screwed up shaders are pretty much all fixed and performance is up. The things that supported your 'cheat' accusation against every bench nV ran are gone. If you say any glitch is a cheat, then you must be willing to admit that ATi is cheating in TRAoD.

Even if they were bugs why aren't you ripping into nVidia like you rip into ATi whenever you find a single pixel out of place?

I have criticized nV in this very thread numerous times. You want to know why it appears so one-sided? There aren't a bunch of nVidiots around trying to deny every flaw. I don't need to spend a bunch of time restating the same things over and over again to them. If there weren't any fanatics in this thread, and it was only nVidiots, then it would give the appearance of looking the other way.

If your claims are to be believed then we've seen an absolutely horrendous amount of nVidia bugs in the last twelve months, yet I've never seen a peep out of you about it.

Actually I've mentioned in multiple threads the reason I won't buy a FX is because of driver bugs. Funny that.

Catalyst 2.7 does not exist.

Which version was it directly before 3DMark2K3 came out? The version of drivers that shipped on the CD with my R9500Pro had nice, individually animated leaves. Then they released their driver sets with replaced code and the performance went up considerably.

The initial size of the file is irrelevant to how well it compresses.

If they already had a bunch of shader replacement code in there, and they added a whole bunch more, how does their file size keep dropping?

The irony is simply killing me. What possible purpose did nVidia have to release anti-anti-cheat drivers if they weren't actually cheating? And why is it that ATi neither has such "counter-measures", nor do the same anti-cheat programs have any effect on them?

You have to work under the assumption that nVidia changed their driver structure for the purpose of looking better in a few benches instead of it being planned already. If they did it in response to the anti-detect, they have some incredibly fast-thinking people on their staff to come up with a countermeasure and release it as quickly as they did.

You rip into ATi for every driver bug yet are now using nVidia driver bugs to justify that they're superior.

Here is where you are really starting to lose it. Quote me saying the FX is superior to the R3X0 boards. Point out where I have said that the FX has flawless drivers. Try and find me stating that the FX has overall superior PS performance to the R3X0 boards. Try and find me stating that the FX has better edge AA than the R3X0 parts. You can't do it. Your issue is that I will not slam who you want me to for everything you want me to. You have been ranting for months on end about how nV is cheating in everything possible and that all of their cheats destroy IQ to raise performance. Now they get their drivers sorted out and performance is up, and you still can't acknowledge the fact that maybe they just plain fvcked up. It has to be a giant conspiracy.

In this very thread you've defended nVidia on the grounds that ATi's R3xx has been around longer and that more developers have had time to work with it rather than NV3x boards.

It has been the lead dev platform, that is a big advantage.

Yet when the situation was reversed during the R1xx/R2xx days you ripped into ATi at any opportunity you had and never once used such reasoning to defend them.

Look back again; I stated numerous times that nV had an advantage because they were the lead dev platform. I discussed at great length why nVidia had an edge when they won the XBox contract, as they would be looking at ports that were hard coded to nVidia's parts, not to mention it put DX8 behind their part. As was the case in the days of 3dfx with Glide, then nVidia, and for the last year ATi, the lead dev platform has an edge. They always have, and you can search until you're blue in the face, you will not find me saying otherwise. ATi was just announced as the core for XBox2; guess what? It gives them an advantage yet again.

Yet when the situation was reversed during the R1xx/R2xx days you ripped into ATi at any opportunity you had and never once used such reasoning to defend them.

You think of everything in terms of defending and attacking. The lead dev platform has an advantage. It isn't a matter of attacking or defending. Saying the sky is blue is not attacking the sky; it simply is.


To he!l with it, I feel like demonstrating the quality of your memory-

If your only looking for a gaming board and don't plan on running Win2K, then the Radeon may be a good choice for you.

The GTS Ultra is $500, if you have the money to spend and want the fastest bar none card, then by all means go for it. If I was buying a board right now for a Win9X gaming machine I would go with the Radeon.

It has the best feature support in terms of 3D and particularly video, has solid performance even if it isn't up to GF2 levels, and also has consistently high levels of 2D quality which is hit or miss with the GF based boards, particularly if you have a Trinitron tubed monitor.

If the Radeon was priced the same as the GF2 Ultra that would make things more difficult, the speed difference between those two is huge, the GF2 being nearly twice as fast. But for ~$250-$300 the 64MB DDR Radeon is IMHO the best gaming board for a Win9X machine right now.

Link, my words from 9-8-2K.

The Radeon is NOT DX8 ready at all, not even close. DX8 is a major overhaul, particularly in terms of D3D, and the Radeon isn't even close to having full feature support. The Radeon may be the closest, but in some respects it is more limited than the GeForce1 in terms of feature support (register combining, for instance). Overall the Radeon has the best current gaming feature set, but people should not think that they are getting a fully DX8 compliant board.

Link. I did criticize the original Radeon, saying it wasn't a fully DX8-compliant board (which it wasn't, not even close, as everyone now knows), but then again I said it had the best feature set for gaming.

If I was buying a gaming board right now, I would definitely go with a Radeon.

Oldfart-

So you decided on the Radeon? Glad you are enjoying it, one quick tip-

"Was I shocked! UT in DirectX rocks! Very smooth and looks great. Better than Glide! Must just be a problem with other <*cough Nvidia cough*> DirectX drivers. So far, I'm very happy with the card."

You haven't seen anything yet, dust off that second CD and start playing with S3TC textures(and better performance), click here for details.

UT looks and plays like sh!t on Glide or D3D in comparison to this.

Another link for you.

The 400MHZ GF2 boards are close to shipping, there already has been a review of one(not the Gainward BS board either)-

http://www.tomshardware.com/graphic/00q3/000816/suma-02.html

Until these are a reality and we see what kind of pricepoint we are looking at I would go with a Radeon DDR. The best feature support and comparable speed, as long as you are running Win9X. If you run Win2K then I would go with a GF2 based board.

Your memory improving any yet?

I'm getting bored digging them up, if you wish I will post more of my actual posts versus your apparently very different memory of what happened.

Here's a question for you - even today will you finally admit that nVidia's 16 bit S3TC/DXT1 sucks ass? Or will you instead continue to claim that because the number of bits wasn't strictly defined in the spec, nVidia have done nothing wrong?

You asked two different questions here. One is: did nV's 16-bit S3TC interpolation suck? The answer to that is yes, compared to the 32-bit interpolation. Did they do anything wrong? The answer to that is no, if you look at S3's implementations. They followed the creator; they didn't one-up them.
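For anyone who missed the original S3TC argument: a DXT1 block stores two 16-bit (5:6:5) endpoint colors and derives two more by interpolation, so where the interpolation happens matters. A sketch of the opaque-block decode (function and variable names are just illustrative):

#include <stdint.h>

// Expand a 5:6:5 endpoint to 8 bits per channel.
static void Expand565(uint16_t c, int& r, int& g, int& b)
{
    r = ((c >> 11) & 0x1F) * 255 / 31;
    g = ((c >> 5)  & 0x3F) * 255 / 63;
    b = ( c        & 0x1F) * 255 / 31;
}

// Build the four-color palette for an opaque DXT1 block (color0 > color1).
// Interpolating after expanding to 8 bits, as here, is the higher-quality
// route; interpolating the raw 16-bit endpoints first and expanding afterward
// loses precision, which is the banding complaint about early GeForce
// decoding being discussed above.
void Dxt1Palette(uint16_t color0, uint16_t color1, int palette[4][3])
{
    int r0, g0, b0, r1, g1, b1;
    Expand565(color0, r0, g0, b0);
    Expand565(color1, r1, g1, b1);

    palette[0][0] = r0;                palette[0][1] = g0;                palette[0][2] = b0;
    palette[1][0] = r1;                palette[1][1] = g1;                palette[1][2] = b1;
    palette[2][0] = (2 * r0 + r1) / 3; palette[2][1] = (2 * g0 + g1) / 3; palette[2][2] = (2 * b0 + b1) / 3;
    palette[3][0] = (r0 + 2 * r1) / 3; palette[3][1] = (g0 + 2 * g1) / 3; palette[3][2] = (b0 + 2 * b1) / 3;
}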

I'm not talking about a one off in this thread; this is a trend that I've seen you do for the last three years that I've posted here. No matter what the situation, no matter what the evidence, you always find some way to put nVidia up on a pedestal and slam the competitors.

And the above quotes that I actually made were what exactly? You were the one ranting like a fanboy in the Radeon threads I searched through. Do not mistake your own zealotry at the time for mine.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
Why? Because Valve is going to force them to?

Force who to do what? You can still use the DX9 path, but regardless it's still going to be full precision, which is likely to be the big inhibitor.

AFAIK, the major change to the HLSL update in terms of performance is that it restructures the way the texture and ALU ops are written out to assembly, such that it's done more in the fashion of interspersing the texture and ALU operations in order to reduce register usage. However, this is reliant on the shader needing a mix of texture and ALU ops to get any improvement.

So you are saying that there would not be a performance improvement?
Why did you ask this question since you are now implying that it doesn't make a difference?

In the post you are quoting, that's not what I said at all. I said I believe the HLSL update for GeForce FX compiles to a PS_2_x render target which, being the case, would preclude them from using it in the 'standard' DX9 path as it would compile to instruction lengths and operations that PS_2_0 boards may not be able to run. ([edit] Note that I was not under the impression that this was the case when I wrote the question, but having talked to some people since these replies this is my current understanding - I am currently seeking further clarification.)

I'm saying that, yes, there is the possibility of a performance difference, but there is also the possibility that it can't be used in the standard PS_2_0 path. I would guess that even if this could be used, the instruction reordering would only help out to a certain extent, as the full precision requirements would still be the inhibitor, and the fact that the 'standard' DX9 path uses PS math for vector normalisation would limit the number of texture lookups per shader, thus reducing the capabilities for interspersing texture lookups with ALU use in the compiled assembly.

We are, however, also talking about a work in progress, and these new updates to the HLSL compiler have only very recently become available (actually, I think they may still be in beta stage).

What is SGI to OpenGL then?

SGI licensed an API from 3DLabs for their workstations; this then went on to form the basis for OpenGL. It's 3Dlabs' API that is the underpinnings of OpenGL, which is why they still have such a voice on the ARB (despite their market position).
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
If we pile all the evidence of this together we have nothing, nothing except comments from a man we know was paid millions of dollars by ATi.

Actually, I would dispute that we know this. While I'm sure that the actual number may well be in the millions, I'm fairly certain the number that has been touted from ATI's financials does not pertain to the Half Life 2 deal.

The fact that the quote states "incentive" for a "development" deal is not the same as a marketing budget for an OEM bundling deal. I have been informed by some analysts that in fact the $6M pertains to an arrangement with Dave Orton and certain key ArtX personnel as a sort of retainer - it was an incentive to keep these guys at ATI such that they would get paid it if they secured a continuation of the development contract with Nintendo for ATI (rather than those guys going off and doing their own thing as they did at SGI).
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
In the post you are quoting, that's not what I said at all. I said I believe the HLSL update for GeForce FX compiles to a PS_2_x render target which, being the case, would preclude them from using it in the 'standard' DX9 path as it would compile to instruction lengths and operations that PS_2_0 boards may not be able to run.

They talked extensively about how much additional time and effort they put in to getting the FX up to speed, and they stated that they used the optimized compiler for the mixed mode path. How much work would it have really taken to compile the code and simply set it up to run if it detected an FX part? They already went through the trouble of manually recoding large portions of their shaders, by their statements, and they couldn't bother to compile the code they already had with a switch they also had already coded for the mixed mode? Actually, they have numerous different rendering paths in place; why did they not simply add one that was their default code optimized using the compiler? It is without a doubt simpler than recoding all the portions of the code that they did.

I'm saying that, yes, there is the possibility of a performance difference, but there is also the possibility that it can't be used in the standard PS_2_0 path. I would guess that even if this could be used, the instruction reordering would only help out to a certain extent, as the full precision requirements would still be the inhibitor, and the fact that the 'standard' DX9 path uses PS math for vector normalisation would limit the number of texture lookups per shader, thus reducing the capabilities for interspersing texture lookups with ALU use in the compiled assembly.

it would be nice to be able to 'compile' them into a binary form, but then they are actually compiled at runtime and optimised for the hardware they are running on .. is there a way to do this?

Compile the effect with target fx_2_0, not vs_* or ps_*. (I asked your question in microsoft.public.win32.programmer.directx.graphics and that's the answer I got; reading the docs, they also say that.)

Is this not correct? (I'm asking; I know you read sarcasm in my posts when it isn't there on occasion.)

I'm saying that, yes, there is the possibility of a performance difference, but there is also the possibility that it can't be used in the standard PS_2_0 path.

They already have half a dozen paths at least; this would have been much simpler to implement than most of the others, based on Valve's own statements.

I would guess that even if this could be used, the instruction reordering would only help out to a certain extent, as the full precision requirements would still be the inhibitor, and the fact that the 'standard' DX9 path uses PS math for vector normalisation would limit the number of texture lookups per shader, thus reducing the capabilities for interspersing texture lookups with ALU use in the compiled assembly.

I'm not insinuating that the compiled code would come close to making the ATi and nV parts perform evenly; however, particularly because of the types of optimizations utilized in the mixed mode, if they had run them both under the original compiler the performance gap between the two paths would have narrowed decently (comparing mixed mode to 'standard' DX9). What level of performance increase is available in the standard DX9 path simply using the optimized compiler? Even if it is only 5%, there is still no real reason for them not to have utilized it, particularly given the lengths they claimed they went through to optimize for the NV3X parts (and even more so due to the fact that it was far simpler than any of the other optimizations they used).

We are, however, also talking about a work in progress, and these new updates to the HLSL compiler have only very recently become available (actually, I think they may still be in beta stage).

If they didn't use it for either path I wouldn't consider it an issue. They did use it for one of them, which should amplify the performance difference that is there. I'm not saying they should have rebuilt the game from the ground up, I'm not saying they should have used Cg, I'm not saying they should have done anything except use the same compiler for the mixed mode and the DX9 code paths. If they had used the 'standard' (btw, I'm not putting that in 's to be a wise @ss, just due to DX9 now having multiple compilers) DX9 compiler for both code paths they would have shown closer performance and not given people the impression that nV's parts are as sensitive to developer optimizations as they did.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
Is this not correct? (I'm asking; I know you read sarcasm in my posts when it isn't there on occasion.)

I don't know how that relates to this, also that was posted in August and the HLSL updates may not have been available. As I said, I am currently seeking further clarification.

Note: the fact that MS haven't just changed the base HLSL output to the "FX friendly" render target by default, and there is evidently a "switch" there indicates that MS have reasons for keeping them separate.

What level of performance increase is available in the standard DX9 path simply using the optimized compiler? Even if it is only 5%, there is still no real reason for them not to have utilized it, particularly given the lengths they claimed they went through to optimize for the NV3X parts (and even more so due to the fact that it was far simpler than any of the other optimizations they used).

I fail to see the point, or any gain for them. Why make another render path just to use a different compiler switch that might give a small performance increase - they are already making an optimised path for the FX series, being the Mixed Mode, that utilises a range of techniques in order to increase the performance whilst still maintaining as much of the DX9 quality and functionality as the FX currently allows - why on earth would they make another for what could only amount to a tiny gain? That makes no sense - that would further increase development times (yes, they have several paths, but these have been coded over a period post HL1's release) and have another path that requires support.

The mixed mode is the "optimised DX9 path" for 5900 (note that my Q&A session also states that for 5900 it is pure DX9 anyway), so there is no point creating another. Another issue is that the full DX9 path uses HDR, which would create further issues with your suggestion as NVIDIA's drivers currently aren't supporting the formats they require.
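The "formats they require" point is something an engine can probe for at startup; whether a driver exposes a floating-point render target is exactly the kind of "output buffer" support the earlier HDR quote hinges on. A minimal sketch, assuming an IDirect3D9* named d3d and picking the common 64-bit float format purely as an example (which formats Valve's HDR path actually wants is not stated here):

#include <d3d9.h>

bool SupportsFloatRenderTarget(IDirect3D9* d3d)
{
    // Ask whether the default adapter can render to a 16-bit-per-channel
    // floating-point texture while the display is in a normal 32-bit mode.
    HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                        D3DFMT_X8R8G8B8,
                                        D3DUSAGE_RENDERTARGET,
                                        D3DRTYPE_TEXTURE,
                                        D3DFMT_A16B16G16R16F);
    return SUCCEEDED(hr);
}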
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Ben, I'm not sure the only difference between the mixed mode and DX9 paths is the updated HLSL compiler. Valve also changed a lot of shaders/calculations to favor the NV30 architecture, which I doubt are present in the pure DX9 path, and I doubt an updated HLSL compiler would use without improperly changing the dev's intended effect. So I doubt that 50% increase is due entirely or even mainly to the updated HLSL compiler, but I may be wrong.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
Pete, some of the major differences between the mixed mode (for 5900) and full mode are listed in that Q&A: each one of the shaders has been rewritten with the precision hints, and any vector normalisation in the shaders has to be rewritten to be done via cubemaps (I would imagine there is a fair amount of other stuff there as well). It's not a case of replacing the shaders in the full precision path, as they have kept them as is and then replicated them with the "FX friendly" shaders in the Mixed Mode.

Coincidentally, dependent on the shader composition, doing the vector normalisation via cubemaps can actually assist R300-class chips as well, because the texture address ops run in parallel with the ALU ops, and if the shader is ALU-bound then doing it by burning texture cycles can become free.
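In HLSL source, the two approaches being contrasted look roughly like this (shown as strings a loader might hand to the compiler; the sampler name, semantics, and the trivial shader body are made up for illustration):

// Full-precision DX9 path: normalise in the pixel shader ALU.
const char* kNormalizeWithMath =
    "float4 main(float3 lightVec : TEXCOORD0) : COLOR\n"
    "{\n"
    "    float3 n = normalize(lightVec);   // costs ALU instructions\n"
    "    return float4(n * 0.5 + 0.5, 1.0);\n"
    "}\n";

// Mixed-mode style alternative: fetch the normalised vector from a cube map.
// The lookup burns a texture instruction instead of ALU work, which is why it
// can come out effectively free on ALU-bound shaders for chips that run
// texture address ops in parallel with ALU ops, as described above.
const char* kNormalizeWithCubemap =
    "samplerCUBE NormalizeCube : register(s0);\n"
    "float4 main(float3 lightVec : TEXCOORD0) : COLOR\n"
    "{\n"
    "    float3 n = texCUBE(NormalizeCube, lightVec).xyz * 2.0 - 1.0;\n"
    "    return float4(n * 0.5 + 0.5, 1.0);\n"
    "}\n";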
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Yeah, I was mainly thinking of the vector normalizations (the only thing I remembered from reading all those HL2 articles :)). (I also noted that only the 5900 would use the DX9 path, which is important, as most people aren't buying $250+ retail cards.) Would _pp be a mixed-mode-specific thing, though? I thought _pp was part of the DX9 spec and, as such, would be in the regular DX9 path, too. Does changing some shaders to use the _pp hint noticeably lower IQ?

Interesting about cubemaps helping the R300, too. Is there an advantage to vector normals over cubemaps in terms of quality or simplicity that leads Valve to use the former for the DX9 path?
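To make the _pp question concrete: the hint normally comes from declaring values as half rather than float in the HLSL source, it is part of the DX9 shader model proper, and hardware is free to ignore it (R3x0 runs everything at its single 24-bit precision regardless). A made-up fragment showing the same shader both ways:

// Compiled for ps_2_0, the 'half' version picks up _pp modifiers in the
// generated assembly; the 'float' version does not. Sampler and semantic
// names here are illustrative only.
const char* kFullPrecision =
    "sampler BaseMap : register(s0);\n"
    "float4 main(float2 uv : TEXCOORD0, float4 light : COLOR0) : COLOR\n"
    "{ return tex2D(BaseMap, uv) * light; }\n";

const char* kPartialPrecision =
    "sampler BaseMap : register(s0);\n"
    "half4 main(half2 uv : TEXCOORD0, half4 light : COLOR0) : COLOR\n"
    "{ return tex2D(BaseMap, uv) * light; }\n";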
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I fail to see the point, or any gain for them. Why make another render path just to use a different compiler switch that might give a small performance increase -

Why not optimize for the board? In comparative terms, a recompile and an additional path mapped to it should be very simple.

they are already making an optimised path for the FX series, being the Mixed Mode, that utilises a range of techniques in order to increase the performance whilst still maintaining as much of the DX9 quality and functionality as the FX currently allows

But they are reducing the image quality (based on Newell's comments) while they could have had a faster path with full quality on the FX for the public bench. If Carmack had ATi-specific extensions available to him that offered performance improvements and he didn't use them, people would be flying off the handle about the conspiracy.

That makes no sense - that would further increase development times (yes, they have several paths, but these have been coded over a period post HL1's release) and have another path that requires support.

How much work would it take to implement a different rendering path that has the exact same code base as one they already have implemented? They already have DX7, DX8 1.0, DX8 1.1, DX8 1.4, and DX9 'Pure', along with FX5200, FX5600 and DX9 MM paths. All they needed to add was a switch for the DX9 'Pure' path to use the compiled code for the FX. This would have improved performance with very little additional work, particularly compared to the custom paths they already created for the FX.
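A hypothetical sketch of the kind of "switch" being described, keyed off the reported device caps (the enum names, the shortened path list, and the FX check are invented for illustration, not Valve's code):

#include <d3d9.h>

enum RenderPath { PATH_DX7, PATH_DX8_PS11, PATH_DX8_PS14, PATH_DX9_FULL, PATH_DX9_MIXED };

// Pick the highest path the pixel shader caps allow; whether a detected
// GeForce FX gets the mixed mode or a recompiled full-precision DX9 path is
// exactly the decision being argued over here.
RenderPath ChoosePath(const D3DCAPS9& caps, bool isGeForceFX)
{
    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        return isGeForceFX ? PATH_DX9_MIXED : PATH_DX9_FULL;
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
        return PATH_DX8_PS14;
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
        return PATH_DX8_PS11;
    return PATH_DX7;
}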

The mixed mode is the "optimised DX9 path" for 5900 (note that my Q&A session also states that for 5900 it is pure DX9 anyway), so there is no point creating another.

Except for improved performance. Newell talked about the reduced IQ of the MM path; why wouldn't anyone want to see a better-performing version of the highest quality path?

Another issue is that the full DX9 path uses HDR, which would create further issues with your suggestion as NVIDIA's drivers currently aren't supporting the formats they require.

HDR was disabled for the public bench, so it's not particularly relevant to this discussion (although I see how it will affect the final product). Actually, with Valve setting up two different integer buffers built for HDR, this is yet another example of a more complicated optimization they are already doing versus recompiling the code. Given the lengths they claimed they went through bending over backwards for the FX, they could have at least compiled the default shaders for the 5900.

Pete-

Ben, I'm not sure the only difference between the mixed mode and DX9 paths is the updated HLSL compiler.

It isn't close to the only difference. I'm saying that by using one compiler for one and another compiler for the other, they gave the appearance of there being a larger disparity than there actually is in terms of the FX relying on developer optimizations. Mixed mode would have been considerably slower running under the older compiler than it is on the new one (due to it increasing the amount of texture lookups).