Anand's 9800XT and FX5950 review, part 2


BenSkywalker

Diamond Member
Oct 9, 1999
Once again, stay on topic. We are talking about your Halo comments here, right?

So then state how much faster a board has to be, by the definition of the word, before it counts as faster to your type.

BFG-

The point is that they don't, and because the application doesn't request AF, it's their choice how to do it.

Trilinear is being requested and you are getting bilinear over the majority of the screen, not even a bilinear/trilinear hack (for most of the screen, you do have the one mip level fully filtered). How many games do you own that have an option for AF? I think I have maybe six, out of several hundred.
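
For anyone wondering what "requesting trilinear" actually means at the API level, here is a minimal Direct3D 9 sketch; the function names and the single-texture-stage setup are illustrative only, not taken from any particular game:

    // Minimal sketch (Direct3D 9): what an application does when it "requests trilinear"
    // versus when it explicitly asks for anisotropic filtering itself.
    // 'device' is assumed to be an already-created IDirect3DDevice9*; stage 0 only.
    #include <d3d9.h>

    void RequestTrilinear(IDirect3DDevice9 *device)
    {
        // Plain trilinear: linear min/mag filtering plus linear blending *between* mip levels.
        device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR); // D3DTEXF_POINT here = bilinear
    }

    void RequestAnisotropic(IDirect3DDevice9 *device, DWORD maxAniso)
    {
        // The in-game AF case: the application asks for anisotropic minification itself.
        // Forcing AF from the driver control panel bypasses this request entirely,
        // which is where the partial-trilinear behavior being argued about comes in.
        device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
        device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, maxAniso);
    }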

You've finally quantified (see the bold) what you mean but you didn't do this before.

Uhhhh-

They did something to HELP nVidia's parts; VALVE failed to use that in the public bench they released to show HL2's performance for the pure DX9 mode.

That is from my post on October 11th at 10:45 AM (EST), over two weeks ago. I actually quantified it multiple times, perhaps just not recently.

However, I'd also like to add that Microsoft's compiler may not have been available at the time the path was benchmarked, and also that Valve have gone out of their way to make the mixed mode path as optimal as possible, which basically makes the full precision path somewhat irrelevant.

They benched the game for their big PR event that many sites attended, and they had the compiled MM path ready to go. They without a doubt had the compiler to use. By Dave's comments in this thread, they were trying to show how many special optimizations you need to use for nV's parts. It is because of this that I have had major issues with the way in which it was done.

ATi is still ahead when nVidia is completely optimal and I'd believe that was the idea Valve were trying to get across.

I wouldn't count on that ;) They still have a decent lead when they are both running the 'pure' path, but not when nV is running MM. Why do you think Valve released the bench when they did? There was no launch of any ATi part, there was no particular date milestone they were shooting for, there was no particular reason for them to do so when they did. An odd thing happens to nV's performance with the latest Dets under HL2, although that information was obtained via someone who did something they shouldn't have, but I digress.

Basically it's when the drivers detect the presence of a shader and completely ignore the whole thing and replace it with a pre-compiled version that may or may not produce the exact same output.

That was a semantics thing, that is run time, not real time :p
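
Purely as an illustration of the "detect and replace" mechanism being described (hypothetical C++, not actual driver code; the names and the hash are invented for the sketch), the idea boils down to a lookup keyed on the bytecode the game hands to the driver:

    // Hypothetical illustration of driver-side shader substitution (not real driver code).
    // The idea: hash the bytecode the game submits and, if it matches a known shader,
    // silently hand back a pre-tuned replacement instead of what was actually submitted.
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    using Bytecode = std::vector<uint32_t>;

    static uint64_t HashBytecode(const Bytecode &code)
    {
        uint64_t h = 1469598103934665603ULL;        // FNV-1a style hash, purely illustrative
        for (uint32_t token : code) {
            h ^= token;
            h *= 1099511628211ULL;
        }
        return h;
    }

    // Table of hand-tuned replacements, filled in per application/benchmark.
    static std::unordered_map<uint64_t, Bytecode> g_replacements;

    Bytecode SelectShader(const Bytecode &submitted)
    {
        auto it = g_replacements.find(HashBytecode(submitted));
        if (it != g_replacements.end())
            return it->second;  // substituted: may or may not produce identical output
        return submitted;       // otherwise compile/run what the application asked for
    }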

I wasn't aware of that and I agree with you that it's a bit strange that the reference renderer doesn't work with some of the tests.

One of the other tests is exploiting a bug in DirectX that MS has stated they intend to fix. I'm not taking issue with every bench out there that ATi is dominating at. He!l, in one of my favorite games, Mafia, ATi is running significantly faster than nV right now (and to add to that, ATi's AA makes a big difference in that game, as there are power lines all over the place and it is likely the best example I've seen of where AA is still needed). I take issue with certain benches for certain reasons. ShaderMark has numerous issues that the devs are aware of, and they all favor ATi's hardware even when RefRast says it is wrong.

Yes but you've also argued against using Max Payne 2 as a benchmark on the grounds of what the developers say, the same developers that then turn around and say that using 3DMark is a much better option. Who are we to believe then? You can't selectively mix and match statements when it suits you.

I quoted the guy from Remedy, who helped make the game, and the main reason for that was the adaptive nature of Max Payne 2 and why it is a lousy benchmark. Back in the day when DX7 titles were just starting to hit, Sacrifice was a game that used some of the features we had been looking forward to, and it came with a built-in framerate counter that would also give you a vertex count, poly count and texture usage. The game would have made an excellent one for benches, except that it used dynamic rendering depending on what the game was doing at the moment and what your hardware could handle. This meant that you could run the bench on different hardware and get identical results while one of the rigs was doing a lot more than the other. The guy from Remedy that I quoted was saying that Max Payne 2 uses a similar technique, which makes it a very bad engine to bench. He did say to use 3DM2K3, and I included it because I'm not going to selectively quote the guy. 3DM2K3 is certainly a better bench than one that varies the workload for each system, but it doesn't change my view on it.
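
To make the adaptive-rendering point concrete, here is a toy sketch of that kind of feedback loop (entirely hypothetical names and numbers, not Remedy's or anyone else's actual code); it shows why two very different cards can post similar numbers while doing different amounts of work:

    // Toy illustration of adaptive detail scaling (hypothetical, not any real engine's code).
    // The engine sheds work whenever the last frame took too long, so a slow card ends up
    // rendering less than a fast one while both hover near the same frame time.
    struct SceneSettings {
        int   particleCount;
        float drawDistance;
    };

    void AdaptDetail(SceneSettings &s, float lastFrameMs, float targetMs = 33.3f)
    {
        if (lastFrameMs > targetMs) {                 // running slow: shed load
            s.particleCount = s.particleCount * 9 / 10;
            s.drawDistance *= 0.95f;
        } else if (lastFrameMs < targetMs * 0.8f) {   // plenty of headroom: add load back
            s.particleCount += s.particleCount / 10;
            s.drawDistance *= 1.02f;
        }
        // Two very different cards converge on similar frame times here, which is exactly
        // why the raw FPS number stops being a fair comparison between them.
    }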

Because it's correct and it comes directly from FutureMark's findings? Because the shader substitution has been listed many times in other reviews that discuss nVidia's dubious optimisations?

For 3DM2K3 I agree they were doing that, my issue has always been with all of the other titles that people have talked about. They have obviously been working on their compilers for a long time now, the earlier versions of them exhibited bugs and people jumped to the cheat conclusion. I take issue with that.

He has also explained, like all of the other developers, that the path is necessary to get any reasonable form of performance from the NV3x.

Reasonable form of performance? He stated that the NV30 was half as fast running full FP32 as it was FP16/INT12. The NV30 is already considerably faster (you can check that one if you would like, Oldfart; when I want to say something is decently faster I will explicitly state so) than the R3x0 boards under most settings, and that was with nV's poor-performing drivers. Knocking the framerate down to half, it wouldn't be as fast as ATi's highest-end parts (using the old drivers anyway), but it is far from having trouble reaching reasonable performance.

Also it appears that even your holy grail application Halo is also using a special reduced mode path to get reasonable performance on nVidia's boards. What do you say to that Ben?

It uses PP as that is all it needs. That has been my line pretty much all along: the overwhelming majority of shaders we are going to see anytime soon are not going to require anything higher than FP16, and I have quoted Carmack and Sweeney as stating that general line of thought as well.

Whether the lower precision looks as good as ATi's full precision is debatable; what isn't debatable is that the NV3x core has severe architectural problems when running at full precision.

Superior precision. That additional precision is overwhelmingly useless (handy for the Quadro line though). How fast do you think ATi would be running FP32 with their current parts? 1/100,000th the speed of nV give or take? ;)

How many shaders are going to need FP24? I've stated numerous times in this thread that I expect the R3x0 to be faster than the NV3X line in shader-limited situations; the issue is how much faster and how frequently that will happen. If most shaders only require FP16, which appears to be the case right now, then the impact of moving from partial to superior precision isn't going to be a major issue. If there comes a time when most shaders require FP24 and will still perform on ATi's parts, then it will be one.
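
For readers who haven't looked at shader code, the partial-precision question lives at the source level as the choice between 'half' and 'float'; the fragment below is a made-up example held in a C++ string purely for illustration. On NV3x, 'half' maps to FP16 registers and 'float' to FP32, while the R3x0 runs everything at FP24 regardless.

    // Sketch of what "partial precision" means at the shader-source level (HLSL held in a
    // C++ string purely for illustration; the shader itself is a made-up example).
    const char *kSpecularPS_PartialPrecision =
        "sampler baseMap : register(s0);                         \n"
        "half4 main(float2 uv : TEXCOORD0,                       \n"
        "           half3 lightDir : TEXCOORD1,                  \n"
        "           half3 normal   : TEXCOORD2) : COLOR          \n"
        "{                                                       \n"
        "    half4 base = tex2D(baseMap, uv);                    \n"
        "    half  ndl  = saturate(dot(normalize(normal),        \n"
        "                              normalize(lightDir)));    \n"
        "    return base * ndl;   // FP16 is plenty for this     \n"
        "}                                                       \n";
    // The full-precision version is identical except every 'half' becomes 'float';
    // only shaders that genuinely need the extra range/precision should notice the difference.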
 

oldfart

Lifer
Dec 2, 1999
Ugh, I said I'd stay out.
So then state how much faster a board has to be, by the definition of the word, before it counts as faster to your type
Certainly something more than a ***RIDICULOUS*** < 1 FPS @ an unplayable 16 x 12 resolution! (your type) Any sane, unbiased person (my type) would agree on that point. What number? I don't know; it depends on the game, I guess. It has to be at least noticeable at a playable framerate. If I have to throw something out, 20% at least, when you are talking about the minimum playable framerate range of > 30 FPS, without doing it by lowering IQ, would be considered at least noticeable. This is why I and reviewers all over the web (my type) call the cards equal in performance, instead of holding the biased opinion (your unique type) of calling the nVidia card faster for no credible reason other than it's your favorite company.
 

BenSkywalker

Diamond Member
Oct 9, 1999
it's your favorite company.

WTF kind of moron would have a 'favorite' company based on something they bought? I can see how you could end up disliking a company, but I digress.

If I have to throw something out, 20%

Link. By my standard, a P4 3.2 is faster than a 3.0C, 2.8C, 2.6C and the 2.4C; by your standard it isn't faster than any of them. That's $394 vs. $163 according to Pricewatch in terms of pricing, without one being faster than the other by your standards. Mine dictate they are. It is really easy to be one of my type; I don't have to worry about singing a different tune depending on the situation. Have a set of standards you follow and apply them equally. In that particular bench I linked to, I consider the 3.0C to be faster than the 2.8C (I know, biased BS according to what you have been saying) despite the margin of victory being only 2%. I guess it must be my blind loyalty or some other such thing; I really can't comprehend your line of thinking and don't want to put words into your mouth about it.

BTW- One of these days maybe I will actually act like you say I do for one day. I'm sure everyone would be quite pleased to see me air every single negative aspect I can about ATi and then spin the rest of the nonsensical crap about them as if it were fact (as so many in the ATi camp do every single day they post here). I could also give a glowing appraisal of everything nV and spin everything that is wrong with them away too. Maybe I can start to jump into every ATi vs. nVidia thread on the forum and add a spin to it for a day, and do it with as much accuracy as the numerous Radeon fans do every day here.
 

oldfart

Lifer
Dec 2, 1999
I like how you always stray off the subject and try to change the context of what is being discussed. I guess when you have no valid point to make, that is all that is left to do.

Back on the subject again

Once again.....Stay on topic......We are talking about YOUR post that nVidia cards are faster than ATI in Halo. We are not discussing CPU's. That is an entirely different subject.

My point is this.

BOTH NVIDIA AND ATI HAVE NEARLY IDENTICAL PERFORMANCE IN HALO. EVERY REVIEW SITE AGREES. YOUR RIDICULOUS ASSERTION THAT NVIDIA IS FASTER BECAUSE OF < 1 FPS DIFFERENCE IN AN UNPLAYABLE 1600 X 1200 RESOLUTION HAS NO MERIT. NONE.

Try to get around it, change the subject, make excuses all you like. It doesn't change the truth.
 

BenSkywalker

Diamond Member
Oct 9, 1999
We are not discussing CPU's. That is an entirely different subject.

You need to have a different set of standards. What a shock!

BOTH NVIDIA AND ATI HAVE NEARLY IDENTICAL PERFORMANCE IN HALO. EVERY REVIEW SITE AGREES. YOUR RIDICULOUS ASSERTION THAT NVIDIA IS FASTER BECAUSE OF < 1 FPS DIFFERENCE IN AN UNPLAYABLE 1600 X 1200 RESOLUTION HAS NO MERIT. NONE.

Try to get around it, change the subject, make excuses all you like. It doesn't change the truth.

You admit that nVidia is faster while calling it a ridiculous assertion in the same sentence. As for your statement that it is unplayable at that setting: have you compared Halo's benchmark to its in-game performance? I've already explained it a bit, and you can also find quotes from Gearbox on the subject; the bench is there to get relative numbers, not to give you an indication of the actual framerates you will be seeing (it factors in non-rendering/gameplay performance; I could have sworn I mentioned that in this thread). Unplayable is a matter of opinion anyway, but you would likely be looking at ~50 FPS average through most of the game if you are seeing ~30 FPS in the bench. That is of course off the subject, but you are the one who brought it up.

Do I consider something that gets 100 FPS faster than something that gets 99.9999999999 FPS? Yes, I do. You have been stating time and time again it isn't a big enough difference, but you don't think that means it's faster...? What does 'faster' mean to you? Does it depend on whether you are comparing a video card or a processor, as your above comment indicates? Faster means the same to me no matter what I'm discussing (well, using the definition of the word that pertains to speed in relation to something else; it doesn't change).

The truth of the matter is that, based on your comments, you don't like accuracy to the degree in which it is being used here. You have stated that they perform the same when in fact they do not, not even within 1% most of the time. If it is close enough by your standard (whichever standard you need to use for this line), then you can accurately say that it is close enough by your standard for this particular situation (maybe; I can't rationalize swapping standards around, but I assume you can).
 

oldfart

Lifer
Dec 2, 1999
You admit that nVidia is faster while calling it a ridiculous assertion in the same sentence
So now you are reduced to semantics? That is all you have to offer? It gets better all the time.

Fine, by your twisted view of things: if nVidia is "faster" than ATi by 0.9 FPS @ 16 x 12 in Halo (again, we have to ignore the fact they are SLOWER in playable resolutions and MUCH slower with AF enabled), then they truly svck big time with the massive FPS difference in other PS-heavy games.

Oh yeah. None of those matter because they are all biased, flawed, invalid, rigged, paid off, etc., etc. (part of the global anti-nVidia conspiracy, you know).

Spin it all you like. No one but you cares about a < 1 FPS difference @ 16 x 12 (or any resolution for that matter). If it makes you feel better about your favorite company, I'm happy for you.
 

Rogodin2

Banned
Jul 2, 2003
"Check out the Halo bench sometime and compare it to what you get in game. It will give you a good comparison to look at relative performance of the game, but don't put faith in to the actual framerate number."


That's not the point, because the benchmark is the level playing field for the cards; it doesn't have to give an exact indication of "how the full game will run". As I understand it, you are denying the fact that Halo is actually faster on the Radeon (based on a more than 51% fps average for one company in the benchmark, ATi, and we know that you like it black and white) because Bungie has stated "the demo doesn't accurately reflect what you might experience in real gameplay".

And we can't use Max Payne 2 because Remedy has said that we should use a 'trusted' benchmarking tool: 3DMark.

But we can't use 3DMark because it isn't a game and nVidia has rendered it "dead."

So what can we use, Ben?

Rogo
 

BenSkywalker

Diamond Member
Oct 9, 1999
So now you are reduced to semantics?

I've been posting in these forums for around six years (well, these particular forums came online four years ago, but I was here for quite a while prior to that) and I have always been big on semantics; I have never said otherwise. Is this some sort of shock to you? I've already been picking on other people's posts over semantics in this thread, on this page.

For the rest of your BS, refute what I'm saying. I've been trying to be civil towards you even after your half-witted comments and barbs, until I realized you don't deserve it. If you continue to reply with your ignorant posturing and inability to comprehend the English language, I can assure you I will be just as pleasant as you have been.
 

rbV5

Lifer
Dec 10, 2000
Do I consider something that gets 100 FPS faster than something that gets 99.9999999999 FPS? Yes, I do. You have been stating time and time again it isn't a big enough difference, but you don't think that means it's faster...?

.99 = 1...Look it up

The rest of the world seems to have a grasp at what ~equal means.
 

oldfart

Lifer
Dec 2, 1999
Ben, I know you have been here a long time, as have I. Despite the tone of this thread, I have a great deal of respect for your technical knowledge (you know that is true). That does not excuse the long-time one-sidedness you have always shown toward nVidia. I'm not alone in that assessment. The people in this thread that have been here a long time and know you well are the ones that have the most issue with it. Maybe you can't see it, but it comes out plain as day in your posts. The Halo topic exemplifies it.

And you are right about being civil. This is not me and I apologize for it to all the members here. This video forum really brings out the worst in people somehow.

This time, I'm really done. My points have been made. Lots of data to back it up. Nothing further can be gained from additional posts.
 

Pete

Diamond Member
Oct 10, 1999
Originally posted by: BenSkywalker
Do I consider something that gets 100 FPS faster than something that gets 99.9999999999 FPS? Yes, I do.
Ben, not to nitpick in a thread full of nitpicks, but do you really think current benchmarks are accurate enough to comfortably state that a result of 100.00fps is faster than one of 99.99fps? I generally consider anything within 5% equal, as I don't consider PC benchmarking to be an exact science.
 

Genx87

Lifer
Apr 8, 2002
As for the game developers, they know which cards perform well with what settings, and they decide which cards to run DX9 shaders on. So far, I doubt anyone will recommend you run DX9 shaders on a 5200, unless you're still a fan of 640x480 gaming at under 30fps. As nV's claim to hold the lion's share of the "DX9 market" is predicated on including as slow a DX9 card as the 5200, I find it a misleading statistic. That's Common Sense 101.


This is irrelevant for the most part because you have two groups of people.

A. Clueless n00b who buys an OEM machine or the cheapest part they can find. They get a 5200, which is a DX9-compliant card. When they plop it into their machine and run a game with DX9 shaders, the game will probably default to a DX8 path and the person would never know the difference.
B. Enthusiast who wouldn't be caught dead with a low-end video card in their gaming machine.

So it isn't a misleading statistic. Just because it doesn't live up to your expectations as a DX9 card does not negate the fact that Nvidia, with the help of a low-end DX9 card, has managed to capture 70% of the DX9 market. And in the end, when a game company sees such lopsided figures, they will in turn develop for that hardware. That is Common Sense 101.
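
The "default to a DX8 path" behavior mentioned above is, at its simplest, just a capability check; a minimal sketch of the idea follows (the function and path names are illustrative, and real engines often also check device IDs or measured performance):

    // Minimal sketch of the caps check behind "the game will probably default to a DX8 path".
    // 'device' is an already-created IDirect3DDevice9*; the path names are illustrative.
    #include <d3d9.h>

    enum RenderPath { PATH_DX8, PATH_DX9 };

    RenderPath ChooseRenderPath(IDirect3DDevice9 *device)
    {
        D3DCAPS9 caps;
        device->GetDeviceCaps(&caps);

        // The caps only say the card *supports* PS 2.0, not how fast it runs it, which is
        // why a 5200 can legitimately report DX9 caps and still be handed the DX8 path by
        // engines that also look at the device ID or measure performance.
        if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
            return PATH_DX9;
        return PATH_DX8;
    }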

I'm approaching this from a gamer's perspective, not an economist's. That marketing manager will be laughing while his irate consumers are wondering why their DX9 cards can't run DX9 games (even ones marketed as guaranteed to run well on nV hardware, like TR:AoD) at anything approaching playable framerates.


Again, nobody who is buying an 80-dollar video card is going to expect it to run the newest, greatest games at 100 FPS with all the goodies on. They expect it to perform well at lower-end resolutions and settings. There won't be mobs of irate 5200 card owners because those people know what they are getting into when they plop down 80 bucks for one.

Have they had the fastest card in the channel "for over a year?" You'll note that when the 5800U came out in Feb/Mar, some reviewers placed it on top of the 9700P. Sure, 3D fans may have known the 9700P was the superior card, but not everyone would have known it from reading things like Anand's or THG's 5900U reviews.

Are you back-pedaling and saying ATI hasn't had the fastest all-around card in the channel since the 9700 Pro came out? Wow, I have turned ATI's own against them ;)

Maybe I should rephrase that. They had the fastest 3D card in the channel for about 8-10 months, and since then the 5900U has been released as at least a competent competitor. It still doesn't make any difference, as they have failed to capture any of the standalone market.

I like how you continually and oh-so-cleverly refer to me as a "fanATIc," and also to generalize that I refer to all of nV's cards as "crap." If you weren't so willing to paint me with the broad brush of "not agreeing with you, thus wrong," you'd know I'm mainly against the marketing of nV's GF4MX and 5200 lines. But, considering your non-response to every other rebuttal of your misguided posts in this thread, I suppose I shouldn't be surprised you'd try to bolster your argument by creating a strawman that's easier to hit. Have fun swinging.


Well, it is hard not to label you as such. When was the last time you said anything good about Nvidia? Are you even capable of saying anything good about them? And when you perform your next upgrade, will you consider both tracks? You don't have to answer the question, but if you aren't able to do any of the above, then the name fits... If you are able to look at ATI/Nvidia objectively, then I pegged you wrong.

You're free to view an attempt at (increasingly argumentative) conversation based on fact as "whining." You're also free to ignore any and all of the facts I've put before you to refute your previous posts. You've obviously gone ahead and done both--good show.


Well, the problem is that nothing you presented was fact. It was all your own opinion on how the "facts" I presented were nothing but propaganda. I don't know how stating hard market figures can be considered propaganda. The only facts in our conversation are the numbers, which speak for themselves. Everything else is just our opinions on why, who, and what they mean.
 

jiffylube1024

Diamond Member
Feb 17, 2002
Originally posted by: Genx87
As for the game developers, they know which cards perform well with what settings, and they decide which cards to run DX9 shaders on. So far, I doubt anyone will recommend you run DX9 shaders on a 5200, unless you're still a fan of 640x480 gaming at under 30fps. As nV's claim to hold the lion's share of the "DX9 market" is predicated on including as slow a DX9 card as the 5200, I find it a misleading statistic. That's Common Sense 101.


This is irrelevant for the most part because you have two groups of people.

A. Clueless n00b who buys an OEM machine or the cheapest part they can find. They get a 5200, which is a DX9-compliant card. When they plop it into their machine and run a game with DX9 shaders, the game will probably default to a DX8 path and the person would never know the difference.
B. Enthusiast who wouldn't be caught dead with a low-end video card in their gaming machine.

So it isn't a misleading statistic. Just because it doesn't live up to your expectations as a DX9 card does not negate the fact that Nvidia, with the help of a low-end DX9 card, has managed to capture 70% of the DX9 market. And in the end, when a game company sees such lopsided figures, they will in turn develop for that hardware. That is Common Sense 101.

Actually, OEM machines for "clueless n00bs" usually go very cheap on video. We're talking GeForce MX 420 (or 440 if they're lucky), or Radeon 7000 (9000 if they're lucky). FX 5200s aren't that common on low-end systems.

I'm approaching this from a gamer's perspective, not an economist's. That marketing manager will be laughing while his irate consumers are wondering why their DX9 cards can't run DX9 games (even ones marketed as guaranteed to run well on nV hardware, like TR:AoD) at anything approaching playable framerates.


Again, nobody who is buying an 80-dollar video card is going to expect it to run the newest, greatest games at 100 FPS with all the goodies on. They expect it to perform well at lower-end resolutions and settings. There won't be mobs of irate 5200 card owners because those people know what they are getting into when they plop down 80 bucks for one.

Check the video forums. People spend $80 on an FX 5200, which is a fair chunk of change for just a video card for most people, and they expect at least playable framerates at resolutions and settings that are not grotesque. I'm talking 800x600 with decent eye candy on. And often FX 5200 users don't get this. The vast majority of people who walk into Best Buy or wherever and plunk down $80 expect to be able to play the newest games, especially when they can just buy a GameCube for a little bit extra.

I like how you continually and oh-so-cleverly refer to me as a "fanATIc," and also to generalize that I refer to all of nV's cards as "crap." If you weren't so willing to paint me with the broad brush of "not agreeing with you, thus wrong," you'd know I'm mainly against the marketing of nV's GF4MX and 5200 lines. But, considering your non-response to every other rebuttal of your misguided posts in this thread, I suppose I shouldn't be surprised you'd try to bolster your argument by creating a strawman that's easier to hit. Have fun swinging.


Well, it is hard not to label you as such. When was the last time you said anything good about Nvidia? Are you even capable of saying anything good about them? And when you perform your next upgrade, will you consider both tracks? You don't have to answer the question, but if you aren't able to do any of the above, then the name fits... If you are able to look at ATI/Nvidia objectively, then I pegged you wrong.

Why does he need to say anything good about Nvidia? We all know that Nvidia owns a bigger share of the market than ATI. We know they were the undisputed king for several years. We know how good the unified Detonator drivers were, and how they kept putting out solid release after solid release, cranking out performance increase after performance increase, and breaking ATI's heart upon every new product launch. And we also see, with the recent string of questionable drivers and image-quality issues in the 5x.xx series until very recently, that Nvidia's "perfect string" of video cards and driver releases has suddenly gone stale. Obviously it's not total crap, and their cards are still very competitive; however, they have made more than a couple of bad decisions recently, from both a hardware and a driver point of view.

And just like you *claim* Pete does, Genx87, you are even more stubborn in not giving any credit whatsoever to ATI.

You're free to view an attempt at (increasingly argumentative) conversation based on fact as "whining." You're also free to ignore any and all of the facts I've put before you to refute your previous posts. You've obviously gone ahead and done both--good show.


Well, the problem is that nothing you presented was fact. It was all your own opinion on how the "facts" I presented were nothing but propaganda. I don't know how stating hard market figures can be considered propaganda. The only facts in our conversation are the numbers, which speak for themselves. Everything else is just our opinions on why, who, and what they mean.

What is "fact"? According to you, it's only whatever you praise to be so. Nvidia cheated on 3dmark 03. It isn't on every website's front page so you don't consider this fact. Nvidia cheated on clipping planes, but this isn't fact to you.

 

reever

Senior member
Oct 4, 2003
Maybe I should rephrase that. They had the fastest 3D card in the channel for about 8-10 months, and since then the 5900U has been released as at least a competent competitor. It still doesn't make any difference, as they have failed to capture any of the standalone market.

http://www.the-inquirer.net/?article=11054

"Mercury claims the total standalone graphics market fell nine per cent in the quarter, sequentially. Nvidia lost three per cent to hold 54% of the market, while ATI moved up by two per cent to 37%."

Nvidia's share either last quarter or last year was 65%.
 

Pete

Diamond Member
Oct 10, 1999
Are you back-pedaling and saying ATI hasn't had the fastest all-around card in the channel since the 9700 Pro came out?
Well, you were talking about perception becoming reality (5200), and it was not universally accepted that 9700P was the king when the 5800U debuted. Edit - To be clear, I thought the 9700P was the *best* card in the market until the 5900 and the Det52's made things interesting. I'm sure the 9700P was slower in some benchmarks, but (IIRC) usually at that point framerates were so high or differences so small that I thought the 9700P's superior AA IQ won out.

As for saying something good about nV, I've been doing so recently in light of their performance improvements with the Det 52's, and their newly revamped and newly re-competitive line-up. I still think ATi cards are slightly better buys because of better AA quality and better potential PS2 performance (I'm really looking forward to HL2) and a better bundle (HL2), but nV cards are still very competitive and sometimes faster in current games, so they're no longer a clearly risky or poor choice from my perspective.

I presented a lot of fact on AF quality, which you ignored. You falsely accused ATi of lowering IQ in 3DM03 (in fact, that's the first time you called me a fanatic while replying to a post of mine with a non sequitur or more wrong info). You don't know how ATi does AF with the R3x0. Just reading your posts again gets me a little angry, as it's so obvious how you ignored everything I said and continued to throw up smokescreens. You haven't admitted to learning something or being mistaken a single time in this thread, and I think I've given you reason to at least once. And, again, I believe you're stretching the meaning of "fact" by saying that nV's boatload of 5200 sales means that they own the DX9 market. Developers can't squeeze blood from a stone--that's clear enough from the fact that even you stated most games will default the 5200 to DX8 mode. So where does that leave us? IMO, that leaves us with a marchitecture win, preying on consumer ignorance/assumptions. I'm just as disappointed in the equally-unnecessarily-confusing "9100" and "9200." But if I see benchmarks showing a 5200 performing DX9 shaders at a playable clip, I'll change my mind. As it'll probably be Doom 3 and HL2 that will sway me most, we've got some time before I'll be convinced calling the 5200 DX9 and using it as a reason to claim the majority of the DX9 market are legit.
 

BFG10K

Lifer
Aug 14, 2000
Ben:

Trilinear is being requested
Yes, and you do get full trilinear if you don't force AF. If you force AF you get partial trilinear, which is a somewhat questionable decision, but I wouldn't call it cheating. If it did no trilinear at all (like nVidia) even when the control panel option was selected, then I'd take issue with that.

Besides, with ATi's 16x AF the first mip-map boundary stretches much further than nVidia's 8x boundary, so you often can't even see what's beyond it anyway. If you were using a lower tap rating you might see it, but with ATi's extremely fast AF there's no reason not to use the maximum values.

The two main points here are:
(1) If the game requests trilinear AF it gets it, unlike nVidia which does not provide it. Even if the game doesn't ask for AF it still gets one trilinear mipmap if the user requests it, unlike nVidia, who again does not.
(2) ATi's numerous AF advantages basically neutralise any issues that trilinear filtering only the first mip-map might cause.

How many games do you own that have an option for AF?
I actually find the R300's bilinear AF to offer equal image quality so it doesn't worry me.

They benched the game for their big PR event that many sites attended, and they had the compiled MM path ready to go. They without a doubt had the compiler to use.
It's highly likely the full precision path had been finished for a long time while they were still putting the finishing touches into the mixed mode path.

By Dave's comments in this thread, they were trying to show how many special optimizations you need to use for nV's parts.
Perhaps, but there's no denying that they actually did the optimisations. It's not like they produced a crap mixed mode path and then tried to show that as a performance indicator. No matter what you do to the standard path you'll always want to run the mixed mode path to get better performance anyway.

They still have a decent lead when they are both running the 'pure' path, but not when nV is running MM.
Even the 9600 Pro was edging the 5900 in a lot of the benchmarks. I wouldn't call that close.

Why do you think Valve released the bench when they did?
I believe it was being prepared for the 9800XT's release which will be bundling the game with it. Of course Valve's code was then leaked so the release has since been pushed back.

I quoted the guy from Remedy, who helped make the game, and the main reason for that was the adaptive nature of Max Payne 2 and why it is a lousy benchmark.
Hopefully there'll be a way to disable this feature so that accurate benchmarks can be conducted. Of course I'd like to point out that it doesn't necessarily imply that ATi hardware is doing any less work than nVidia's hardware; in fact it could well be the reverse if the game is yet again switching to a mixed mode, reduced precision path when it detects nVidia hardware like every other DirectX 9 class game does so far.

For 3DM2K3 I agree they were doing that, my issue has always been with all of the other titles that people have talked about.
But this issue has been mentioned in general across a wide range of websites, and comments were made for nVidia to implement some form of code re-ordering that could work universally on any shader instead of hard-coded shader substitution. They're finally reordering now, but I'm convinced there was/is far more shader substitution going on than in just 3DMark.

than the R3x0 boards under most settings, and that was with nV's poor-performing drivers.
IIRC that wasn't the case at all. When the Doom III bench was released, ATi didn't know about it and didn't get to optimise their drivers, unlike nVidia, who had planned the whole thing with Carmack. Also IIRC Carmack was saying he wasn't 100% happy with the way it was done since the only benchmark results that were being released were the ones that nVidia had had time to optimise for.

It uses PP as that is all it needs.
And it follows the trend that all future games will likely continue.

How fast do you think ATi would be running FP32 with their current parts? 1/100,000th the speed of nV give or take? ;)
I dunno. But considering they run FP24 at full speed I'd say that's a fair trade-off.

Do I consider something that gets 100 FPS faster than something that gets 99.9999999999 FPS?
I wasn't going to get involved with this but I have to agree with the others that making comments that nVidia is faster than ATi on the grounds of a .9 FPS difference is really quite ridiculous, especially since the margin of error in benchmarking tends to be around 3% and anything within that is considered to be benchmarking noise.

Also that's nowhere near the alleged 70% claim you made a while ago, plus you're ignoring the fact that the ATi cards are significantly faster when AF/AA is cranked in the game. So if anything I'd say ATi hardware is faster than nVidia hardware in Halo, not the other way around like you claim.
 

BenSkywalker

Diamond Member
Oct 9, 1999
Oldfart-

That does not excuse the long-time one-sidedness you have always shown toward nVidia.

And the only time I really get those accusations is when people are on another bandwagon. Back when the Radeon launched and I was recommending it, I was still being called a nVidiot for not recommending it with enough vigor. In the days of the Kyro2 I was a nVidiot for not seeing with clarity how TBRs were going to be the dominant force in the industry within a couple of years (heh). Today I'm a nVidiot because I'm calling people out on the bashing they have been doing over the last several months, which we can now see was overwhelmingly not valid criticism. nVidia had bugs in their drivers; that was the bulk of it. For my issues with benches, no one seems to be taking issue with what I am saying is wrong with them; they just don't like hearing it pointed out. There are benches that put nV on a level playing field with ATi that I am saying I ignore, and others that show ATi smacking nV silly that I think are perfectly valid (and actually, one I do care a decent amount about). For the HL2 issue, I'm not seeing people taking major issue with that anymore (as it really does speak for itself). The driver bugs being just that speaks for itself. If I were nearly as biased as many think, I would be jumping all over the ATi 'cheats' that nV's PR monkeys spun up, but that is just as much BS as it was the other way around. I haven't always shown one-sidedness towards nVidia; it simply is what is remembered when it goes against the grain.

I'm not alone in that assessment. The people in this thread that have been here a long time and know you well are the ones that have the most issue with it.

When I don't agree with them they have an issue with it, but not when I do. I don't recall you having an issue when I was saying buy Radeon over the GeForce :)

Maybe you can't see it, but it comes out plain as day in your posts. The Halo topic exemplifies it.

The Halo topic was about semantics, something I'm not ever going to apologize for in terms of being anal ;)

I will apologize to you for the hostility, though. I usually laugh pretty much everything off, but I have respected you for some time and the discussion was getting under my skin. If I'm ever down your way I'm still planning on taking you out for a beer :D

Pete-

Ben, not to nitpick in a thread full of nitpicks, but do you really think current benchmarks are accurate enough to comfortably state that a result of 100.00fps is faster than one of 99.99fps? I generally consider anything within 5% equal, as I don't consider PC benchmarking to be an exact science.

That particular example would depend on how repeatable it was, but I was talking about considering one thing being one ten billionth of a frame faster still being faster. Faster is faster. When you have two people racing and they are within a thousandth of a second, do you call it a tie or give the gold to the guy who managed to be one thousandth faster?

RBV5-

.99 = 1...Look it up

Most certainly not. The example they are discussing is taking a fraction and converting it over to decimal, where you have a never-ending string of .3s and .6s. The reason for that is that those fractions fail to work out to a terminating decimal, not because .333333333 and .666666666, non-repeating, add up to 1; they would equal .999999999.

The rest of the world seems to have a grasp at what ~equal means.

~equal!=equal
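
For reference, the distinction being argued here is the one between the repeating decimal and any finite truncation of it; the standard identities, written out, are:

    % The repeating decimal equals 1 exactly; any finite truncation falls short of it.
    0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
                   \;=\; \frac{9/10}{1 - 1/10} \;=\; 1,
    \qquad
    \underbrace{0.99\ldots 9}_{n\ \text{digits}} \;=\; 1 - 10^{-n} \;<\; 1.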

BFG10K-

Yes, and you do get full trilinear if you don't force AF. If you force AF you get partial trilinear, which is a somewhat questionable decision, but I wouldn't call it cheating.

I don't consider it cheating either; I didn't consider what PVR did cheating, and I don't consider what nV is doing cheating. I think of all of them as hacks, and they are all inferior to a proper implementation, but not cheating.

It's highly likely the full precision path had been finished for a long time while they were still putting the finishing touches into the mixed mode path.

All I'm talking about is a simple recompile, nothing drastic. They take the code they already had, as is, and compile it with a different flag. That is fairly simple.
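
For a sense of scale, with the DX9 HLSL toolchain that recompile is roughly the following (a sketch under the assumption that the D3DX compile call and the ps_2_a profile were what was in play; the file and entry-point names are made up):

    // Sketch of what "compile it with a different flag" amounts to with the DX9 HLSL
    // compiler via D3DX (assumed API usage; the file/entry-point names are invented).
    #include <d3dx9.h>

    ID3DXBuffer *CompileForTarget(const char *profile, DWORD flags)
    {
        ID3DXBuffer *code = NULL, *errors = NULL;
        // Same source file, same entry point; only the target profile and flags change,
        // e.g. "ps_2_0" for the generic path vs. "ps_2_a" (the NV3x-oriented profile),
        // optionally with D3DXSHADER_PARTIALPRECISION to let the compiler emit FP16 hints.
        D3DXCompileShaderFromFileA("water_reflect.psh", NULL, NULL, "main",
                                   profile, flags, &code, &errors, NULL);
        if (errors) errors->Release();   // real code would inspect the error text
        return code;                     // compiled bytecode, or NULL on failure
    }

    // Usage: CompileForTarget("ps_2_0", 0) vs.
    //        CompileForTarget("ps_2_a", D3DXSHADER_PARTIALPRECISION);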

Perhaps, but there's no denying that they actually did the optimisations. It's not like they produced a crap mixed mode path and then tried to show that as a performance indicator. No matter what you do to the standard path you'll always want to run the mixed mode path to get better performance anyway.

True, all that was missing on that front was the Det50s. My issue with how they did the bench was mainly that they showed a larger performance rift between the nV paths than reasonably should be expected.

Even the 9600 Pro was edging the 5900 in a lot of the benchmarks. I wouldn't call that close.

Well, I don't want to support what they did, but let's say there are new bench results out. They are public on a major site too.

IIRC that wasn't the case at all. When the Doom III bench was released, ATi didn't know about it and didn't get to optimise their drivers, unlike nVidia, who had planned the whole thing with Carmack.

They didn't have their improved scheduler which they do have in the new drivers, that's what I meant by that.

Also IIRC Carmack was saying he wasn't 100% happy with the way it was done since the only benchmark results that were being released were the ones that nVidia had had time to optimise for.

Actually Carmack changed the bench on them at the last minute. They had one planned that they knew they would win big; he changed it to one that made them a bit nervous (paraphrasing his comments).

Hopefully there'll be a way to disable this feature so that accurate benchmarks can be conducted. Of course I'd like to point out that it doesn't necessarily imply that ATi hardware is doing any less work than nVidia's hardware; in fact it could well be the reverse if the game is yet again switching to a mixed mode, reduced precision path when it detects nVidia hardware like every other DirectX 9 class game does so far.

Max Payne 2 is DX8.1; there aren't any DX9 shaders (this is based on Remedy's comments). They have one PS 1.4 shader in the game (they use it for mirrors); the rest are lower. Taking the way the game is coded into consideration, there shouldn't be the performance rift there is. That isn't to say that ATi hardware isn't faster; it may well be. I'm just not going to put faith in bench numbers when the dev of the game says not to, because the cards may be rendering different things.

But this issue has been mentioned in general across a wide range of websites and comments were made for nVidia to implement some form of code re-ordering that could work universally on any shader instead of hard-coded shader subsitution. They're finally reordering now but I'm convinced there was/is far more shader subsitution going on than in just 3DMark.

Now we are seeing performance that meets or exceeds the level of the drivers that had issues, with no IQ problems (in terms of funky shaders that I am aware of). If you look at certain elements of what we were seeing in terms of shaders screwing up, they all seemed to revolve around light interaction; while each of the different benches had different shaders, they were exhibiting a similarly FUBARed state. To me this would likely indicate that they were rescheduling things improperly. To hand-code replacement shaders for all the different benches we saw glitches with and have them all screw up on light interaction, someone would have caught that; they were way too obvious (far more so than anything in 3DM2K3).

I believe it was being prepared for the 9800XT's release which will be bundling the game with it. Of course Valve's code was then leaked so the release has since been pushed back.

The code being leaked didn't push it back from 9/30 (although it may well have pushed it back quite a bit further). There was no way they were going to hit that date; they didn't even manage to hit that date with the public bench.

And it follows the trend that all future games will likely continue.

If that is all that is needed, then that would mean nVidia was 'right' in their choice: have FP32 support for the Quadro, FP16 for DX9 games. I don't think it is quite that simple; I think there will certainly be some shaders that require FP32, and that of course will impact nV's performance fairly badly. That is why I have continued to state that I think overall ATi will hold a decent performance edge in PS 2.0-limited situations.

I dunno. But considering they run FP24 at full speed I'd say that's a fair trade-off.

I think that in retrospect nV would have made some changes to their part; no argument here on that. I just don't think the situation is nearly as dire as a lot of others seem to think.

Also that's nowhere near the alleged 70% claim you made a while ago, plus you're ignoring the fact that the ATi cards are significantly faster when AF/AA is cranked in the game.

The 70% was going the other way; that is the margin many were saying ATi would beat nVidia by due to the HL2 bench. Also, AA and AF don't work properly in the game. AF works in certain areas, and doesn't appear to work at all in others (this includes comparing my Ti4200 with no AF to a R9800 Pro with 8x; no difference in certain situations in terms of filtering). The FX5900 seems to be doing something in the same situation, but it almost looks like it is point filtering while it is applying AF (texture clarity is improved, but the shaders have some pretty bad aliasing; it does look much sharper than the R9800 though). This depends on where you are in the game though.

AA doesn't work unless you disable certain effects in the game, and I really haven't seen any benches under those conditions.
 

Genx87

Lifer
Apr 8, 2002
And, again, I believe you're stretching the meaning of "fact" by saying that nV's boatload of 5200 sales means that they own the DX9 market. Developers can't squeeze blood from a stone--that's clear enough from the fact that even you stated most games will default the 5200 to DX8 mode. So where does that leave us? IMO, that leaves us with a marchitecture win, preying on consumer ignorance/assumptions. I'm just as disappointed in the equally-unnecessarily-confusing "9100" and "9200." But if I see benchmarks showing a 5200 performing DX9 shaders at a playable clip, I'll change my mind. As it'll probably be Doom 3 and HL2 that will sway me most, we've got some time before I'll be convinced calling the 5200 DX9 and using it as a reason to claim the majority of the DX9 market are legit.


You can't stretch the meaning of fact when the numbers (which are facts) say Nvidia owns 70% of the DX9 market. Your "opinion" is that you don't think 5200 cards are DX9 because of performance issues with PS 2.0.

Check the video forums. People spend $80 on an FX 5200, which is a fair chunk of change for just a video card for most people, and they expect at least playable framerates at resolutions and settings that are not grotesque. I'm talking 800x600 with decent eye candy on. And often FX 5200 users don't get this. The vast majority of people who walk into Best Buy or wherever and plunk down $80 expect to be able to play the newest games, especially when they can just buy a GameCube for a little bit extra.


80 dollars for a video card? Well, I guess that is your opinion. And I would be really surprised if the 5200 doesn't play a lot of games at "playable" frame rates. Hell, just for kicks I busted out DAOC on my Athlon 800 + GF2 MX400 and it played the game fine at 800x600.

Like I said, if somebody is plopping down 80 bucks for a video card expecting to play the newest games fast, then they are ignorant. Would you expect that if they plopped 80 bucks down on a 9200 they would be equally as pissed? I mean, come on... these are budget cards and they will run games at budget speeds.

Actually, OEM machines for "clueless n00bs" usually go very cheap on video. We're talking GeForce MX 420 (or 440 if they're lucky), or Radeon 7000 (9000 if they're lucky). FX 5200s aren't that common on low-end systems.

While that may be true, I don't think Nvidia gathered 70% of the DX9 market selling 5200s in the standalone market. I went shopping for computers with my sister and a lot of the machines had either 9100s or 5200s as their video cards. A couple still had TNT2s and a couple had GF4 MXs. One machine had a 9700 Pro.

Why does he need to say anything good about Nvidia? We all know that Nvidia owns a bigger share of the market than ATI. We know they were the undisputed king for several years. We know how good the unified Detonator drivers were, and how they kept putting out solid release after solid release, cranking out performance increase after performance increase, and breaking ATI's heart upon every new product launch. And we also see, with the recent string of questionable drivers and image-quality issues in the 5x.xx series until very recently, that Nvidia's "perfect string" of video cards and driver releases has suddenly gone stale. Obviously it's not total crap, and their cards are still very competitive; however, they have made more than a couple of bad decisions recently, from both a hardware and a driver point of view.


Why? Do you even need to ask? Look at what he is claiming. He is claiming to be a level-headed, unbiased poster. I asked him if he really is as level-headed and not a fanATIc as he seems to claim. Well, I didn't really ask him to respond, but to think it over to himself and reflect. I guess we will leave it up to him to decide on his own.

And just like you *claim* Pete does, Genx87, you are even more stubborn in not giving any credit whatsoever to ATI.

While I may not express it here, since most of my time is spent dealing with fanATIcs, I have admitted several times that ATI has made a pretty good chip in the R3xx, and when I do my next upgrade I will be looking at both NV40 and R420 options. That is a lot more than some people in here would do.

What is "fact"? According to you, it's only whatever you praise to be so. Nvidia cheated on 3dmark 03. It isn't on every website's front page so you don't consider this fact. Nvidia cheated on clipping planes, but this isn't fact to you.


A. I think in this case the hard market numbers are fact.
B. What does the rest of your little tantrum have to do with the market numbers?

"Mercury claims the total standalone graphics market fell nine per cent in the quarter, sequentially. Nvidia lost three per cent to hold 54% of the market, while ATI moved up by two per cent to 37%."

Nvidia's share either last quarter or last year was 65%.


Well, in my original argument didn't I say Nvidia had 54%??? The article I linked to, in case you didn't notice the date, was from early August. That article, in case you didn't read it, was talking about 2nd-quarter numbers.

 

baraka

Junior Member
Apr 15, 2003
Originally posted by: Genx87
This is irrelevant for the most part because you have two groups of people.

A. Clueless n00b who buys an OEM machine or the cheapest part they can find. They get a 5200, which is a DX9-compliant card. When they plop it into their machine and run a game with DX9 shaders, the game will probably default to a DX8 path and the person would never know the difference.
B. Enthusiast who wouldn't be caught dead with a low-end video card in their gaming machine.

So it isn't a misleading statistic. Just because it doesn't live up to your expectations as a DX9 card does not negate the fact that Nvidia, with the help of a low-end DX9 card, has managed to capture 70% of the DX9 market. And in the end, when a game company sees such lopsided figures, they will in turn develop for that hardware. That is Common Sense 101.

I'm approaching this from a gamer's perspective, not an economist's. That marketing manager will be laughing while his irate consumers are wondering why their DX9 cards can't run DX9 games (even ones marketed as guaranteed to run well on nV hardware, like TR:AoD) at anything approaching playable framerates.


Again, nobody who is buying an 80-dollar video card is going to expect it to run the newest, greatest games at 100 FPS with all the goodies on. They expect it to perform well at lower-end resolutions and settings. There won't be mobs of irate 5200 card owners because those people know what they are getting into when they plop down 80 bucks for one.
So, you don't think it's misleading to advertise the card as DX9 compliant even though it's going to run in DX8 mode?

Also, I'd like to point out the inconsistency of your statements:

Clueless n00b who buys an OEM machine or the cheapest part they can find. They get a 5200, which is a DX9-compliant card. When they plop it into their machine and run a game with DX9 shaders, the game will probably default to a DX8 path and the person would never know the difference.

and

Again, nobody who is buying an 80-dollar video card is going to expect it to run the newest, greatest games at 100 FPS with all the goodies on. They expect it to perform well at lower-end resolutions and settings. There won't be mobs of irate 5200 card owners because those people know what they are getting into when they plop down 80 bucks for one.

First you say the people who buy these cards don't know what to expect and then you say they really do.





 

Genx87

Lifer
Apr 8, 2002
So, you don't think it's misleading to advertise the card as DX9 compliant even though it's going to run in DX8 mode?

Misleading? Depends...

If you are buying an 80-dollar card that is being advertised to run a DX9-class game at the highest resolution at high speeds, then yes. If you are buying a DX9 (compliant) card because it is (compliant), then no.

Also, I'd like to point out the inconsistency of your statements:

Here, let me help you get this correct...

Clueless n00b who buys an OEM machine or the cheapest part they can find. They get a 5200, which is a DX9-compliant card. When they plop it into their machine and run a game with DX9 shaders, the game will probably default to a DX8 path and the person would never know the difference.

and

Again, nobody who is buying an 80-dollar video card is going to expect it to run the newest, greatest games at 100 FPS with all the goodies on. They expect it to perform well at lower-end resolutions and settings. There won't be mobs of irate 5200 card owners because those people know what they are getting into when they plop down 80 bucks for one.

First you say the people who buy these cards don't know what to expect and then you say they really do.



These are two different types of people.
The clueless n00bs who buy an OEM machine are not buying a video card but rather a computer that happens to come with a DX9-compliant video card, and the clueless n00b who is buying the cheapest card is one and the same. The chances are they don't know the difference and will never know the difference when a game goes into DX8 mode for the video card.

The second group are people who are buying an 80-dollar video card knowing full well that the card will run at budget speeds and budget resolutions. These are totally different types of buyers.

 

jiffylube1024

Diamond Member
Feb 17, 2002
Originally posted by: Genx87
The second group are people who are buying an 80-dollar video card knowing full well that the card will run at budget speeds and budget resolutions. These are totally different types of buyers.

People (noobs) don't expect "budget resolutions" to run or look like crap, which they do.
 

Pete

Diamond Member
Oct 10, 1999
That particular example would depend on how repeatable it was, but I was talking about considering one thing being one ten billionth of a frame faster still being faster. Faster is faster. When you have two people racing and they are within a thousandth of a second, do you call it a tie or give the gold to the guy who managed to be one thousandth faster?
Uncle! UNNNNCLE!!!


;)

If you are buying an 80-dollar card that is being advertised to run a DX9-class game at the highest resolution at high speeds, then yes. If you are buying a DX9 (compliant) card because it is (compliant), then no.
People don't buy cards for marketing features. I didn't buy an ATi card because it was 2D "compliant," but because it does 2D well. Similarly, I wouldn't want to buy a 5200 because it's DX9 "compliant," but because it renders DX9 effects at a playable framerate. No one is saying "at the highest resolution at high speeds" (nice strawman), but I don't think a 5200 can render at a low res at decent speeds. I'm curious to see 5200 framerates in 3DM03, AM3, DIII, and HL2 at 6x4 or 8x6, but most reviews nowadays seem to think monitors can't go lower than 10x7 ('cept for TR, but they haven't been given a chance to try a 5200 with D3 or HL2).
 

BFG10K

Lifer
Aug 14, 2000
I think of all of them as hacks,
Well, I guess as long as you're consistent that's fine. Personally I believe that if a program requests something it should always get it, unless the hardware isn't capable or the user has overridden the request. Developers like Gabe seem to agree with me, and I say it's a pretty reasonable stance to take. As a developer you don't want drivers to constantly shift the goalposts whenever you're trying to do something.

All I'm talking about is a simple recompile, nothing drastic.
Assuming that the code would work perfectly unaltered, which it might not have.

They take the code they already had, as is, and compile it with a different flag. That is fairly simple.
If you shift to a different compiler, or even a different version of the same compiler, your code isn't automatically guaranteed to work unaltered, especially if your original code was built to work around compiler issues with your previous compiler.

I'm not saying this is what happened; all I'm pointing out is that a recompile sometimes isn't as simple as you make it sound.

True, all that was missing on that front was the Det50s.
The Det50s were beta at that time and numerous websites were ripping into them for degrading image quality even further. Valve themselves asked reviewers not to use them because of questionable optimisations they had found in them.

They are public on a major site too.
Link?

They didn't have their improved scheduler which they do have in the new drivers, that's what I meant by that.
Given Doom III only uses a few simple 1.x shaders I wouldn't expect OOE to help it much, if at all.

Actually Carmack changed the bench on them at the last minute.
I know the current ATi drivers at the time had problems running it.

They have one PS 1.4 shader in the game (they use it for mirrors); the rest are lower.
The NV3x has performance hits when running 1.x shaders too, just not as much as it does with version 2.0.

To me this would likely indicate that they were rescheduling things improperly.
That sounds like a reasonable theory, but I have problems giving nVidia the benefit of the doubt after everything we've seen from them. These problems are simply not a one-off like Quack, and they extend beyond shaders too. Also, they must've tested the drivers and found that they had such widespread issues, so they should've disabled the instruction reordering until it was working reasonably well (if it was indeed the cause of the problems).

There was no way they were going to hit that date; they didn't even manage to hit that date with the public bench.
Yes, but the 9800XT came out, what, two weeks later? Valve's plan must've been to benchmark current cards at the time and then allow reviewers to again use the HL2 benchmarks when ATi's XT boards arrived. Given ATi is Valve's bundling partner, there's no doubt in my mind that the benchmarks were timed to coincide with the 9800XT's release.