Anand's 9800XT and FX5950 review, part 2


Rogodin2

Banned
Jul 2, 2003
3,219
0
0
Genx

Pulling numbers out your arse again eh?

Post your source (and it better be a valid one) for the percentages.

rogo
 

spam

Member
Jul 3, 2003
141
0
0

Ben,

Another thing I noticed is that your view of history seems to be unique. I have not seen any other source suggesting this area of complicity and conspiracy regarding the development of Direct3D standards, and there has been no general outcry of "foul!" or "unfair!" Yours seems to be a "voice in the wilderness". ATI and Nvidia have had no qualms about airing each other's dirty linen when it suits them.

 

TerryMathews

Lifer
Oct 9, 1999
11,464
2
0
Originally posted by: BFG10K
Look at Halo performance with PS 2.0, real staggering difference there isn't it?
Halo? Given it runs on the Xbox's DirectX 8.x hardware, I can't imagine there'd be a staggering amount of PS 2.0 code in there, if any at all.

I was going to keep my mouth shut, but I couldn't resist: As an owner of both an Xbox and a PC with a Radeon 9700, I can unequivocally say that Halo on the PC looks very little like Halo on the Xbox. Whether it supports PS 2.0, I'm not sure, but saying or implying that they use the same graphics rendering engine is incorrect.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I have not seen any other source that has suggested this area of complicity and conspiracy regarding development of Direct 3d standards,

I said absolutely nothing about conspiracy, nor even hinted at it. Check out how the submission process works for either one of the APIs; it is certainly not a conspiracy in the least.

there has been no general outcry of foul! or unfair!

It isn't foul or unfair; it's how the process works. The IHVs are free to make submissions regarding the feature support of the APIs, and have been for as long as I can recall for either DX or OpenGL. With OpenGL, the ARB looks at the submissions and decides which features to implement and which not to (not quite that straightforward, as there are proprietary issues to work around on occasion). The same applies to DX, except instead of the ARB you have MS making the call. This is the logical way to do things.

What if MS came up with the notion that 3D procedural textures were going to be the driving force behind DX10, but none of the IHVs were going to support it? Where would that leave MS's credibility regarding gaming APIs? Obviously they want to make certain that they have a focal point that will be supported by the IHVs, and the IHVs can't wait for MS to decide what that is going to be. They need to have their engineers working on parts long before the DX specs are finalized, and they need to figure out what they can and can't do in a limited amount of time with whatever limitations on resources they have. If you look at DX9 there is another PS revision included, a logical extension of PS 2.0. If the baseline shader support had been what is now PS 3.0, then all of the IHVs' parts, and DX9 in general, would be largely useless right now.

It is nothing like a conspiracy or some shady back-door dealings. The IHVs submitting proposals on what should be implemented into the APIs is the only reasonable way to handle it.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Originally posted by: Rogodin2
Genx

Pulling numbers out your arse again eh?

Post your source (and it better be a valid one) for the percentages.

rogo

Oh, my bad... It appears Nvidia had 64% of the standalone market.

http://www.xbitlabs.com/news/video/display/20030801102103.html

"As reported yesterday, NVIDIA has 64% of the total standalone GPUs for desktop computers market, flat with the Q1, just like ATI Technologies with its 28%. Other suppliers? shares obviously remained unchanged."

And they shipped 60% of the DX9 cards for the same quarter.

http://www.xbitlabs.com/news/video/display/20030731084600.html

NVIDIA's share among all DirectX 9.0-supporting graphics processors shipped skyrocketed to 60%, obviously, thanks to 70% share in the Value DirectX 9.0-compliant GPUs market. Even though its dollars share is lower, units share seems to be very important for the company.





 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Genx87, try filtering the propaganda you cluelessly repost.

NVIDIA's share among all DirectX 9.0-supporting graphics processors shipped skyrocketed to 60%, obviously, thanks to 70% share in the Value DirectX 9.0-compliant GPUs market.
ATi doesn't have a value DX9 card. ATi only sells DX9 cards that appear to perform at least acceptably in DX9 benchmarks/games. The 5200 ("value DX9 GPU") doesn't perform acceptably with DX9 shaders, thus, as I said, I'm not including it in the "DX9 market" just to prop up some marketer's numbers. And even the 5600 performs piss-poorly in current DX9 tests, so including it as a DX9 target developers should aim for is unrealistic.

So nV is shipping 60% of the DX9 cards, with ATi shipping 40%. Once you consider that nV must be shipping many, many more 5200's than it is 5600's and up (as usual, value cards far outsell $100+ cards), I'll wager ATi comes out on top in terms of realistically fast DX9 cards.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Genx87, try filtering the propaganda you cluelessly repost.

Is that all you fanATIcs can do now? When presented with evidence, cry to your mommies? This is one of the more pathetic displays I have seen to date.

ATi doesn't have a value DX9 card. ATi only sells DX9 cards that appear to perform at least acceptably in DX9 benchmarks/games. The 5200 ("value DX9 GPU") doesn't perform acceptably with DX9 shaders, thus, as I said, I'm not including it in the "DX9 market" just to prop up some marketer's numbers. And even the 5600 performs piss-poorly in current DX9 tests, so including it as a DX9 target developers should aim for is unrealistic.

Business 101: if you own the largest market share in a new market, you've just created an entry barrier for the competition. People always look to the market leader when purchasing, especially when there is confusion in the marketplace.

Nobody outside of a small % gives a rat's behind how Nvidia's low-end cards perform in DX9. But the simple fact that Nvidia owns the largest share of the new market will make it tougher for ATi to recover.

Speaking of clueless n00b.


So nV is shipping 60% of the DX9 cards, with ATi shipping 40%. Once you consider that nV must be shipping many, many more 5200's than it is 5600's and up (as usual, value cards far outsell $100+ cards), I'll wager ATi comes out on top in terms of realistically fast DX9 cards.

What you don't seem to grasp is that prior to Q2, ATI basically had the entire market. So in one quarter Nvidia managed to go from 0% of the DX9 market to 60%. Again, you seem to confuse having the fastest card with the be-all and end-all for a business. AMD has had faster processors than Intel for years, but that doesn't seem to help AMD make more money or gain market share.

It is also hilarious to watch you attack the numbers solely because of your belief that the 5200 cards are not DX9. I am sure if some Nvidia marketing manager were reading this he would be laughing all the way to the bank.

Why won't you comment on another glaring problem for ATI? Even though they apparently have had the fastest card in the channel for over a year, they have yet to put a dent in Nvidia's standalone market. 64% is pretty damn strong for a company that supposedly has had "crap" (as you fanATIcs like to put it) cards.

I'll await some more of your whining...
 

jbirney

Member
Jul 24, 2000
188
0
0
Partial precision (PP) for the NV3X under HL2 also showed the R9800 with a substantial lead, while Halo has the 5900 ahead of the 9800.
And the one actual game that is out that people want to own on that list is as fast or faster on the 5900. That is what I can see with crystal clarity.

Not according to this: http://www.extremetech.com/article2/0,3973,1354541,00.asp Sorry, but I no longer trust any of AnandTech's numbers.

Refrast doesn't handle some of those properly. If refrast can't do it, then no way should any board be expected to.

PS2.0A is part of the standard.

No, see PS2.0A results here:
http://www.tech-report.com/reviews/2003q4/geforcefx-5950ultra/index.x?pg=11

Notice there's no real difference. Also, the issues with some scores of the FX being 0 are labeled as 'not exposed in drivers yet'. How many more months do they need?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Not according to this: http://www.extremetech.com/article2/0,3973,1354541,00.asp Sorry, but I no longer trust any of AnandTech's numbers.

Old drivers in that link. Just check the numbers on the TechReport bench you linked to. If you trust their numbers enough to post the ShaderMark numbers, I can only assume you trust them enough to look at their Halo numbers.

No, see PS2.0A results here:
http://www.tech-report.com/reviews/2003q4/geforcefx-5950ultra/index.x?pg=11

Notice there's no real difference. Also, the issues with some scores of the FX being 0 are labeled as 'not exposed in drivers yet'. How many more months do they need?

The no-score issue is a good one, and one that I wish more sites would look into. Two of the tests that the FX scores a 0 on, refrast can't render properly, and one of the others exposes a bug MS says they intend to remove. With an obvious devotion to creating such an impartial benchmark, it is a wonder that sites don't simply run ShaderMark and call it a day ;)
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
With an obvious devotion to creating such an impartial benchmark it is a wonder that sites don't simply run ShaderMark and call it a day

Did it wind you up, Ben?

As for the comments on an impartial benchmark...
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
As for the comments on an impartial benchmark...

As soon as you start showing SplinterCell benches of 0 for the R3X0 core boards at the highest quality settings I'll consider your point valid.

Until then, Dave: exposing a bug in DirectX and doing something refrast can't handle is OK by you? I know, bad comparison, as one is a highly rated game while the other is a synthetic bench; everyone knows which is more important.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Ben:

Hehe, try actually using it. It has a hideous level of aliasing.
I did. In fact I used the R100's AF as well. Sure, there were some things I didn't like about it, but in 2000 when it made its debut the cards around it couldn't even run AF at playable levels, while ATi was offering 64-tap anisotropic for essentially free. Again, you can disable it and gain performance, unlike nVidia's DXT1, which loses performance when enabled.

So far you have managed to say that 3DMark2K3 has cheats and that is the only thing.
So in other words there's no registry switch, and you have no evidence at all to support your claim that nVidia's driver-hiding is not there to stop cheat detection from working.

You should have just said this to begin with instead of hiding behind the charade of dumb users.

So they are by your definition cheating, or nVidia is not. Either ATi and nV are both cheating or both of them aren't, take your pick.
No they aren't. If the application isn't requesting AF then it's up to the driver to determine how it does it. ATi's drivers aren't overriding an application's request, they aren't using application detection, and they're exhibiting clear and consistent behaviour across the board. nVidia's drivers are not, and simply change things whenever it suits them.

Accusing ATi of cheating in this case is no different to accusing them of cheating because their AF is adaptive and doesn't filter all surfaces.

What are you talking about? Are you talking about ATi not rendering certain effects properly as was recently discussed at the counterpart to ShaderDay, or something else?
No, I'm talking about nVidia's further attempts to mask their inconsistencies by changing the screenshots so they no longer look like what is being rendered, as Gabe reported they were doing. ATi's quasi-trilinear certainly isn't doing that, and you can see its effects as plain as day in a screenshot.

If you take a screenshot of nVidia's forced bilinear it'll probably look trilinear. o_O

Your board wouldn't be able to run it.
And the FX can't run certain code either. Honestly what is the point of this useless tangent?

IPC when comparing things that don't have comparable clock speed is pretty much the defining characteristic of a Mac user.
Did you even read what I said? I said that ATi had both the IPC and the brute-force advantage, meaning that regardless of clock speeds you can expect R3xx to come out on top.

You compile the code with ATi's targeted compiler and it runs on an FX, do the same with the FX compiler and see what you get.
Huh? Valve used both nVidia's Cg and Microsoft's compiler.

If that was true then it would be a good point.
It is true. Multiple benchmarks confirm it, multiple developers like Carmack and Gabe confirm it, and Valve's presentation had Microsoft's backing as evidence that the numbers weren't being cooked.

The reality is that smaller developers can't afford to support the utterly minuscule market share of high-end DX9 parts as a focal point.
How does that prove that the FX isn't inferior to the 9xxx line?

How is it nonsensical to take issue with your repeated false statements that nV is destroyed in DX9 performance?
Answer the question.

If a Microsoft-compiled, hand-optimised, mixed-mode rendering path can't beat ATi's generic full-precision path, then why do you expect the full-precision path to magically soar above the mixed-mode path solely on the basis that Microsoft's compiler is used?

Answer the question and stop this charade.

This information has been posted hundreds of times around the web.
Yes, it's also been posted that Halo is one of the worst console ports in the history of PC gaming, and that many people are returning the game because it runs so abysmally on even top-end PC hardware. That's hardly a title that reinforces your position.

Regardless, I seem to remember reading that Halo is using a similar mixed-mode, lowered-precision path to the one Half-Life 2 is using, so that's hardly the basis for a sound comparison. And yet again you're ignoring the testimony from two key developers (Carmack and Gabe) and the half a dozen or so other DirectX 9 benchmarks/games that show the opposite of Halo.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Just in case you missed Carmack's comments, you can find them here.

GD: John, we've found that NVIDIA hardware seems to come to a crawl whenever Pixel Shader's are involved, namely PS 2.0..

Have you witnessed any of this while testing under the Doom3 environment?


"Yes. NV30 class hardware can run the ARB2 path that uses ARB_fragment_program, but it is very slow, which is why I have a separate NV30 back end that uses NV_fragment_program to specify most of the operations as 12 or 16 bit instead of 32 bit."

John Carmack

So what's your take on that, Ben? Is Carmack another incompetent programmer? Or has he been bought out by ATi as well?


Here's another.

"Precision has a pretty big impact on the current nVidia chips, so it's important to use half precision data types and the nVidia-specific 2.0a HLSL compiler when running on NV3X hardware. Keeping those limitations in mind, it hasn't been too difficult for us to maintain an acceptable level of performance."

James Loe

So in other words, in order for the NV3x line to have any hope of competing with the R3xx it has to run at a dumbed-down precision, highlighting the fundamental flaws of the architecture that everyone except Ben seems to understand.
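
To make the Loe quote concrete, here is a minimal, hypothetical Direct3D 9 sketch of what "half precision data types and the nVidia-specific 2.0a HLSL compiler" means in practice. D3DXCompileShaderFromFile and the ps_2_0/ps_2_a profile strings are the real D3DX9 interface; the file and entry-point names are made up and error handling is omitted.

    #include <d3dx9.h>   // link against d3dx9.lib

    void CompileWaterShaders()
    {
        LPD3DXBUFFER code = NULL, errors = NULL;

        // Generic SM2.0 profile -- the path R3xx hardware runs well:
        D3DXCompileShaderFromFile("water.psh", NULL, NULL, "ps_main",
                                  "ps_2_0", 0, &code, &errors, NULL);

        // NV3x-oriented 2.0a profile -- emits instruction ordering
        // better suited to the FX pipeline:
        D3DXCompileShaderFromFile("water.psh", NULL, NULL, "ps_main",
                                  "ps_2_a", 0, &code, &errors, NULL);

        // Inside water.psh, "half precision data types" just means
        // declaring half4/half3 instead of float4/float3 where FP16 is
        // visually adequate; NV3x hardware can then run those operations
        // at 16 bits instead of 32.
    }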
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
BFG-

So in other words there's no registry switch, and you have no evidence at all to support your claim that nVidia's driver-hiding is not there to stop cheat detection from working.

You make the paranoid accusations, you provide the proof. That is how it works.

No they aren't. If the application isn't requesting AF then it's up to the driver to determine how it does it. ATi's drivers aren't overriding an application's request, they aren't using application detection, and they're exhibiting clear and consistent behaviour across the board.

Just ignore that the application is requesting trilinear and your theory works fine. Unfortunately for your theory, ATi is in fact explicitly ignoring an application request.

No, I'm talking about nVidia's further attempts to mask their inconsistencies by changing the screenshots so they no longer look like what is being rendered, as Gabe reported they were doing.

You want me to start quoting the counterpart to ATi's PR monkey? Now that nV has had their PR day, I can find quotes from developers saying the same types of things about ATi cheating. The difference is they use shipping products. Now we have a whole list of ATi 'cheats' to counterbalance the nVidia 'cheats'. What do you think about that? The reality is it's all so much BS, but I don't expect that you could possibly realize that.

Huh? Valve used both nVidia's Cg and Microsoft's compiler.

WTF? Where did you read anything like that? Valve did not use Cg; they used two different MS compilers, one optimized for ATi's part and one for nVidia's. The problem is that they did not use the same compiler for the different paths.

It is true. Multiple benchmarks confirm it, multiple developers like Carmack and Gabe confirm it, and Valve's presentation had Microsoft's backing as evidence that the numbers weren't being cooked.

Are you sure you want to go on this tangent?

How does that prove that the FX isn't inferior to the 9xxx line?

That doesn't; of course, very few people think it is anything close to that clear-cut anyway. Taking a look at the games I play, the FX comes out on top in damn near every one of them.

If a Microsoft-compiled, hand-optimised, mixed-mode rendering path can't beat ATi's generic full-precision path, then why do you expect the full-precision path to magically soar above the mixed-mode path solely on the basis that Microsoft's compiler is used?

Answer the question and stop this charade.

I have answered this multiple times already. The point of it is that they were trying to show a larger performance rift than there would be if it was done by any reasonably intelligent, impartial developer. They were trying to fool people into thinking the FX had a larger performance rift than it actually does when comparing optimized vs. non-optimized code. That has been my singular point all along; apologies if stating it a dozen times wasn't enough to keep that clear.

Yes, it's also been posted that Halo is one of the worst console ports in the history of PC gaming, and that many people are returning the game because it runs so abysmally on even top-end PC hardware. That's hardly a title that reinforces your position.

And I've tried to explain it to them: welcome to the world of pixel-shader-heavy titles. TRAoD performs considerably worse than Halo (staggeringly so when all features are maxed) and doesn't come close to its visuals. I've told numerous people how to fix the performance issues: run the game in fixed-function mode. You are a champion of pixel shaders; you should be using Halo as a rally point. It is a top-tier game that uses shaders heavily; the only problem is that it doesn't favor your vid card.

Regardless, I seem to remember reading that Halo is using a similar mixed-mode, lowered-precision path to the one Half-Life 2 is using, so that's hardly the basis for a sound comparison.

It is a title that uses DX9 shaders; the game has some levels that are covered in shaders. It is also shipping, and we don't need to try and sift through an nV-sponsored PR event to get to the truth of the matter.

And yet again you're ignoring the testimony from two key developers (Carmack and Gabe) and the half a dozen or so other DirectX 9 benchmarks/games that show the opposite of Halo.

There is one game on the market that has heavy PS usage that includes DX9 shaders that I have any interest in, and it runs faster on nV hardware. Obviously you are taking the stance that shipping games are nowhere near as important as synthetic benches. How many DX9 titles do you own, BFG? List them all.

GD: John, we've found that NVIDIA hardware seems to come to a crawl whenever Pixel Shader's are involved, namely PS 2.0..

Have you witnessed any of this while testing under the Doom3 environment?

"Yes. NV30 class hardware can run the ARB2 path that uses ARB_fragment_program, but it is very slow, which is why I have a separate NV30 back end that uses NV_fragment_program to specify most of the operations as 12 or 16 bit instead of 32 bit."

John Carmack

The NV30 has a lot of problems with shaders; I thought we were talking about current parts? INT12 has no advantage vs. running FP16 on the NV35, and this question was clearly posed as one about the NV30 in particular (Carmack explicitly names the part). You can dig up some more GD quotes if you'd like; I'll start quoting Tom's.
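
For what it's worth, the "separate NV30 back end" Carmack describes boils down to a renderer picking its fragment path by extension at startup. A rough, hypothetical sketch (not id's actual code), assuming a current GL context:

    #include <string.h>
    #include <GL/gl.h>

    enum Backend { BACKEND_FIXED_FUNCTION, BACKEND_ARB2, BACKEND_NV30 };

    Backend PickFragmentBackend()
    {
        // Prefer the NV path, which allows FX12/FP16 precision hints;
        // fall back to the generic ARB2 path, which forces full FP32
        // on NV3x (hence the slowdown Carmack mentions).
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        if (ext && strstr(ext, "GL_NV_fragment_program"))
            return BACKEND_NV30;
        if (ext && strstr(ext, "GL_ARB_fragment_program"))
            return BACKEND_ARB2;
        return BACKEND_FIXED_FUNCTION;
    }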

So in other words, in order for the NV3x line to have any hope of competing with the R3xx it has to run at a dumbed-down precision, highlighting the fundamental flaws of the architecture that everyone except Ben seems to understand.

This is what you seem to be ignoring, and it is a key point. If there is no benefit to running FP24 then there isn't a big issue. You must have it drilled into your head pretty hard that nVidia always has to be superior to ATi in every way; that isn't true. If running a higher level of precision provides no benefit, which I can quote both Carmack and Sweeney commenting on if you want to play the quoting game, then it is a waste to use it. Why do I keep harping on games instead of synthetic benches? Perhaps because the types of shaders developers are talking about, and the level of precision they say they will require, overwhelmingly don't agree with what a few ATi-driven benches show.
 

reever

Senior member
Oct 4, 2003
451
0
0
Just ignore that the application is requesting trilinear and your theory works fine. Unfortunately for your theory, ATi is in fact explicitly ignoring an application request.

You see that little box that says "application selection"? Try clicking it some time. ATI has said on numerous occasions that it will do its filtering method if you select it through the control panel. If you set it to application settings, and trilinear in the game, it will run trilinear. However, "application" on the nVidia control panel does nothing to your filtering: no matter what options you set in the panel or in the game, the drivers will override your option. That is not what happens with ATi's drivers.

There is one game on the market that has heavy PS usage that includes DX9 shaders that I have any interest in, and it runs faster on nV hardware.

What game is that? Gun Metal? Aquamark?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
ATI has said on numerous occasions that it will do its filtering method if you select it through the control panel.

If the game doesn't have AF as an option then you are SOL, and that is the overwhelming majority of titles.

What game is that? Gun Metal? Aquamark?

Halo.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Genx87, try filtering the propaganda you cluelessly repost.
Is that all you fanATIcs can do now? When presented with evidence, cry to your mommies? This is one of the more pathetic displays I have seen to date.

Um, what are you talking about? How does calling your post information without understanding equate to "crying to mommy?"

Business 101: if you own the largest market share in a new market, you've just created an entry barrier for the competition. People always look to the market leader when purchasing, especially when there is confusion in the marketplace.

Nobody outside of a small % gives a rat's behind how Nvidia's low-end cards perform in DX9. But the simple fact that Nvidia owns the largest share of the new market will make it tougher for ATi to recover.

Speaking of clueless n00b.

Yes, I'm sure most people who wander into Best Buy know who has the largest market share of DX9 cards.

As for the game developers, they know which cards perform well at which settings, and they decide which cards to run DX9 shaders on. So far, I doubt anyone will recommend you run DX9 shaders on a 5200, unless you're still a fan of 640x480 gaming at under 30 fps. As nV's claim to hold the lion's share of the "DX9 market" is predicated on including as slow a DX9 card as the 5200, I find it a misleading statistic. That's Common Sense 101.

What you don't seem to grasp is that prior to Q2, ATI basically had the entire market. So in one quarter Nvidia managed to go from 0% of the DX9 market to 60%. Again, you seem to confuse having the fastest card with the be-all and end-all for a business. AMD has had faster processors than Intel for years, but that doesn't seem to help AMD make more money or gain market share.
Agreed, but ATi also had a bad rep that no doubt doesn't disappear instantly--something OEMs keep in mind. Also, ATi was only selling $200+ cards, not exactly prime movers. I'm not looking to excuse the fact that ATi didn't gain market share rapidly, but to explain it.

It is also hilarious to watch you attack the numbers solely because of your belief that the 5200 cards are not DX9. I am sure if some Nvidia marketing manager were reading this he would be laughing all the way to the bank.
I'm approaching this from a gamer's perspective, not an economist's. That marketing manager will be laughing while his irate consumers are wondering why their DX9 cards can't run DX9 games (even ones marketed as guaranteed to run well on nV hardware, like TR:AoD) at anything approaching playable framerates.

Why won't you comment on another glaring problem for ATI? Even though they apparently have had the fastest card in the channel for over a year, they have yet to put a dent in Nvidia's standalone market. 64% is pretty damn strong for a company that supposedly has had "crap" (as you fanATIcs like to put it) cards.
Have they had the fastest card in the channel "for over a year?" You'll note that when the 5800U came out in Feb/Mar, some reviewers placed it on top of the 9700P. Sure, 3D fans may have known the 9700P was the superior card, but not everyone would have known it from reading things like Anand's or THG's 5900U reviews.

I like how you continually and oh-so-cleverly refer to me as a "fanATIc," and generalize that I refer to all of nV's cards as "crap." If you weren't so willing to paint me with the broad brush of "not agreeing with you, thus wrong," you'd know I'm mainly against the marketing of nV's GF4MX and 5200 lines. But, considering your non-response to every other rebuttal of your misguided posts in this thread, I suppose I shouldn't be surprised you'd try to bolster your argument by creating a strawman that's easier to hit. Have fun swinging.

I'll await some more of your whining...
You're free to view an attempt at (increasingly argumentative) fact-based conversation as "whining." You're also free to ignore any and all of the facts I've put before you to refute your previous posts. You've obviously gone ahead and done both--good show.
 

DaveBaumann

Member
Mar 24, 2000
164
0
0
As soon as you start showing SplinterCell benches of 0 for the R3X0 core boards at the highest quality settings I'll consider your point valid.

Well, Ben, for the reviews I don't do any comparisons - but if you really wanted then we could try the full settings in TR:AoD, which include PS2.0 shadows requiring float buffer support.... ;)

However, my comment was rather a wry one since, reading this thread, you appear to have this knee-jerk reaction to any shader app that doesn't show what you like - it's either biased, flawed, poorly coded, irrelevant or motivated by money. I just found it a little amusing is all. :)

As for the use of float support in ShaderMark, that doesn't represent bias - I know of numerous developers that are quite vexed by this difference in support, and the other elements hardly represent bias either.

And as for the comment about "leaving it at that", well, that's just a bit off really.
 

reever

Senior member
Oct 4, 2003
451
0
0
There is one game on the market that has heavy PS useage that includes DX9 shaders that I have any interest in, and it runs faster on nV hardware.


http://www.hardocp.com/article.html?art=NTMzLDM=
http://www.hardocp.com/article.html?art=NTM3LDQ=
http://www.tech-report.com/reviews/2003q4/geforcefx-5700ultra/index.x?pg=10
http://www.tech-report.com/reviews/2003q4/geforcefx-5950ultra/index.x?pg=8
http://www.hothardware.com/hh_files/S&V/geforcefx_5950u(7).shtml
http://www.tomshardware.com/graphic/20031023/nvidia-nv38-nv36-32.html

But hey, I know your response, it runs better on YOUR computer
 

Johnbear007

Diamond Member
Jul 1, 2002
4,570
0
0
Originally posted by: jiffylube1024
Well, I just finished reading Anand's article (all 60 pages of it) and I'm extremely impressed with Anand (yet again). What a solid review.

I'm also impressed with Nvidia. It seems the 52.xx series is finally panning out as the touted "holy grail" it was supposed to be. Aside from a few image quality problems (on both cards), they run neck and neck in the majority of benchmarks.

Yet again Nvidia has stepped to the plate and delivered a huge % increase in most games it was struggling to keep up in. ATI's card must be commended, as it is either in first place or right behind Nvidia in all the benchmarks. And they still have their trump card, Catalyst 3.8, to play.

yeah, Anand's articles sure beat the heck out of the biased poop spewing from THG
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Well, Ben, for the reviews I don't do any comparisons - but if you really wanted then we could try the full settings in TR:AoD, which include PS2.0 shadows requiring float buffer support.... ;)

If you included them all then that would be fair. It's not about trying to make one card look better than the other.

However, my comment was rather a wry one since, reading this thread, you appear to have this knee-jerk reaction to any shader app that doesn't show what you like - it's either biased, flawed, poorly coded, irrelevant or motivated by money. I just found it a little amusing is all.

It doesn't have anything to do with not showing what I like. Right now 3DMark2K3 is showing results that put the boards on a rather level playing field; that does not change the fact that I still don't think it is a good bench.

As for the use of Float support in Shadermark, that doesn't represent bias - I know of numerous developers that are quite vexed by this difference in support, the other elements hardly represent bias either.

Why does refrast have issues rendering two of the tests? And in one of the tests they are reading from and writing to a texture at the same time, something that MS has said is a bug in DX that they intend to fix. Those are three of the tests the FX won't run.

And as for the comment about "leaving at that", well, thats just a bit off really.

..? I'm a bit lost, which comment are you talking about?

Reever-

But hey, I know your response, it runs better on YOUR computer

No, how about theirs? HardOCP didn't run a bench (why, I'm not sure); the other sites show the FX parts beating out ATi's parts at the higher resolutions.
 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
No, how about theirs? HardOCP didn't run a bench (why, I'm not sure); the other sites show the FX parts beating out ATi's parts at the higher resolutions.
Got some links? Looking at the links reever posted, the nV and ATi parts were pretty much equal in Halo without AA/AF, with nV taking a bit of a backseat when AA and AF were used. I hope you don't mean the 1 FPS advantage that nV has over ATi at the unplayable 16x12 resolution. That would be really reaching.
There is one game on the market that has heavy PS usage that includes DX9 shaders that I have any interest in, and it runs faster on nV hardware.
The benches in these reviews do not support that statement. The 2 are pretty much neck and neck.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Ben:
You make the paranoid accusations, you provide the proof. That is how it works.
You don't have a shred of evidence to support your theory that nVidia isn't cheating, and all you've done is pull theories out of thin air. I have evidence in the form of Unwinder's publicly available comments and an application that knows nVidia's cheating drivers inside out.

Your problem in this entire thread has been that you refuse to acknowledge anything that doesn't jibe with your stance on the whole issue.

Just ignore that the application is requesting trilinear and your theory works fine.
Yes, the application is requesting trilinear, not trilinear anisotropic. If you enable trilinear in an application, do you expect anisotropic filtering to be enabled along with it? I sure as hell don't.

Unfortunately for your theory, ATi is in fact explicitly ignoring an application request.
The application isn't requesting AF and therefore they aren't ignoring anything.
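
To put the distinction in concrete terms, here is a minimal, hypothetical Direct3D 9 sketch (the sampler-state constants are the real ones from d3d9.h; the function name is made up). Trilinear and trilinear anisotropic are two separate requests an application makes:

    #include <d3d9.h>

    void RequestFiltering(IDirect3DDevice9 *device)
    {
        // Plain trilinear -- what a game's "trilinear" option asks for:
        // linear min/mag filtering plus linear blending between mip levels.
        device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);

        // Trilinear anisotropic -- an explicit, separate request.
        // If the game never sets these, it never asked for AF.
        device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
        device->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, 8);
        device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
    }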

You want me to start quoting the counterpart to ATi's PR monkey?
Right, so you're back to your comment about Gabe being a liar. Yet again you make outrageous claims without any proof, and you expect to use them to form the basis of a solid argument for your case?

I suppose next you'll ironically claim that the burden of proof is on me to prove that Gabe is trustworthy?


Now that nV has had their PR day, I can find quotes from developers saying the same types of things about ATi cheating.
Fire away. In fact I encourage you to use the other thread where they're being brought up and most of them have already been dismissed.

WTF? Where did you read anything like that?
On one of the websites that had the initially leaked HL2 benchmarks.

The problem is that they did not use the same compiler for the different paths.
But they used it for the mixed mode one.

Are you sure you want to go on this tangent?
What tangent might that be? Making comments that are provable by widespread evidence? Of course I do.

Taking a look at the games I play, the FX comes out on top in damn near every one of them.
Of course, shader substitution and other such cheats will do that. Both Anand and 3DCenter have verified such shader substitution.

The point of it is that they were trying to show a larger performance rift than there would be if it was done by any reasonably intelligent, impartial developer.
But how can you claim that if they did everything in their power to make the nVidia cards' path as fast as possible, merely on the basis that they didn't do the same for the slower and irrelevant path?

Do you deny the comments from dozens of developers and reviewers that the FX line has problems running full-precision code? Do you deny that FX code has to be massaged in a specific way in order to attain reasonable performance? In my last post I linked to two developers who verified this, for heaven's sake.

So again I'll ask: how is Valve's completely optimised mixed-mode path creating an artificial performance rift?

It is a top-tier game that uses shaders heavily; the only problem is that it doesn't favor your vid card.
I really have no issues with that, Ben. If Halo runs better on FX cards, then more power to them.

My issue is with you using Halo to "disprove" all other findings that don't support your claims. My other issue is that everything non-Halo appears to you to be irrelevant, badly coded, or paid for by ATi.

It is a title that uses DX9 shaders; the game has some levels that are covered in shaders.
Do you deny that Halo is running a mixed-mode, reduced-precision path on FX cards, similar to HL2's path?

There is one game on the market that has heavy PS usage that includes DX9 shaders that I have any interest in, and it runs faster on nV hardware.
Great. Does that mean Carmack, Gabe, [insert all other developers] along with 3DCenter, [insert other reviewers here] are all wrong?

If there is no benefit to running FP24 then there isn't a big issue.
Except precision isn't the only issue here. The other issue is the architecture and how it relies on a ridiculously unrealistic method of instruction scheduling in order to have any reasonable chance of competing.