New Drivers for NVIDIA GFFX are out.


chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Rand
Originally posted by: chizow
No, the issue isn't whether or not 1.3 or 1.1 would've been the fallback, it's that the scoring is weighted heavily in 1.4's favor b/c there is simply no 1.1 or 1.3 support. nVidia favors 1.3, but they still don't provide 1.1 as stated in their whitepapers.
Chiz



How is the scoring weighted in favor of PS 1.4?
Certainly PS 1.4 has its advantages, but those advantages would carry over to any game that utilized PS 1.4, so I see nothing wrong with supporting PS 1.4.
Also, what do you mean there is no support for PS 1.1 or 1.3?
In every test that non-DX9-compliant hardware can run, there is a fallback to PS 1.1.

Another question: what do you mean in saying nVidia doesn't "provide" PS 1.1?
If you're implying the drivers don't support it, then that's most definitely incorrect.

Meant to say there are no optimizations for 1.1, and 1.3 isn't supported, as FutureMark and nVidia have commented upon. FutureMark uses different shader code for different cards, but did not optimize the code for nVidia products. Granted, there is less need to optimize for ATI products, but again, this skews real-world performance because game developers DO optimize their code for both ATI and nVidia. I guess the best way to describe it is: if there are two routes to get to the same destination equally fast, FutureMark only gives one set of directions (although one might be a shortcut).

The fact that 1.3 isn't supported in the DX8 tests clearly puts the GF4 series cards at a disadvantage, forcing them to render less efficiently than they normally would. The quality and performance difference between 1.3 and 1.4 is marginal, yet an 8500 (which gets stomped by a GF4 in real life) scores higher? Again, clearly not the case in reality, and still won't be the case in any future game whether it supports 1.4 or 1.3. It's comparing apples to oranges (1.4 to 1.1), where one card is forced to run less efficiently than it would in reality.

As for the 2 games you mentioned that do support PS 1.4, I'm willing to wager the Ti4600 outperforms the 8500 in both, which further invalidates the claim that 3dmark2k3 is a valid benchmarking tool.

Chiz
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,956
126
I don't have time to join the discussion right now but I will say that while the drivers look good, 3DMark is largely a synthetic benchmark and I'll not be impressed until I see the results transfer into real games.
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
I agree....I am not, and never have been, impressed with anything 3DMark tells me.... I want to see whether those optimizations were done just to bump up this "synthetic score" while all the scores in real-world games stay the same....People need to start looking at that more....
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
I don't really care about the increased performance either. In fact, I was surprised the FX suffered before a recent driver change. Supposedly it was to be a superior DX9 part compared to the 9700, the benchmark is run @ 1024x768 with no FSAA or AF, and the FX had been performing well in that area up until then. The 128-bit memory interface really cut the FX off at the balls...
 

Spicedaddy

Platinum Member
Apr 18, 2002
2,305
75
91
Meant to say there are no optimizations for 1.1, and 1.3 isn't supported, as FutureMark and nVidia have commented upon. FutureMark uses different shader code for different cards, but did not optimize the code for nVidia products. Granted, there is less need to optimize for ATI products, but again, this skews real-world performance because game developers DO optimize their code for both ATI and nVidia. I guess the best way to describe it is: if there are two routes to get to the same destination equally fast, FutureMark only gives one set of directions (although one might be a shortcut).

The fact that 1.3 isn't supported in the DX8 tests clearly puts the GF4 series cards at a disadvantage, forcing them to render less efficiently than they normally would. The quality and performance difference between 1.3 and 1.4 is marginal, yet an 8500 (which gets stomped by a GF4 in real life) scores higher? Again, clearly not the case in reality, and still won't be the case in any future game whether it supports 1.4 or 1.3. It's comparing apples to oranges (1.4 to 1.1), where one card is forced to run less efficiently than it would in reality.

As for the 2 games you mentioned that do support PS 1.4, I'm willing to wager the Ti4600 outperforms the 8500 in both, which further invalidates the claim that 3dmark2k3 is a valid benchmarking tool.

Chiz


Do you know what the differences between PS 1.1 through 1.4 are? I did a little research, here are a few quotes from a thread at beyond3d:

1.1, 1.2 and 1.3 are almost identical. 1.2 and 1.3 add the ability to have 12 instructions instead of 8. 1.3 adds the ability to modify the depth value from inside the shader, and a couple of extra texture unit instructions (that aren't that useful). 1.3 also adds the ability to use 3 texture coordinate sources on the same instruction. I don't think it has any other significant improvement over 1.1.

If you don't need any of these new functions, then there's no point using it because not all DX8 hardware necessarily supports 1.3, whereas ALL DX8 hardware must support 1.1. (Edit: if they support pixel shaders at all)

The main difference is that 1.4 contains the ability to perform two phases. Each phase consists of some texture sampling instructions followed by some ALU instructions. The texture sampling instructions in the second phase can be dependent reads based upon the ALU calculations in the first phase - which is what John Carmack needs in order to do his lighting algorithm in a single pass.

1.1-1.3 can also do dependent lookups, but in a very restrictive way - arbitrary ALU operations cannot be applied (which prevents JC single pass lighting). The 1.4 model for handling dependent textures is simpler - just 4 texture unit commands instead of about 20.

1.4 also has more instructions (14 - I think this is 14 instructions per phase and so actually 28 total instructions).



So basically, 1.2 or 1.3 would not be any faster than 1.1 if you're not using any of their extra features, while 1.4 is faster because it can do it in a single pass. That's why they didn't include support for PS 1.3. There's a big performance difference between 1.3 and 1.4, contrary to what you seem to think...
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Thank you Spicedaddy, I had read that before as well, but I am not an expert by any means and did not know where I could find one offhand to quote.

However, it does go to show how much FUD nVidia is spewing out in this situation in order to try and fool people into believing they are a victim. On top of that, word still has it that the GeForce FX is not rendering the whole benchmark anyway, which casts serious doubt on the validity of the benchmark scores, synthetic or not. Oddly, I have found no one who actually has a GeForce FX to benchmark has bothered to confirm or deny whether this is an issue with the card, despite the fact that many of them have published benchmarks showing its superiority using the drivers in question. Does this not bother anyone else?
 

Spicedaddy

Platinum Member
Apr 18, 2002
2,305
75
91
I'm not an expert either; I'm just tired of seeing all the posts about nVidia being discriminated against because there's no PS 1.3 support in Game Tests 2 & 3, and it's forced to use the slower 1.1 when in reality it wouldn't change anything.


And about PS 1.4 not being used in any games: Doom 3 has an R200 path which is there because the 8500 supports PS 1.4, hence it can do the lighting in a single pass. The GeForce 4 will use the NV20 path, which is essentially PS 1.1 & multipass... (doesn't mean the 8500 will be faster overall, but it will do the lighting faster)
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Spicedaddy

So basically, 1.2 or 1.3 would not be any faster than 1.1 if you're not using any of their extra features, while 1.4 is faster because it can do it in a single pass. That's why they didn't include support for PS 1.3. There's a big performance difference between 1.3 and 1.4, contrary to what you seem to think...

I guess you missed my references to using multiple passes or a single pass to render the same scene.


It's all fine and good, but if the R200 still takes X amount of time to perform 1 pass and the GF4 takes X/0.75 to make 3 passes, which one is faster? Also, since you seem to be quoting Carmack so much, he states VERY clearly that 1.4 is essentially UNNECESSARY for Doom3:

Maybe you should do a little more research from the site you chose to quote:

B3D-> Higher precision rendering. It appears that the GF3/GF4Ti clamps the results (including intermediate ones) when some part of the calculations goes over 1.0. The Radeon 8500, with up to 8.0 higher internal ranges, can keep higher numbers in the registers when combining, which allows for better lighting dynamics. How much will this have an impact in DOOM3's graphics?

Carmack->At the moment, it has no impact. The DOOM engine performs some pre modulation and post scaling to support arbitrarily bright light values without clamping at the expense of dynamically tossing low order precision bits, but so far, the level designers aren't taking much advantage of this. If they do (and it is a good feature!), I can allow the ATI to do this internally without losing the precision bits, as well as saving a tiny bit of speed.

B3D->Multiple passes. You mentioned that in theory the Radeon8500 should be faster with the number of textures you need (doing it in a single pass) but that the GF4Ti is consistently faster in practice even though it has to perform 2 or 3 passes. Could this be due to latency? While there is savings in bandwidth, there must be a cost in latency, especially performing 7 texture reads in a single shader unit.

Carmack-> No, latency should not be a problem, unless they have mis-sized some internal buffers. Dividing up a fixed texture cache among six textures might well be an issue, though. It seems like the nvidia cards are significantly faster on very simple rendering, and our stencil shadow volumes take up quite a bit of time.

Several hardware vendors have poorly targeted their control logic and memory interfaces under the assumption that high texture counts will be used on the bulk of the pixels. While stencil shadow volumes with zero textures are an extreme case, almost every game of note does a lot of single texture passes for blended effects.

Oh yeah, the GF FX does the same thing, and it performs faster than the R9700, just to save your breath. Now that 3DMarketing2k3 is hackable, and a piss-poor test of "future games", do you still think it's a good benchmark?

Chiz
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Originally posted by: chizow
It's all fine and good, but if the R200 still takes X amount of time to perform 1 pass and the GF4 takes X/0.75 to make 3 passes, which one is faster?

Well, the R200 of course; with the numbers you gave above, the GeForce4 would be taking 33% longer (X/0.75 ≈ 1.33X). But the number of passes is irrelevant to the final product, which is what the benchmark is intended to represent.

Also, since you seem to be quoting Carmack so much, he states VERY clearly that 1.4 is essentially WORTHLESS and UNNECESSARY


Actually, you completely pulled that one out of the air; he simply said that the 8500 on its path would not run better than an NV20 on its path due to 1.4. That is a far cry from calling something worthless and unnecessary. If you can't help me with a calculus problem, would you consider yourself worthless and unnecessary?

Now that 3DMarketing2k3 is hackable, and a piss-poor test of "future games", do you still think it's a good benchmark

Is any benchmark not hackable?!?

and a piss-poor test of "future games"

According to whom, that has any authority to say such things? Are you on the DirectX board of standards? What in the world do you know about programming video games anyway? Or who do you know that does, who has anything so harsh to say about 3DMark?

do you still think it's a good benchmark

Just as much as it has always been, maybe even more so. I am really interested to see more on the image quality tests included in the package. Does image quality not interest you any more than benchmarks based on the latest DirectX feature set, chizow?
 

Spicedaddy

Platinum Member
Apr 18, 2002
2,305
75
91
Also, since you seem to be quoting Carmack so much, he states VERY clearly that 1.4 is essentially UNNECESSARY for Doom3:

LOL, then why the hell is he writing an R200 mode whose sole purpose is to add PS 1.4 support?? :D

B3D-> Higher precision rendering. It appears that the GF3/GF4Ti clamps the results (including intermediate ones) when some part of the calculations goes over 1.0. The Radeon 8500, with up to 8.0 higher internal ranges, can keep higher numbers in the registers when combining, which allows for better lighting dynamics. How much will this have an impact in DOOM3's graphics?

Carmack->At the moment, it has no impact. The DOOM engine performs some pre modulation and post scaling to support arbitrarily bright light values without clamping at the expense of dynamically tossing low order precision bits, but so far, the level designers aren't taking much advantage of this. If they do (and it is a good feature!), I can allow the ATI to do this internally without losing the precision bits, as well as saving a tiny bit of speed.

B3D->Multiple passes. You mentioned that in theory the Radeon8500 should be faster with the number of textures you need (doing it in a single pass) but that the GF4Ti is consistently faster in practice even though it has to perform 2 or 3 passes. Could this be due to latency? While there is savings in bandwidth, there must be a cost in latency, especially performing 7 texture reads in a single shader unit.

Carmack-> No, latency should not be a problem, unless they have mis-sized some internal buffers. Dividing up a fixed texture cache among six textures might well be an issue, though. It seems like the nvidia cards are significantly faster on very simple rendering, and our stencil shadow volumes take up quite a bit of time.

Several hardware vendors have poorly targeted their control logic and memory interfaces under the assumption that high texture counts will be used on the bulk of the pixels. While stencil shadow volumes with zero textures are an extreme case, almost every game of note does a lot of single texture passes for blended effects.

1. What does higher precision rendering have to do with this? :confused:

2. I never said 8500 would be faster, just that PS 1.4 enabled it to do 1-pass rendering while GF4 had to do multipass since it lacked PS 1.4. (in other words, if 8500 didn't have PS 1.4, it'd be much slower)


 

Spicedaddy

Platinum Member
Apr 18, 2002
2,305
75
91
Oh yeah, the GF FX does the same thing, and it performs faster than the R9700, just to save your breath.

It does the same thing as what? If you mean fall back to PS 1.1, then no it doesn't since it's DX9 and supports all PS versions up to 2.0. And yes, it's faster.


Now that 3DMarketing2k3 is hackable, and a piss-poor test of "future games", do you still think it's a good benchmark?

What do you mean by hackable? (driver optimizations or result submissions?)

As for being a poor test of future games, we'll see in a year or two, just like we did with 3DM 2001. (which proved to be useful IMO)
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Spicedaddy
LOL, then why the hell is he writing an R200 mode whose sole purpose is to add PS 1.4 support?? :D

Maybe he's a nice guy and doesn't want all those R200 owners out there to have a worthless card in their box when Doom3 is released. As I mentioned earlier, 3dMiss2k3 chooses to use PS 1.4, yet there is ONLY ONE card that truly benefits from it, the R8500. PS 1.4's time came and went with little support, it's been replaced, and due to the fact that all DX8 parts are able to fall back to PS 1.1, that will be the default; DX9 parts will default to PS 2.0. It's pretty obvious why he's optimizing for every GPU family out there: 1) he's one hell of a code writer, and 2) he wants it to be compatible with as many cards as possible; after all, it was made to play and sell copies, unlike some software out there.



1. What does higher precision rendering have to do with this? :confused:
Your words: So basically, 1.2 or 1.3 would not be any faster than 1.1 if you're not using any of their extra features, while 1.4 is faster because it can do it in a single pass.

Doom 3 has an R200 path which is there because the 8500 supports PS 1.4, hence it can do the lighting in a single pass. The GeForce 4 will use the NV20 path, which is essentially PS 1.1 & multipass... (doesn't mean the 8500 will be faster overall, but it will do the lighting faster)

Seems relevant to me.

2. I never said 8500 would be faster, just that PS 1.4 enabled it to do 1-pass rendering while GF4 had to do multipass since it lacked PS 1.4. (in other words, if 8500 didn't have PS 1.4, it'd be much slower)

So a feature that was enabled to enhance the performance of a crippled and overambitious part makes it a good benchmark for the future? Your statement also contradicts what you were saying about PS 1.4 being faster b/c it can render in a single pass, which clearly isn't the case.

Regardless, the whole point of why 3dmark2k3 is BS is b/c it's well known there are different methods of rendering b/c of vendor-specific code. The problem lies in that 3dmark only paints half the picture by either:

1) Optimizing the benchmark code to allow PS 1.4 parts to run faster than PS 1.1 parts, which in reality still outperform said parts (which is what nVidia claims)

or

2) Arbitrarily assigning an artificially high 3dmark point value to a 1.4 part; or by the same token, assigning an arbitrary penalty to non-PS 1.4-compatible parts.

Since I don't have a GF4 or 8500 to test with (and few results are available for viewing off of ORB), I'd be interested to see what the actual FPS results are for either card. My guess is that the GF4 will have lower scores than the 8500 b/c 3dmark optimizes the code for PS 1.4. Again, this is BS since it's VERY clear that this would not happen in the real world, in either current or future games.

If the GF4 has higher FPS marks than the 8500, then it's clear that there is some arbitrary penalty for not supporting PS 1.4 (the 8500 outscores the GF4 by 1000 pts or so).

Chiz
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Spicedaddy
It does the same thing as what? If you mean fall back to PS 1.1, then no it doesn't since it's DX9 and supports all PS versions up to 2.0. And yes, it's faster.

Yes, it supports PS 2.0, but it still uses vendor-specific code/optimizations (the NV30 path), which shows that PS 1.4 and/or 2.0 are NOT inherently more efficient or a good basis for future gaming performance. The classic and most obvious argument is Intel vs. AMD, IPC vs. MHz. Both use different paths to accomplish the same thing, but a test can easily be designed to exploit/suppress one method vs. the other. 3dmark does just that.

What do you mean by hackable? (driver optimizations or result submissions?)

There are early indications of both; there's a link over in the THG 3DMark guide.

As for being a poor test of future games, we'll see in a year or two, just like we did with 3DM 2001. (which proved to be useful IMO)

I highly doubt it. I expect a patch within a month, or it'll just lose all credibility as "The Gamer's Benchmark" (if it hasn't already).

Chiz

Edit: I'm gone for the night......time to turn on my new Screensaver: |3SMarketing2k3: The Vaporware Benchmark.
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Originally posted by: chizow
Yes, it supports PS 2.0, but it still uses vendor-specific code/optimizations (the NV30 path), which shows that PS 1.4 and/or 2.0 are NOT inherently more efficient or a good basis for future gaming performance.

Well, maybe not on nVidia's cards, but that really is not anyone's problem but nVidia's, now is it? I mean, it is not like someone left them out of the loop when the agreed DirectX standards were put in place, now was it? Besides, are you really even thinking about what effect this would have on the gaming graphics industry at all? We have these standards in place solely to prevent monopolization of the industry; by having standards set in stone like PS 1.4 and PS 2.0, game developers can code their games for the widest audience possible and new graphics card manufacturers can rise up in the industry. What you are basically preaching is the abolishment of such wonderful things, chizow. Why in the world would you want that?!?

The classic and most obvious argument is Intel vs. AMD, IPC vs. MHz. Both use different paths to accomplish the same thing, but a test can easily be designed to exploit/suppress one method vs. the other. 3dmark does just that

IPC and MHz are not paths by any sense of the definition, and if you are accusing Futuremark of swaying the scales to cast an unfair light, you should present some real arguments to back your claim instead of continually slandering them with rhetoric. I am not even going to get into who is trying to exploit what here, because it should be blatantly obvious to anyone who is paying attention at all. The question is: why, chizow? Are you still hoping to make up some lost stock value from nVidia, or just upset about the fact that the GeForce4 does not have the advanced rendering methods that ATI has been putting into their hardware lately?


Oh yeah, and since quoting John Carmack has become so popular lately, and since it backs up my above argument, please note his comments here:

Doom has dropped support for vendor-specific vertex programs
(NV_vertex_program and EXT_vertex_shader), in favor of using
ARB_vertex_program for all rendering paths. This has been a pleasant thing to
do, and both ATI and Nvidia supported the move. The standardization process
for ARB_vertex_program was pretty drawn out and arduous, but in the end, it is
a just-plain-better API than either of the vendor specific ones that it
replaced. I fretted for a while over whether I should leave in support for
the older APIs for broader driver compatibility, but the final decision was
that we are going to require a modern driver for the game to run in the
advanced modes. Older drivers can still fall back to either the ARB or NV10
paths.

That is from his .plan, which can be found at Blue's.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Well, here's my take on this so far. 3dmark 2k3 doesn't seem so hot right now, and certainly being the "gamer's benchmark" is a stretch, without question, but I think it will still be acceptable for one thing: comparing video cards with the same level of pixel/vertex shader capabilities, i.e. the GF FX and the Radeon 9700. A fair comparison between the R8500 and GF3/4 isn't really possible, however.

The weighting system (the way they give an arbitrary "weight" to each game, and also screw any non-DX9 (PS/VS 2.0) card, which gets a 0 on the fourth game test and gets screwed on the other ones) makes scores totally incomparable between cards from different generations. The whole PS 1.1 vs PS 1.4 thing is really getting blown out of proportion (perhaps rightly so, if the R8500 is indeed beating the Ti4600, which Carmack specifically states never happens in the real world with the D3 engine, not to mention doesn't happen in any other game without AA/AF being turned on).

Regardless, 3dmark 2k3 will probably become a semi-"valid" (loosely used terminology) test once we get top-to-bottom DX9 support from each manufacturer. Until then, it will only be a good way to compare DX9 cards to one another (and even then it has mostly DX8 stuff). So, it will earn its place as a useful way to test if your system is performing up to snuff only if you have a Radeon 9700/9900, GF FX, or any future DX9 card from either manufacturer.

3dmark 2k1 still isn't exactly the be-all-end-all test, but it's alright at making sure that something isn't grossly wrong with your system configuration. In this sense, with a DX9 card, I think 3dmark 2k3 will also be a good test to make sure something isn't totally amiss with your performance.

I really couldn't care less if nVidia's next "value" card does or doesn't support DX9; that's their problem. They will definitely be depriving themselves of sales from the "savvy" purchaser, who will probably be able to pick up cheaply priced 9700s by then. I don't think this test is discriminatory towards nVidia or ATI, just discriminatory against cards without PS/VS 2.0.
 

dbal

Senior member
Dec 6, 2001
395
0
0
A little irrelevant, but close to the initial topic: does anyone know why nVidia is taking sooooo long to publish an official new driver on their site (since Dec. 3rd), and when are they going to save us from web-leaked versions?
 

Lonyo

Lifer
Aug 10, 2002
21,939
6
81
Our recommendations for correct benchmarking are the following:

- Use game benchmarks when you want to find out how fast a certain game runs on your computer;
- Use 3DMark2001 for a comparable overall performance measurement of DirectX 7 or first-generation DirectX 8 compatible hardware;
- Use 3DMark03 for a comparable overall performance measurement of DirectX 9 compatible hardware.

From the 3DMark03 people.


Some comments here (from ATi and 3D Mark 2003 people)
 

Spicedaddy

Platinum Member
Apr 18, 2002
2,305
75
91
/\
||
||
From the article above:


Why Do Game Tests 2 And 3 in 3DMark03 Only Use Pixel Shader 1.4 or 1.1?

According to the DirectX 8 specification, there are 4 different pixel shader models. In order to do a fair benchmark, you want any hardware to do the minimum number of shader passes necessary to render the desired scene. We analyzed all 4 shader models and found that for our tests Pixel Shader 1.2 and Pixel Shader 1.3 did not provide any additional capabilities or performance over Pixel Shader 1.1. Therefore we provided two code paths in order to allow for the broadest compatibility.

A good 3D benchmark must display the exact same output on each piece of hardware with the most efficient methods supported. If a given hardware supports pixel shader 1.4, like all DirectX 9 level hardware does, then that hardware will perform better in these tests, since it needs less rendering passes. Additionally, 1.4 shaders allow each texture to be read twice (total 4 texture lookups in 1.1, but 12 (=6*2) in 1.4 shaders). This is why, not only Futuremark, but also game developers can only implement single pass per light rendering using a 1.4 pixel shader, and not using a 1.3 or lower pixel shader. A 2.0 pixel shader would not have brought any advantages to these tests either. Note that the DirectX design requires that each new shader model is a superset of the prior shader models. Therefore all DirectX 9 hardware not only supports pixel shader 2.0, but also Pixel Shader 1.4, 1.3, 1.2, and 1.1.



 

Lonyo

Lifer
Aug 10, 2002
21,939
6
81
Originally posted by: keysplayr2003
From an article on THG targeted at the new 3DMark03 benchmark. The new drivers have severely increased performance in most areas, even doubled it in some. Well, I guess Nvidia is going to save its A$$ with repeated releases of improved drivers. Kudos to Nvidia for the speedy release of the new drivers. Like they had any choice in the matter. :)

The graph shows the 9700 pro against the GFFX 5800U with v42.63 drivers and v42.68 drivers. BIG difference.

Click Here for the article on THG

I thought they pretty much ONLY improved performance in 3DMark, according to some comment or other about them.
You'll notice that no actual game tests were done to compare the two sets of drivers, although that's understandable since the article wasn't about general game performance.
nVidia does seem to like 3DMark benchies, and many of their drivers have done little but improve scores.
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Oddly, everyone who has done reviews posting the impressive scores has made no statements regarding any rendering issues at all, despite repeated questioning. Actually, no one with a GeForce FX has said a word, yet it is well reported that on the newest beta drivers GeForce cards omit a wide variety of what is rendered on official drivers, and produce a better score while doing so. I find the whole situation rather troubling. How do you all feel about this?
 

CheapTOFU

Member
Mar 7, 2002
171
0
0
G4ti 4600: about 1500 pts
R9700pro: about 4500 pts

In any game with AA and AF turned on, the R9700pro is 3x faster..

I never liked ATI.. but Nvidia lost..

Also, Doom3 was introduced with the R9700, not the G4ti4600.
People who tried the leaked Doom3 demo told me that the R8500 plays it better than the G4ti.. I think it has something to do with PS 1.4
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: TheSnowman
Oddly, everyone who has done reviews posting the impressive scores has made no statements regarding any rendering issues at all, despite repeated questioning. Actually, no one with a GeForce FX has said a word, yet it is well reported that on the newest beta drivers GeForce cards omit a wide variety of what is rendered on official drivers, and produce a better score while doing so. I find the whole situation rather troubling. How do you all feel about this?

Snowman, I honestly don't think I know anyone who has the GFFX Ultra card yet. But I think that the GFFX 5800 (non-Ultra) will be widely available soon (provided that Nvidia and TSMC can get the yields up soon). So these new drivers would benefit the non-Ultra card as well, I guess.

How do I feel about all this?
Well, I would like to see new gaming benchies run by Anand and THG with the new drivers or the next, more mature set of drivers. I have no intention of buying the NV30 in any flavor. I just like to see the competition unfold. Competition between these 2 companies is what keeps the consumer's pockets from completely draining. Like Intel/AMD, Nvidia/ATI, Intel/SiS/VIA, etc.

I have always owned Nvidia cards. The last one I bought was a GeForce2 Ti 64MB DDR. Great card. But I just ordered a Radeon 9500 Pro 128MB for my upgrade choice. Many reasons: 1. The price was excellent for the performance. 2. AGP 8x support. 3. DirectX 9 support (future-proof). 4. FSAA/AA superiority. 5. Same core and core speed as the 9700 (non-Pro), albeit with a 128-bit memory bus.

So I feel that the 2 companies should continue brawling tooth and nail for the top. And if the crown teeters back and forth? So what? Better for all of us gamers.

Any comments?

Keys