
Farcry 1.2 patch recalled

Originally posted by: SickBeast
NVidia tried to do that with NV30 and failed miserably. Instead of making a DX9-compliant card, they made one that complied with their "CineFX" technology and "C for Graphics". We all know how that situation turned out.

So many people forgot - NVidia's first PC graphics hardware accelerator was called the NV-1. (I have one.) This was released before DirectX, and it not only supported a proprietary API, it had its own proprietary way of rendering 3D scenes, using quadratic patches rather than triangle meshes. It was bundled with several Sega Saturn game ports, and also had dual Saturn-compatible digital-joystick ports, and a 64-voice music synthesizer besides. (Yes, it was a "multimedia accelerator" - video, sound, and game input.)
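(For the curious, this is roughly what "rendering from patches" means: the hardware evaluates a curved surface from a small control grid instead of being handed pre-built triangles. A minimal C++ sketch of evaluating one point on a biquadratic Bezier patch - purely illustrative, not NV-1's actual algorithm:)

```cpp
// Illustration only, not NV-1's actual pipeline: evaluate one point on a
// biquadratic Bezier patch defined by a 3x3 control grid. A patch-based
// renderer tessellates such surfaces directly rather than consuming
// pre-built triangle meshes.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Quadratic Bernstein basis: B0 = (1-t)^2, B1 = 2t(1-t), B2 = t^2
static void bernstein2(float t, float b[3]) {
    float s = 1.0f - t;
    b[0] = s * s; b[1] = 2.0f * s * t; b[2] = t * t;
}

static Vec3 evalPatch(const Vec3 ctrl[3][3], float u, float v) {
    float bu[3], bv[3];
    bernstein2(u, bu);
    bernstein2(v, bv);
    Vec3 p = {0, 0, 0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            float w = bu[i] * bv[j];
            p.x += w * ctrl[i][j].x;
            p.y += w * ctrl[i][j].y;
            p.z += w * ctrl[i][j].z;
        }
    return p;
}

int main() {
    Vec3 ctrl[3][3] = {};  // flat patch at the origin, for demonstration
    ctrl[1][1].z = 1.0f;   // raise the center control point
    Vec3 mid = evalPatch(ctrl, 0.5f, 0.5f);
    printf("patch midpoint: %f %f %f\n", mid.x, mid.y, mid.z);
}
```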

Needless to say, when DirectX (well, Direct3D) was released, it was triangle-based; most game developers were switching from DOS to 32-bit Windows platforms at the same time anyway, and most were tired of supporting half-a-dozen proprietary 3D APIs for their games. (If you thought it was bad that game developers had to release patches because of ATI or NV issues, imagine how bad it would be if you had to test and issue patches for six different API standards. It truly was hell.)

Anyway, the NV-1 was totally incompatible with Direct3D. Sales tanked, and it almost literally killed the company. From then on, NV has always tried to fully support as many "open" 3D standards as it can, avoiding tying its hardware to any particular proprietary standard. This is probably also a reason why their OpenGL support is so good - in case MS, for whatever reason, decides to take DirectX/D3D totally proprietary, or kill it off completely, it won't kill off NVidia in the process.

CineFX was more of a marketing technology, and "C for Graphics" isn't exactly dead; in fact, newer versions of DirectX are now going to sport some sort of HLSL compiler themselves, which IMHO is a good thing for developers.
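(To make that concrete: here's a hypothetical minimal example of compiling a trivial HLSL pixel shader at runtime through D3DX9. The shader itself, the "main" entry point, and the "ps_2_0" profile are illustrative choices of mine, not anything from a shipping game:)

```cpp
// Hypothetical example: runtime HLSL compilation via D3DX9. The shader
// is a made-up texture-times-tint modulate; "main"/"ps_2_0" are just
// illustrative entry point and profile choices.
#include <d3dx9.h>
#include <cstring>
#include <cstdio>

bool compileTintShader(LPD3DXBUFFER* outBytecode) {
    const char* src =
        "sampler2D tex;\n"
        "float4 tint;\n"
        "float4 main(float2 uv : TEXCOORD0) : COLOR {\n"
        "    return tex2D(tex, uv) * tint;\n"
        "}\n";
    LPD3DXBUFFER errors = NULL;
    HRESULT hr = D3DXCompileShader(src, (UINT)strlen(src),
                                   NULL, NULL,        // no macros, no includes
                                   "main", "ps_2_0",  // entry point, profile
                                   0, outBytecode, &errors, NULL);
    if (FAILED(hr) && errors)
        printf("%s\n", (const char*)errors->GetBufferPointer());
    if (errors) errors->Release();
    return SUCCEEDED(hr);
}
```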

I don't really think that either one of those initiatives was really an attempt to move developers to support a proprietary API rather than a "standard" one, but rather a way to get them to prefer NV-specific enhancement technologies over a competitor's. Granted, this distinction may be rather slim, but it means that the devs can still fall back to the "standard", and be compatible, but with the tradeoff that they will "look better" or otherwise have an advantage on NV-based hardware, if the devs choose to spend the additional time supporting them.
 
Originally posted by: Rollo
I've got $5* Paypal waiting for you if you've purchased more ATI cards for personal use than I have in the last five years. Here are the ones I've purchased:
MAXX, VIVO, 32DDR, 8500, 9700Pro, 9800Pro- all retail versions, new from store at launch price, money in ATI's pocket. I think I've got over $1700 worth of cards there.

How about you Sick Beast? You think ATI wants you as a customer more than me?

Rollo, I apologize for saying you only buy nVidia. In any event, you can keep the $5, since I have not bought more ATi cards for personal use than you have over the years.

-3D Xpression + PC2TV
-R7000
-R7200
-R8500
-R9700PRO

I bought 5, you bought 6. You've indeed made your point.

To discuss your nVidia-only API point: 3DFX WAS the king of the videogame world back in the days of Quake 1. If your card couldn't run Glide, then you couldn't play 75% of the 3D games that were available with hardware acceleration. You should realize that if nVidia had an API that only ran on their cards, and people stopped developing for Direct3D/OpenGL, nVidia would have a monopoly on the gaming market. You wouldn't want to run your games in software mode, now would you? The last game I saw with a software renderer was Unreal.
 
Originally posted by: Rollo


The problem, Old Fart, is that I wasn't "nVidia biased" then or now. I've always bought cards from both companies. If ATI had actually produced something new this year, chances are pretty good I would have bought one, as well as an nV40, at some point.

That's horse manure. Even if you did buy another ATI card this generation, it would have been just like last year where you lamented your purchase of a 9700 Pro (or was it a 9800 Pro) and pined for an FX5800. You b!tched endlessly about their drivers, blah blah. Then, when you finally got one of several FX5800 cards you started an evangelical quest to prove that the GF FX series wasn't "that bad," and that had it had a 256-bit memory interface it would have been quite competitive.

You are right that I seemingly reversed my position on DX9 from over a year ago, and now consider DX9c a compelling reason to buy nVidia this generation.
There weren't any games last year that offered a reason to prefer ATI for DX9?
There is one big game (Far Cry) out now using SM3, and 11 more in development, some of which will be out within the next year.

Just like how Half Life 2 was supposed to be out "within the year" last year for DX 9.0 support?

Beyond that, I want to be part of an installed user base that drives the industry forward, rather than just buying another R300 core and saying in 3dfx fashion, "They'll give us those features when we need them"?

Just like how you didn't want to be part of the full DX 9.0-support installed user base last year and instead opted to go with NV3x hardware, with incomplete DX 9.0 compliance?

I hate to be so venomous, but Rollo, come on here. You have, quite conveniently, changed your position completely; on driver optimizations, on the deals graphics card companies make with developers for big bucks, on buying 3D hardware with the most complete feature set at the time...

It's not rocket science - you've changed your stance on almost everything, and it quite conveniently matches the shift in technology this generation from ATI to Nvidia holding the cards.
 
Originally posted by: VirtualLarry
I don't really think that either one of those initiatives was really an attempt to move developers to support a proprietary API rather than a "standard" one, but rather a way to get them to prefer NV-specific enhancement technologies over a competitor's. Granted, this distinction may be rather slim, but it means that the devs can still fall back to the "standard", and be compatible, but with the tradeoff that they will "look better" or otherwise have an advantage on NV-based hardware, if the devs choose to spend the additional time supporting them.

I see what you're saying, but I still disagree with the 3D companies doing this sort of thing. It's just like ATi's TruForm or 3Dc; they should stick to the open standards IMO. If they want something like C for Graphics that badly, they should lobby Microsoft to include it in the next edition of DirectX. Otherwise they should focus on making their graphics cards better at supporting the standard ways of doing things. These sorts of things create an unbalanced playing field and are a blatant attempt to stifle competition.
 
Originally posted by: SickBeast
Originally posted by: Rollo
I've got $5* Paypal waiting for you if you've purchased more ATI cards for personal use than I have in the last five years. Here are the ones I've purchased:
MAXX, VIVO, 32DDR, 8500, 9700Pro, 9800Pro- all retail versions, new from store at launch price, money in ATI's pocket. I think I've got over $1700 worth of cards there.

How about you Sick Beast? You think ATI wants you as a customer more than me?

Rollo, I apologize for saying you only buy nVidia. In any event, you can keep the $5, since I have not bought more ATi cards for personal use than you have over the years.

-3D Xpression + PC2TV
-R7000
-R7200
-R8500
-R9700PRO

I bought 5, you bought 6. You've indeed made your point.

To discuss your nVidia-only API point: 3DFX WAS the king of the videogame world back in the days of Quake 1. If your card couldn't run Glide, then you couldn't play 75% of the 3D games that were available with hardware acceleration. You should realize that if nVidia had an API that only ran on their cards, and people stopped developing for Direct3D/OpenGL, nVidia would have a monopoly on the gaming market. You wouldn't want to run your games in software mode, now would you? The last game I saw with a software renderer was Unreal.


Apology accepted, I was just trying to point out this "nVidia Rollo" stuff isn't true. That's six retail ATI cards I've bought in the last five years, I've actually bought several more in total.

I do try to be impartial, and definitely have a history of buying cards other than nVidia. (I just happen to have bought and liked most of theirs as well)
 
I'ma callin' you out, Rollo. 😛

The nV30 was more advanced than the R300 in some ways, less in others.
True.
Carmack couldn't even use an R300 on Doom3 due to its feeble instruction limits. These days, no one can use an X800 to develop, because why would you code in crappy partial precision DX9b when you could use DX9c/SM3 and do your job much easier? Yet again, ATI leaves developers hanging with their primitive 2002 tech, while nVidia gives them tools that can produce games that will be relevant in the next couple years. Might have something to do with why "TWIMTBP" is on most game boxes and I've yet to see a GITG logo?
Not true.

That last quote is basically all people need to call you an nVidia "fanboy" (and obviously the world consists entirely of fanboys, as this forum proves time and again :roll:). It's probably not worth Fisking, but I can't help but try to right frivolous wrongs....

Carmack was bumping into R300's instruction limits for his next-gen development, AFAIK. And given the speed at which that gen of cards runs D3, did you really want him to stick longer shaders in there?

Yes, why would a dev be so stupid as to code on a crappy SM2.0 card when the vast majority of the DX9 card owner base is limited to SM2.0? More seriously, how the heck can you call FP24 "crappy partial precision" when nV still uses FP16 for most of its shading? Are you just trying to get a rise out of people who know what they're talking about?
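(For reference, the precision tiers being argued about differ mainly in mantissa width: FP16 carries 10 fraction bits, ATI's FP24 carries 16, and FP32 carries 23. A rough C++ sketch of the relative rounding error - it rounds mantissas only and ignores exponent range and denormals, so it's an illustration, not an IEEE-exact converter:)

```cpp
// Rough illustration of shader precision tiers by mantissa width:
// FP16 = 10 fraction bits, FP24 = 16, FP32 = 23. Exponent range and
// denormals are ignored; this only shows relative rounding error.
#include <cstdio>
#include <cmath>

static double quantize(double x, int mantissaBits) {
    int e;
    double m = std::frexp(x, &e);                     // x = m * 2^e, 0.5 <= |m| < 1
    double scale = std::ldexp(1.0, mantissaBits + 1); // bits+1 significant bits
    return std::ldexp(std::round(m * scale) / scale, e);
}

int main() {
    const double v = 1.0 / 3.0;
    const int bits[] = {10, 16, 23};                  // FP16, FP24, FP32
    for (int b : bits)
        printf("%2d mantissa bits: %.9f (err %.2e)\n",
               b, quantize(v, b), std::fabs(quantize(v, b) - v));
}
```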

As "primitive" as ATi's 2002 tech was, so will nVidia's 2004 tech by the time we see games that take advantage of its feature set. Rollo, are you secretly a game dev? Because your arguments mean nothing to consumers.

And TWIMTBP, as Epic says every time they're asked, is a marketing agreement, not a development partnership. nV has to make sure its cards run games well regardless of who they're advertising with.

You were doing so well until that misguided manifesto. Really, all I needed to do was to point you to ixbt-labs' benchmarks of Far Cry v1.2 force-compiled to PS2.b (and then using Crytek's own PS2.b path) to show you the error of your "2002 tech" ways. Check it out once it's translated to English and posted on Digit-Life.

Anyway....
 
Jiffy:
I don't know how I've offended you, but:

That's horse manure. Even if you did buy another ATI card this generation, it would have been just like last year where you lamented your purchase of a 9700 Pro (or was it a 9800 Pro) and pined for an FX5800. You b!tched endlessly about their drivers, blah blah.
You won't find any bitching about ATI in my posts, I have nothing but good things to say about the R300 core in 2002 and 2003.

Just like how Half Life 2 was supposed to be out "within the year" last year for DX 9.0 support?
I got lucky with my hunch last year. Time will tell if I'm right this year, or will look as foolish as the "9600Pro is better than a 5900U!" crowd next year.

Just like how you didn't want to be part of the full DX 9.0-support installed user base last year and instead opted to go with NV3x hardware, with incomplete DX 9.0 compliance?
I was part of that "installed DX9 user base" for 16/20 months? And had nothing bad to say about those cards? (with the possible exception that they have too much performance loss in DX9 - which is true)

I hate to be so venemous, but Rollo, come on here. You have, quite conveniently, changed your position completely; on driver optimizations, on deals graphics cards make with developers for big bucks, on buying 3d hardware with the most complete feature set at the time...
My position on driver optimizations is the same: I don't care much one way or another as long as I don't notice them. I don't mind pointing out trylinear, though, mainly because brilinear was thrown at me so many times and I'm not above saying, "Pot>kettle>black" to remind the people who used that issue last year to try and make nVidia seem like thieving war criminals.
As far as the deals go, if they result in better performance and stability, I'm for them. As an old school gamer I recognize some games run better on some cards and buy accordingly.
As far as complete features, I got lucky in my guess last year that nVidia's blended 16/32 precision wouldn't be an issue for that generation.

It's not rocket science - you've changed your stance on almost everything, and it quite conveniently matches the shift in technology this generation from ATI to Nvidia holding the cards
My opinion has more to do with ATI leaving me no choice by producing the same core I've already paid $800 for. ($400 in 2002, $400 in 2003) Sorry, they're just going to have to come up with something new to get another $400 from me. A review of my purchases in the other response will show you I've never bought the same core more than twice, and I don't intend to as long as I have a viable alternative.
If you were me, would you pay ATI $1200 for the R300 core 3X in 24 months?
 
Originally posted by: Pete
I'ma callin' you out, Rollo. 😛

The nV30 was more advanced than the R300 in some ways, less in others.
True.
Carmack couldn't even use an R300 on Doom3 due to its feeble instruction limits. These days, no one can use an X800 to develop, because why would you code in crappy partial precision DX9b when you could use DX9c/SM3 and do your job much easier? Yet again, ATI leaves developers hanging with their primitive 2002 tech, while nVidia gives them tools that can produce games that will be relevant in the next couple years. Might have something to do with why "TWIMTBP" is on most game boxes and I've yet to see a GITG logo?
Not true.

That last quote is basically all people need to call you an nVidia "fanboy" (and obviously the world consists entirely of fanboys, as this forum proves time and again :roll:). It's probably not worth Fisking, but I can't help but try to right frivolous wrongs....

Carmack was bumping into R300's instruction limits for his next-gen development, AFAIK. And given the speed at which that gen of cards runs D3, did you really want him to stick longer shaders in there?

Yes, why would a dev be so stupid as to code on a crappy SM2.0 card when the vast majority of the DX9 card owner base is limited to SM2.0? More seriously, how the heck can you call FP24 "crappy partial precision" when nV still uses FP16 for most of its shading? Are you just trying to get a rise out of people who know what they're talking about?

As "primitive" as ATi's 2002 tech was, so will nVidia's 2004 tech by the time we see games that take advantage of its feature set. Rollo, are you secretly a game dev? Because your arguments mean nothing to consumers.

And TWIMTBP, as Epic says every time they're asked, is a marketing agreement, not a development partnership. nV has to make sure its cards run games well regardless of who they're advertising with.

You were doing so well until that misguided manifesto. Really, all I needed to do was to point you to ixbt-labs' benchmarks of Far Cry v1.2 force-compiled to PS2.b (and then using Crytek's own PS2.b path) to show you the error of your "2002 tech" ways. Check it out once it's translated to English and posted on Digit-Life.

Anyway....

Pete:

What I was trying to say with all of the above is that nVidia has consistently given developers the tools to code games, no matter how slow they run on the current cards. Is this a bad thing somehow? (remember the birth of hardware T&L?)
As far as 16 vs 24 vs 32 bit precision, would you disagree that 24 bit was an interim step on the way to 32 bit that games in the future will be coded in?
Or that the SM3 features of the nV40 aren't preferable to the DX9b (2002 era) features of current ATI cards?

On one hand, to me this comes down to "better to have than not". On the other, I truly think this calendar year will bring true SM3 games (as well as Far Cry and Painkiller patched) and that Doom3 and its licensees will be compelling reasons to own nV40s. We'll know if I was right in 6 months?
😉

I haven't looked into the ATI hacked Far Cry thing yet; I don't have a compelling reason to, as I only own 4 nVidia cards now: GF2 GTS, GF3, FX5800U, and 6800NU. I will try to check it out though, as it's obviously relevant.
 
Carmack couldn't even use an R300 on Doom3 due to its feeble instruction limits.
What a load of nonsense.

These days, no one can use an X800 to develop, because why would you code in crappy partial precision DX9b when you could use DX9c/SM3 and do your job much easier?
Partial precision? Partial precision is FP16, and word on the street is that nVidia is encouraging developers to use it in their shaders for NV40 cards.

Might have something to do with why "TWIMTBP" is on most game boxes
Actually it has nothing at all to do with it.
 
So, I broke down and bought FarCry. I must admit, I was wrong about the game. It is very nicely done. The demo did it no justice. I left my PC as is and loaded the game up. All settings on default very high, 4x/8x AA/AF, 1024x768, and 56.72's with DX9.0b. I started the campaign. The very first scene in the game is where you're looking through a type of sewer drain. I didn't move a muscle, just looked at the FRAPS reading in the upper left hand corner and sighed. 25fps. YUCK. Continued to play and it never went below 25, and went as high as 65 to 70.

So I said the hell with it. Installed FW 61.76's, DX9.0c, the hotfix for DX9.0c, the registry edit for DX9.0c, and the FarCry 1.2 patch. I saw a lot of improvement on a 5950U in Tom's FarCry 1.2 article so I decided to try all this stuff.

Lo and behold, I started the game and was looking down the same sewer pipe. There was lighting there that wasn't there before, like sunbeams coming through a crack above. I didn't move a muscle and observed FRAPS again. 32fps. Shadows were now cast correctly where they were an unrendered white before the patch. I know my 5900U does not support SM3.0, but I am benefiting from all the other fixes in the patch. I am very pleased that my mid-range card on my mid-range PC can play this game comfortably thanks to Crytek's patch.

Just had to share. If anyone wants benchies for any reason with my setup, let me know.

Keys
 
Originally posted by: BFG10K
Carmack couldn't even use an R300 on Doom3 due to its feeble instruction limits.
What a load of nonsense.
Yeah, he probably meant something else when he said:
http://www.clanbase.com/finger_cache.php?plan=johnc,John+Carmack
For developers doing forward looking work, there is a different tradeoff --
the NV30 runs fragment programs much slower, but it has a huge maximum
instruction count. I have bumped into program limits on the R300 already.
:roll:


These days, no one can use an X800 to develop, because why would you code in crappy partial precision DX9b when you could use DX9c/SM3 and do your job much easier?
Partial precision? Partial precision is FP16, and word on the street is that nVidia is encouraging developers to use it in their shaders for NV40 cards.
Like Carmack, I think Tim Sweeney knows more about it than you, BFG, and said it best:
Only a marketing person would call 24 bit full precision
Of course, the great BFG10K knows more than Carmack and Sweeney; they only write the code he talks about.
 
Keys, those are very high settings for a 5900 card, judging by this:

http://www.digit-life.com/articles2/digest3d/0604/itogi-video-fc-wxp-aaa.html

I ran Farcry on my 9800 at 1280x1024 8xAF, but then again I am very picky about slowdown.

Sounds perfectly fair. But you say "in FarCry" and then "a game". Did you mean just Far Cry, or all games currently available? Not to nitpick, just wanted to know if you base your advice to others on purchasing X800s on Far Cry only.
Put it this way: I'll be more interested in SM3 when:

A. SM3 games are actually released
B. SM3 games are good
C. SM3 games actually run faster on the 6800s in SM3.

It doesn't have to be Farcry, any SM3 game will do. Until the above is met, what use is it besides marketing BS? Going from a 4 cylinder to a 6 cylinder for free is great and all, but not so much when there's an 8 cylinder available.


What do you do when you have to upgrade your CPU, Rollo? It must absolutely kill you to buy an x86 CPU again and again and again.
 
Hmm. So what can SM3 do that SM2.0b or whatever cannot?

It's unfortunate that they did not test at 1600x1200. In their tests, the XT gained 11.5% performance, not far behind the 12.75% gain of the 6800 Ultra in Anand's tests.
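(For anyone sanity-checking those percentages: gain = (fps_after / fps_before - 1) x 100. A trivial C++ sketch with made-up frame rates that happen to reproduce the quoted gains:)

```cpp
// Made-up frame rates, purely to show how the quoted percentage
// gains are derived from before/after numbers.
#include <cstdio>

static double gainPercent(double fpsBefore, double fpsAfter) {
    return (fpsAfter / fpsBefore - 1.0) * 100.0;
}

int main() {
    printf("%.2f%%\n", gainPercent(40.0, 44.6));  // ~11.5% (hypothetical)
    printf("%.2f%%\n", gainPercent(40.0, 45.1));  // ~12.75% (hypothetical)
}
```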
 
Yeah, he probably meant something else when he said:
Show me where your quote matches what Carmack said. Where did he say "I couldn't use an R300 in Doom III"?

Your comment was simply a nonsensical troll, much like the rest of the pro-nV drivel you constantly post, Mr "collector".

Carmack couldn't even use an R300 on Doom3 due to its feeble instruction limits.
Yeah? And how fast do you think an NV30 would be running when executing instruction counts that exceed an R300's? Or have you changed your tune so that performance now doesn't matter but paper features do?

Only a marketing person would call 24 bit full precision
Then I guess Microsoft, the designers of Direct3D, are just marketers. And if FP24 is partial precision then how would you describe nVidia's encouragement of developers to use FP16 on the NV40? What do you think Mr Sweeney would say given he doesn't appear to like FP24?

Of course, Rollo: ATI enabled feature=hack, Nvidia enabled feature=TWIMTBP
Of course. Rollo is the biggest nv fanboy currently in this forum and the only thing that has changed since his 5800U fetish is that he's become more blatant as the days pass.
 
Why would Crytek say anything about this as they are currently under contract with Nvidia? It's sad really; the 6800 is strong enough that Nvidia doesn't need to resort to something like this. But then again, 2 years of 5xxx might change a company's way of doing business.
 
Originally posted by: Rollo
I'll believe this when the Crytek guys say it's true.

Speaking of hacks, how do you enable SM3.0 support in Far Cry without hacking the Nvidia driver .inf file and using an unreleased version of DX? IMO, it is nice to have the option for us enthusiasts, but what's your take on enabling SM3.0 support in Far Cry for Nvidia cards?
 
Originally posted by: SickBeast
Originally posted by: VirtualLarry
I don't really think that either one of those initiatives was really an attempt to move developers to support a proprietary API rather than a "standard" one, but rather a way to get them to prefer NV-specific enhancement technologies over a competitor's. Granted, this distinction may be rather slim, but it means that the devs can still fall back to the "standard", and be compatible, but with the tradeoff that they will "look better" or otherwise have an advantage on NV-based hardware, if the devs choose to spend the additional time supporting them.

I see what you're saying, but I still disagree with the 3D companies doing this sort of thing. It's just like ATi's TruForm or 3Dc; they should stick to the open standards IMO. If they want something like C for Graphics that badly, they should lobby Microsoft to include it in the next edition of DirectX. Otherwise they should focus on making their graphics cards better at supporting the standard ways of doing things. These sorts of things create an unbalanced playing field and are a blatant attempt to stifle competition.

If you really want all graphics hardware to be *exactly the same*, without any advanced features or differentiation, then I won't stop you from holding that opinion. Personally, I think optional "enhancements" are a good thing: they differentiate products, and they advance technology at the same time. Clearly, they increase competition, not stifle it.

And as a matter of fact, I believe that there is going to be an officially-MS-sanctioned version of HLSL in an upcoming DirectX version. ("C for Graphics" being NV's 'proprietary' implementation of an HLSL, although they did make their spec publicly available, AFAIK, so that anyone could write to it.)

I was just trying to make the distinction that these additional features do not (generally speaking) lock someone into a particular vendor's graphics hardware totally. It's not an "either/or" proposition like it was back in the "bad old days" of purely proprietary 3D APIs.
 
Originally posted by: Rollo
I'll believe this when the Crytek guys say it's true.
A much longer ATi thread had a quote from the v1.2 readme, IIRC, that listed 2.b's benefits as basically the same as 3.0 (single-pass lighting [though only 3, not 4, lights] and instancing).
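(Since instancing keeps coming up as the headline benefit, here's a hedged C++ sketch of what D3D9 hardware instancing looks like on the API side - the buffer and type names are hypothetical, and device/vertex-declaration setup is omitted. The point is that one draw call submits every copy of the mesh:)

```cpp
// Hypothetical sketch of D3D9 hardware instancing. Assumes the device,
// a vertex declaration with a per-instance stream 1, the vertex/index
// buffers, and indices are already set up; names like vbMesh and
// vbPerInstance are made up for illustration.
#include <d3d9.h>

void drawInstanced(IDirect3DDevice9* dev,
                   IDirect3DVertexBuffer9* vbMesh,
                   IDirect3DVertexBuffer9* vbPerInstance,
                   UINT meshStride, UINT instStride,
                   UINT numVerts, UINT numTris, UINT numInstances) {
    // Stream 0: shared mesh geometry, repeated numInstances times.
    dev->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numInstances);
    dev->SetStreamSource(0, vbMesh, 0, meshStride);
    // Stream 1: one element (e.g. a per-instance transform) per copy.
    dev->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
    dev->SetStreamSource(1, vbPerInstance, 0, instStride);
    // One call instead of numInstances separate draw calls.
    dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, numVerts, 0, numTris);
    // Restore non-instanced rendering.
    dev->SetStreamSourceFreq(0, 1);
    dev->SetStreamSourceFreq(1, 1);
}
```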
 