Originally posted by: Pete
Gamingphreek, you're wrong about the OP's quote being biased one way or the other, you're wrong about nV not having released a "pipeline deficient" card in the past, you're wrong about the FX's "microarchitecture" and compilers being the reason it lags so far behind in DX9 games, you're wrong about nV using FP32 with the FX in most games, you're wrong about nV always having had the edge in AA, and you're wrong about nV having had worse AF until the GF6.
You're just wrong on so many counts, it'd be kind of incredible if you weren't doing this on purpose. My suggestion--at least with respect to nVidia's recent history and this thread in particular--is to post less and read more.
Since when is it biased to think ATI can cram more pipelines in with a smaller process than nVidia? Seems like common sense. Also seems like common sense that they may have yield problems with this new process, at least initially.
Surely the synthetic benchmark numbers and the 3DM03 fiasco taught you that nV must/will use FP16 precision to maintain reasonable performance with their cards?
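(To put a rough number on that precision gap, here's a tiny C sketch. It's purely my own illustration: the fp16ish() helper just rounds a value to roughly FP16's 11 significant bits, 1 implicit plus 10 stored mantissa bits, and ignores FP16's smaller exponent range and denormals entirely.

#include <stdio.h>
#include <math.h>

/* Crude illustration only: keep roughly FP16's 11 significant bits. */
static float fp16ish(float x)
{
    if (x == 0.0f)
        return 0.0f;
    int e;
    float m = frexpf(x, &e);            /* x = m * 2^e, 0.5 <= |m| < 1 */
    m = roundf(m * 2048.0f) / 2048.0f;  /* round to ~11 significant bits */
    return ldexpf(m, e);
}

int main(void)
{
    float full = 0.123456789f;
    printf("FP32:     %.9f\n", full);
    printf("FP16-ish: %.9f\n", fp16ish(full));
    return 0;
}

Run it and the FP16-ish value only agrees with the FP32 one to three or four decimal places, which is exactly the kind of precision nV trades away to get the FX running at a reasonable speed.)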
Surely nV worked their butts off to optimise "DX9" games for their FX architecture. So why is it that Far Cry and HL2 and the like run so much worse on the FX series? In fact, why does the four-pipe FX series run about as fast as the four-pipe 9600 series in those games? It's possible game devs are inept and don't want to maximize FX owners' IQ with their cards. Or it's possible the FX just has some (micro)architectural flaws that can't reasonably be expected to be overcome in the real world of deadlines and "cross-platform" development.
Seriously, ATI debuted AF with the 8500 *after* nV did with the GF3. ATI's AF had almost no performance hit--compared to the GF3's rather large one--because ATI used only bilinear filtering plus angle dependence, while nV used trilinear filtering and filtered all angles "fully". nV also went above and beyond with its filtering quality, whereas ATI stuck to the D3D bare minimum. 3DCenter had a good article on this, IIRC, which you should search for.
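(If it helps, here's a back-of-the-envelope C sketch of why bilinear-based AF is so much cheaper. The bilinear_af_taps()/trilinear_af_taps() counts are a first-order model I'm making up for illustration: degree-N AF approximated as N bilinear probes of 4 texels each, with trilinear doubling that by touching two mip levels. Real hardware, and the angle dependence that lowers the effective degree, will differ.

#include <stdio.h>

/* First-order cost model only, not exact hardware numbers. */
static int bilinear_af_taps(int degree)  { return degree * 4; }     /* 8500-style */
static int trilinear_af_taps(int degree) { return degree * 4 * 2; } /* GF3-style  */

int main(void)
{
    for (int d = 2; d <= 16; d *= 2)
        printf("%2dx AF: ~%3d taps bilinear-based, ~%3d taps trilinear-based\n",
               d, bilinear_af_taps(d), trilinear_af_taps(d));
    return 0;
}

At 8x that's roughly 32 versus 64 texel reads in the worst case, which goes a long way toward explaining why the 8500's hit was so small next to the GF3's.)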
nV basically had "average" AA until the GF6 series, as did ATI until the 9700 series. 3dfx was first out of the gate with excellent (but at a huge hit) AA with the Voodoo 4/5 series. The 9700 moved ATI from supersample to multisample AA, which improved performance; adopted a jittered grid, which improved quality; integrated gamma "correction", which also improved quality; and allowed for up to three passes, which allowed for higher max quality at a reasonable speed. The GF3 moved to MSAA, but kept the ordered grid for 4x, so it wasn't that hot (especially compared to 3dfx's pseudo-temporal rotated grid). The FX merely improved speed. The GF6 finally brought a rotated grid to both 2x AND 4x, closing the gap to ATI's 2x and 4x modes considerably. Again, 3DCenter had a good article on this, IIRC, which you should search for, but most initial 9700P and 6800U p/reviews should have concise write-ups on their respective AA improvements.
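(To make the grid difference concrete, here's a small C sketch with representative, not vendor-exact, 4x sample positions; the ordered_4x/rotated_4x arrays and the covered() helper are my own illustration. Sweep a horizontal edge across a pixel and count covered samples: the ordered grid only ever reports 0, 2, or 4 because it has just two distinct y offsets, while the rotated grid can report anything from 0 to 4.

#include <stdio.h>

/* Representative 4x sample positions within one pixel (0..1 range). */
static const float ordered_4x[4][2] = {
    {0.25f, 0.25f}, {0.75f, 0.25f},
    {0.25f, 0.75f}, {0.75f, 0.75f},
};
static const float rotated_4x[4][2] = {
    {0.375f, 0.125f}, {0.875f, 0.375f},
    {0.125f, 0.625f}, {0.625f, 0.875f},
};

/* How many of the 4 samples fall below a horizontal edge at height t? */
static int covered(const float s[4][2], float t)
{
    int n = 0;
    for (int i = 0; i < 4; ++i)
        if (s[i][1] < t)
            ++n;
    return n;
}

int main(void)
{
    for (float t = 0.0f; t <= 1.0f; t += 0.125f)
        printf("edge at y=%.3f: ordered %d/4, rotated %d/4\n",
               t, covered(ordered_4x, t), covered(rotated_4x, t));
    return 0;
}

That extra granularity on near-horizontal and near-vertical edges is most of what people mean when they say rotated or jittered grids look better than ordered grids at the same sample count.)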
No disrespect intended, but so much of what you posted is wrong. It's been pointed out by other people, but maybe spelling it out in detail will clarify your errors. Don't take it personally, just learn and help the next guy out, like the rest of us try to.
Edit: My first two paragraphs seem more inflammatory than I mean them to be. I'll leave my post as is, just know that I didn't mean to come across so, well, mean. We all have to learn somehow, and I hope you learned more from my post than that I'm easily exasperated.