Dave-
"High resolution provides more detail, while anti-aliasing provides more accuracy. I'm not sure where you got the idea that either reduces accuracy and that RGSS does it more than OGSS. This is simply not the case. There is no way to lose accuracy when dealing with 4x the original data. (unless you do something screwy)"
Sampling proximity. The same property that gives RGSS an edge in eliminating aliasing artifacts also creates more noticeable FSAA artifacts, i.e. haloing and blurring. Perhaps I should have used detail instead of accuracy; it may be better expressed that way.
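To make the OGSS/RGSS distinction concrete, here is a minimal sketch of 4x sample positions within one pixel. The offsets are illustrative (the rotated pattern shown is a common "4-queens"-style layout, not a claim about any specific board); the point is that the ordered grid only has two distinct horizontal and vertical offsets, while the rotated grid gives every sample a unique one, which is why RGSS resolves near-horizontal and near-vertical edges better:

```python
# 4x ordered-grid supersampling: samples on a regular 2x2 grid.
ogss = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

# 4x rotated-grid supersampling: the grid is rotated so every sample
# has a unique x offset and a unique y offset (illustrative pattern).
rgss = [(0.125, 0.625), (0.375, 0.125), (0.625, 0.875), (0.875, 0.375)]

def distinct_x(samples):
    # Number of distinct horizontal offsets covered by the pattern;
    # more distinct offsets means finer gradation on near-vertical edges.
    return len({x for x, _ in samples})

print(distinct_x(ogss))  # 2 distinct horizontal offsets
print(distinct_x(rgss))  # 4 distinct horizontal offsets
```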
"Actually, on B3D there wasn't ever really a conclusion on this"
Near the end of the thread it seemed to be agreed that, on a mathematical basis at least, higher res could be proven superior (though I don't recall who was still left in the discussion; it had dwindled by that point).
"In CAD apps, it blows things away. If you note benchmarks, you'll note that there is like a 10x performance increase with GF boards over software T&L. But also take note that the 10x performance increase is not present in games."
They are still CPU limited. Using MDK2 and TD6 as examples (covering both major APIs): while my FPS are quite a bit better with hardware T&L, I am completely CPU bound by game code. I have tested extensively, and the edge my GF1 provides grows on a percentage basis when I overclock the CPU, though the numbers don't budge when I OC or UC the core. Both games, particularly TD6, are composed almost entirely of static vertices, which of course has an impact. With TD6 I'm seeing nearly a 400% increase using hardware T&L over software, and I have an Athlon 550. Upping it to 600MHz via the FSB (I don't have a GF), it increases to over 400%. The GF T&L unit is still scaling.
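For clarity, the scaling figures I'm quoting are plain percent increases of hardware T&L over software T&L. A minimal sketch; the fps values below are hypothetical stand-ins, not my actual bench numbers:

```python
def pct_increase(hw_fps, sw_fps):
    # Percent increase of hardware T&L framerate over software T&L.
    return (hw_fps / sw_fps - 1.0) * 100.0

# Hypothetical example: 48 fps hardware vs 12 fps software
print(pct_increase(48.0, 12.0))  # 300.0, i.e. a 300% increase
```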
For MDK2, if you run hardware mode on a GF and compare it to a V5 at low res (just to keep it strictly T&L and eliminate fillrate), the GF/GF2 is quite a bit faster than the V5, on the order of ~250% using Rev's numbers with older drivers. That's with a ~850MHz CPU, though the comparison unfortunately rests on the OpenGL drivers of each board, both of which have improved significantly in the four months since those numbers were run.
ICyourNipple-
"honestly, with no bias whatsoever, what CPU is a GeForce 2 T&L engine equivalent to in a real game situation assuming the game is heavily optimized for the T&L engine?"
Based on my benching I would say at least a 2.4GHz Athlon for games, looking at the GF1, though with geometry that complex you may well be fill limited before the T&L unit is able to stretch its legs. DMZG is an example of this: with that amount of overdraw, raw fillrate limits you before the T&L unit becomes a bottleneck. Unfortunately we don't have enough games to get a better idea of exactly what level of performance is possible.