Ben:
regarding MBTR - very cool stuff there, I like that. that is one of the games that makes me want to grab a GTS-U; that game REALLY needs anisotropic filtering, and I wish the Ultra could do better than 8-tap, which is just a tease!
also gotta figure out what Anand was talking about with the 24-bit thing - I was getting ~30 fps @ 1024x768x32 w/24-bit textures checked, plus a few other things checked beyond default (can't remember off-hand)
I really wonder which driver they selected in the driver area of MBTR; that made a pretty good difference, IIRC
regarding Giants - does it REALLY run that slow??? gadzooks! enable all the options in MBTR and it'll slow EVERYTHING to a crawl, and Giants is SLOWER than that??? yikes....
does the Kyro do OGL or D3D in UT with the compressed textures? I've heard OGL looks better than D3D w/the compressed textures, and lemme tell ya, that looks DAMN GOOD!
as far as future-proofing is concerned, I never said it was more future-proof than the GTS, and I don't think anyone claimed that. I was just a bit taken aback, remembering when we were arguing about the MX being more future-proof than the 5500.
as far as HW lighting in MDK2, it does look a bit better, but I didn't think it was anything spectacular. In MDK2, I found 1024 w/2xFSAA looked much better than anything I could get on the GTS Pro I had.
As far as compromised benchmarking and 3DMark2000, well, you have your opinion as to why they used 16-bit as a default. I have a different one.

Notice how much closer the scores are @ 32-bit. the GTS is still ahead, but the difference is MUCH MUCH less. I've always bitched about HW T&L eating up too much of the memory throughput of a card that's already throughput-limited. Remember that discussion?
Soo...how can you get past that? Why, enable 16-bit, of course.
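Just to put rough numbers on that (this is my own back-of-the-envelope with approximate GTS spec-sheet figures; the three-accesses-per-pixel assumption is mine, and texture fetches aren't even counted):

    # rough sketch: framebuffer traffic at the GTS's rated fill rate vs. its memory bandwidth
    fill_rate = 800e6         # pixels/s (~200 MHz core x 4 pipes) - approximate
    mem_bandwidth = 5.3e9     # bytes/s (~166 MHz DDR on a 128-bit bus) - approximate

    def framebuffer_traffic(bytes_per_pixel):
        # assume color write + Z read + Z write per rendered pixel; no blending, no texture reads
        return fill_rate * bytes_per_pixel * 3

    for depth, bpp in (("16-bit", 2), ("32-bit", 4)):
        print(f"{depth}: needs ~{framebuffer_traffic(bpp) / 1e9:.1f} GB/s "
              f"of the ~{mem_bandwidth / 1e9:.1f} GB/s available")

With numbers like those, 16-bit leaves some headroom for T&L's vertex traffic, while at 32-bit the card is starved before T&L even touches memory - which is exactly why the 32-bit scores bunch up.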
as far as Q3 HQ, I hear what you're saying, but the lodbias slider makes a far greater visual impact than enabling trilinear filtering in Q3, by a long shot, so that doesn't worry me too much. It would certainly make benchmarking a pain, though, and I agree it's stupid that trilinear wasn't an option in multitexturing mode.
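For reference, and assuming the slider in question maps to Q3's own lodbias cvar rather than the driver's mipmap LOD bias control (I'm honestly not sure which one the reviews fiddle with), these are the stock console settings involved - the values are just examples:

    // Quake 3 console / .cfg syntax
    r_textureMode GL_LINEAR_MIPMAP_LINEAR      // trilinear filtering
    // r_textureMode GL_LINEAR_MIPMAP_NEAREST  // bilinear, the default
    r_lodbias 0                                // geometry LOD bias; higher = less detail
    r_picmip 0                                 // texture detail; 0 = full-size textures
    vid_restart                                // apply the texture changes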
and as far as having an asterisk, how about not registering a score at all? (the MX @ 1600x1200x32) I just noticed that. <G>
Now then, if SOMEONE could PLEASE explain the FSAA results of the MX in Serious Sam vs. the 5500, I'd be very interested.
We all know the MX sucks at FSAA. The regular benchmarks show the 5500 slightly faster (VERY slightly) in SS, yet the MX whups up on the 5500 with FSAA enabled??
PUHHHHHLEEZE!
talk about BS. sounds to me like nVidia's infamous "loosely defined FSAA" took over yet again.