Interestingly enough, the XT specs on that site seem to be equal to or better than the Ti4400 specs, so why the lower performance? I also previously noticed that the standard 8500 is clocked at 275/550 while the Ti4200 is clocked at 225/500. What exactly gives nVidia the edge here?
Look deeper than the specs. I don't have much time or I'd go into more detail, but suffice it to say that nVidia's memory controller is quite a bit more advanced than ATi's at the moment. With similar theoretical bandwidth figures, I'd go so far as to say the nVidia board has roughly 2GB/s more real-world bandwidth to play with.
Of course the memory controller is only one aspect of two architectures that differ in many ways, but it's an aspect that carries great weight for 3D gaming performance.
Base specs alone don't always reveal terribly much.
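For reference, here's a quick back-of-the-envelope sketch of the theoretical bandwidth figures mentioned above (purely illustrative, assuming 128-bit DDR memory buses on all three boards and the effective memory clocks quoted in this thread; it says nothing about the real-world efficiency gap from the memory controller):

```python
# Rough theoretical memory bandwidth: bytes per transfer * transfers per second.
# Assumes a 128-bit memory bus on each card; clocks are the effective DDR rates
# quoted in this thread, not independently verified figures.

def theoretical_bandwidth_gbs(effective_mem_clock_mhz, bus_width_bits=128):
    """Peak bandwidth in GB/s for a given effective memory clock and bus width."""
    bytes_per_transfer = bus_width_bits / 8   # 128-bit bus -> 16 bytes per transfer
    return bytes_per_transfer * effective_mem_clock_mhz * 1e6 / 1e9

cards = {
    "Radeon 8500 (550 effective)": 550,
    "GeForce4 Ti4200 (500 effective)": 500,
    "GeForce4 Ti4400 (550 effective)": 550,
}

for name, clock in cards.items():
    print(f"{name}: ~{theoretical_bandwidth_gbs(clock):.1f} GB/s theoretical")
```

So on paper the 8500 and Ti4400 come out essentially identical (~8.8 GB/s vs ~8.0 GB/s for the Ti4200); the difference being argued here is how much of that theoretical figure each memory controller actually delivers.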
I want to revise my above opinion though. I've been doing some tests with the newer .6071's and it seems they've improved performance more than I initially realized... boosting the retail 275/275 64MB R8500 to performance just below that of the Ti4200 generally, and occasionally matching it.
Revised opinion: 8500XT should outperform the Ti4200 in most games, and in a very small handful of games it may outperform the Ti4400 by ~2FPS or so.
If it still uses the outdated supersampling AA then I wouldn't bother.
I'll take ATi's pseudo super-sampling over nVidia's multisampled AA any day.
Much better image quality, and no issues with antialiasing alpha textures, as is the case with nVidia's implementation.
And no ridiculous blur filter like the one nVidia applies in their much-hyped Quincunx AA.
Granted, ATi's implementation uses slightly more bandwidth and significantly more texture storage in DRAM, which makes for a slower-performing AA algorithm... but personally I'm more than willing to sacrifice performance for image quality.
Oftentimes it takes anisotropic filtering + FSAA from nVidia to match ATi's pure AA in image quality, so I MUCH prefer ATi's implementation.
Multisampling has potential, and in theory I think it's capable of nearly matching supersampling in image quality, but nVidia's present implementation leaves a lot to be desired IMHO.
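To make the trade-off concrete, here's a toy cost model (illustrative numbers only, not measured hardware figures): supersampling shades and textures every sub-sample and stores a full-size sub-sample framebuffer, while multisampling shades each pixel once and only replicates the stored colour/Z samples, which is why it's cheaper but can't antialias alpha-tested textures.

```python
# Toy 4x AA cost model -- purely illustrative, not measured hardware figures.
# Supersampling: the whole scene is effectively rendered at N times the sample
# count, so shading/texturing work and framebuffer traffic both scale by ~N.
# Multisampling: each pixel is shaded/textured once; only the stored
# colour/Z samples scale by N.

SAMPLES = 4
PIXELS = 1024 * 768

def supersampling_cost(pixels, samples):
    shading_ops = pixels * samples          # every sub-sample is shaded and textured
    stored_samples = pixels * samples       # full colour+Z kept per sub-sample
    return shading_ops, stored_samples

def multisampling_cost(pixels, samples):
    shading_ops = pixels                    # one shading/texture pass per pixel
    stored_samples = pixels * samples       # coverage/colour samples still stored
    return shading_ops, stored_samples

ss = supersampling_cost(PIXELS, SAMPLES)
ms = multisampling_cost(PIXELS, SAMPLES)
print(f"4x SSAA: {ss[0]:,} shading ops, {ss[1]:,} stored samples")
print(f"4x MSAA: {ms[0]:,} shading ops, {ms[1]:,} stored samples")
```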
Ironic...
ATi FSAA: Image quality over Speed.
nVidia FSAA: Speed over image quality.
Yet when we come to their respective anisotropic filtering algorithms it's the opposite.
ATi anisotropy: Speed over image quality.
nVidia anisotropy: Image quality over speed.
Totally off topic for a moment....
I'll be irritated if ATi names the R300 anything other than Radeon 9XXX. Their present naming scheme is excellent, easily understandable, and realistic.
7XXX=DirectX 7.
8XXX=DirectX 8.
The last three digits indicate relative performance within that generation.
Much nicer than nVidia's GF4 MX line, in which you've got MX420's being beaten by GF2 GTS cards,
and GF1 DDR's beating GF2 MX's, etc. etc.
nVidia's naming scheme is confusing as hell to consumers and makes it quite difficult for the uninformed to learn what is faster than what, and what offers a better feature set.
The rumours that the R300 may be the R1000, and the R250 the R9000, are quite disappointing.