what does that have to do with comparing like sampling patterns?
I'm sorry, where did I claim it did?
Somebody asked why ATi didn't ship SSAA on their cards and I was merely responding with what I had heard from one of their reps.
The games that desperately need SSAA are the older titles-
Nonsense - you're blowing this way out of proportion. 8xS is a nice option but it's not a must-have by any means. If you're referring to real old games like X-Wing, well, they don't even use transparency.
their R9600 non-Pro has enough power to handle SSAA in those games.
Sure, if you like gaming at low/middling resolutions. Try 8xS on a 6800U with GLQuake @ 1920x1440 and you'll be getting around 75 FPS average in the timedemos. Nothing wrong with that, mind you, but let's not get delusional as to how many titles 8xS can actually be used in.
but even at low resolutions, 6xAA on an R9800 Pro is also utterly unplayable in anything remotely new.
Right...and yet you're expecting SSAA to be perfectly usable on a 9600 Pro and lambasting ATi for not offering it.
To make the situation worse - in the older games where it is usable, it still leaves horrific aliasing because of their heavy use of alpha textures.
I'm not sure what older games you're playing so I can't really comment but at 1920x1440 alpha aliasing isn't really a problem for my old games so xS AA is just a nice extra.
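For anyone wondering why alpha textures come up here at all: ordinary multisampling (the 2x/4x/6x modes being argued about) runs the pixel shader, and therefore the alpha test, only once per pixel, so the hard cutout edge of an alpha-tested texture (fences, foliage) gets no smoothing - only supersampling (or the SS part of a mixed mode like 8xS) evaluates the texture at every subsample. A toy 1-D sketch of that difference (illustrative model only, not real rasterizer behaviour; the function names and the 0.01 pixel-footprint figure are made up for the example):

```python
def alpha(x):
    """Toy alpha-tested texture: opaque for x < 0.5, transparent after."""
    return 1.0 if x < 0.5 else 0.0

def msaa_pixel(center, n):
    # MSAA: the shader (and its alpha test) runs ONCE, at the pixel
    # centre; all n coverage samples inherit that single pass/fail.
    return 1.0 if alpha(center) >= 0.5 else 0.0

def ssaa_pixel(center, n):
    # SSAA: the alpha test runs at every subsample, so a pixel
    # straddling the cutout edge gets fractional (smoothed) coverage.
    offsets = [(i + 0.5) / n - 0.5 for i in range(n)]
    footprint = 0.01  # assumed subpixel footprint, illustration only
    passed = sum(alpha(center + o * footprint) >= 0.5 for o in offsets)
    return passed / n

# A pixel sitting right on the cutout edge:
print(msaa_pixel(0.5, 4))  # hard 0.0 - the edge stays jagged
print(ssaa_pixel(0.5, 4))  # 0.5 - partial coverage, i.e. a smoothed edge
```

Under this toy model the MSAA pixel snaps to fully transparent while the SSAA pixel lands at half coverage, which is exactly the staircase-vs-gradient difference you see on alpha-tested fences in old games.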
I am currently running ATi hardware and you cannot disable their shader replacements.
ATi in no way gives you a choice - you get inflated 'cheating' bench scores all the time.
Sure you can. Disable Catalyst AI in the CCC and it's gone.
I am still waiting for you, btw, to jump into every ATi thread and bash them for doing the exact same thing nVidia has been doing.
I've already commented multiple times that I don't like either vendor detecting applications for performance reasons, but of course I'm going to be more lenient toward a vendor that allows it to be disabled.
I mean, if I wanted to really have a go at nVidia, I'd point out that my Radeon 7000's 16x bilinear AF exhibits less shimmering and fewer visible mip transitions than a 6800U with 16x trilinear + all optimizations enabled. Of course nVidia's optimizations can be disabled, so there's no reason to bring it up.
So why wouldn't it be on under OpenGL then?
I'm not answering your question directly, but I will point out that MacOS X's OpenGL platform structure and implementation are quite different from Windows's.