Originally posted by: BenSkywalker
Using RGSS you get four improper images that are then blended to get a fifth, highly inaccurate image.
I would disagree with this assessment, and I think ATi and nVidia both would too, given both use RGSS for multi-GPU modes and RGSS/SGSS for TrAA/AAA modes. Furthermore, there is nothing "improper" about the images.
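As a point of reference, the whole RG-vs-OG argument comes down to sample placement inside the pixel. Here's a minimal C++ sketch using one plausible 4x pattern for each; the exact offsets vary by vendor, so treat these particular values as an assumption. It shows why a rotated grid gets four distinct sub-pixel offsets on each axis while a 2x2 ordered grid only gets two:

#include <cstdio>
#include <set>

int main() {
    // 2x2 ordered grid (4x OGSS): only two distinct offsets per axis.
    const float og[4][2] = { {0.25f, 0.25f}, {0.75f, 0.25f},
                             {0.25f, 0.75f}, {0.75f, 0.75f} };
    // 4x rotated grid (RGSS): four distinct offsets per axis, which is why
    // it resolves near-horizontal and near-vertical edges into more steps.
    const float rg[4][2] = { {0.125f, 0.625f}, {0.375f, 0.125f},
                             {0.625f, 0.875f}, {0.875f, 0.375f} };

    auto distinct = [](const float p[4][2], int axis) {
        std::set<float> s;
        for (int i = 0; i < 4; ++i) s.insert(p[i][axis]);
        return s.size();
    };
    std::printf("OG: %zu x-offsets, %zu y-offsets\n", distinct(og, 0), distinct(og, 1)); // 2, 2
    std::printf("RG: %zu x-offsets, %zu y-offsets\n", distinct(rg, 0), distinct(rg, 1)); // 4, 4
    return 0;
}

Those extra per-axis offsets are exactly what buys the smoother gradients on near-horizontal and near-vertical edges; the resolve step is the same plain average either way.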
A whole bunch of different effects. Get it running with Bioshock as an example. Post-processing effects won't work, several shadowing techniques won't work, and certain shader/lighting interactions won't work. The way they do it breaks the rendering pipeline; it doesn't add steps to introduce a filter, it breaks it.
RGSS won't work with a whole bunch of games; it breaks the rendering pipeline. RGMS as utilized in D3D would work fine, but not RGSS.
This is plainly false, so please provide evidence of your claims. Specifically, demonstrate to us where Bioshock fails with an RGSS scheme but doesn't fail elsewhere.
The fact is the multi-GPU RGSS modes are exactly the same as single-GPU AA modes as far as the application is concerned, so anywhere regular AA works they will work too (barring driver issues, of course).
Anyway, traditional MSAA in a traditional pipeline doesn't even work in a ton of modern games (including Bioshock) and requires driver workarounds behind the applications' backs, so I'm not even sure what your point is. And anywhere there's a driver workaround for regular MSAA, RGSS will work too.
A dev can use a jittered camera offset, render four frames to texture, and blend the textures, producing an output image with nothing broken. It is very easy for developers to do if they want to.
Again, we aren't asking the dev to do anything; we're asking the IHV to provide the facility.
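For what it's worth, the developer-side approach described in the quote is straightforward to sketch. The function below is hypothetical and only illustrates the idea (render four passes with a sub-pixel camera/projection offset, then average them); it isn't how any particular engine or driver actually implements it, and the jitter offsets are just one plausible rotated-grid choice:

#include <cstddef>
#include <functional>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<float> rgb;   // width * height * 3, linear color assumed
};

// renderScene is a stand-in for the application's own render-to-texture
// pass, called here with a sub-pixel (dx, dy) camera/projection offset.
Image RenderSupersampled(int width, int height,
                         const std::function<Image(float, float)>& renderScene) {
    // One plausible set of rotated-grid jitter offsets (an assumption).
    const float jitter[4][2] = { {-0.375f,  0.125f}, {-0.125f, -0.375f},
                                 { 0.125f,  0.375f}, { 0.375f, -0.125f} };
    Image out;
    out.width = width;
    out.height = height;
    out.rgb.assign(static_cast<std::size_t>(width) * height * 3, 0.0f);

    for (const auto& j : jitter) {
        Image frame = renderScene(j[0], j[1]);     // one jittered pass
        for (std::size_t i = 0; i < out.rgb.size(); ++i)
            out.rgb[i] += frame.rgb[i] * 0.25f;    // equal-weight blend
    }
    return out;  // the blended, effectively 4x-supersampled image
}

The point being, whether this lives in the application or behind the driver, the final step is just an average of the jittered passes; there's nothing exotic about it.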
Please do.
The burden of proof is with you in this case because your statements are flying in the face of commonly accepted fact and knowledge.
I went over this with the original 3dfx team that wrote the white paper on how RG was so superior. I went on and on about how horribly they were going to hose their LOD settings, as a general example; they linked all sorts of information to demonstrate how that wasn't the case. It took a couple of months before the egg on their faces piled up so much that they had to admit they weren't just filtering out edge noise; they were filtering out texture contrast. RGSS does not differentiate between them in any way whatsoever; it's the reason it is a blur instead of what is normally considered a proper form of AA.
Please provide credible proof of your claims. In particular, please provide proof from a credible source demonstrating, with specific examples, that RGSS is inferior to OGSS.
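One aside on the LOD point, since it keeps coming up: the textbook correction for supersampling of any pattern, RG or OG, is a negative mipmap LOD bias of about -0.5 * log2(sample count), which is what keeps textures from going soft. The snippet below just evaluates that generic formula; it says nothing about what 3dfx's drivers actually did or didn't do:

#include <cmath>
#include <cstdio>

int main() {
    // Rule of thumb: N-sample supersampling shrinks the per-sample texture
    // footprint, so the usual fix is an LOD bias of -0.5 * log2(N).
    const int counts[] = {2, 4, 8, 16};
    for (int samples : counts)
        std::printf("%2dx SS -> LOD bias of about %.2f\n",
                    samples, -0.5 * std::log2(samples));
    return 0;
}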
You bring up the eye noticing aliasing the most along near-horizontal and near-vertical edges; it notices contrast along those same angles too, and RGSS destroys BOTH without correction of any sort.
Contrast, as in the difference between an edge and no edge? Well yeah, that's the whole point.
As for interior texture detail, please provide proof that RGSS is worse than OGSS in this regard. I can post evidence that demonstrates less detail with an OGSS implementation than an RGSS implementation, but again, until you provide evidence of your claims there's no need for me to waste my time.
Additionally, if you take screenshots of current nVidia and ATi AF implementations, the ATi cards actually look superior because the image is sharper and has more contrast and detail than nVidia's image. Of course, this is totally deceptive, since in actual gaming the nVidia card has hands-down more accurate AF and IQ.
Talking about 16x OGSS as compared to 4x OGSS: yes, it removes more contrast, but with 16x all of your filtering and rendering effects can still be rendered entirely accurately (edit: actually, thinking about this, that isn't true either, just not as problematic as RGSS).
Again, please provide proof of your claims. I can provide evidence that 2x2 demonstrates a sharper image than 2x1 despite 2x2 containing 2xRGSS as well as 4xOGSS. But again, there's no need for me to post anything until you provide proof of your claims.
No one has made any real attempt at doing stochastic. ATi tried to come up with a messed-up alternating pattern to improve performance while increasing IQ, but there is no way to do true stochastic in a real-time pipeline; too many things would get broken in the process.
Actually, I was checking this before, and it turns out the Radeon 8500 could alter the sample pattern at the pixel level in a pseudo-random fashion. That's virtually stochastic, is it not?
In light of this evidence, I would now have to say the Radeon 8500 beats the Voodoo 5 in AA quality, and it would therefore have to be added to my original list as the only consumer part to ever deliver stochastic AA.
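For anyone unfamiliar with the term, "stochastic" (jittered) sampling in the classic sense keeps one sample per stratum but randomizes its position from pixel to pixel, trading structured aliasing for noise. Below is a generic software sketch of that idea; it is not a description of how the 8500 actually varied its pattern:

#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(1234);                       // fixed seed for repeatability
    std::uniform_real_distribution<float> u(0.0f, 1.0f);

    const int pixels = 3, grid = 2;               // 2x2 strata -> 4 samples per pixel
    for (int p = 0; p < pixels; ++p) {
        std::printf("pixel %d:", p);
        for (int sy = 0; sy < grid; ++sy)
            for (int sx = 0; sx < grid; ++sx) {
                // Random offset inside each stratum; the pattern differs
                // from pixel to pixel, which breaks up regular aliasing.
                float x = (sx + u(rng)) / grid;
                float y = (sy + u(rng)) / grid;
                std::printf(" (%.2f, %.2f)", x, y);
            }
        std::printf("\n");
    }
    return 0;
}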
I don't even need to use OpenGL: SplinterCell under D3D. It isn't hard to do at all.
Woohoo, one title. So that makes it even with 3dfx's motion blur, given there was a patch for Quake 3 to enable it.
