nVidia Supports SSAA NOW!


bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
Originally posted by: Gstanfor
I could go on and on, but I think you start to get the picture. ATi relies on their optimizations and lack of user choice to maintain their speed advantage. Take away the optimizations and hardware shortcuts and they would fall flat on their face.
Right, so according to you nVidia could just optimize and dominate? Or are you suggesting the company has moral standards?
 

Pr0d1gy

Diamond Member
Jan 30, 2005
7,774
0
76
Originally posted by: bunnyfubbles
Originally posted by: Gstanfor
I could go on and on, but I think you start to get the picture. ATi relies on their optimizations and lack of user choice to maintain their speed advantage. Take away the optimizations and hardware shortcuts and they would fall flat on their face.

Right, so according to you nVidia could just optimize and dominate? Or are you suggesting the company has moral standards?


Yeah, nVidia ships their crappy cards to reviewers with crappy drivers, and the reviewers respect ATi's wishes that nVidia cards' optimizations be turned off for testing... Why do people still believe this? nVidia is holding me back, Padme!!! Please. It's such a lame excuse and poor defense that it makes me scoff at those foolish enough to suggest it. As an nVidia card owner for the past two generations, even I know this is total BS.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0


Originally posted by: bunnyfubbles
Originally posted by: Gstanfor
I could go on and on, but I think you start to get the picture. ATi relies on their optimizations and lack of user choice to maintain their speed advantage. Take away the optimizations and hardware shortcuts and they would fall flat on their face.
Right, so according to you nVidia could just optimize and dominate? Or are you suggesting the company has moral standards?

Yes, I believe nVidia does have moral standards. They had a short lapse around the introduction of nV30 to be sure, but they have more than made up for that.

Speaking of morals, would you say that inciting websites to criticize your competition for using filtering optimizations, while claiming you don't optimize when in reality you do (3dmurk/trylinear), is moral? How about claiming on TV that your product can fully utilize certain memory types (DDR II) when in reality the memory is being used in a compatibility mode (DDR II memory has a DDR I compatibility mode available)?

Those are just two quick and fun examples. There are plenty more if you want to discuss morals.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
I'm not understanding what you are saying here - with identical samples MSAA will smooth poly edges the exact same as SSAA (in terms of IQ).
ATi's 6xMSAA compared to nVidia's 8xS: the former will handle polygon edges much better. Another advantage of MSAA over SSAA is that it doesn't blur the image like SSAA does.
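
To make that "blur" point concrete, here is a minimal sketch of the SSAA resolve step, assuming a 2x2 ordered grid (the NumPy framing and function name are mine, purely illustrative, not from any driver): because every pixel becomes the average of four shaded samples, texture interiors get softened along with edges. MSAA shades once per pixel and only averages coverage samples along polygon edges, so it avoids this.

import numpy as np

def ssaa_2x2_resolve(hi_res):
    """Box-filter a (2H, 2W, C) render down to (H, W, C)."""
    h, w = hi_res.shape[0] // 2, hi_res.shape[1] // 2
    # Group each 2x2 block of samples and average it into one pixel.
    return hi_res.reshape(h, 2, w, 2, -1).mean(axis=(1, 3))

hi = np.random.rand(4, 4, 3)   # stand-in for a double-resolution render
lo = ssaa_2x2_resolve(hi)      # the resolved frame
print(lo.shape)                # (2, 2, 3)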

I find it hard to believe the lack of SSAA is just ATI being jerks.
I think I read an ATi rep saying they don't like shipping unusable features, and SSAA is unusable because it's too slow.

I personally prefer 6xAA over 8xS; 8xS is too slow and can really only be used in some old games. OTOH everywhere where I use 4xAA (which is a significant portion of my gaming library) I could basically be using 6xAA instead if I had an ATi card. 6xAA is remarkable in that it looks great but it still runs damned fast.

nVidia, by contrast, gives you an enormous amount of choice and doesn't force you into doing things one way only.
I wouldn't go that far; for some things you get more choice, but in others you have no say in the matter.

How, for example, does one disable nVidia's application detection and shader substitution? You can disable Catalyst AI through the control panel.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
First you have to demonstrate that nVidia actually is doing shader substitution in its current drivers - something that no one has managed yet.

Then you need to consider that ATi has a replacement Doom 3 shader (shader substitution anyone???) and ATi were the people (yet again) who caused a stink about how nVidia might be doing shader substitution in the first place!
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
First you have to demonstrate that nVidia actually is doing shader substitution in its current drivers
Carmack's plan files mention this.

And we know nVidia is doing application detection because there's no way to remove the default profiles (another choice nVidia doesn't give you, unlike ATi).

Then you need to consider that ATi has a replacement Doom 3 shader (shader substitution anyone???)
I agree completely. The thing is you can disable it by turning off Catalyst AI.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Carmack's plan files mention that he wrote the shader to allow nVidia (or anyone) to use partial precision, so nVidia is doing nothing wrong. (Not at home or I would quote him directly.)

There is a way to remove the default profiles BTW. nHancer allows this...

Edit: here is the carmack quote I referred to above:
The quote is from me. Nvidia probably IS "cheating" to some degree, recognizing the Doom shaders and substituting optimized ones, because I have found that making some innocuous changes causes the performance to drop all the way back down to the levels it used to run at. I do set the precision hint to allow them to use 16 bit floating point for everything, which gets them back in the ballpark of the R300 cards, but generally still a bit lower.

Removing a back end driver path is valuable to me, so I don't complain about the optimization. Keeping Nvidia focused on the ARB standard paths instead of their vendor specific paths is a Good Thing.

The bottom line is that the ATI R300 class systems will generally run a typical random fragment program that you would write faster than early NV30 class systems, although it is very easy to run into the implementation limits on the R300 when you are experimenting. Later NV30 class cards are faster (I have not done head to head comparisons with non-Doom code), and the NV40 runs everything really fast.

Feel free to post these comments.

John Carmack

Note: the italicised bit refers directly to nV30 (and nV31, nV34); I don't believe it applies to nV35 and beyond (nV35 had refined FP capabilities over nV30, but partial precision is still important for performance).
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Matthias, I've also read interviews of ATI people saying they wouldn't implement SSAA for Windows b/c it was too much work for too little benefit. While I don't know how hard it would be, I'm sure people would make use of it. I'm not sure it's *entirely* accurate to say they took it away or hid it from users, though. After all, SSAA may be working in Mac OS, but that's just for OGL. How do we know it isn't more complicated to adapt "aftermarket" SSAA to D3D? Or would the method of enabling SSAA in D3D be the same as they're doing with their SuperAA modes?

As for performance, if cards offered only 2x2 ("4x") SSAA before, why would ATI think people wouldn't use it on the much faster hardware of today?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
ATi's 6xMSAA compared to nVidia's 8xS

Ordered v rotated grid.... what does that have to do with comparing like sampling patterns?

I think I read an ATi rep say they don't like shipping unusable features and SSAA is unusable because it's too slow.

Apologizing for ATi now? The games that desperately need SSAA are the older titles- their R9600 non pro has enough power to handle SSAA in those games.

I personally prefer 6xAA over 8xS; 8xS is too slow and can really only be used in some old games. OTOH everywhere where I use 4xAA (which is a significant portion of my gaming library) I could basically be using 6xAA instead if I had an ATi card.

Have to say I haven't had a chance to try out the X850XT PE, but even at low resolutions 6xAA on a R9800Pro is utterly unplayable on anything remotely new as well. To make the situation worse, in the older games where it is usable it still shows horrific aliasing due to the heavy utilization of alpha textures.

How for example does one disable nVidia's application detection and shader subsitution? You can disable Catalyst AI through the control panel.

No, you can't. Unlike you, I am currently running ATi hardware, and you cannot disable their shader replacements. I am still waiting for you, btw, to jump into every ATi thread and bash them for doing the exact same thing nVidia has been doing. You claimed in the past it wouldn't make a difference to you which company does it; very clearly it does.

And we know nVidia is doing application detection because there's no way to remove the default profiles (another choice nVidia doesn't give you, unlike ATi).

ATi in no way gives you a choice- you get inflated 'cheating' bench scores all the time.

After all, SSAA may be working in Mac OS, but that's just for OGL. How do we know it isn't more complicated to adapt "aftermarket" SSAA to D3D?

So why wouldn't it be on under OpenGL then?
 

Emultra

Golden Member
Jul 6, 2002
1,166
0
0
Originally posted by: Matthias99
AA of any type slightly softens the image (either edges with MSAA, or any contrasting areas with SSAA). However, this is actually more accurate (in a mathematical/sampling-theory sense). Aesthetically, some folks don't like it and would rather have well-defined (but jagged) edges. Meh.

If text/GUI elements are getting hit with AA, the result can be pretty ugly (since these things have already been drawn as sharp as possible, trying to use AA on them really does just blur them). This is usually because of a bug/flaw in the game engine or its interaction with the driver, NOT a problem with the way antialiasing works. The game *should* draw any 2D text/GUI elements *over* the rendered image, after it has been processed with AA/AF.

I guess you're right, but picture this: you have a metal ball on a pole. Without AA the ball is less round, but what it does have is razor-sharp edges, a feature true to its nature. With AA, it is rounder (which it should be), but blurred, as if it were "fogged" somehow. But being a ball, it should be sharp.

However, I find the benefits of AA outweigh its disadvantages. Aliasing is so noticeable sometimes - even during CS:S multiplayer - that I'd rather correctomathematize (a stretch, but still a verb) the "stairsteps". ;)
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Originally posted by: BenSkywalker
After all, SSAA may be working in Mac OS, but that's just for OGL. How do we know it isn't more complicated to adapt "aftermarket" SSAA to D3D?

So why wouldn't it be on under OpenGL then?
Beats me. Like I said, I'd certainly like to have it as an option, even if just for OGL titles. I guess that since OGL titles aren't that plentiful in Windows, ATI isn't in any rush. I still think it'd be a nice touch for prosumers to note and use and tout. They can think of it as an underground marketing tactic.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Originally posted by: Ronin
600,000 Textures @ 4x AA = 9.4MB
Can someone explain this to me? I wasn't aware of texture size directly relating to AA memory usage.
 

trinibwoy

Senior member
Apr 29, 2005
317
3
81
Originally posted by: Pete
Originally posted by: Ronin
600,000 Textures @ 4x AA = 9.4MB
Can someone explain this to me? I wasn't aware of texture size directly relating to AA memory usage.

Don't think you'll be getting that explanation anytime soon :laugh:
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
what does that have to do with comparing like sampling patterns?
I'm sorry, where did I claim it did?

Apologizing for ATi now?
Somebody asked why ATi didn't ship SSAA on their cards and I was merely responding with what I had heard from one of their reps.

The games that desperately need SSAA are the older titles-
Nonsense - you're blowing this way out of proportion. 8xS is a nice option but it's not a must-have by any means. If you're referring to real old games like X-Wing, well, they don't even use transparency.

their R9600 non pro has enough power to handle SSAA in those games.
Sure, if you like gaming at low/middling resolutions. Try 8xS on a 6800U with GLQuake @ 1920x1440 and you'll be getting around 75 FPS average in the timedemos. Nothing wrong with that mind you, but let's not get delusional as to how many titles 8xS can actually be used in.

but even at low resolutions 6xAA on a R9800Pro is utterly unplayable on anything remotely new also.
Right...and yet you're expecting SSAA to be perfectly usable on a 9600 Pro and lambasting ATi for not offering it.

To make the situation worse- on the older games where it is useable it still has horrific aliasing due to the heavy utilization of alpha textures.
I'm not sure what older games you're playing so I can't really comment but at 1920x1440 alpha aliasing isn't really a problem for my old games so xS AA is just a nice extra.

I am currently running ATi hardware and you can not disable their shader replacements.
ATi in no way gives you a choice- you get inflated 'cheating' bench scores all the time
Sure you can. Disable the AI in the CCC and it's gone.

I am still waiting for you, btw, to jump into every ATi thread and bash them for doing the exact same thing nVidia has been doing.
I've already commented multiple times I don't like either vendor detecting applications for performance reasons but of course I'm going to be more lenient to a vendor that allows it to be disabled.

I mean if I wanted to really have a go at nVidia I'd comment that my Radeon 7000's 16x bilinear AF exhibits less shimmering and mip transitions than a 6800U with 16x trilinear + all optimizations enabled. Of course nVidia's optimizations can be disabled so there's no reason to bring it up.

So why wouldn't it be on under OpenGL then?
I'm not answering your question directly, but I will point out that Mac OS X's OpenGL structure and implementation is quite different from Windows'.
 

jasonja

Golden Member
Feb 22, 2001
1,864
0
0
Originally posted by: trinibwoy
Originally posted by: Pete
Originally posted by: Ronin
600,000 Textures @ 4x AA = 9.4MB
Can someone explain this to me? I wasn't aware of texture size directly relating to AA memory usage.

Don't think you'll be getting that explanation anytime soon :laugh:


Texture size has nothing to do with AA memory usage. Only the flip chain (primary and back buffers), the Z buffer size, and the AA sample count impact AA memory usage. BTW, 10MB is enough when you're talking about the typical console resolution of 640x480.
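
As a rough check of that claim (my arithmetic, assuming 4 bytes of color and 4 bytes of Z per sample, single-buffered): 640 x 480 x 4 samples x 8 bytes = 9,830,400 bytes ≈ 9.4MB - which happens to match the 9.4MB figure that started this exchange.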
 

Ronin

Diamond Member
Mar 3, 2001
4,563
1
0
server.counter-strike.net
Texture size has everything to do with memory usage, since however many textures there are directly impacts how much memory is used. However much memory is used by the textures x4 for 4x AA = total texture size/memory usage. Your assessment is completely off base. Since you're doing 4 passes on the textures, what did you propose would be happening?
 

trinibwoy

Senior member
Apr 29, 2005
317
3
81
Originally posted by: Ronin
Texture size has everything to do with memory usage, since however many textures there are directly impacts how much memory is used. However much memory is used by the textures x4 for 4x AA = total texture size/memory usage. Your assessment is completely off base. Since you're doing 4 passes on the textures, what did you propose would be happening?

You are wrong. The resolution and quantity of textures impact memory usage. It has nothing to do with AA or the number of passes. So your "600,000 textures @ 4xAA" statement above makes no sense.

Framebuffer usage = AA level x storage format size (usually 4 bytes, 8 for FP16) x vertical resolution x horizontal resolution. Then multiply by two to get the total for the front and back buffers, and then there's stencil buffer usage, etc.
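
To put that formula in code, here is a minimal sketch (the function name, parameter defaults, and example numbers are mine, just to show the arithmetic):

def aa_framebuffer_bytes(width, height, samples, bytes_per_sample=4, buffers=2):
    """Color storage for a multisampled framebuffer: samples x bytes
    per sample x pixel count, times the number of buffers in the flip
    chain (front + back = 2). Z/stencil storage is extra."""
    return width * height * samples * bytes_per_sample * buffers

# Example: 1280x1024 with 4xAA and A8R8G8B8 (4 bytes per sample):
print(aa_framebuffer_bytes(1280, 1024, 4) / 2**20)  # 40.0 MB of color

Note that no term for the number of textures appears anywhere in it.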
 

Ronin

Diamond Member
Mar 3, 2001
4,563
1
0
server.counter-strike.net
No, I'm not wrong, but you can continue to argue with me if you wish to continue spouting misinformation. I had this exact discussion at E3 with nVidia.

Take your misplaced information elsewhere, because that's what it is.

Or do some more research about it, and perhaps you might understand how it works.
 

trinibwoy

Senior member
Apr 29, 2005
317
3
81
Originally posted by: Ronin
No, I'm not wrong, but you can continue to argue with me if you wish to continue spouting misinformation. I had this exact discussion at E3 with nVidia.

Take your misplaced information elsewhere, because that's what it is.

Or do some more research about it, and perhaps you might understand how it works.

Is that what you do when you're wrong? :roll: I have provided a formula for calculating framebuffer space usage. You have yet to explain your cryptic "600,000 textures @ 4xAA" statement. If it is correct then surely you must have an explanation.

I assume you know how AA works. If you do then you shouldn't mind explaining how the number of textures is relevant to the AA process and how that relationship impacts memory usage.

I'll restate my formula again - feel free to explain to me why it is wrong and why we should be using your magical equation.

Framebuffer usage = #samples per pixel x storage size (4 bytes, 8 for FP16) x no of pixels in frame (vertical x horizontal resolution)

Simply put, for each pixel on the screen we use 4 bytes, e.g. A8R8G8B8: 8 bits per color channel plus 8-bit alpha. Then for each pixel we have x number of samples according to the level of AA.

The only time textures come in here is when the lookup is done to fetch the color of the pixel. Now I'm sure you will now demonstrate to me why I'm wrong.....
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Ronin, if you understand how it works, why are you misapplying the term "texture?" Why not say subpixel color and depth value? Framebuffer size--by that I mean the color depth and pixel count, not texture size--determines MSAA memory usage. Higher-resolution textures don't affect buffer sizes, which is what you seem to be implying when you say "textures" yield memory usage.

Can you explain the math behind 600,000 textures @ 4xAA = 9.7MB? What units are you using for each of the first two variables? And how did you arrive at 600,000? 1280x720 ≈ 900,000; 720x480 ≈ 350,000. Did you just divide 9.7MB by 4Bpp and then 4 samples to arrive at 600k? That leaves "pixel" as the unit for 600k, not "texture." I assume you mean to say that, according to your formula, a 10MB buffer can only support 600k pixels?
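
Running that division as a sanity check (assuming 32-bit color): 10MB / (4 bytes per sample x 4 samples per pixel) = 10,485,760 / 16 = 655,360, so a 10MB buffer holds roughly 650k pixels' worth of 4xAA color samples - right in that 600k ballpark.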

It appears to me that you're misusing your terminology. OTOH, maybe you're just using more old-school terminology, when textures were the main component of pixel color (but, even in that case, you'd probably want to say "texel" rather than "texture").

I'm curious, are you a developer or reviewer, or are you simply, like many of us, an interested amateur?
 

trinibwoy

Senior member
Apr 29, 2005
317
3
81
Originally posted by: Pete
I assume you mean to say that, according to your formula, a 10MB buffer can only support 600k pixels?

It appears to me that you're misusing your terminology. OTOH, maybe you're just using more old-school terminology, when textures were the main component of pixel color (but, even in that case, you'd probably want to say "texel" rather than "texture").

I thought that's what he meant at first, but then I read his references to the number of passes, texture size, and the "number of textures". I can understand if he is misusing "textures" in place of "texels" or "pixels", but the number-of-passes bit is strange. Even so, I don't think there's anybody who would confuse a texture with a pixel.

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
BFG-

OTOH having a higher MSAA level smooths out polygon edges much better.

ATi's 6xMSAA compared to nVidia's 8xS: the former will handle polygon edges much better. Another advantage of MSAA over SSAA is that it doesn't blur the image like SSAA does.

I'm sorry, where did I claim it did?

How am I supposed to read your comments? I stated that SSAA smooths out edges just as well as MSAA with like sampling positions, which is true - what were you trying to say when talking about MSAA smoothing out edges better?

Somebody asked why ATi didn't ship SSAA on their cards and I was merely responding with what I had heard from one of their reps.

Which, said another way, is parroting ATi's PR. They don't want to look bad in benches - the same reason nVidia 'hides' 4xS in their drivers.

Nonsense - you're blowing this way out of proportion. 8xS is a nice option but it's not a must-have by any means. If you're referring to real old games like X-Wing, well, they don't even use transparency.

4xS is quite usable in a wide variety of titles, and 8xS is too for games that don't allow high resolutions.

Sure, if you like gaming at low/middling resolutions.

Do you think of 1920x1440 as high res? Games that can't be played at reasonable resolutions are those that benefit the most from 8xS.

Try 8xS on a 6800U with GLQuake @ 1920x1440 and you'll be getting around 75 FPS average in the timedemos. Nothing wrong with that mind you, but let's not get delusional as to how many titles 8xS can actually be used in.

So you run it @2048x1536 w/4xS.

Right...and yet you're expecting SSAA to be perfectly usable on a 9600 Pro and lambasting ATi for not offering it.

So your stance is now that the R9600Pro is slower than a Voodoo5 or GeForce DDR? I'm trying to make sense of what you are saying here.

I'm not sure what older games you're playing so I can't really comment but at 1920x1440 alpha aliasing isn't really a problem for my old games so xS AA is just a nice extra.

Try Anachronox- 1280x960 is the max resolution allowed.

Sure you can. Disable the AI in the CCC and it's gone.

Where exactly? It hasn't been in a single driver revision that I have seen.

I've already commented multiple times I don't like either vendor detecting applications for performance reasons but of course I'm going to be more lenient to a vendor that allows it to be disabled.

Even if they don't allow it to be disabled. I have heard the tale of how you can 'disable' the optimizations in ATi's drivers numerous times, and it may work on the die-hard nVidia faithful who aren't running ATi. I am, and you can't shut them off in any driver revision I have seen.

I mean if I wanted to really have a go at nVidia I'd comment that my Radeon 7000's 16x bilinear AF exhibits less shimmering and mip transitions than a 6800U with 16x trilinear + all optimizations enabled.

That's a pretty sad state given how poorly you see mip transitions to start with (I was shocked at how slap-in-the-face bad they were on both R3x0 parts after hearing you speak as if performance AF was actually usable). Still waiting for NV2x-quality AF on any new part (I'd drop the $1K on an SLI setup that could make it playable at decent settings). One of the reasons I don't have a big urge to upgrade.

I'm not answering your question directly but I will point out that MacOS X's platform OpenGL structure and implementation is quite different to Windows.

I understand you weren't answering my question, but there is nothing stopping them from enabling SSAA except that they don't want to look bad in reviews. At the very least they should have it as a 'hidden' option like 4xS is for nV.
 
Jun 14, 2003
10,442
0
0
Originally posted by: Gstanfor
With nVidia, 2x AA is 2-sample rotated-grid MSAA. This is true right back to GF3, which pioneered MSAA. Prior to that, AA on nVidia GPUs was OGSS (ordered-grid supersampling).

Quincunx is 2x MSAA with a blur filter applied in the ROPs.

Strangely enough, 4x AA on nVidia GPUs has always been ordered grid, regardless of MSAA or SSAA (OGMS, OGSS). This only changed with nV4x (the GeForce 6 series).

HRAA: High-Resolution Antialiasing through Multisampling (pdf)

That explains it. I thought 2xQ was like 4xAA without the performance hit, but menus were blurry as hell.

It's not really usable IMO, so they should ditch it.
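
Since ordered versus rotated grids come up repeatedly in this thread, here is a small illustrative script (the sample positions are representative patterns of my own choosing, not vendor-exact): for a nearly vertical or horizontal edge, what matters is how many distinct positions the samples occupy when projected onto each axis, and a rotated grid spends its samples more wisely there.

ordered_4x = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
rotated_4x = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

for name, pattern in (("ordered", ordered_4x), ("rotated", rotated_4x)):
    xs = len({x for x, _ in pattern})   # distinct horizontal positions
    ys = len({y for _, y in pattern})   # distinct vertical positions
    print(f"{name}: {xs} distinct x, {ys} distinct y")

# ordered: 2 distinct x -> a near-vertical edge can only produce coverage
#          levels 0, 2/4, 4/4 (three shades).
# rotated: 4 distinct x -> coverage 0, 1/4, 2/4, 3/4, 4/4 (five shades),
#          a smoother gradient for the same sample count.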