What is Nvidia's problem?

Page 3

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
AFAIK the NV40 uses pretty much the same algorithms for both AA and AF as the R3xx/R420 boards do, so IQ and performance hits should be a wash there, though nVidia may have less texture shimmering because they use more bits than ATi for their texture sampling. The only two situations where they aren't a wash:

  • nVidia's 8x mode uses SSAA which will invariably take a large performance hit, though it can AA textures which MSAA can't. In this case you're trading speed for extra IQ.
  • In shader intensive titles nVidia should take a larger hit when running AF because their shaders share the texturing work. On the ATi boards the texture units are decoupled from the shaders and can operate at the same time. Using SM 3.0 on nVidia cards can narrow that gap because it can reduce the time the shaders are occupied.
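The SSAA-vs-MSAA trade-off in the first point comes down to shading cost: SSAA runs the pixel shader for every sub-sample (so textures and shader output get anti-aliased too), while MSAA shades once per pixel and only takes extra coverage samples for geometry edges. A toy cost model, with illustrative numbers rather than measurements from any real GPU:

```python
# Toy cost model contrasting MSAA and SSAA shading work for one frame.
# Illustrative only - real GPUs add bandwidth and resolve costs on top.

def shading_invocations(pixels, samples, mode):
    """Return how many times the pixel shader runs for a frame.

    SSAA shades every sub-sample, which is why it also anti-aliases
    textures and shader aliasing; MSAA shades once per pixel and only
    resolves geometry coverage at the sub-sample level.
    """
    if mode == "ssaa":
        return pixels * samples   # full shading cost per sample
    elif mode == "msaa":
        return pixels             # one shading pass, N coverage samples
    raise ValueError(mode)

pixels = 1024 * 768
print(shading_invocations(pixels, 4, "msaa"))  # 786432
print(shading_invocations(pixels, 4, "ssaa"))  # 3145728
```

This is why 4x SSAA roughly quadruples fill/shading load while 4x MSAA is far cheaper, and why the 8x mode that mixes in SSAA takes such a large hit.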

In some of the benchmarks, the performance hit from AF was completely minimal, while in others (Far Cry and UT2004, for example), it nearly halved the speed.
Answered in my second point above.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
AF isn't really usable on the GF4, unfortunately, and 16X ATI AF looks fine compared to 4X "GeForce 4 quality" AF.

"Looks fine" = serious texture aliasing and screaming mip transitions in almost every game (and that's using their best-quality AF part ever) - you know, the things AF is supposed to get rid of (well, not mip transitions, but certainly texture aliasing).

I haven't sat there nitpicking the differences, but I do notice the difference of 8X/16X AF on my 9800 Pro.

Textures get sharper and aliasing increases. You can get sharper textures on a Voodoo3 by adjusting the LOD bias without a huge hit; ATi's and now nV's implementations are better than that - how much better is debatable.
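The Voodoo3 trick mentioned above can be sketched with the standard mip-level selection formula: the hardware picks a mip from the screen-space texel footprint, and a negative LOD bias forces a more detailed (sharper) mip than the footprint calls for, which is exactly what reintroduces aliasing. The formula is standard; the clamp range here is an illustrative assumption.

```python
import math

def mip_level(texel_footprint, lod_bias=0.0, max_level=10):
    """Pick a mipmap level from the screen-space texel footprint.

    texel_footprint ~ how many texels one pixel covers along its
    longest axis. A negative bias selects a more detailed (sharper)
    mip than the footprint calls for - sharper textures, but the
    texture is now undersampled, so it shimmers.
    """
    level = math.log2(max(texel_footprint, 1.0)) + lod_bias
    return min(max(level, 0.0), max_level)  # clamp to valid mip range

print(mip_level(4.0))        # 2.0 -> the "correct" level
print(mip_level(4.0, -1.0))  # 1.0 -> sharper, but undersampled
```

Proper AF instead takes more samples along the footprint's long axis, so it sharpens without the undersampling a raw LOD bias causes.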

but the problem isn't the software, and not something drivers can change

Drivers can help cover it up. nV can force brilinear all the time, use a detection method to cover their tracks when someone checks for it, and reduce their performance hit a considerable amount, comparable to ATi.
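"Brilinear" as discussed here is a blend of bilinear and trilinear: instead of always blending the two nearest mip levels, the driver only blends inside a narrow band around each mip transition and falls back to plain bilinear elsewhere, saving texture fetches. A minimal sketch of the blend weight; the band width is an illustrative guess, not any vendor's actual setting:

```python
def trilinear_weight(lod_fraction):
    """Full trilinear: always blend the two nearest mip levels."""
    return lod_fraction

def brilinear_weight(lod_fraction, band=0.3):
    """'Brilinear': blend only within a narrow band around the mip
    transition; outside it, sample a single mip (plain bilinear).

    band is the half-width of the blend region - smaller band means
    fewer second-mip fetches (faster) but more visible mip transitions.
    """
    lo, hi = 0.5 - band, 0.5 + band
    if lod_fraction <= lo:
        return 0.0            # bilinear from the lower mip only
    if lod_fraction >= hi:
        return 1.0            # bilinear from the upper mip only
    return (lod_fraction - lo) / (hi - lo)  # compressed blend region

print(brilinear_weight(0.1))  # 0.0 -> no second-mip fetch needed
print(brilinear_weight(0.5))  # 0.5 -> blending, as trilinear would
```

Shrinking the band toward zero degenerates to pure bilinear, which is why reviewers of the era checked for it with colored-mipmap tests.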
 

CaiNaM

Diamond Member
Oct 26, 2000
3,718
0
0
Drivers can help cover it up. nV can force brilinear all the time, use a detection method to cover their tracks when someone checks for it, and reduce their performance hit a considerable amount, comparable to ATi.

heh.. but that's not fixing anything, rather covering it up... reducing iq to increase performance is not an "improvement" imo... but hey, apparently most ppl don't care as af quality is getting progressively worse each gen :(
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
If ATI's is supposedly less accurate, and uses MSAA instead of supersampling, and people can't really notice a difference, then what is the point of supersampling? Yeah, I think it's good that Nvidia is trying their best to improve image quality, but if it generally looks the same, why do it?
Also, how many transistors would it take to add an ALU for one purpose (AA and AF)? I am guessing way too many, as it only has two. Also, is there any reason why Nvidia didn't make their shader programmable? It seems to have its benefits, though I imagine it would be a bit more complicated.

-Kevin
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
If ATI's is supposedly less accurate, and uses MSAA instead of supersampling, and people can't really notice a difference, then what is the point of supersampling?

In terms of texture filtering, ATi is less accurate in terms of implementation and particularly in terms of blending. The latter isn't really arguable; they clearly and provably are. SSAA is easy to tell apart from MSAA in older titles, where the technology is meant to be used. Fire up CounterStrike and load up the Italy map using 6x AA on any ATi part and you will see serious aliasing all over the railings. I like to use that particular example as it's still a heavily played game and one that has numerous alpha textures for simulating geometry. SSAA is useful and usable in that particular example, and there really is no reason for ATi not to support it except that it looks bad in benchmarks (user requests be d@mned).

Yeah, I think it's good that Nvidia is trying their best to improve image quality, but if it generally looks the same, why do it?

It doesn't look the same; mainly the legally blind, or close to it, are the ones who think so ;) For those who would rather sacrifice IQ for speed, which seems to be most people, that is the way to go. Unfortunately, nVidia is taking notice and reducing their IQ considerably to match ATi's and gain the speed that goes with it. There are numerous people on these boards who defend bilinear filtering at this point, which I thought was intolerably poor six years ago, let alone today. It doesn't look close to the same, but since most people 'don't notice it' (apologizing for their vendor of choice - nV's brilinear and ATi's tri/bi), the IHVs continue to lower quality. At the rate we are going, we'll be back to point filtering by the time the R700/NV70 roll around :p
 

rbV5

Lifer
Dec 10, 2000
12,632
0
0
Originally posted by: Naustica
Originally posted by: rbV5
at any rate there's little to complain about regarding the nv40, or r420 for that matter, other than perhaps their availability (or lack thereof).

You can probably complain a little about lack of feature support :)


Yeah, I'm still waiting on Nvidia to release official drivers for the 6800 series, not to mention driver support for the video encoder chip. The built-in encoding chip was the major reason I went with Nvidia this round, along with superior multi-monitor support. It also helped that I was able to get in on the $299 BFG 6800 GT deal. :D

But seriously, the performance of these cards is so good and so similar that feature set and price were more important to me than raw FPS speed. But I haven't heard or seen anything about the video encoder since the launch press release. I just want some kind of demonstration that it works! :frown:

It doesn't work yet from what I can see (PVP), and it's rarely mentioned anymore. My EVGA card has a brief mention on the side of the box about smoother video playback and HD playback, and that's it. Both ATI and Nvidia claim WMV acceleration as a feature, and neither appears to actually have it enabled. I'm not holding my breath about SM3.0 or 3Dc support because it also requires developer support, but it was April when these cards launched. Nvidia doesn't even have the new-gen demos available on their site yet.

I don't understand the hold-up on the encoder support, any more than I understand the lack of reviewers questioning it. The cards are here now and we're using them.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
What exactly is 3dc? (Hey everyone is allowed one noob question every once in a while :p )

Also are there any other types of AA not listed? Doesn't Matrox use a different technique?

-Kevin
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
3Dc is a new compression format for normal maps (a type of detail texture, as you can see in ATi's examples shown in the X800 XT/Pro reviews).
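The key idea behind 3Dc is that a unit normal only needs two stored components: the format keeps X and Y (each compressed much like a DXT5 alpha block) and the pixel shader reconstructs Z from the unit-length constraint. A minimal sketch of that reconstruction step:

```python
import math

def reconstruct_normal(x, y):
    """Rebuild a unit normal from the two channels 3Dc stores.

    3Dc keeps only X and Y; the shader recovers Z via
        z = sqrt(1 - x^2 - y^2)
    Tangent-space normals mostly point 'out' of the surface, so the
    positive root is the right one.
    """
    z_sq = max(0.0, 1.0 - x * x - y * y)  # clamp against compression error
    return (x, y, math.sqrt(z_sq))

n = reconstruct_normal(0.6, 0.0)
print(n)  # z comes back as ~0.8, restoring unit length
```

Spending both compressed channels on just X and Y is what lets 3Dc keep more precision per component than stuffing a normal map into a general color format like DXT1.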
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Like DXT1 and 32-bit and stuff, right? Is it a lot more effective or only marginally more effective? I guess this is hard to say because I don't think anything supports it yet.

-Kevin
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
If ATI's is supposedly less accurate, and uses MSAA instead of supersampling
They're different AA types, so calling one less accurate isn't the best way to describe it. Also, nVidia's sub-8x modes are MSAA just like ATi's.

and people can't really notice a difference, then what is the point of supersampling.
SSAA can AA textures, which MSAA can't do. SSAA has the disadvantage of being slower and potentially blurring things.

Yeah, I think it's good that Nvidia is trying their best to improve image quality, but if it generally looks the same, why do it?
If the hardware supports it (which it does) and the customers want it (which they do), then the vendor should really deliver.

Also, ATi's first response was that they can't gamma-correct SSAA, and then they changed their tune to say they don't provide options that most customers wouldn't want to use (i.e. because SSAA is too slow). In the end, neither answer really helps the customer who wants the feature.

Also how many transistors would it take to add an ALU for one purpose (AA and AF).
Usually these days the pixel shaders do both AF and AA.

Also, is there any reason why Nvidia didn't make their shader programmable,
Huh?
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Supersampling touches not just textures but also shaders, so it's only going to get slower as games become more graphically demanding.
 

stnicralisk

Golden Member
Jan 18, 2004
1,705
1
0
Originally posted by: Gamingphreek
I know about temporal AA (Nvidia should really get that), but I thought Nvidia just updated the way they do AA. Don't they use a rotated grid now, or is that what ATI uses? I mean, in every benchmark you see, Nvidia hardware begins to fail when you turn on AA and/or AF.

-Kevin

I thought in the recent 6800 vs X800 benchmarks Nvidia was winning with 4xAA/8xAF enabled?

Also (correct me if I'm wrong), but I thought temporal AA is the thing people have been complaining about lately - it makes it easy to see the mipmap borders or something.
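For reference, the idea behind ATi's temporal AA is simple: alternate between two mirrored MSAA sample patterns on successive frames, so that at a high, vsync'd framerate the eye averages consecutive frames and perceives roughly twice the sample count for the cost of one. At low framerates the alternation shows up as flicker instead. A sketch, with illustrative offsets rather than ATi's actual pattern:

```python
def temporal_sample_pattern(frame_index):
    """Return this frame's 2x MSAA sample offsets (sub-pixel units).

    Even frames use one pattern, odd frames its mirror; averaged over
    consecutive frames this approximates 4x AA at 2x cost. Offsets are
    illustrative, not ATi's real sample positions.
    """
    pattern_a = [(0.25, 0.75), (0.75, 0.25)]
    pattern_b = [(0.75, 0.75), (0.25, 0.25)]
    return pattern_a if frame_index % 2 == 0 else pattern_b

print(temporal_sample_pattern(0))  # frame 0: pattern A
print(temporal_sample_pattern(1))  # frame 1: mirrored pattern B
```

Note this only alternates sample positions per frame; the mipmap-border complaints in this era are usually about brilinear filtering, a separate optimization.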
 

stnicralisk

Golden Member
Jan 18, 2004
1,705
1
0
Originally posted by: Gamingphreek
What are you talking about!
Just for the record, I have no degrees (yeah, I'm still in high school).

Anyway, I'm not calling them stupid, but why don't they fix this!?!? This has been a problem since AA and AF came out. Nvidia has always had trouble implementing these two as effectively as ATI. If all they have to do is shrink the die and add some stuff, what is possibly so hard? Granted, this is out of my league, as I don't know what is involved in die shrinks and stuff (getting into the Highly Technical forum there).
I know I'm asking for a more or less perfect video card, but what is so hard about all the stuff listed?

-Kevin

Intel's processor: ~140M transistors
NV40: ~222M transistors
The 90nm process gave Intel a hard time - they had to delay twice while trying to perfect it, IIRC.
Intel's resources > Nvidia's resources.

Do you see the problem now?