depth compression and depth clear questions

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
1. Nvidia once said their lossless depth buffer compression was 4:1, but how can it always be 4:1? I was under the impression that lossless compression varies depending upon what's being compressed. Wouldn't 4:1 actually be the maximum ratio for lossless depth buffer compression (or at least the maximum when LMA on the GeForce 3/4(?) first introduced it)?

2. GPUs didn't use to have fast depth clear. Does fast depth clear reduce quality only barely perceptibly, or does it not reduce quality at all? I was thinking it reduced quality in at least a few cases.

I'm at a loss as to what exactly AMD does that makes their z-range appear shorter than Nvidia's. Maybe they use a hybrid compression scheme, maybe they clear their z-buffer too fast, or maybe it's something else. I read that they planned to make early Z occlusion culling more aggressive, which is why I'll never buy an AMD GPU again. They've cut way too many corners on IQ (their colors didn't look as good; some were overvibrant, others were washed out, and while I heard they improved their texture filtering, I still don't know if it's as good as Nvidia's), and they were lacking some features in games that Nvidia did have. Since they didn't work with and pay the game developers to include those features (some are actually good, like JC2's water simulation), their products aren't worth buying IMO. That was a really stupid decision IMO, although they may have fixed it for the future. I also heard that their reference designs didn't always use ball-bearing fans, and that the 6870 performs worse than the GTX 560 Ti yet runs hotter.

Anyway, answer whatever you can; I know the compression questions should be easy for people who know all about data compression.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Let's play spot the differences:

Anisotropic filtering in theory (ComputerBase.de):

**edit:

To me at least, it looks like AMD's AF > Nvidia's AF.



580 (Nvidia GF110 - AF-Tester 16xAF): 61.png

7970 (AMD Tahiti - AF-Tester 16xAF): 59.png

AMD has addressed this problem, and also the revised weighting of the AF samples that caused strong flickering on Cayman & Co. On the older generations this was a bug in the texture units: they use the same number of samples (and thus the same computational effort) as the GCN TMUs, but distribute them incorrectly.

And it is quickly apparent that the problem has (finally!) been solved: the anisotropic filtering on the GCN hardware flickers much less than on the Radeon HD 6000 and Radeon HD 5000 cards. With the default driver settings, barely any flickering can be seen.
AMD fixed their flickering issue... with these new cards :) that's good.
I absolutely hate flickering... which sadly my 5870 does a lot >_>


Here are the HQAF pictures:


580 (Nvidia GF110 - AF-Tester 16xHQAF): 62.png

7970 (AMD Tahiti - AF-Tester 16xHQAF): 60.png
 
Last edited:

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Big deal, right?

AMD has better image quality inside the AF tester... that's just theory... show us some pictures from games! Sure:



Again, here I think it looks best for AMD... but they're so close it's like... really, can you spot a difference?

ComputerBase.de tested like 3 games and gave Nvidia a tiny edge in 2 of the 3 games in HQAF mode, and 1 of them to AMD. However, the differences are so minute you seriously have to get out a microscope to spot them.

Also it's more or less subjective which you think looks best at this point; ComputerBase.de loves Nvidia :p

I'm hard pressed to find any differences inside games.


**edit:

The z-buffer is used for Z-culling, right? When an object in front is shown and the one behind it is hidden? Like the gun in the Serious Sam 3 pic. Why don't you study the two pictures below, look at the guns, and make up your own mind about the better z-buffering techniques, etc.
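For anyone unfamiliar with the mechanism, here's a minimal Python sketch of the depth test the z-buffer makes possible (the buffer size, pixel coordinates and the "gun"/"wall" labels are made-up placeholders, not anything from an actual renderer): a fragment only ends up on screen if it's closer than whatever has already been drawn at that pixel.

```python
# Minimal z-buffer depth test: keep the nearest surface per pixel.
def draw_fragment(x, y, depth, color, z_buffer, color_buffer):
    """Standard 'less-than' depth test against the stored depth."""
    if depth < z_buffer[y][x]:        # closer than what's already there?
        z_buffer[y][x] = depth        # remember the new nearest depth
        color_buffer[y][x] = color    # and show this surface
    # otherwise the fragment is hidden (e.g. the wall behind the gun)

W, H = 4, 3
z_buffer = [[1.0] * W for _ in range(H)]       # cleared to the far plane
color_buffer = [["bg"] * W for _ in range(H)]

draw_fragment(1, 1, 0.8, "wall", z_buffer, color_buffer)   # far surface drawn
draw_fragment(1, 1, 0.2, "gun", z_buffer, color_buffer)    # nearer surface wins
draw_fragment(1, 1, 0.5, "enemy", z_buffer, color_buffer)  # behind the gun: rejected

print(color_buffer[1][1])  # -> "gun"
```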


Serious Sam 3:


580 (Nvidia GF110 - SS3 16xAF): 67.jpg

7970 (AMD Tahiti - SS3 16xAF): 57.jpg
 
Last edited:

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
My point is just... I don't buy the image quality argument.

The flickering is a bitch though... apparently it's around Nvidia's level with the newer cards, which means almost nonexistent.

If I buy a new card, I'm going with a 7xxx card, 'cuz the 6xxx and 5xxx both have flickering and it's maddening
(for me at least; it's not apparent in all games, but in some it is).
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Here is Half-Life 2:

Nvidia GF110 - Half-Life 2 16xHQAF: 64.jpg

AMD Tahiti - Half-Life 2 16xHQAF: 54.jpg


And also Skyrim:


Nvidia GF110 - Skyrim 16xHQAF: 66.jpg

AMD Tahiti - Skyrim 16xHQAF: 56.jpg

Again, hard to spot any differences between AMD and Nvidia.

They've cut way too many corners on IQ (their colors didn't look as good; some were overvibrant, others were washed out, and while I heard they improved their texture filtering, I still don't know if it's as good as Nvidia's), and they were lacking some features in games that Nvidia did have.
AMD has more vibrant colours; Nvidia's are usually darker... I agree with that.
I prefer them that way; it's subjective which you think looks best.

Washed out? Where? I don't see that. Flickering is more or less gone with the 7xxx series, so that's around the same level too now.

Lacking features? Like PhysX? Meh.



I also heard that their reference designs didn't always use ball-bearing fans, and that the 6870 performs worse than the GTX 560 Ti yet runs hotter.
On Newegg you can get a 6870 for ~$140, while the 560 Ti costs around $229.
You would *hope* at that price difference it's at least a little bit faster, right?

The 560 Ti is about ~5% faster (going by TechPowerUp's charts) and costs $89 more
(~64% more cost for ~5% more performance, going by card prices on newegg.com).
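Rough back-of-the-envelope math with those numbers (the prices are the Newegg ones quoted above and the ~5% comes from TechPowerUp's chart; both obviously drift over time):

```python
# Price/performance math using the figures quoted in this post.
price_6870, price_560ti = 140, 229   # USD, Newegg prices at the time
perf_6870, perf_560ti = 1.00, 1.05   # 560 Ti ~5% faster per TechPowerUp

extra_cost = price_560ti - price_6870                 # $89
extra_cost_pct = extra_cost / price_6870 * 100        # ~64% more money
extra_perf_pct = (perf_560ti / perf_6870 - 1) * 100   # ~5% more speed

print(f"${extra_cost} (~{extra_cost_pct:.0f}%) more for ~{extra_perf_pct:.0f}% more performance")
print("perf per dollar:", round(perf_6870 / price_6870, 5),
      "(6870) vs", round(perf_560ti / price_560ti, 5), "(560 Ti)")
```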

And yes, the 560 Ti reference design is both a little less noisy and runs a little cooler under load than the 6870 reference design.

You have to ask whether you want the quieter, cooler card or the ~$90 savings before picking one of them.
 
Last edited:

BFG10K

Lifer
Aug 14, 2000
22,709
3,007
126
1. Nvidia once said their lossless depth buffer compression was 4:1, but how can it always be 4:1? I was under the impression that lossless compression varies depending upon what's being compressed. Wouldn't 4:1 actually be the maximum ratio for lossless depth buffer compression (or at least the maximum when LMA on the GeForce 3/4(?) first introduced it)?
Lossless compression can’t guarantee a compression level (lossy can, you just chuck away more stuff to make it fit). What you’re looking at is the best case scenario.
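To make that concrete, here's a small Python sketch, just a generic byte compressor (zlib) run over made-up depth data, nothing like the tile-based schemes real GPUs use: a flat wall of constant depth compresses enormously, a smooth ramp compresses somewhat, and incoherent depth values barely compress at all, which is why no lossless scheme can promise a fixed 4:1.

```python
# Illustration only: the achievable lossless ratio depends on the data.
import random
import struct
import zlib

def ratio(depths):
    """Pack 32-bit float depths and report raw_size / compressed_size."""
    raw = b"".join(struct.pack("<f", d) for d in depths)
    return len(raw) / len(zlib.compress(raw, 9))

n = 64 * 64  # one made-up 64x64 tile of depth values

flat_wall = [0.5] * n                             # constant depth
ramp      = [i / n for i in range(n)]             # smooth gradient
noisy     = [random.random() for _ in range(n)]   # incoherent depth

print("flat :", ratio(flat_wall))  # huge ratio
print("ramp :", ratio(ramp))       # modest ratio
print("noise:", ratio(noisy))      # close to 1:1
```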

2. GPUs didn't use to have fast depth clear. Does fast depth clear reduce quality only barely perceptibly, or does it not reduce quality at all? I was thinking it reduced quality in at least a few cases.
It’s lossless. Instead of manually writing zeros across the VRAM to fill the z-buffer each time, the GPU just gets a single command and clears it internally.
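For illustration, here's a toy software model of that idea (the 8×8 tile size and the class are made up; real hardware details differ per vendor and generation): a clear just flips a per-tile flag instead of touching every depth value, and any read of an untouched tile returns the exact clear value, so nothing about the image changes, only the memory traffic.

```python
# Toy model of a "fast" depth clear using per-tile cleared flags.
TILE = 8  # illustrative 8x8 pixel tiles

class FastClearDepthBuffer:
    def __init__(self, width, height, clear_value=1.0):
        self.w, self.h = width, height
        self.clear_value = clear_value
        self.data = [clear_value] * (width * height)
        self.tiles_x = (width + TILE - 1) // TILE
        tiles_y = (height + TILE - 1) // TILE
        self.tile_cleared = [True] * (self.tiles_x * tiles_y)

    def _tile(self, x, y):
        return (y // TILE) * self.tiles_x + (x // TILE)

    def clear(self, value=1.0):
        # "Fast clear": one flag write per tile, no per-pixel VRAM traffic.
        self.clear_value = value
        self.tile_cleared = [True] * len(self.tile_cleared)

    def read(self, x, y):
        if self.tile_cleared[self._tile(x, y)]:
            return self.clear_value          # tile still in its cleared state
        return self.data[y * self.w + x]

    def write(self, x, y, depth):
        t = self._tile(x, y)
        if self.tile_cleared[t]:
            # First touch after a clear: materialize the tile, then mark it dirty.
            x0, y0 = (t % self.tiles_x) * TILE, (t // self.tiles_x) * TILE
            for yy in range(y0, min(y0 + TILE, self.h)):
                for xx in range(x0, min(x0 + TILE, self.w)):
                    self.data[yy * self.w + xx] = self.clear_value
            self.tile_cleared[t] = False
        self.data[y * self.w + x] = depth
```

Because a read from a cleared tile reproduces the clear value exactly, the result is bit-identical to a brute-force clear; it only skips the work.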

I'm at a loss as to what exactly AMD does that makes their z-range appear shorter than Nvidia's.
From our conversation about this last time, it was concluded that you had absolutely no evidence to back your claims other than “I’ve seen it”. It’s best you stop presenting this statement as fact and refer to it as your opinion instead.
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
Lossless compression can’t guarantee a compression level (lossy can, you just chuck away more stuff to make it fit). What you’re looking at is the best case scenario.


It’s lossless. Instead of manually writing zeros across the VRAM to fill the z-buffer each time, the GPU just gets a single command and clears it internally.


From our conversation about this last time, it was concluded that you had absolutely no evidence to back your claims other than “I’ve seen it”. It’s best you stop presenting this statement as fact and refer to it as your opinion instead.
Thanks for answering my questions :) Anyway, AMD probably does it because it can save a shitload of transistors/performance and few people can tell the difference in IQ. MDK2 shows the difference quite well. Play MDK2 on a 5770, then play it on an Nvidia GPU, and you'll see that the colors are not only unnatural on AMD, but the z-range is definitely compressed. It can't be my imagination.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,007
126
Anyway, AMD probably does it because it can save a shitload of transistors/performance and few people can tell the difference in IQ.
I’ve no idea what you're talking about.

MDK2 shows the difference quite well. Play MDK2 on a 5770, then play it on an Nvidia GPU, and you'll see that the colors are not only unnatural on AMD, but the z-range is definitely compressed. It can't be my imagination.
Can you please tell us what you mean by "z-range is definitely compressed"? What are the differences you're seeing, exactly?

And if the colors are different, that's a driver issue, configuration issue, or a game issue. That's not AMD trying to save transistors.

At stock driver settings (i.e. no color adjustments) I’ve never seen any differences between the two vendors' colors, except in rare cases where the game was doing something different for each (e.g. HDR in the original Far Cry).
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
I’ve no idea what you're talking about.


Can you please tell us what you mean by "z-range is definitely compressed"? What are the differences you're seeing, exactly?

And if the colors are different, that's a driver issue, configuration issue, or a game issue. That's not AMD trying to save transistors.

At stock driver settings (i.e. no color adjustments) I’ve never seen any differences between the two vendors' colors, except in rare cases where the game was doing something different for each (e.g. HDR in the original Far Cry).
Nvidia's colors look more natural IMO. It's just like how 3dfx looked different in general from Nvidia and AMD.

AMD does fewer zixels per clock (they have fewer z/stencil units), so that saves transistors.

It's hard to describe "compressed z-range", but there is definitely something AMD does that makes the scene look not as big. If you try MDK2 on each vendor you may or may not notice a difference between the two, but I sure do notice one :)