1. NVIDIA once claimed their lossless depth buffer compression was 4:1, but how can it always be 4:1? I was under the impression that lossless compression ratios vary depending on what's being compressed. Wouldn't 4:1 actually be the maximum ratio for lossless depth buffer compression (or at least the maximum back when LMA on the GeForce 3/4(?) first introduced it)? (See the first sketch below.)
2. GPUs didn't always have fast depth clear. Does fast depth clear reduce quality in a way that's just barely perceivable, or does it not reduce quality at all? I was under the impression it reduced quality in at least a few cases. (See the second sketch below.)
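As a point of reference for question 1, here is a minimal sketch of the kind of tile-based, plane-fit depth compression GPUs are generally believed to use; the tile size, the plane test, and all names are my own assumptions, not NVIDIA's actual scheme. It only illustrates why the ratio has to vary with the scene: a tile covered by a single triangle collapses to a plane equation, while a tile crossed by many triangle edges has to be stored raw, so a figure like 4:1 can only be a best-case or typical number, never a guarantee.

```cpp
// Hypothetical sketch of tile-based lossless depth compression, assuming
// 8x8-pixel tiles and a simple plane-fit scheme. The layout and test are
// assumptions for illustration, not NVIDIA's actual algorithm.
#include <array>
#include <cmath>
#include <cstdio>

constexpr int kTile = 8;                        // 8x8 depth samples per tile
using Tile = std::array<float, kTile * kTile>;  // raw 32-bit depth values

// A tile produced by one triangle is planar: z = a*x + b*y + c.
// If every sample matches that plane we could store 3 floats instead of 64
// (much better than 4:1 for that tile). A tile crossed by many edges fails
// the test and is stored raw (1:1), so the overall ratio depends on the scene.
bool fits_plane(const Tile& t, float eps = 1e-5f) {
    // Derive a, b, c from the corner samples.
    float c = t[0];
    float a = t[1] - t[0];       // dz/dx
    float b = t[kTile] - t[0];   // dz/dy
    for (int y = 0; y < kTile; ++y)
        for (int x = 0; x < kTile; ++x)
            if (std::fabs(t[y * kTile + x] - (a * x + b * y + c)) > eps)
                return false;
    return true;
}

int main() {
    Tile planar{}, noisy{};
    for (int y = 0; y < kTile; ++y)
        for (int x = 0; x < kTile; ++x) {
            planar[y * kTile + x] = 0.5f + 0.01f * x + 0.02f * y;  // one triangle
            noisy[y * kTile + x]  = (x ^ y) * 0.013f;              // many edges
        }
    std::printf("planar tile compressible: %d\n", fits_plane(planar));  // 1
    std::printf("noisy tile compressible:  %d\n", fits_plane(noisy));   // 0
}
```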
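And for question 2, a sketch of why a fast clear can be lossless, assuming a per-tile "cleared" metadata bit; the structure is an assumption, not any vendor's real hardware design. The clear never approximates anything, it just defers writing the samples, so reads still return the exact clear value and image quality is unaffected.

```cpp
// Minimal sketch of a fast depth clear via per-tile metadata (assumed design).
// The flag makes the clear O(1) per tile without touching the samples, and
// reads of a cleared tile return the exact clear value, so nothing is lossy.
#include <array>
#include <cstdio>

struct DepthTile {
    static constexpr int N = 8 * 8;
    std::array<float, N> samples{};
    bool fast_cleared = false;   // metadata bit: tile is cleared, samples untouched
    float clear_value = 1.0f;

    void fast_clear(float value) {   // just flip the flag and remember the value
        fast_cleared = true;
        clear_value = value;
    }
    float read(int i) const {        // reads resolve the flag exactly
        return fast_cleared ? clear_value : samples[i];
    }
    void write(int i, float z) {     // first write after a clear expands the tile
        if (fast_cleared) {
            samples.fill(clear_value);
            fast_cleared = false;
        }
        samples[i] = z;
    }
};

int main() {
    DepthTile t;
    t.fast_clear(1.0f);
    t.write(3, 0.25f);
    std::printf("%f %f\n", t.read(3), t.read(4));  // 0.25 and exactly 1.0
}
```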
I'm at a loss as to what exactly AMD does that makes their z-range appear shorter than NVIDIA's. Maybe they use a hybrid compression scheme, maybe they clear their Z-buffer too aggressively, or maybe it's something else. I read that they planned to make early-Z occlusion culling more aggressive, which is why I'll never buy an AMD GPU again. They've cut way too many corners on IQ (their colors didn't look as good; some were over-vibrant, others washed out, and while I heard they improved their texture filtering, I still don't know if it's as good as NVIDIA's), and they were missing some in-game features that NVIDIA had. Since they didn't work with and pay game developers to include those features (some are actually good, like JC2's water simulation), their products aren't worth buying IMO. That was a really stupid decision, although they may have fixed it going forward. I also heard that their reference designs didn't always use ball-bearing fans, and that the 6870 performs worse than the GTX 560 Ti yet runs hotter.
Anyway, answer whatever you can. The compression questions should be easy for anyone who knows data compression well.
