Is the Radeon so fast in 32 bit color because it uses a 16 bit Z-buffer?

Doomguy

Platinum Member
May 28, 2000
2,389
1
81
HaVoC deserves all the credit; this is an idea he posted in another thread. Doesn't it make sense, given that the performance drop from 16 bit to 32 bit color is so small? Post what you guys think about this.
 

SSP

Lifer
Oct 11, 1999
17,727
0
0
I don't know much about this, but could they be using a 32 bit Z-buffer and filtering it down to 16 bit color? Again, I have no idea what I'm saying here... just a thought.
 

Czar

Lifer
Oct 9, 1999
28,510
0
0
I think it's more that 32 bit shows the card's true power, but in 16 bit it's all drivers, and ATI makes bad drivers.
 

lordneo99

Member
Jul 17, 2000
31
0
0
Very possible. The S3 Savage4 used a 16 bit Z-buffer when doing 32 bit output, and it had similar results: a very small drop in performance. But wouldn't it then also have that crappy dithering in 16 bit?

ATI All-in-Wonder Pro
ATI Rage 128

Experienced both.

Bunk in 16 bit color.
Nice in 32 bit.

Only on Intel though. Black screen on my Athlon; I have to use two-year-old drivers to get it to work on my Athlon. And yes, I've updated.
 

Doomguy

Platinum Member
May 28, 2000
2,389
1
81
Czar: the Radeon just doesn't have enough memory bandwidth to avoid a huge drop in performance when going to 32 bit color. If it does use a 16 bit Z-buffer, then the Radeon is a piece of crap for image quality.
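To put some rough numbers on the bandwidth argument, here's a quick back-of-the-envelope sketch in Python. The resolution, the overdraw factor, and the one-Z-read-plus-one-Z-write traffic model are all just assumptions for illustration, not ATI's figures:

```python
# Rough sketch (illustrative numbers only): per-frame memory traffic at
# 1024x768, assuming one color write plus one Z read and one Z write per
# pixel drawn, with an assumed average overdraw of 2.
PIXELS = 1024 * 768
OVERDRAW = 2.0  # assumed average depth complexity

def traffic_mb(color_bytes, z_bytes):
    """Approximate bytes moved per frame, in megabytes."""
    per_pixel = color_bytes + 2 * z_bytes  # color write + Z read + Z write
    return PIXELS * OVERDRAW * per_pixel / 2**20

print(f"32-bit color, 32-bit Z: {traffic_mb(4, 4):.1f} MB/frame")
print(f"32-bit color, 16-bit Z: {traffic_mb(4, 2):.1f} MB/frame")
print(f"16-bit color, 16-bit Z: {traffic_mb(2, 2):.1f} MB/frame")
```

Under these assumptions, cutting the Z-buffer to 16 bits drops 32 bit traffic from 18 MB to 12 MB per frame, which is why a secret 16 bit Z would flatten the 16-to-32 bit performance gap.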
 

SSP

Lifer
Oct 11, 1999
17,727
0
0
Then with better drivers... the 32 bit performance should increase too.


EDIT - If the picture quality of the Radeon sucked, I'm sure all the reviewers out there would notice it.
 

Doomguy

Platinum Member
May 28, 2000
2,389
1
81
What don't you understand, SSP? We're not talking about the drivers. We're talking about whether the Radeon uses a 16 bit Z-buffer to falsely inflate its 32 bit speed, like the Savage2000 did. With its memory bandwidth there is no way it could have 32 bit performance that fast.
 

SSP

Lifer
Oct 11, 1999
17,727
0
0
Doomguy, I was replying to Czar.

If ATI does use this trick, we'll know soon enough.
 

Doomguy

Platinum Member
May 28, 2000
2,389
1
81
Still, that's not enough to make 32 bit that fast. You can overclock your GF2's memory and it still won't come close to the Radeon.
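For reference, here's how the raw numbers work out under the usual assumption of a 128-bit memory bus on both cards (183 MHz for the Radeon per the review quoted below, the commonly cited 166 MHz for the GTS, and a hypothetical overclock):

```python
# Back-of-envelope raw memory bandwidth; 128-bit bus assumed for both cards.
def ddr_bandwidth_gb(clock_mhz, bus_bits=128):
    """DDR transfers twice per clock; returns GB/s (1 GB = 10**9 bytes)."""
    return clock_mhz * 1e6 * 2 * (bus_bits / 8) / 1e9

print(f"GeForce2 GTS (166 MHz DDR): {ddr_bandwidth_gb(166):.2f} GB/s")
print(f"Radeon       (183 MHz DDR): {ddr_bandwidth_gb(183):.2f} GB/s")
print(f"OC'd GTS     (200 MHz DDR): {ddr_bandwidth_gb(200):.2f} GB/s")
```

Raw bandwidth alone doesn't settle it, though, since HyperZ changes how much effective bandwidth the Radeon gets out of those numbers.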
 

Goi

Diamond Member
Oct 10, 1999
6,771
7
91
There's no conclusive evidence for either argument yet. However, consider the fact that the Radeon does reduce scene overdraw, which eats up memory bandwidth, by employing HyperZ (a lossless Z-buffer compression scheme) and Fast Z Clear (self-explanatory). By saving the memory bandwidth used by the Z-buffer, more memory bandwidth can be applied to rendering 32-bit pixels.

According to SharkyExtreme, who supposedly disabled the HyperZ feature, a performance loss of around 30% was incurred in WinBench 2000.

Now before you guys start bashing Sharky and WinBench 2000 for credibility, consider that this does somewhat make sense, and it should at least account for SOME of the 32 bit performance.
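For anyone wondering what Fast Z Clear buys you, here's a conceptual sketch (not ATI's actual hardware; the tile size and the lazy-resolve scheme are assumptions) of how a per-tile "cleared" flag avoids touching the Z data itself:

```python
# Conceptual sketch of a fast Z clear: instead of writing the clear value
# into every Z-buffer entry, mark each tile "cleared" and only materialize
# the tile's real Z data on the first write. Tile size is an assumption.
TILE = 8 * 8  # assumed 8x8-pixel tiles

class TiledZBuffer:
    def __init__(self, num_tiles, far_z=1.0):
        self.far_z = far_z
        self.cleared = [True] * num_tiles  # one flag per tile
        self.tiles = [None] * num_tiles    # real Z data, filled lazily

    def clear(self):
        # O(num_tiles) flag writes instead of O(num_pixels) Z writes.
        self.cleared = [True] * len(self.cleared)

    def read_z(self, tile, offset):
        if self.cleared[tile]:
            return self.far_z  # no traffic to the Z data itself
        return self.tiles[tile][offset]

    def write_z(self, tile, offset, z):
        if self.cleared[tile]:
            self.tiles[tile] = [self.far_z] * TILE  # materialize on demand
            self.cleared[tile] = False
        self.tiles[tile][offset] = z
```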
 

Fenix793

Golden Member
Jan 17, 2000
1,439
0
76
Does it have more bandwidth? I didn't think so. Anyway, an overclocked GeForce still beats out an overclocked Radeon, especially the ones with 5 ns memory.
 

KarlHungus

Senior member
Nov 16, 1999
638
0
0
"The 183 MHz core and memory speeds provide the Radeon with a raw fill rate of about 1.1 gigatexels per second, and 366 megapixels per second. With a drop of about 100 megatexels per second and 34 megapixels per second from the announced solution, the shipping Radeon cards are also 500 megatexels and 434 megapixels slower than the GeForce 2 GTS.

ATI is betting that this will not matter, once again due to memory bandwidth limitations. ATI claims that even if you?re using 200MHz DDR SDRAM (effectively 400MHz), you?re limited to a 300 megapixels per second fill rate at 32-bit color with a 32-bit Z-Buffer, so adding more pixel pipelines would not?t help them, which is why they focused on having three texture units per pipeline. To see if this is the truth or not, we will have to turn to the benchmarks, which are given later in this review."


Ok, in a perfect world with infinite bandwidth, the fillrate difference between 16 and 32 bit would be minor. A good example of this is at low resolutions, where bandwidth is not being stressed (though neither is fillrate at that point). The odd thing is that the Radeon seems to exhibit the same minor differences at all resolutions. I think it can be explained by the second paragraph.

First take the bandwidth that 200 MHz DDR would provide - 6.4 GB/s. Now take what 183 MHz DDR provides - 5.856 GB/s - and factor in the effect of HyperZ: 7.613 GB/s (5.856 x 1.3). Now, using ATI's formula from above (a 300 Mpixel/s fillrate requires 200 MHz DDR), we can get a rough estimate of how many Mpixels/s the 183 MHz DDR + HyperZ can push. It should be a simple ratio: fillrate/300 = 7.613/6.4. Thus the theoretical fillrate that can be supported is 357 Mpixels/s at 32 bit everything. Judging from this number, we should expect 32 and 16 bit performance to be similar at all resolutions.
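That arithmetic checks out; here it is as a small Python snippet using only the numbers from the post above (the 30% HyperZ gain is SharkyExtreme's figure):

```python
# Reproducing the fillrate estimate from the numbers quoted above.
HYPERZ_GAIN = 1.3                  # ~30% effective-bandwidth gain (SharkyExtreme)
ref_bw   = 200e6 * 2 * 16 / 1e9    # 200 MHz DDR, 128-bit bus -> 6.4 GB/s
ref_fill = 300.0                   # Mpixel/s that ATI says 6.4 GB/s supports

radeon_bw = 183e6 * 2 * 16 / 1e9   # 183 MHz DDR -> 5.856 GB/s
effective = radeon_bw * HYPERZ_GAIN  # -> ~7.613 GB/s

print(f"Effective bandwidth: {effective:.3f} GB/s")
print(f"Supported fillrate:  {ref_fill * effective / ref_bw:.0f} Mpixels/s "
      f"at 32 bit everything")
```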
 

lordneo99

Member
Jul 17, 2000
31
0
0
Yup, the ATI card uses 5.5 ns DDR, 64 megs of it... very fast. Yes, the Gainward/Cardex was revised; they are currently using 6 ns.
Yes, it is faster in 32 bit. Yes, there is a banding problem at 16 bit; even Anand said so.
Yes, HyperZ does work, but does it also cause graphical glitches in games that might not like it?

Still, the only reason I wouldn't get the ATI card is the poor drivers. They are obviously spending more $$ on hardware than on driver development, trying to keep up/surpass.

And well, only 2 of my games even run in 32 bit. Hell, I'm still playing Privateer 1 (remember that DOS game?) from time to time.

For me... a poor guy... a GeForce MX overclocked to ~200/~200 is great. Hell, I won't spend more than $250 on a CPU. I'm not spending $300 on something that will be obsolete in 6 months, like the GeForce DDR (another 6 months to live till NV20).

Hell, what is the price ATI is planning to charge for this card?
$399 for the 64 meg version... no way.
$279 for the 32 meg version... the GeForce2 GTS is cheaper.

I'm one of those value (cheap) people.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Considering this is what they did with the Rage 128, and the drivers supposedly haven't changed all that much since, it doesn't sound too unrealistic.
But I say, why don't we all simply wait a little? Some site is bound to find out.
 

Robor

Elite Member
Oct 9, 1999
16,979
0
76
Why is everyone saying the 64MB DDR Radeon is $399? I've seen it posted for $285 from Egghead. In fact, I picked one up myself today for $289 plus shipping. Pricey, but for all this card offers I think it's something that will last a while.

Rob
 

mrbios

Senior member
Jul 13, 2000
331
0
0
I don't think the Radeon is using a 16 bit Z-buffer. I think that HyperZ thing really does help, and in all the driver screenshots I've seen, it's pretty much just like Nvidia's supposed "32-bit Z-buffer", which it really is not. Nvidia uses a 24-bit Z-buffer plus an 8-bit stencil buffer, and the ATI drivers let you pick between a 16-bit and a 24-bit Z-buffer, plus an 8-bit stencil buffer. That's my understanding of this, but I could be wrong.
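A quick footprint comparison of the formats being discussed. The point is that a 24-bit Z plus an 8-bit stencil pack into one 32-bit word, which is why a marketed "32-bit Z-buffer" usually means 24 bits of actual depth precision (the resolution here is just for illustration):

```python
# Z-buffer memory footprint for the depth formats mentioned above.
formats = {
    "16-bit Z":                 2,  # bytes per pixel
    "24-bit Z + 8-bit stencil": 4,  # packed into one 32-bit word
}
for name, bpp in formats.items():
    mb = 1024 * 768 * bpp / 2**20
    print(f"{name}: {bpp} B/pixel -> {mb:.1f} MB at 1024x768")
```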

Russell "Mr.Bios" Sampson
 

HaVoC

Platinum Member
Oct 10, 1999
2,223
0
0
Thanks for the props, Doomguy. But my suspicions seem to have been laid to rest, thanks to some sound reasoning courtesy of Goi and KarlHungus.

The Hyper-Z technology, if it is really working here, is some impressive stuff. We will no doubt see more tricks to optimize available bandwidth because we have hit the memory technology wall.

Again, as I always say, I hope that the emphasis shifts to triangle counts and model realism rather than 1000FPS at 1600x1200x32.
 

Goi

Diamond Member
Oct 10, 1999
6,771
7
91
HyperZ does seem to be doing its job, but it's only the first step in reducing memory bandwidth. There's pixel overdraw, and then there's polygon/triangle overdraw. I don't know if HyperZ gets rid of, or at least reduces, one or the other, but I don't think it's both. Once we've gotten rid of both, we'll have plenty of memory bandwidth left to render 32/64-bit pixels in the future (for a while, anyway).
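To make the overdraw point concrete, here's a tiny sketch with assumed numbers showing how average depth complexity multiplies color-write traffic, and what eliminating overdraw would free up:

```python
# Illustrative only: color-write traffic per frame as a function of
# average overdraw (depth complexity). Resolution and factors are assumed.
PIXELS = 1024 * 768
COLOR_BYTES = 4  # 32-bit color

for overdraw in (1.0, 2.0, 3.0):
    mb = PIXELS * overdraw * COLOR_BYTES / 2**20
    print(f"overdraw {overdraw:.0f}x: {mb:.1f} MB of color writes per frame")
```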