Help me understand 128-bit floating-point color precision...

Originally posted by: ViRGE
My understanding is that the framebuffer is stored at the same depth as all calculations are done (so if you have a 16-bit buffer, it's all 16-bit), hence our confusion. Textures can be independent, but the buffer is locked to the rendering depth.
But why would that be the case? The only time you need to access the framebuffer is when doing multipass or writing out final color values. Why should the framebuffer care what precision the pixels are calculated at?
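To make the question concrete, here is a minimal C sketch (not modelled on any particular card's pipeline) of the separation being argued for: all the per-pixel maths stays in floating point, and the 16-bit framebuffer only ever sees the final rounded value.

```c
/* Minimal sketch (not any particular card's pipeline): per-pixel colour
 * maths is kept in floating point and only quantised when the result is
 * written to a 16-bit RGB565 framebuffer, so the framebuffer depth never
 * constrains the precision of the intermediate calculations. */
#include <stdint.h>
#include <stdio.h>

static float clamp01(float c) { return c < 0.0f ? 0.0f : (c > 1.0f ? 1.0f : c); }

/* Quantise a [0,1] float colour to a packed 5-6-5 framebuffer word. */
static uint16_t pack_rgb565(float r, float g, float b)
{
    uint16_t ri = (uint16_t)(clamp01(r) * 31.0f + 0.5f);
    uint16_t gi = (uint16_t)(clamp01(g) * 63.0f + 0.5f);
    uint16_t bi = (uint16_t)(clamp01(b) * 31.0f + 0.5f);
    return (uint16_t)((ri << 11) | (gi << 5) | bi);
}

int main(void)
{
    /* Texture sample, lighting and blending all happen in float... */
    float r = 0.20f * 0.8f + 0.05f;   /* illustrative lighting maths */
    float g = 0.40f * 0.8f + 0.05f;
    float b = 0.60f * 0.8f + 0.05f;

    /* ...and the 16-bit framebuffer only sees the final rounded value. */
    printf("framebuffer word: 0x%04x\n", pack_rgb565(r, g, b));
    return 0;
}
```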
 

ViRGE

Elite Member, Moderator Emeritus
You hit the nail on the head; because of multipass. Besides, it makes things simpler for the programmer, and faster for the user.
 
Originally posted by: ViRGE
You hit the nail on the head; because of multipass.
How often does multipass become a problem in most games? With maximum shader lengths getting longer and longer, multipass will become less of an issue.
Besides, it makes things simpler for the programmer, and faster for the user.
Explicit type conversion isn't exactly complicated programming. And faster for the user, how? :confused: How would rendering everything in 32-bit be faster for the user?
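For the multipass case the coupling does matter, though. A rough C sketch (numbers purely illustrative) of why forcing every pass through a 16-bit framebuffer readback hurts, compared with keeping the working value at higher precision and quantising once:

```c
/* Rough sketch of the multipass problem (numbers purely illustrative):
 * if each pass must round its result into a 16-bit framebuffer before the
 * next pass reads it back, small per-pass contributions can be rounded
 * away entirely; keeping the working value in float and quantising once
 * at the end preserves them. */
#include <stdio.h>

static float quantise5(float c)              /* round to a 5-bit channel */
{
    int i = (int)(c * 31.0f + 0.5f);
    return i / 31.0f;
}

int main(void)
{
    float fb = 0.5f, precise = 0.5f;
    for (int pass = 0; pass < 8; pass++) {   /* 8 additive blend passes */
        fb      = quantise5(fb + 0.013f);    /* readback forces rounding */
        precise = precise + 0.013f;          /* stays at full precision */
    }
    printf("quantised every pass: %.4f\n", fb);
    printf("quantised once:       %.4f\n", quantise5(precise));
    return 0;
}
```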
 

ViRGE

Elite Member, Moderator Emeritus
Faster in that in 16bit mode, it's really all 16bits.
 
Originally posted by: ViRGE
Faster in that in 16bit mode, it's really all 16bits.
The Kyro I/II cards render at 32-bit internal regardless of the external framebuffer precision. I'm trying to find out whether you can manually specify a different internal precision within the API.
 

BFG10K

Lifer
Some cards don't perform 2x faster in 16-bit anymore, only 30% faster or so; however, people playing in 16-bit normally have slightly older cards.
I don't care about older cards, nor am I interested in the above comment which seems largely irrelevant to the statement you quoted.

What possible benefit can a "postfilter" provide to INCREASE the image quality of an already heavily dithered image?
Do you know how the postfilter works? Because if you did you wouldn't be asking such a question. Like I said before, it's very similar to FSAA except it doesn't change the size of the image and it uses a slightly different algorithm to achieve its colour sampling.

Rendering in 32 bit colour and then sampling it down at the end is utter lunacy and no-one in their right mind would ever do this except maybe in a compatibility situation where the target display can't handle that much colour precision. If you've already used the resources to render in 32 bit colour you haven't really achieved any savings at all and you may as well just output the unaltered image.

3dfx's scheme was so popular because it provided 22 bit colour from 16 bit colour for free. There was no performance hit from using the postfilter.
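For anyone wondering what that filter might look like in code, here is a rough sketch of the general idea as described in this thread; the real 3dfx kernel, neighbourhood size and threshold are not documented here, so those details below are assumptions.

```c
/* Rough sketch of a 3dfx-style "22-bit" postfilter as described in this
 * thread; the actual hardware kernel and thresholds are not documented
 * here, so the 2x2 neighbourhood and the edge threshold are assumptions.
 * Idea: at scanout, average a dithered 16-bit pixel with nearby pixels
 * that are close in value (dither noise), but skip neighbours that differ
 * a lot (real edges), recovering intermediate shades without blurring. */
#include <stdlib.h>

int filter_channel(const unsigned char *ch, int w, int h,
                   int x, int y, int threshold)
{
    int centre = ch[y * w + x];
    int sum = centre, count = 1;
    const int dx[3] = { 1, 0, 1 }, dy[3] = { 0, 1, 1 };   /* 2x2 block */

    for (int i = 0; i < 3; i++) {
        int nx = x + dx[i], ny = y + dy[i];
        if (nx >= w || ny >= h)
            continue;
        int n = ch[ny * w + nx];
        if (abs(n - centre) <= threshold) {  /* dither noise, not an edge */
            sum += n;
            count++;
        }
    }
    return sum / count;   /* averaged value has more effective precision */
}
```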

No 3D card has EVER offered 8bit only colour, it's ALWAYS been 16bit, the Voodoo 1 was 16bit, the Verite 1 was 16 bit.
Can you read? Where did I say 8-bit only? I said 8-bit as an option.

I can assure you for the next 3+ generations of cards 16bit WILL be an option,
It might be an option for purely compatibility reasons but most of the internal rendering will probably be done at a higher precision anyway. It's much better and cheaper to optimise the hardware to do higher precision and then just scale it down rather than develop full pathways for all colour depths and try to optimise them all.
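The "scale it down" step is also where the familiar 16-bit dither pattern comes from. A generic sketch (not any specific card's hardware) of reducing a higher-precision 8-bit channel to 5 bits with ordered dithering:

```c
/* Generic sketch (not any particular card's hardware) of scaling a
 * higher-precision 8-bit channel down to a 5-bit channel with ordered
 * dithering: the bits that would be lost are traded for spatial noise,
 * which is the dither pattern people notice in 16-bit modes. */
#include <stdint.h>

uint8_t to5bit_dithered(uint8_t value, int x, int y)
{
    /* 2x2 Bayer threshold matrix, scaled to the 3 bits being discarded. */
    static const int bayer[2][2] = { { 0, 4 }, { 6, 2 } };
    int v = value + bayer[y & 1][x & 1];   /* add position-dependent bias */
    if (v > 255)
        v = 255;
    return (uint8_t)(v >> 3);              /* keep the top 5 bits (0-31) */
}
```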

Totally true, but it will still be an option - and if it gives even 15% more of an increase in speed - some people will use it (like me) if they can't afford a top end card.
At some stage there won't be any performance difference at all and like I said before, it'll just be there for compatibility reasons. Even now some cards are faster in 32 bit mode than they are in 16 bit mode because their HSR schemes stop working in 16 bit mode, plus the crossbar controllers aren't being utilised to their full capability.

*I* don't notice it frequently, and this particular topic is OPINION so you can't really argue it - there's no fact to be found here.
Carmack and Sweeney are demanding 128 bit for good reasons, reasons which I agree with 100%. Whether you can see a difference or not is largely irrelevant.

Sure don't, ain't gonna stop people using it though.
It is when they see it in 128 bit colour.

The Kyro I/II cards render at 32-bit internal regardless of the external framebuffer precision.
True, though there still is a small performance difference between 16 bit and 32 bit mode because it will load 16 bit textures when 16 bit colour is requested.
 

AbRASiON

Senior member
Originally posted by: BFG10K
Some cards don't perform 2x faster in 16-bit anymore, only 30% faster or so; however, people playing in 16-bit normally have slightly older cards.
I don't care about older cards, nor am I interested in the above comment which seems largely irrelevant to the statement you quoted.

Seems completely relevant to me; this entire thread has been discussing the viable use of 16-bit by users since my first comment in it.





What possible benefit can a "postfilter" provide to INCREASE the image quality of an already heavily dithered image?
Do you know how the postfilter works? Because if you did you wouldn't be asking such a question. Like I said before, it's very similar to FSAA except it doesn't change the size of the image and it uses a slightly different algorithm to achieve its colour sampling.

You haven't answered the question - how exactly is a filter going to get something from nothing? - sounds like a waste of gpu time to me.
See my original analogy.
You can re-encode a 128kbit mp3 into 384 all you like, but your source was still a 128kbit one, there's little that can be done to increase the quality.


Rendering in 32 bit colour and then sampling it down at the end is utter lunacy and no-one in their right mind would ever do this except maybe in a compatibility situation where the target display can't handle that much colour precision. If you've already used the resources to render in 32 bit colour you haven't really achieved any savings at all and you may as well just output the unaltered image.

Hence rendering at 16bit internally for speed purposes.
Plus as someone mentioned only posts ago, there can be speed benefits to doing this.


3dfx's scheme was so popular because it provided 22 bit colour from 16 bit colour for free. There was no performance hit from using the postfilter.


No 3D card has EVER offered 8bit only colour, it's ALWAYS been 16bit, the Voodoo 1 was 16bit, the Verite 1 was 16 bit.
Can you read? Where did I say 8-bit only? I said 8-bit as an option.

Can you read?
Name ANY 3d card that offered 8 bit output.....


I can assure you for the next 3+ generations of cards 16bit WILL be an option,
It might be an option for purely compatibility reasons but most of the internal rendering will probably be done at a higher precision anyway.

So optional, not 100% 128-bit only... as I originally said, for the next 3+ gens 16-bit will be an option.
Perhaps by the 3rd gen, in 18-24 months, games will have THAT many passes that 16-bit really is truly horrible, but right now it's NOT "truly horrible" to the point of EVERYONE having to switch to 32-bit.

I think that 32-bit and 16-bit will still be options, whether that means feeding it 16/32-bit textures and processing in 128-bit internally, or feeding it 16/32-bit textures and processing fully in 32/16-bit.



It's much better and cheaper to optimise the hardware to do higher precision and then just scale it down rather than develop full pathways for all colour depths and try to optimise them all.

sounds logical to me.



Totally true, but it will still be an option - and if it gives even 15% more of an increase in speed - some people will use it (like me) if they can't afford a top end card.
At some stage there won't be any performance difference at all and like I said before, it'll just be there for compatibility reasons. Even now some cards are faster in 32 bit mode than they are in 16 bit mode because their HSR schemes stop working in 16 bit mode, plus the crossbar controllers aren't being utilised to their full capability.

I've never seen a card which is faster in 16bit colour than 32bit colour.
I'd much rather have full detail 16bit than most detail 32bit - ie: some users may well turn all the stuff "off" and stay in 32bit, whereas I would prefer to keep it on and drop to 16.




*I* don't notice it frequently, and this particular topic is OPINION so you can't really argue it - there's no fact to be found here.
Carmack and Sweeney are demanding 128 bit for good reasons, reasons which I agree with 100%. Whether you can see a difference or not is largely irrelevant.

It's completely bloody relevant, if I get more frames from 16bit, I'll switch to it- whether or not they petition manu's to include it or not.
Re-read what you just quoted... note the word in capitals... it starts with o and ends in pinion.
Therefore 16-bit is still a viable rendering method - whether you or Carmack like it or not, some people will use it.
(example) I purchase a R9700 now.
12months time Sof 3 comes out using the D3 engine and 10x more detail than ID put in to Doom 3, it runs ass slow in 128bit on the card, ass slow in 32bit on the card and just runs ok in 16bit.
I'll damn well run it in 16bit until I can afford a new card.




Sure don't, ain't gonna stop people using it though.
It is when they see it in 128 bit colour.
No it's not - as I said before this is O.P.I.N.I.O.N hence if *I* feel I'm getting a smoother experience in 16bit I'll damn well do it.
P.S. I'm not the only one who doesn't care about 32/16-bit differences; half the people at the LANs I go to with MX cards whine about low frames - I drop them to 16-bit and they're like "wow, smooth" and don't notice any image quality differences.
Sure, future games might make the dithering in smoke and fog more and more noticeable, to the point where these people do notice it - then they might start using 32-bit instead of 16!



Like I said it's opinion, you can't dictate what the users can / can not notice - there's always going to be someone out there who doesn't notice or care.
(a good analogy of this would be users who don't notice 60Hz flicker on monitors)

The Kyro I/II cards render at 32-bit internal regardless of the external framebuffer precision.
True, though there still is a small performance difference between 16 bit and 32 bit mode because it will load 16 bit textures when 16 bit colour is requested.

 

BFG10K

Lifer
Seems completely relevant to me; this entire thread has been discussing the viable use of 16-bit by users since my first comment in it.
This thread was discussing why we need 128 bit colour then you stepped in and started saying that you don't need anything more than 16 bit mode. We do need it, regardless of what you think you can or can't see.

32 bit mode is not enough, especially if you want cinematic quality rendering.

You haven't answered the question - how exactly is a filter going to get something from nothing?
How does FSAA get something from nothing? It either makes the source image bigger, colour samples, then scales down, or it samples the colour values on a rotated grid.

The postfilter uses pixel sampling in much the same way except it doesn't change the size of the image, nor does it use rotational sampling. And while FSAA algorithms generally specifically target edges to smooth them over, 3dfx's algorithm specifically avoided edges to keep the image sharp and to avoid blurring.
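As a point of comparison, here is a minimal sketch of the "render bigger, colour sample, scale down" form of FSAA mentioned above (one colour channel only, ordered grid; rotated-grid schemes just move the sub-sample positions):

```c
/* Minimal sketch of ordered-grid supersampling FSAA as described above:
 * the scene is rendered at 2x width and height, then each 2x2 block of
 * sub-samples is averaged into one output pixel. One colour channel is
 * shown for brevity; rotated-grid schemes only change where the
 * sub-sample positions sit. */
#include <stdint.h>

void downsample_2x2(const uint8_t *src, int src_w, int src_h, uint8_t *dst)
{
    int dst_w = src_w / 2, dst_h = src_h / 2;
    for (int y = 0; y < dst_h; y++) {
        for (int x = 0; x < dst_w; x++) {
            int s = src[(2 * y)     * src_w + 2 * x]
                  + src[(2 * y)     * src_w + 2 * x + 1]
                  + src[(2 * y + 1) * src_w + 2 * x]
                  + src[(2 * y + 1) * src_w + 2 * x + 1];
            dst[y * dst_w + x] = (uint8_t)(s / 4);   /* box filter */
        }
    }
}
```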

You can re-encode a 128kbit mp3 into 384 all you like, but your source was still a 128kbit one, there's little that can be done to increase the quality.
That example is completely worthless in this situation.

Hence rendering at 16bit internally for speed purposes.
Uh yeah, 3dfx are rendering in 16 bit internally. That's exactly what I'm saying, but you keep going on about how they're doing it in 32 bit mode. Your scenario is utterly preposterous.

Name ANY 3d card that offered 8 bit output.....
Some of the very old cards (circa Voodoo1) could run GLQuake in 8 bit mode though they were still using 16 bit mode internally. This is exactly what I'm saying will eventually happen to 16 bit and 32 bit modes; everything will be done in 128 bit mode but 16 bit and 32 bit mode will just be there for compatibility reasons.
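For reference, an "8-bit mode" in that era generally meant palettised colour. An illustration (the palette contents and resolution here are made up for the example) of what the framebuffer actually held:

```c
/* Illustration of 8-bit palettised colour from the GLQuake era: the
 * framebuffer holds one byte per pixel, which indexes a 256-entry palette
 * of real RGB colours; the palette is the only place full colour exists.
 * Resolution and palette contents are made up for the example. */
#include <stdint.h>

typedef struct { uint8_t r, g, b; } PaletteEntry;

typedef struct {
    PaletteEntry palette[256];       /* 256 colours the whole screen shares */
    uint8_t      pixels[320 * 200];  /* one palette index per pixel */
} PalettisedFrame;

PaletteEntry lookup(const PalettisedFrame *f, int x, int y)
{
    return f->palette[f->pixels[y * 320 + x]];
}
```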

I've never seen a card which is faster in 16bit colour than 32bit colour.
It's possible on most GF3/GF4 cards to come up with such a scenario. I'm not sure about the Radeon series because I believe Hyper-Z still keeps working in 16 bit mode.

It's completely bloody relevant, if I get more frames from 16bit, I'll switch to it- whether or not they petition manu's to include it or not.
It'll still be there for compatibility reasons, just expect the performance gap between 16/32 bit mode and 128 bit mode to keep getting smaller and smaller until it's worthless to use the smaller bit depths.

A great example of this is the Kyro based series. Sure, you can run under 16 bit mode if you like but the performance increase is so tiny that no-one would bother. Everything is done in 32 bit mode internally on the tile and the only small gain you get is from using 16 bit textures instead of 32 bit textures.
 

AbRASiON

Senior member
Originally posted by: BFG10K
Seems completely relevant to me; this entire thread has been discussing the viable use of 16-bit by users since my first comment in it.
This thread was discussing why we need 128 bit colour then you stepped in and started saying that you don't need anything more than 16 bit mode. We do need it, regardless of what you think you can or can't see.


My original post QUOTED with relevant sections in bold........


16 bits looks fine to me in most games,.................

but on occasion I can see dithering,.....

I believe 128-bit precision - whether that's 128 bits per channel or 32 per channel or however it's done, maximum quality as it's being processed - stops dithering internally completely. The final output, though, should only be 32 bits max - you just won't see any difference from 32-bit to 48/64/128-bit (external image).
HOWEVER a properly done SIXTEEN BIT external image can still look damned good too,.. although most disagree.

The final output could be 22bit like 3dfx pushed or even 16bit and it would look fine.

the problem with 16-bit now is that everything's optimised for 32-bit, so it makes 16-bit look worse than it is.


I'd much rather have 70fps in 16bit than 35 in 32bit - but it's a long huge argument - if I was running at 75 fps 32bit and 95 in 16 i'd take 32
It depends if you have a budget card or what - for example I play War 3 in 16bit 1024 on a Radeon ORIGINAL and it's bloody smooth - I was impressed


Still not getting the gist of my comment about O.P.I.N.I.O.N are you? - you're new at this "internet debating thingmo!" aren't you?



32 bit mode is not enough, especially if you want cinematic quality rendering.

Don't throw round bull$#!t marketing words from Nvidia's marketing grunts please, I feel queasy.
If you mean "32 bit mode is not enough, especially if you want to exceed 20 passes per scene" then I can understand what you're saying, but "cinematic quality rendering" please..............




You haven't answered the question - how exactly is a filter going to get something from nothing?
How does FSAA get something from nothing? It either makes the source image bigger, colour samples, then scales down, or it samples the colour values on a rotated grid.

The postfilter uses pixel sampling in much the same way except it doesn't change the size of the image, nor does it use rotational sampling. And while FSAA algorithms generally specifically target edges to smooth them over, 3dfx's algorithm specifically avoided edges to keep the image sharp and to avoid blurring.


Seems like an incredible waste of cycles to get a dithered down image to look actually better - they may as well have rendered in 32bit in the first place.
This certainly fascinates me as it's not logical.
Also FSAA may give you something from nothing but it also blurs the image like crazy too (see textures with fine writing on them)
I've never seen this postfilter in action but I'll assume it actually works.






You can re-encode a 128kbit mp3 into 384 all you like, but your source was still a 128kbit one, there's little that can be done to increase the quality.
That example is completely worthless in this situation.


It's not worthless whatsoever; you are attempting to improve the quality of something without increasing the output size or re-encoding, just running a "filter" over it in an attempt to get more out of it.



Hence rendering at 16bit internally for speed purposes.
Uh yeah, 3dfx are rendering in 16 bit internally. That's exactly what I'm saying, but you keep going on about how they're doing it in 32 bit mode. Your scenario is utterly preposterous.

I was under the impression they actually rendered the 16-bit textures with 32 bits of space to avoid dithering, which roughly achieved 22-bit mode - obviously I'm incorrect.
However I didn't out-and-out say they were rendering entirely in 32-bit mode.



Name ANY 3d card that offered 8 bit output.....
Some of the very old cards (circa Voodoo1) could run GLQuake in 8 bit mode though they were still using 16 bit mode internally. This is exactly what I'm saying will eventually happen to 16 bit and 32 bit modes; everything will be done in 128 bit mode but 16 bit and 32 bit mode will just be there for compatibility reasons.

I was not aware of that, very interesting - you may well be correct then - although I'd like to see more information on whether they genuinely rendered 16-bit internally or 8-bit.



I've never seen a card which is faster in 16bit colour than 32bit colour.
It's possible on most GF3/GF4 cards to come up with such a scenario. I'm not sure about the Radeon series because I believe Hyper-Z still keeps working in 16 bit mode.

Great it's possible to come up with such a scenario, but if we get back to the real world and use maximum bandwidth of the card (b/w limited scene / game) then there's no disputing that 16bit can and will use less bandwidth which will ensure crappier looking (as I've been happy to admit all along) but ultimately faster images - people make the trade off - their choice.
(and again, many many people simply don't notice it.......)


It's completely bloody relevant, if I get more frames from 16bit, I'll switch to it- whether or not they petition manu's to include it or not.
It'll still be there for compatibility reasons, just expect the performance gap between 16/32 bit mode and 128 bit mode to keep getting smaller and smaller until it's worthless to use the smaller bit depths.

and when this occurs I'll go to 32bit mode etc.
I've never said 16-bit is the be-all and end-all of rendering; I was pointing out that something as lowly as 16-bit rendering can, to SOME eyes, still be sufficient and achieve higher frames.
NOTE: you didn't quote much of my previous post, it's missing many comments.......


 

ProviaFan

Lifer
Originally posted by: AbRASiON
Originally posted by: BFG10K
Seems completely relevant to me; this entire thread has been discussing the viable use of 16-bit by users since my first comment in it.
This thread was discussing why we need 128 bit colour then you stepped in and started saying that you don't need anything more than 16 bit mode. We do need it, regardless of what you think you can or can't see.
My original post QUOTED with relevant sections in bold........
16 bits looks fine to me in most games,.................

but on occasion I can see dithering,.....

I believe 128-bit precision - whether that's 128 bits per channel or 32 per channel or however it's done, maximum quality as it's being processed - stops dithering internally completely. The final output, though, should only be 32 bits max - you just won't see any difference from 32-bit to 48/64/128-bit (external image).
HOWEVER a properly done SIXTEEN BIT external image can still look damned good too,.. although most disagree.

The final output could be 22bit like 3dfx pushed or even 16bit and it would look fine.

the problem with 16-bit now is that everything's optimised for 32-bit, so it makes 16-bit look worse than it is.

I'd much rather have 70fps in 16bit than 35 in 32bit - but it's a long huge argument - if I was running at 75 fps 32bit and 95 in 16 i'd take 32
It depends if you have a budget card or what - for example I play War 3 in 16bit 1024 on a Radeon ORIGINAL and it's bloody smooth - I was impressed
Still not getting the gist of my comment about O.P.I.N.I.O.N are you? - you're new at this "internet debating thingmo!" aren't you?
32 bit mode is not enough, especially if you want cinematic quality rendering.
Don't throw round bull$#!t marketing words from Nvidia's marketing grunts please, I feel queasy.
If you mean "32 bit mode is not enough, especially if you want to exceed 20 passes per scene" then I can understand what you're saying, but "cinematic quality rendering" please..............

You haven't answered the question - how exactly is a filter going to get something from nothing?
How does FSAA get something from nothing? It either makes the source image bigger, colour samples, then scales down, or it samples the colour values on a rotated grid.

The postfilter uses pixel sampling in much the same way except it doesn't change the size of the image, nor does it use rotational sampling. And while FSAA algorithms generally specifically target edges to smooth them over, 3dfx's algorithm specifically avoided edges to keep the image sharp and to avoid blurring.
Seems like an incredible waste of cycles to get a dithered down image to look actually better - they may as well have rendered in 32bit in the first place.
This certainly fascinates me as it's not logical.
Also FSAA may give you something from nothing but it also blurs the image like crazy too (see textures with fine writing on them)
I've never seen this postfilter in action but I'll assume it actually works.

You can re-encode a 128kbit mp3 into 384 all you like, but your source was still a 128kbit one, there's little that can be done to increase the quality.
That example is completely worthless in this situation.
It's not worthless whatsoever; you are attempting to improve the quality of something without increasing the output size or re-encoding, just running a "filter" over it in an attempt to get more out of it.
Re-encoding an MP3 at a higher bitrate is absolutely dumb and purposeless. It is lossy and reduces apparent quality. 3dfx's algorithm (like the "PhotoEnhance" and other algorithms used by inkjet printers) is somewhat lossy (AFAIK), but it increases apparent quality. If you don't see the difference, then arguing any more on this point is worthless.
Hence rendering at 16bit internally for speed purposes.
Uh yeah, 3dfx are rendering in 16 bit internally. That's exactly what I'm saying, but you keep going on about how they're doing it in 32 bit mode. Your scenario is utterly preposterous.
I was under the impression they actually rendered the 16-bit textures with 32 bits of space to avoid dithering, which roughly achieved 22-bit mode - obviously I'm incorrect.
However I didn't out-and-out say they were rendering entirely in 32-bit mode.
Name ANY 3d card that offered 8 bit output.....
Some of the very old cards (circa Voodoo1) could run GLQuake in 8 bit mode though they were still using 16 bit mode internally. This is exactly what I'm saying will eventually happen to 16 bit and 32 bit modes; everything will be done in 128 bit mode but 16 bit and 32 bit mode will just be there for compatibility reasons.
I was not aware of that, very interesting - you may well be correct then - although I'd like to see more information on whether they genuinely rendered 16-bit internally or 8-bit.
I've never seen a card which is faster in 16bit colour than 32bit colour.
It's possible on most GF3/GF4 cards to come up with such a scenario. I'm not sure about the Radeon series because I believe Hyper-Z still keeps working in 16 bit mode.
Great it's possible to come up with such a scenario, but if we get back to the real world and use maximum bandwidth of the card (b/w limited scene / game) then there's no disputing that 16bit can and will use less bandwidth which will ensure crappier looking (as I've been happy to admit all along) but ultimately faster images - people make the trade off - their choice.
(and again, many many people simply don't notice it.......)
It's completely bloody relevant, if I get more frames from 16bit, I'll switch to it- whether or not they petition manu's to include it or not.
It'll still be there for compatibility reasons, just expect the performance gap between 16/32 bit mode and 128 bit mode to keep getting smaller and smaller until it's worthless to use the smaller bit depths.
and when this occurs I'll go to 32bit mode etc.
I've never said 16-bit is the be-all and end-all of rendering; I was pointing out that something as lowly as 16-bit rendering can, to SOME eyes, still be sufficient and achieve higher frames.
NOTE: you didn't quote much of my previous post, it's missing many comments.......
If you're using a TNT2 video card, then of course you'll need to run 16-bit to get decent frame rates, but if you have a relatively modern video card (Radeon 8500 / 9700 or GeForce 3 / 4), you will have frame rates up the wazoo when rendering in 32-bit color, so there is absolutely no reason whatsoever to render in 16-bit color.
 

BFG10K

Lifer
Still not getting the gist of my comment about O.P.I.N.I.O.N are you? - you're new at this "internet debating thingmo!" aren't you?
There are a few errors in your comments but I don't have time to wade through that whole quote.

We do need 128 bit colour/precision so let's just leave it at that.

Don't throw round bull$#!t marketing words from Nvidia's marketing grunts please, I feel queasy.
Bullsh*t? I think not.

Final Fantasy looks pretty darned good to me and you can bet that they aren't using plain old 32 bit colour for it. In fact, most 3D films like that are using FP16 (64 bit mode).
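For context on what "FP16 (64 bit mode)" buys over an 8-bit fixed-point channel, here is a small sketch of the 1-5-10 half-float layout (positive normal values only; zero, denormal, infinity and NaN handling are omitted for brevity):

```c
/* Sketch of the FP16 ("half") channel format behind 64-bit colour:
 * 1 sign bit, 5 exponent bits, 10 mantissa bits. This encoder handles
 * positive normal numbers only (no zero/denormal/inf/NaN handling). */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

static uint16_t float_to_half_pos(float f)
{
    int e;
    float m = frexpf(f, &e);          /* f = m * 2^e, m in [0.5, 1) */
    int half_exp = e - 1 + 15;        /* rebias to half's excess-15 */
    int mant = (int)((m * 2.0f - 1.0f) * 1024.0f + 0.5f);   /* 10 bits */
    if (mant == 1024) { mant = 0; half_exp++; }             /* rounding carry */
    return (uint16_t)((half_exp << 10) | mant);
}

int main(void)
{
    printf("1.0   -> 0x%04x\n", float_to_half_pos(1.0f));    /* 0x3c00 */
    printf("0.001 -> 0x%04x\n", float_to_half_pos(0.001f));  /* dark value keeps ~10-bit relative precision */
    printf("8.0   -> 0x%04x\n", float_to_half_pos(8.0f));    /* values above 1.0 are representable */
    return 0;
}
```

The exponent is the point: dark values keep roughly the same relative precision as bright ones, and results above 1.0 don't have to clamp, which is what that kind of film-style rendering relies on.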

Seems like an incredible waste of cycles to get a dithered down image to look actually better - they may as well have rendered in 32bit in the first place.
It doesn't use any core cycles at all, though there is a small delay while the postfilter processes the image.

This certainly fascinates me as it's not logical.
Yes it is - you get the performance hit and resource requirements of 16 bit rendering while your final output is close to 22 bit quality. It seems perfectly logical and rational to me.

Also FSAA may give you something from nothing but it also blurs the image like crazy too (see textures with fine writing on them)
Agreed and the postfilter isn't perfect either. Images do lose sharpness and definition though not as much as with FSAA because the algorithm works differently. But banding effects are significantly reduced and alpha blending is much smoother. I never did say that the postfilter doesn't come with a price.

I've never seen this postfilter in action but I'll assume it actually works.
Of course it does, and contrary to popular belief it's not just a marketing gimmick. In fact, it sounds like something that would be right up your alley since you seem so keen on using 16 bit colour. I'm surprised that you've never explored that avenue.

It's not worthless whatsoever; you are attempting to improve the quality of something without increasing the output size or re-encoding, just running a "filter" over it in an attempt to get more out of it.
It is worthless because we are talking about colour sampling for graphics, while I have absolutely no idea how you'd even begin to do the same with sound, as performing any kind of sampling like that would change the sounds you're actually hearing.

Oh heck, I don't care what happens with sound. We're talking about graphics here, not sound.

I was under the impression they actually rendered the 16-bit textures with 32 bits of space to avoid dithering, which roughly achieved 22-bit mode - obviously I'm incorrect.
No, the rendering pipelines are 16 bit precision though the core can process 32 bit values for the duration of the on-chip calculations.

I was not aware of that, very interesting - you may well be correct then - although I'd like to see more information on whether they genuinely rendered 16-bit internally or 8-bit.
The pipelines were 16 bit precision but they allowed for things like 8 bit textures and values for compatibility reasons. A lot of early games converted to 3D came from 8-bit worlds, and that's how their engines operated.

Great it's possible to come up with such a scenario, but if we get back to the real world and use maximum bandwidth of the card (b/w limited scene / game) then there's no disputing that 16bit can and will use less bandwidth which will ensure crappier looking (as I've been happy to admit all along) but ultimately faster images - people make the trade off - their choice.
The point, my friend, is that the performance gap is shrinking with each generation of cards, and soon it'll be pointless to use the lower bit depths except when you're really short of VRAM and want lighter texture storage loads on the card.
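As a rough illustration of where that gap comes from, a back-of-the-envelope of raw framebuffer traffic at an assumed 1024x768 at 60 fps, with one colour write plus a Z read and write per pixel (real cards overdraw, compress and cache, so these figures are purely illustrative):

```c
/* Back-of-the-envelope framebuffer traffic for 16-bit vs 32-bit colour
 * (plus a matching 16/32-bit Z buffer) at an assumed 1024x768, 60 fps,
 * one colour write and one Z read/write per pixel. Purely illustrative;
 * real cards overdraw, compress and cache. */
#include <stdio.h>

int main(void)
{
    const double pixels = 1024.0 * 768.0;
    const double fps    = 60.0;

    /* bytes per pixel per frame: colour write + Z read + Z write */
    double traffic16 = pixels * fps * (2 + 2 + 2);   /* 16-bit colour, 16-bit Z */
    double traffic32 = pixels * fps * (4 + 4 + 4);   /* 32-bit colour, 32-bit Z */

    printf("16-bit: %.1f MB/s\n", traffic16 / (1024 * 1024));
    printf("32-bit: %.1f MB/s\n", traffic32 / (1024 * 1024));
    return 0;
}
```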