16 bit or 32 bit help

blclay23

Member
Mar 23, 2000
198
0
0
Been playing games like Q3, UT, & UT2K3. Anyway, what's the difference between 16 and 32 bit (color)? Thanks
 

notfred

Lifer
Feb 12, 2001
38,241
4
0
16 bit color is actually 15 bit color. Windows calls it 16 bit color because "16 bit" is a more acceptable number than 15 bit.

15 bit color is 5 bits per channel for each of the Red, Green, and Blue channels. That gives you 2^5 = 32 levels per channel: 32 different shades of red, 32 different shades of green, and 32 different shades of blue. When you combine them all, you have a total of 2^15 = 32,768 colors that you can draw.

32 bit color is actually 24 bit color: similar to the above, but with 8 bits per channel. That gives you 256 levels of color for each channel, and a total of 2^24 = 16,777,216 renderable colors.

With "32" bit color you get a much wider palette to choose colors from, which makes for more accurate images.
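
To make that concrete, here's a rough sketch in C of how a pixel would be packed in each case. The function names and layout labels are my own for illustration, not from any particular API:

#include <stdint.h>

/* 15 bit color: 5 bits each for red, green, and blue,
   with the top bit of the 16 bit word left unused. */
uint16_t pack_x1r5g5b5(uint8_t r5, uint8_t g5, uint8_t b5)
{
    return (uint16_t)(((r5 & 0x1F) << 10) | ((g5 & 0x1F) << 5) | (b5 & 0x1F));
}

/* "32 bit" color: 8 bits each for red, green, and blue,
   with the top byte of the 32 bit word left unused. */
uint32_t pack_xrgb8888(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

Each channel just gets shifted into its slot, so 3 x 5 = 15 bits carry color in the first case and 3 x 8 = 24 bits in the second.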
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Where did you find out that 16-bit was really 15-bit? Normally 16-bit is also called 65k colours (which fits with 16-bit actually being 16-bit).
Also, most things are "even" numbers of bits, in powers of 2: 2, 4, 8, 16, 32, 64, 128. I've never heard of Windows being 15-bit.
 

notfred

Lifer
Feb 12, 2001
38,241
4
0
There is a 16 bit color format, which is 5 bits for red, 6 bits for green, and 5 bits for blue, and which does double the number of colors to 65,536. However, that's really not very helpful unless you deal with a whole lot of green in your images. The 16 bit format is really odd, and I have no idea why they decided to create it.

However, I've never seen a Windows machine actually identify anything as 15 bit, and I know that many video cards do operate in 15 bit color rather than 16 bit.

I'm pretty sure Quake engine games actually use 15 bit, rather than the 16 bit that they indicate. (I've heard of them failing to run on Linux machines set at 16 bit color, and only running in 15 bit color, which Linux tends to report correctly.)

You're right, you usually do see things in even numbers of bits. You usually see things in powers of 2, also. That doesn't mean that everything has to be, or should be, exactly an even power of two bits wide.
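
For what it's worth, here's a rough C sketch of the 5/6/5 idea: packing 8 bit channels down to 16 bits and expanding them back. The helper names are made up for illustration:

#include <stdint.h>

/* Pack 8 bit R, G, B into a 5/6/5 16 bit pixel by dropping the low bits. */
uint16_t pack_r5g6b5(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Expand back to 8 bits per channel, replicating the high bits into
   the low bits so that full white (0xFFFF) maps back to 255,255,255. */
void unpack_r5g6b5(uint16_t p, uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint8_t r5 = (p >> 11) & 0x1F;
    uint8_t g6 = (p >> 5) & 0x3F;
    uint8_t b5 = p & 0x1F;
    *r = (uint8_t)((r5 << 3) | (r5 >> 2));
    *g = (uint8_t)((g6 << 2) | (g6 >> 4));
    *b = (uint8_t)((b5 << 3) | (b5 >> 2));
}

Green keeps 64 levels while red and blue keep only 32, so green loses the least precision in the round trip.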
 

notfred

Lifer
Feb 12, 2001
38,241
4
0
This is not Windows, but if you look at this link you can see that the "16 bit" format that OS X uses is really only 15 bits of color data, with one empty bit.

This page refers to devices that report 64k colors from a 16 bit pixel as counting one leftover bit as significant, instead of only counting the correct 15 bits. I know that this isn't always true, due to the odd 16 bit color format I mentioned above, but it is often true that "16 bit" is really 15 bits of image data.

This page, dealing with Microsoft's DirectDraw from DirectX, indicates that DirectDraw supports both 15 bit and 16 bit image data under the label of "16 bit color depth", depending on software and drivers.
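
If you want to tell the two apart at runtime, the usual trick is to inspect the channel bitmasks the driver reports (DirectDraw exposes these in its pixel format structure) rather than trusting the advertised depth. A rough sketch of the idea, with made-up function names:

#include <stdint.h>

/* Count the set bits in one channel mask. */
static int bits_set(uint32_t m)
{
    int n = 0;
    while (m) {
        n += (int)(m & 1);
        m >>= 1;
    }
    return n;
}

/* Masks 0x7C00/0x03E0/0x001F total 15 bits (5/5/5);
   masks 0xF800/0x07E0/0x001F total 16 bits (5/6/5). */
int real_color_bits(uint32_t rmask, uint32_t gmask, uint32_t bmask)
{
    return bits_set(rmask) + bits_set(gmask) + bits_set(bmask);
}

Both layouts get reported as "16 bit" modes, but the mask totals give away how many bits actually carry color.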
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
Been playing games like Q3, UT, & UT2K3. Anyway, what's the difference between 16 and 32 bit (color)?
Better image quality in the form of smoother colours and blending effects.

16 bit color is actually 15 bit color.
No it isn't.

Windows calls it 16 bit color because "16 bit" is a more acceptable number than 15 bit.
Utter rubbish. Windows uses 16 bit colour in the form of 5/6/5 (studies have shown that eyes are more sensitive to shades of green).

However, that's really not very helpful unless you deal with a whole lot of green in your images.
Irrelevant. Also that's quite a change from "there's no 16 bit" to "yeah but it's only useful for greens".

The 16 bit format is really odd, and I have no idea why they decided to create it.
Uh, because of the fundamental principle that computer arithmetic operates in binary and hence powers of two. 16 is 2^4; how the heck do you arrive at something as dumb as 15 bit colour?

Apple's 15 bit colour, now that's retarded as it goes against the entire computing industry.

This is not Windows, but if you look at this link you can see that the "16 bit" format that OS X uses is really only 15 bits of color data, with one empty bit.
You're right, it's not Windows, so stop projecting Apple's idiocy onto the entire computing industry to make it sound like common practice. Apple is and always has been the exception to the norm, not the standard.

I suppose next you'll be saying that computers don't ship with floppy drives on the grounds that Apple's computers don't? Apple != computing industry practice.

This page, dealing with Microsoft's DirectDraw from DirectX, indicates that DirectDraw supports both 15 bit and 16 bit image data under the label of "16 bit color depth", depending on software and drivers.
And? The point is that the 16 bit format is fully supported under Windows, unlike on Apple's platforms. Hell, you could limit your colour palette to just black and white and use 24 bits to represent it if you like, but that in no way implies that you don't support 24 bit colour and that you're only limited to 1 bit.

I'm pretty sure Quake engine games actually use 15 bit, rather than the 16 bit that they indicate.
Are you making this stuff up as you go along or something?

Firstly, 2D colour depths are completely unrelated and separate from 3D colour depths, and they both use the numeric precision in very different ways.
Secondly, 15 bit colour does not even exist in 3D rendering. 16 bit colour is the closest to it, which gives you a 4444 RGBA format.
 

notfred

Lifer
Feb 12, 2001
38,241
4
0
Does Windows ONLY use 5/6/5 16 bit color? The link I posted above about DirectDraw seems to indicate that it groups 5/6/5 16 bit color and 5/5/5 15 bit color under the same heading of "16 bit color". Windows doesn't label anything as 15 bit color, so either it doesn't support 15 bit color at all, or it supports it under the label of "16 bit color".

I guess maybe I'm wrong about Windows using 15 bit color and labeling it as 16 bit color. Maybe when it says "16 bit color" it usually is 16 bit color. However, it seems that at least sometimes when Windows says "16 bit color" it's really referring to a 5/5/5 15 bit color model, at least if DirectDraw is any indication. Maybe it's usually 16 bit color and only 15 bit color on rare occasions; I'm not sure. I thought it was usually 15 bit color labeled as 16.

You're right, the fundamental principle of computer arithmetic is based in binary. 15 bit color is 2^15 colors, which is very much a power of two. It's not 15 colors, it's 15 bits. By your reasoning, the only valid widths for a value in a base 10 number system would be 1, 10, 100, or 1000 digits: widths that are themselves powers of the base. The reason that 16 bits is so common is because it's the length of a two-byte word, not because 2^16 is "more binary" than 2^15. Yes, Apple passes an extra bit around that's not being used when dealing with "16 bit" color. It's far from the only time that bits are left unused like that.

I really have no experience with 3D rendering. 4444 RGBA makes a lot of sense for things like textures, but in the end it's all rendered down into a 2D image to display on the screen. What color space is used for that? 5/6/5 16-bit RGB?
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
I thought 16-bit was generally 5551 RGBA?

Also, the 8 extra bits in 32-bit color are for the alpha channel. (8888)
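
As a rough illustration, here's how 5551 (and the 4444 layout mentioned above) would pack in C. The names are mine, not from any specific API:

#include <stdint.h>

/* 5551: 5 bits per colour channel plus a 1 bit alpha flag. */
uint16_t pack_rgba5551(uint8_t r5, uint8_t g5, uint8_t b5, uint8_t a1)
{
    return (uint16_t)(((r5 & 0x1F) << 11) | ((g5 & 0x1F) << 6) |
                      ((b5 & 0x1F) << 1) | (a1 & 0x01));
}

/* 4444: 4 bits per channel, alpha included. */
uint16_t pack_rgba4444(uint8_t r4, uint8_t g4, uint8_t b4, uint8_t a4)
{
    return (uint16_t)(((r4 & 0x0F) << 12) | ((g4 & 0x0F) << 8) |
                      ((b4 & 0x0F) << 4) | (a4 & 0x0F));
}

5551 keeps more colour precision but only an on/off alpha, while 4444 gives 16 alpha levels at the cost of colour depth.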
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Originally posted by: BFG10K
You're right, it's not Windows, so stop projecting Apple's idiocy onto the entire computing industry to make it sound like common practice. Apple is and always has been the exception to the norm, not the standard.

I would argue that Windows is the exception to the norm. Even Apple uses Unix now; pretty much every computer in use runs a Unix-derived or Unix-work-alike OS, except Windows machines. Obviously Apple has done and does do some whacko, non-standard things, but so do ALL computer companies. Microsoft "embrace[d] and extend[ed]" LDAP and Kerberos for kicks, it seems; CIFS, to the best of my knowledge, had to be reverse-engineered; NTFS is unwritable by any other OS, and all of its advanced features are unusable. It practically takes a degree in teeth pulling to turn an Outlook mailbox into a useful (STANDARD) format. IE6 is languishing with half-ass CSS support, and IE7 will not show up until the next Windows release, and only *with* said Windows release. IIS and IE conspire to ignore standard TCP and do their own thing, at the cost of data integrity and lowered performance for non-MS software.

Windows may be the norm for most people, but it is hardly the standard.
 

ProviaFan

Lifer
Mar 17, 2001
14,993
1
0
Originally posted by: BingBongWongFooey
IE6 is languishing with half-ass CSS support, and IE7 will not show up until the next Windows release, and only *with* said Windows release.
Windows may be the norm for most people, but it is hardly the standard.
Very true. I'm experiencing IE6's very faulty support of CSS1 (despite MS's claims to the contrary) with a site I'm working on right now. Unfortunately, I fear that MS will never fully support standards, and will likely attempt to move farther and farther away from said standards to help solidify their monopoly.

Anyway, I do notice a difference in quality between 16 and 32 (really 24) bit color, and the performance difference is not noticeable these days, so I always run 32 bit color (heck, I would run 48 bit color if it were supported on PC hardware, instead of just on obscure SGI workstations).
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Apple's QuickDraw API actually takes color specifications in 48-bit format.

It's up to the video driver what that gets downgraded to.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
Does Windows ONLY use 5/6/5 16 bit color?
No, like you yourself said it can take a wide variety of formats under the 16 bit label. However the standard is 5/6/5.

The reason that 16 bits is so common is because it's the length of a two-byte word, not because 2^16 is "more binary" than 2^15.
Exactly, and when it's byte-aligned it's faster to deal with. Also, 15 bit colour still allocates 16 bits anyway, so why would you not want to use all of them?

What color space is used for that? 5/6/5 16-bit RGB?
It'll be the same as standard 2D so yes, it'll be 5/6/5. AFAIK you simply don't need an alpha channel once all of the blending has been completed.

I thought 16-bit was generally 5551 RGBA?
In 2D or 3D? Remember there's currently a big difference between the two and 2D is generally more flexible because it depends on the program you're using.

Also, the 8 extra bits in 32-bit color are for the alpha channel. (8888)
For 3D yes, for 2D not always.

I would argue that Windows is the exception to the norm.
I wouldn't. Explain to me how allocating 16 bits for a value and then constantly leaving one bit empty is a logical thing to do. The bit is there so why not use it? Why waste space? Wasting resources is a cardinal sin in computer science.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
Hey Notfred, I've just read my original post over again and I realise that it did sound a bit harsh, which was not my intention.

I'm sorry if I offended you in any way. :(
 

notfred

Lifer
Feb 12, 2001
38,241
4
0
It's cool, I do it quite often when other people are wrong. ;)

Anyway, to add to your last post: 32 bit color doesn't only exist in 3D rendering; it's used in many 2D image formats as well. The 8 bits of alpha are used for blending more than one image together (see transparent PNG images on web pages, or layers in Photoshop documents). It's also used by operating systems that do alpha blending to make windows semi-transparent.
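
A rough sketch of what that blending does per pixel (the standard "over" operation, with made-up names; real implementations differ in rounding and premultiplication):

#include <stdint.h>

/* result = src * alpha + dst * (1 - alpha), per channel,
   using integer math with alpha in the range 0..255. */
static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t a)
{
    return (uint8_t)((src * a + dst * (255 - a) + 127) / 255);
}

/* Composite one 32 bit ARGB source pixel over an opaque destination. */
uint32_t blend_over(uint32_t src, uint32_t dst)
{
    uint8_t a = (uint8_t)(src >> 24);  /* top byte holds alpha */
    uint8_t r = blend_channel((src >> 16) & 0xFF, (dst >> 16) & 0xFF, a);
    uint8_t g = blend_channel((src >> 8) & 0xFF, (dst >> 8) & 0xFF, a);
    uint8_t b = blend_channel(src & 0xFF, dst & 0xFF, a);
    /* The destination is assumed opaque, so the result is opaque too. */
    return 0xFF000000u | ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}

That's essentially what a browser does for a transparent PNG, and what Photoshop does when it flattens layers.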
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
Anyway, to add to your last post: 32 bit color doesn't only exist in 3D rendering; it's used in many 2D image formats as well. The 8 bits of alpha are used for blending more than one image together
Yes, I know and I never said otherwise.
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Originally posted by: BFG10K
I would argue that windows is the exception to the norm.
I wouldn't.
Well I was definitely speaking in a more general sense, not really with regard to color modes.

Explain to me how allocating 16 bits for a value and then constantly leaving one bit empty is a logical thing to do.
I dunno, it's probably not. As you mentioned, the eye is more sensitive to green, so it makes sense to throw an extra bit at green. I think it was Sony that developed a new sensor that uses 4 colors (red, green, blue, and emerald) for use in digital cameras. Emerald is closest to green, and the increased sensitivity in the greenish range of colors apparently makes a difference in photo quality.

The bit is there so why not use it? Why waste space? Wasting resources is a cardinal sin in computer science.

TCP Header Format

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |          Source Port          |       Destination Port        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                        Sequence Number                        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                    Acknowledgment Number                      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |  Data |           |U|A|P|R|S|F|                               |
   | Offset| Reserved  |R|C|S|S|Y|I|            Window             |
   |       |           |G|K|H|T|N|N|                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           Checksum            |         Urgent Pointer        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                    Options                    |    Padding    |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                             data                              |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

TCP Header Format

Note that one tick mark represents one bit position.
(I realize that fusetalk absolutely raped that, but you get the point)

Padding?!?! WTF! What were those idiots thinking! I dunno. Wasting resources is a cardinal sin in computer science, but most software is not written from a computer science standpoint. It's written from a "we need this app yesterday with features A, B and C, get on it, bitches" standpoint, or from various other points of view which are generally less concerned with proper technique and more concerned with features or aesthetics or popularity or user-friendliness or whatever.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
The padding is there to keep the header aligned on a 32 bit word boundary, which is the alignment point I raised earlier. However, in this case there's nothing else that could use that space, unlike the extra green bit, which is useful for colours.

If we have something to use that extra space then we should most certainly use it since it's been allocated anyway.
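
Here's a small C illustration of the same alignment point, and of why the "wasted" byte in 32 bit colour exists at all. Struct layout is compiler-dependent, but the idea holds on typical systems:

#include <stdio.h>
#include <stdint.h>

struct rgb24  { uint8_t r, g, b; };      /* 3 bytes of colour data */
struct xrgb32 { uint8_t x, r, g, b; };   /* 4 bytes, one unused    */

int main(void)
{
    /* A 4 byte pixel can be read or written as a single aligned
       32 bit load/store; a 3 byte pixel straddles word boundaries
       and needs multiple accesses on most hardware. */
    printf("rgb24: %u bytes, xrgb32: %u bytes\n",
           (unsigned)sizeof(struct rgb24),
           (unsigned)sizeof(struct xrgb32));
    return 0;
}

The unused byte is the price of fast, aligned access.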
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Maybe when the decision was first made to use 5/5/5, it wasn't common knowledge that the eyes were more sensitive to green, or maybe that knowledge just wasn't common within the computing industry.
 

ProviaFan

Lifer
Mar 17, 2001
14,993
1
0
Originally posted by: BFG10K
I wouldn't. Explain to me how allocating 16 bits for a value and then constantly leaving one bit empty is a logical thing to do. The bit is there so why not use it? Why waste space? Wasting resources is a cardinal sin in computer science.
Then why did 32 bit color "waste" 8 bits (at least until recently, when OSes became aware of alpha blending)?
 

notfred

Lifer
Feb 12, 2001
38,241
4
0
Originally posted by: jliechty
Originally posted by: BFG10K
I wouldn't. Explain to me how allocating 16 bits for a value and then constantly leaving one bit empty is a logical thing to do. The bit is there so why not use it? Why waste space? Wasting resources is a cardinal sin in computer science.
Then why did 32 bit color "waste" 8 bits (at least until recently, when OSes became aware of alpha blending)?

It didn't; the extra byte simply wasn't used when alpha wasn't needed. It was 24 bit color.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
Maybe when the decision was first made to use 5/5/5, it wasn't common knowledge that the eyes were more sensitive to green, or maybe that knowledge just wasn't common within the computing industry.
Quite possibly. Of course, now that I think about it, it doesn't really matter anyway, as 16 bit colour blows chunks and everyone uses 32 bit colour instead.

Then why did 32 bit color "waste" 8 bits (at least until recently, when OSes became aware of alpha blending)?
It didn't: 2D programs have been using the extra 8 bits for alpha for many years. Like I said before, for 2D operations it depends a lot on how the program chooses to handle the colour precision.