
The tech behind binary.

Circlenaut

Platinum Member
This has been bothering me for a long time. How is it possible that an image or sound or anything else can be seen using 0s and 1s? I'm asking for the tech behind binary.
 
I think somebody asked this question on here a few weeks back.. but I'll explain again.


Sound is sampled at a given rate (e.g., CD quality is 44.1 kHz), and each sample has a specified bit depth (8-bit or 16-bit sound). 8-bit sound contains 2^8 = 256 levels, and 16-bit contains 2^16 = 65536 levels of quantization. Let's say you read audio data off a CD. CDs are recorded at 16-bit, 44.1 kHz. Each sample of data is two bytes and is taken every 1/44100 seconds. If you feed that through a filter and a digital-to-analog converter (which takes those 16 bits and produces an equivalent analog voltage), you will get a signal that is almost the same as the original signal, but not quite, because of the round-off errors due to quantizing.
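A rough sketch in Python of that quantization step, using the CD numbers above (the 1 kHz test tone is just an arbitrary example signal):

```python
# Sketch of 16-bit quantization of a 1 kHz sine sampled at 44.1 kHz.
# The rate and bit depth match CD audio as described above.
import math

RATE = 44100          # samples per second
LEVELS = 2 ** 16      # 65536 quantization levels for 16-bit audio

def quantize(sample):
    """Map an analog value in [-1.0, 1.0] to a signed 16-bit integer."""
    return max(-32768, min(32767, round(sample * 32767)))

# One millisecond of a 1 kHz sine: 44 two-byte samples.
samples = [quantize(math.sin(2 * math.pi * 1000 * n / RATE))
           for n in range(RATE // 1000)]
# Playback reverses the mapping; only the sub-step detail is lost,
# which is the round-off error mentioned above.
```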

As for image and video, I'm not really sure how it works, but I am guessing that each pixel is divided into the three primary colors: red, green, and blue. For each pixel there are either 8, 16, 24, or 32 bits of information to tell you at what level each color is. For an image at 32-bit color, each pixel contains 4 bytes of information.

 
actually, 32-bit colour contains 8 bits each of RGB and a further 8 bits of "alpha blending" which, IIRC, is transparency. Which is why some video cards list 24-bit as "true colour" and some 32-bit.
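A quick sketch of how those 8+8+8+8 bits fit into one 32-bit pixel (the byte order here is an assumption; real formats vary between ARGB, RGBA, BGRA, etc.):

```python
# Pack 8-bit alpha, red, green, blue channels into one 32-bit integer.
def pack_argb(a, r, g, b):
    return (a << 24) | (r << 16) | (g << 8) | b

# Recover the four channels from a packed pixel.
def unpack_argb(pixel):
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

opaque_red = pack_argb(255, 255, 0, 0)   # 0xFFFF0000
```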
 
regarding images:

You store the images as pixels in RGB, as posted above. Let's say it's 24-bit color. You send the pixels to the video card, which has a RAMDAC. The RAMDAC converts the digital version into an analog signal which the monitor displays.
 
There are many ways of storing images. It evolved somewhere along the following lines.

Early home & personal computers used characters. The screen was divided up into a number of character rows and columns. Each cell contained 8 bits, or 1 byte, which can contain one of 256 distinct values. Typically this was ASCII with the last 128 characters as either graphics or user defined.

User defined characters were each defined by 8 bytes, or 64 bits. A character was 8 rows by 8 columns, and so you could define any pattern you liked.
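Those 8-bytes-per-character glyphs can be sketched in a few lines; the glyph bytes below are made up for illustration:

```python
# An 8x8 user-defined character stored as 8 bytes, one byte per row.
# Each set bit is a lit pixel. This particular pattern is arbitrary.
GLYPH = [0x3C, 0x42, 0xA5, 0x81, 0xA5, 0x99, 0x42, 0x3C]

def render(glyph):
    """Print the 8x8 character, '#' for a set bit, '.' for clear."""
    for row in glyph:
        print(''.join('#' if row & (0x80 >> col) else '.'
                      for col in range(8)))

render(GLYPH)
```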

Initially this was black and white. No colour. Later, colour was added. Each cell had an additional 4 bits of colour associated. (2 cells per byte.) Some display adaptors handled 8 colours and then an additional 8 flashing colours. There weren't many standards, so typically they did what they thought was cool. Commodore, for example, used a byte of colour info per character: 4 bits for the foreground colour, and 4 bits for the background colour.
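The Commodore-style scheme above is just nibble packing; a minimal sketch:

```python
# One byte of colour per character cell: foreground colour in the
# high nibble, background colour in the low nibble (values 0..15).
def pack_colour(fg, bg):
    return (fg << 4) | bg

def unpack_colour(byte):
    return byte >> 4, byte & 0x0F
```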

To display the screen, the display card would look up the characters in the screen buffer, and then look up the bit pattern for the character in ROM. (Or RAM for user-defined characters.) If colour, it would also access the colour for the character.

Eventually people decided it would be cool to have bitmaps rather than characters. This allowed for graphs, charts, etc. (and later arcade games). At first each pixel had either a single bit (black & white) or 4 bits (16 colours). This took much more memory than the original 1K-2K screen buffers: 38K for black & white, 150K for 16 colours. As this used so much more memory, some cards were limited to 320x480 or less in colour mode.
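Those 38K/150K figures check out if you assume a 640x480 display:

```python
# Framebuffer size for a given resolution and colour depth.
def framebuffer_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

mono = framebuffer_bytes(640, 480, 1)    # 38400 bytes  (~38K)
col16 = framebuffer_bytes(640, 480, 4)   # 153600 bytes (150K)
```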

It wasn't too long before people grew tired of the limited colours, so the palette was invented. The colour palette was a table of 16 entries. Each entry described a colour. Now you could choose which 16 colours you wanted to display, and assign each pixel an index into the colour palette.
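A sketch of that indirection (the palette values here are made up, and only 4 of the 16 entries are shown):

```python
# Each pixel stores a small index; the palette maps indices to full RGB.
PALETTE = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]

def resolve(pixels, palette):
    """Turn a list of palette indices into a list of RGB tuples."""
    return [palette[i] for i in pixels]

image = [0, 1, 1, 3]                 # four pixels, 4 bits each on disk
rgb = resolve(image, PALETTE)
```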

Now of course, 16 colours isn't many. So when memory became cheap enough, the palette was expanded to 256 entries, and each pixel took a byte. Then came 65536 colours. Now, to be honest, I'm not sure if this uses a huge palette of 65536 entries, or if it splits the 16-bit word into 3 RGB parts. Either way, this gives fairly decent images, especially when used with dithering techniques, which had been evolving for years by now and were becoming very decent. Actually, dithering is a whole different topic. But check out some early computer art done on a daisy-wheel printer. (One that only prints characters.) When the dot-matrix printer came along, people did wonderful stuff with grey scales and careful dithering.
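For what it's worth, most 16-bit "high colour" modes do split the word into RGB rather than using a giant palette; a common layout is 5-6-5 (green gets the extra bit). A sketch:

```python
# 16-bit high colour as 5-6-5: 5 bits red, 6 bits green, 5 bits blue.
def pack_565(r, g, b):            # r, b in 0..31; g in 0..63
    return (r << 11) | (g << 5) | b

def unpack_565(word):
    return word >> 11, (word >> 5) & 0x3F, word & 0x1F

white = pack_565(31, 63, 31)      # all bits set: 0xFFFF
```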

Then at last, colour became true colour: 24 bits, 8 bits per component of RGB. This gives pretty decent colour for most people, but is really optimised for the raster display device (CRT). It actually sucks for graphic artists, who use a much larger colour space than a CRT can display. They use various other colour schemes such as HSL, CMY, etc.

As Shalmanese mentions, some devices use 32-bit colour, where the additional 8 bits are for alpha blending. Say you have two pixels, blue and red. When they are mixed, the alpha value of each determines how much blue and how much red is in the final pixel. The alpha value does not affect how the image is displayed, only how it is blended with other colours.
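A simple sketch of that mixing, per colour channel, using the common integer "source over destination" form:

```python
# Blend one 8-bit channel: alpha (0..255) controls how much of the
# source colour shows over the destination colour.
def blend_channel(src, dst, alpha):
    return (src * alpha + dst * (255 - alpha)) // 255

# Mixing pure red over pure blue at roughly 50% alpha, channel by channel:
red, blue = (255, 0, 0), (0, 0, 255)
mixed = tuple(blend_channel(s, d, 128) for s, d in zip(red, blue))
```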

Edit: Stole credit from CTho9305 and gave it to Shalmanese where it rightfully belongs 🙂
(Thanks CTho9305!)
 


<< As CTho9305 mentions, some devices use 32bit colour, where the additional 8 bits is for alpha blending. >>



give credit where it's due. scroll up 😉 🙂
 
You need some sort of definition of how to turn the numbers in the computer into pictures and sound.
A sound card will move the speaker membrane for every bit you send it (simplified), and a monitor will increase the brightness of one dot on the screen.
After all, you can count or write down numbers, but you cannot hear or see them 🙂. What you see and hear are the responses of specialised devices (monitor and speaker) that turn digital signals into analog equivalents. And it's your body that recognizes these analog responses as picture or sound.

(or am I too philosophical?)
 
ooh, thanks for the Commodore 64 flashback, heh. Some of my first programming was making custom characters on the C64, what fun.
 