Why is Interpolation in LCD so poor?

MobiusPizza

Platinum Member
Apr 23, 2004
2,001
0
0
As anyone would know, scaling an image up or down with software such as Photoshop can produce a good-quality scaled image. In my experience, scaling down always produces good results, and scaling up is fine provided it's not something like a 100% scale-up, which might pixellate too much.

Often I have to scale down 1280 * 1024 wallpapers to 1024 * 768. Photoshop, ACDSee, in fact even MS Paint do a great job.

Why, though, is interpolation in LCD monitors done so poorly by the LCD hardware? The scaled images, whether scaled down or up, are so blurred. What software can do can of course be matched by a dedicated chip. Why can't manufacturers just put in a chip, or whatever, to do what software does, in real time, frame by frame...

One reason I can think of is that the amount of processing power needed makes it economically impossible. Hm, how much processing power is needed to execute the algorithms to sample down an image (video)? Also, it doesn't seem to hurt much when you resize a playing video in your player in a Windows environment.
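A quick back-of-envelope sketch of that question (my own toy numbers, not from any monitor spec): the rate a real-time scaler must sustain is just resolution times refresh rate, and the FLOP count follows from however many filter taps you assume per pixel.

```python
# Back-of-envelope estimate of the throughput a real-time scaler must
# sustain. Numbers are illustrative assumptions, not from any datasheet.

def scaler_throughput(width, height, refresh_hz, ops_per_pixel):
    """Pixels per second and FLOPs per second for a naive per-pixel filter."""
    pixels_per_sec = width * height * refresh_hz
    return pixels_per_sec, pixels_per_sec * ops_per_pixel

# 1280x1024 input at a 72 Hz refresh, assuming a simple 4-tap filter per
# colour channel (4 muls + 3 adds + 1 div = 8 ops, times 3 channels).
pixels, flops = scaler_throughput(1280, 1024, 72, 8 * 3)
print(f"{pixels / 1e6:.0f} Mpixels/s, {flops / 1e9:.1f} GFLOP/s")
```

With those assumptions it comes out to roughly 94 Mpixels/s and a bit over 2 GFLOP/s sustained, which was serious DSP territory at the time.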
 

RaynorWolfcastle

Diamond Member
Feb 8, 2001
8,968
16
81
Well, I'm no expert, but the short answer is probably that it costs money to add a chip. Also, Photoshop/Paint don't do it in real time; your monitor would have to. You'd need a somewhat powerful DSP to get a good resampling algorithm to work in real time at that kind of data rate.

Honestly, I'm wondering why this isn't implemented in video drivers; I would think that FSAA algorithms could easily be modified to do this rescaling.
 

onix

Member
Nov 20, 2004
66
0
0
I have been equally puzzled by this.

CRTs seem to be good at outputting a decent image regardless of resolution, while an LCD displaying a non-native resolution is usually terrible, and so are LCD and DLP projectors.
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
I think the LCD display is more granular than the CRT display (you have dots of some .21mm on a CRT and of .3mm on an LCD). This might be a reason?
Also, keep in mind that the CRT sends the signal directly to the screen, while the LCD must digitally process it to send it to individual pixel elements.
If you display a lower resolution, you must buffer at least three rows of pixels to make a good interpolation (as pixels of the lower resolution are bigger, they can cover one line of device pixels completely and partially affect two others). And for all those lines, you have to do floating-point calculations with coefficients that change for each line, and whose values vary for every different resolution the display must support. Also, you must do the calculations fast enough to show the image on the display (and there are LCDs that accept resolutions higher than native, which forces the computations to be even faster). And keep in mind that you must do these calculations for the columns as well.
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Hmmm... 9 values if you want to show higher resolutions; if you want lower resolutions only, 4 values.
A calculation: 21" LCD at 1600x1200 native, image at 1280x1024.
You first must store the digital data of the image - you will need at least 3 lines of the image stored (about 4K pixels x 3 bytes, 12 kbytes) - let's call this the video buffer.
You need weight coefficients for every line and column - if you code the process smartly (automagically computing the affecting pixels for each line and each column), you have only 4 values to weigh. That makes, for each colour: 4 floating-point multiplications, 3 floating-point additions, and one division by 4. Multiply by 3 colour channels.
You could compute all the weights for the display elements once at resolution change (one set for lines and one for columns), but that way you'd need one more multiplication per point (4 per pixel) at run time (to compute each image pixel's influence on the display pixel). You end up with 16 FP multiplications, 12 FP additions and 3 divisions.
You must have read and write access to the video buffer (as it will be a rotating buffer). You must process 1280x1024x72 pixels every second - about a hundred million pixels a second - so you need about 1.6 billion FP multiplications and 1.2 billion FP additions per second. I think you can get fast enough access to the data; as the access pattern is very deterministic, you can start loading long before you need a value, so it's already there when you need it.
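The scheme described above - compute the weights once at mode change, then just apply them per scanline - can be sketched in a few lines (a toy single-channel, 2-tap linear version; the names and the tiny 4-to-7 example are mine, standing in for the 1280-to-1600 column case):

```python
# Sketch of precomputed-weight scaling: at mode change, work out which
# source pixels and what weights each output pixel needs; per scanline,
# only multiply-adds remain. One channel, 2-tap (linear) weights.

def precompute_weights(src_len, dst_len):
    """For each output position: left source index and its weight."""
    table = []
    for x in range(dst_len):
        pos = x * (src_len - 1) / (dst_len - 1)  # map output -> source coords
        i = min(int(pos), src_len - 2)           # left neighbour index
        frac = pos - i
        table.append((i, 1.0 - frac))            # right neighbour gets frac
    return table

def resample_line(line, table):
    """Apply the precomputed 2-tap weights to one scanline."""
    return [w * line[i] + (1.0 - w) * line[i + 1] for i, w in table]

# Upscale a 4-sample line to 7 samples
table = precompute_weights(4, 7)
print(resample_line([0, 30, 60, 90], table))
# → [0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0]
```

Per output pixel this is exactly the handful of multiply-adds counted above; the table itself costs nothing during the frame.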
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
We leave the case of scaling resolutions down as an exercise for the avid reader :p
 

MobiusPizza

Platinum Member
Apr 23, 2004
2,001
0
0
1.6 billion FP multiplications, 1.2 billion FP additions... sounds like a lot... sounds impossible as well.
Hm, can't believe resizing "video" is that demanding.
I've never observed any performance hit when resizing my DivX playback...

A CRT has analogue input; the electron beam inside can be directed to any point on the screen by electromagnets, which makes scaling much easier.

Hm, seems that I'll have to use 1280*1024 for my next 19" LCD monitor.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Exactly, that's the point. On a CRT, you don't have to "do" anything. The beam is directly driven by the incoming analog signal, and just turns brighter and darker at any arbitrary rate while it scans each line left to right in a contiguous fashion - no matter whether that makes any sense or matches the aperture grille or mask. For a CRT, there are no pixels, just scanlines containing an arbitrary analog waveform that directly translates to beam brightness. Whether you have two or 2000 brightness changes in a single scanline plain and simply doesn't matter.

On an LCD, the logic controls individual (native resolution) pixels, and you need to come up with a discrete brightness value for each and every one such pixel.

It's not as if CRT scaling is free of artefacts - at low resolutions, the individual scanlines become visible since they no longer overlap; and even at its "best" resolution a CRT is never going to hit its RGB triplets EXACTLY, so the display will never be as crisp as an LCD.
 

gbuskirk

Member
Apr 1, 2002
127
0
0
Originally posted by: AnnihilatorX
...hm, can't believe resizing "video" is that demanding.
I've never observed any performance hit when resizing my DivX playback...

I have designed chips to do image scaling and translation, and it can be that demanding. My design scaled down by powers of 2 in X and Y, and it was still necessary to implement a few reduction algorithms (such as direct selection, averaging, or peak picking). Each algorithm suffered its own type of artifacting. For example, an algorithm might drop thin vertical or horizontal lines from the image, or "smear" a crisp edge. For generalized (non-power-of-2) reduction, more complex algorithms are required, such as weighted averaging of a range of pixels in X and Y. What is the effect of separately averaging R, G, and B (or C, M, Y) values? What types of artifacting could it produce? Also, large frame buffers may be required, with flexible addressing schemes to accommodate various screen resolutions. It's not an impossible design problem, but difficult enough to be a fun challenge.
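The two artifact types described above are easy to demonstrate in a toy 2:1 downscale (my own example, not from any actual chip design): a one-pixel-wide bright line lands on a column that direct selection skips, so it vanishes, while averaging keeps it but smears it to half intensity.

```python
# Toy 2:1 horizontal downscale of one row containing a 1-pixel bright
# line, comparing two of the reduction algorithms mentioned above.

def select_2to1(row):
    """Direct selection: keep every second pixel."""
    return row[::2]

def average_2to1(row):
    """Box averaging: mean of each adjacent pixel pair."""
    return [(a + b) / 2 for a, b in zip(row[::2], row[1::2])]

row = [0, 0, 0, 255, 0, 0, 0, 0]  # thin vertical line at column 3

print(select_2to1(row))   # [0, 0, 0, 0]        -> the line is dropped
print(average_2to1(row))  # [0.0, 127.5, 0.0, 0.0] -> kept, but half brightness
```

Neither result is "right"; you trade losing detail for blurring it, which is exactly the choice the scaler chip has to make.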
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: RaynorWolfcastle
Honestlly, I'm wondering why this isn't implemented in video drivers; I would think that FSAA algorithms could be easily modified to do this rescaling.

I believe that is exactly what is done for scaling video playback. This is why it takes basically no CPU power to scale video up and down -- if you use a software renderer, it will probably kill the quality and up the CPU load significantly.

I believe you actually *can* have the video card do the scaling for you with the last few generations of cards (you might need software like Powerstrip, though). This doesn't usually do any better than a "good" LCD monitor's internal scaling, though. It is at best a weighted point sample (not a true Gaussian or other more sophisticated type of sampling that you might get through something like Photoshop).
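The gap between a weighted point sample and Photoshop-grade resampling comes down to the filter kernel. A sketch of the two (my own illustration; the Catmull-Rom variant stands in for "bicubic", which Photoshop does not publicly specify):

```python
# Two resampling kernels: the 2-tap triangle (bilinear) typical of cheap
# hardware scalers, and a 4-tap Catmull-Rom cubic of the kind used by
# "bicubic" software resizes.

def triangle(x):
    """Bilinear kernel: 2 taps, support [-1, 1]."""
    x = abs(x)
    return 1.0 - x if x < 1.0 else 0.0

def catmull_rom(x):
    """Catmull-Rom cubic kernel: 4 taps, support [-2, 2]."""
    x = abs(x)
    if x < 1.0:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2.0:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

# Both kernels reproduce the source exactly at the sample points...
print(triangle(0.0), catmull_rom(0.0))  # 1.0 1.0
# ...but halfway between samples the cubic pulls in 4 neighbours, the
# outer two with small *negative* weights - that slight edge-sharpening
# is what bilinear lacks.
print([round(catmull_rom(d), 4) for d in (-1.5, -0.5, 0.5, 1.5)])
# → [-0.0625, 0.5625, 0.5625, -0.0625]
```

So the difference isn't exotic math, it's twice the taps per axis (four times per pixel in 2D) plus a wider line buffer - which is precisely the cost a cheap scaler chip avoids.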

Basically, we can't do really good scaling cheap and fast enough to embed it in a monitor. At least not yet.
 

Howard

Lifer
Oct 14, 1999
47,982
11
81
Originally posted by: Peter
Exactly that's the point. On a CRT, you don't have to "do" anything. The beam is directly connected to the incoming analog signal, and just turns brighter and darker at any arbitrary rate while it scans each line left to right in a contiguous fashion. No matter whether that makes any sense or match to the aperture grill or mask. For a CRT, there are no pixels, just scanlines containing an arbitrary analog waveform that directly translates to beam brightness. Whether you have two or 2000 brightness changes in a single scanline plain and simply doesn't matter.

On an LCD, the logic controls individual (native resolution) pixels, and you need to come up with a discrete brightness value for each and every one such pixel.

It's not like CRT scaling is free of artefacts - in low resolutions, the individual scanlines become visible since they don't overlap anymore; and even in its "best" resolution a CRT is never going to hit its RGB triplets EXACTLY, so display will never ever be as crisp as it is on an LCD.
One more reason to hope SED comes to market quicker!
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Howard
Originally posted by: Peter
Exactly that's the point. On a CRT, you don't have to "do" anything. The beam is directly connected to the incoming analog signal, and just turns brighter and darker at any arbitrary rate while it scans each line left to right in a contiguous fashion. No matter whether that makes any sense or match to the aperture grill or mask. For a CRT, there are no pixels, just scanlines containing an arbitrary analog waveform that directly translates to beam brightness. Whether you have two or 2000 brightness changes in a single scanline plain and simply doesn't matter.

On an LCD, the logic controls individual (native resolution) pixels, and you need to come up with a discrete brightness value for each and every one such pixel.

It's not like CRT scaling is free of artefacts - in low resolutions, the individual scanlines become visible since they don't overlap anymore; and even in its "best" resolution a CRT is never going to hit its RGB triplets EXACTLY, so display will never ever be as crisp as it is on an LCD.
One more reason to hope SED comes to market quicker!

SED is a fixed-pixel (perhaps more accurately, a "directly addressable") display technology and will suffer from exactly the same problems in this regard (unless their DPI is high enough that even crap scaling looks OK, but I do not believe this is the case).
 

xtknight

Elite Member
Oct 15, 2004
12,974
0
71
Why is interpolation in LCDs so poor compared to Photoshop? Because the DSP in the LCD is not powerful enough to do bicubic resizes 60 or 75 times a second. As for video scaling, I'm not really sure.

The video card can do this. In fact, it's right in the ForceWare control panel. However, I found my monitor's scaling to be sharper and of higher quality than that of my 6800NU chip.

You know, when I resize a 1024x768 image to 1280x1024 in PSP, it VERY closely resembles my monitor's scaling.