As for per-pixel backlight control, I'll leave it to the manufacturers to figure out. If they can pull it off, so be it -- but I think turning LEDs into a display is easier than doing what was described.
Now onto motion interpolation. You wrote:

"Mmm... it's not quite like that, and you seem way too pessimistic in this field. There are many ways to achieve good results; the worst interpolation may ruin image quality if improperly applied, but it certainly cannot introduce more input lag than VSYNC."
Good interpolation requires looking ahead at least one extra frame, which requires adding another framebuffer layer, which in turn adds an extra frame of input lag -- even with VSYNC. Good interpolation algorithms unfortunately require the PREVIOUS frame, the CURRENT frame, and the NEXT frame to do far more accurate interpolation.
Linear-vector motion interpolation between just the previous frame and the current frame (with no knowledge of other frames, such as the subsequent frame) does not yield the best possible motion interpolation, and causes unacceptable artifacts during video game motion.
Consider special cases of non-linear vectors: curved motion, a head turning, people riding a bicycle, scene changes, objects suddenly becoming obscured, multiple simultaneous motion vector directions (e.g. fast zooms, explosions, super-fast first-person motion, etc.), and motion that is simply too fast (e.g. motion steps of more than 50 pixels between frames). All of this is extremely challenging for motion interpolators. You don't want the motion interpolator to insert ugly artifacts whenever you do things like a fast 360-degree flick.
Yes, it's possible to do *some kind* of basic interpolation with just the knowledge of the current frame and a previous frame, but it won't be good, since those two frames do not provide enough information for non-linear motion (e.g. accelerating motion, sudden direction changes, curved motion, etc). Even better interpolation is possible if you have more than three frames of knowledge centered on the current frame (prev, current, next), but I'm not sure which HDTVs use more than just the next frame (yes, some of them need that "future frame", and that is a major cause of interpolation input lag).
Poor interpolators often suddenly stop interpolating when motion gets complex -- that's why motion is sometimes smooth, but starts stuttering when it becomes complex (e.g. a first-person view of running through a forest, bumping and pushing leaves out of the way). That's pretty common in simple interpolators. The framerate suddenly drops back to the original framerate (e.g. 24fps).
Also, object occlusion is not a 100% solvable problem for motion interpolators. For example, consider scenery behind a picket fence with tiny gaps. Strafing in front of the fence, you reveal some slits of the scenery behind, then different slits. The complete scene behind the fence may never be revealed, because the gaps between slats cover only about 25% or less of the scenery. Between two adjacent frames, only 50% of the scenery behind the fence is revealed -- due to the granularity of a low framerate, the camera "skipped over" positions that would have shown the rest. How do you motion-interpolate scenery that is never revealed between the frames (e.g. intermediate strafe positions in front of the picket fence, showing never-revealed-before scenery)? It's impossible. Object occlusion is not a 100% solvable science for motion interpolation. Better algorithms and logic improve this, but intrinsically it's an unsolvable problem for total coverage of all situations. Even simple situations, such as dragging a window on top of a fancy wallpaper or a background window (possibly animated), are very tricky to interpolate in a "black box" interpolator.
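The picket-fence problem can be sketched as simple set arithmetic (the 75%/25% fence geometry here is my illustrative assumption, matching the rough numbers above): any background pixel seen through the gaps in *neither* captured frame simply does not exist in the data, so no interpolator can reconstruct it for the in-between frames.

```python
# Model a fence whose slats cover 75% of the background; each frame
# reveals only the 25% visible through the gaps at that camera offset.
# Strafing shifts which slice of scenery lines up with the gaps.

FENCE_PERIOD = 8   # slat + gap repeat, in background pixels
GAP = 2            # gap width -> 25% of the background visible per frame

def visible(offset, width=64):
    """Background pixels visible through the gaps at a given strafe offset."""
    return {x for x in range(width) if (x - offset) % FENCE_PERIOD < GAP}

frame_a = visible(offset=0)   # previous frame
frame_b = visible(offset=4)   # current frame, after a 4-pixel strafe
seen = frame_a | frame_b

print(f"revealed across both frames: {len(seen)}/64 pixels")
print(f"never revealed: {64 - len(seen)}/64 pixels")
```

With these numbers, the two frames together reveal only half the background; the other half was "skipped over" and is unrecoverable no matter how clever the interpolation logic.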
Regardless... unfortunately, no dice. It's mathematically impossible to create a *good* interpolator (with no visible artifacts) without knowledge of at least one future frame. Therefore, additional buffering occurs, and this adds at least 1 additional frame of input lag (at the absolute minimum), even with VSYNC ON.
One *could* go to a higher intermediate framerate (e.g. 144fps) to reduce the lag cost of each buffered frame; then the input lag of interpolation could be tolerable for computer use, but still poor for competitive gaming (compared to an impulse-driven display).
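The latency arithmetic behind this is simple enough to sketch: buffering one "future" frame delays presentation by at least one frame period, so the penalty shrinks as the intermediate refresh rate rises.

```python
# Back-of-the-envelope input-lag cost of interpolator lookahead:
# each buffered future frame adds at least one frame period of delay.

def lookahead_lag_ms(hz, frames_buffered=1):
    """Minimum added latency from buffering future frames, in milliseconds."""
    return frames_buffered * 1000.0 / hz

for hz in (60, 120, 144):
    print(f"{hz:>3} Hz: +{lookahead_lag_ms(hz):.1f} ms per buffered frame")
```

At 60 Hz one frame of lookahead costs ~16.7 ms; at 144 Hz the same lookahead costs only ~6.9 ms -- tolerable for desktop use, but still a real handicap in competitive play, and interpolators that buffer several frames multiply this accordingly.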
Another problem: motion interpolation works best when the framerate is consistent and predictable. Computer games don't always run at consistent framerates. Interpolators (especially those that do not look ahead at least several frames) won't always be able to predict when a stutter happens and successfully "smooth the stutter" out of existence.
Yet another problem: motion interpolators often have lots of difficulty with low-contrast and blurry objects. How does an interpolator tell apart random "noise" (e.g. macroblocks randomly moving around in video) from intentional world movement in a dark, low-contrast environment where the contrast is less than the contrast between the artifacts in poorly-compressed material? Interpolator fine-tuning leads to a lot of falsely detected motion (showing up as motion artifacts caused by the interpolator -- I've seen them, and they look ugly).
*Even* better interpolators require several frames of lookbehind *and* lookahead buffers, to better predict direction changes, stutters, motion behavior (angular/accelerating motion, etc) and improved accuracy (being sure that something is actually moving and not just accidental 'noise').
If you do motion interpolation anyway -- then computer graphics must be interpolated far more flawlessly than video, because it's far easier to see defects in motion interpolation in computer graphics. Computer graphics are often razor sharp, often contain many straight lines, and are full of unexpected interactive motion (sudden mouse-pointer direction changes, drag direction changes, unexpected stutters, accelerated motion in games, rapid occlusion effects, etc). Flaws in an interpolator wreak havoc with all of that, producing far-more-easily-noticed artifacts. Good lookahead/lookbehind interpolators (which add more input lag) look much better, but various artifacts remain.
Motion interpolation definitely has its place, yes...
Maybe even with specially designed games that co-operate with a motion interpolator to maximize quality and minimize lag. But discrete "black box" motion interpolators built into the display (operating with no advance knowledge of the source material) aren't the solution for lag-free computer/gaming use.
Motion interpolation is unlikely to be the answer that makes even a majority of computer users and gamers happy.
As an associated member of the Society for Information Display (sid.org), subscribing to papers written by universities and institutions (I pay $150/year to read these papers), I can say there are really only two lag-free methods of shortening the visible length of frames to reduce perceived motion blur on displays: more native Hz/frames and/or shorter impulses.
....As I've already explained before, that's extra frames in extra refreshes (in sample-and-hold displays, ala LCD), and/or shorter impulses with more black period between frames (in impulse-driven displays, ala CRT/plasma/backlight control).
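Both methods reduce the same quantity -- how long each frame stays visible while the eye tracks moving content. A quick sketch of that relationship (using illustrative numbers, not measurements of any specific display):

```python
# For eye-tracked motion on a display, perceived motion blur is roughly:
#   blur_px = speed_px_per_s * persistence_s
# Blur shrinks either by adding native frames (shorter hold per frame)
# or by strobing/impulsing (shorter visible portion of each frame).

def blur_px(speed_px_per_s, persistence_ms):
    """Approximate eye-tracked motion blur width, in pixels."""
    return speed_px_per_s * persistence_ms / 1000.0

speed = 960  # pixels per second of on-screen motion (example value)
print(blur_px(speed, 16.7))  # 60 Hz full-persistence sample-and-hold
print(blur_px(speed, 8.3))   # 120 Hz sample-and-hold: roughly half the blur
print(blur_px(speed, 1.0))   # ~1 ms strobed impulse: CRT-like clarity
```

Doubling the native framerate halves persistence and thus halves blur; a short impulse with a long black period achieves the same visible-time reduction without needing interpolated frames.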
Providing more Hz/frames in a high-quality manner via interpolation is not a lag-free method.