Every time new tech is on the horizon "we" are asked to speculate about it... I really don't understand the point. If we're lucky this thread won't turn into how a GTX 780 is a way better buy than the Titan, or how Crossfire frame times suck. Just saying..
Your good-natured humor and obvious ignorance of GFX progress through past, present and future generations are clearly not pertinent to this thread.
But, yeah:
We don't really need 4K. It's just something the corps are pushing so there's a need for uber GPU power to support it.
I'm not the type of guy to say "640K is enough for anyone", BUT I think 2560x1600 is the sweet spot. We are fine where we are now. Just my opinion. But I think a line needs to be drawn where improvements move beyond the practical into the realm of the intangible.
When large displays reach "retina"-level resolution we might find that the cool GPU feature of the time is not so much AA-ing [relatively pixel-y 2560x1600] images as increasing performance by reducing the level of precision on less important parts of the image by "motion blurring" them.
Think about it. In much the same way that we currently (and at considerable expense) tweak vertices and textures (with AA) to make up for low resolution and the obvious distinctions between neighboring rows of pixels, wouldn't it follow that in a future with resolutions finer than the average eye can perceive, some parts of the image would be "softened" prior to per-pixel shading and all that expensive stuff?
On the one hand this would have the effect, apparently desirable in some modern engine writers' minds, of softly blurring the more distant or less prominent parts of the image (which, to be fair, seems to mirror how real human visual perception works). And on the other, if it were implemented prior to the final shading (or rendering, or rasterizing; I don't know much about these things), it could save some GPU power by "blobbing" the less remarkable parts of the ultra-high-resolution 4K frame.
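Just to make the hand-waving a bit more concrete, here's a toy sketch of the kind of decision I mean. None of this is a real engine or API; the tile structure, the motion-based "importance" metric, and the 8 px/frame threshold are all things I made up for illustration.

```cpp
// Toy sketch (not any real engine or API): decide per screen tile whether to
// shade at full precision or give it the cheap "softened" treatment, using a
// made-up importance metric -- here just screen-space motion in pixels/frame.
#include <cstdio>
#include <vector>

struct Tile {
    float motionPx;  // screen-space motion of the tile's contents, px/frame (hypothetical)
};

enum class ShadeMode { Full, Softened };

// The 8 px/frame threshold is an arbitrary illustration value, not a tuned number.
ShadeMode pickMode(const Tile& t) {
    return (t.motionPx > 8.0f) ? ShadeMode::Softened : ShadeMode::Full;
}

int main() {
    std::vector<Tile> tiles = { {1.5f}, {12.0f}, {0.2f}, {30.0f} };
    int i = 0;
    for (const Tile& t : tiles) {
        ShadeMode m = pickMode(t);
        std::printf("tile %d (motion %.1f px): %s\n", i++, t.motionPx,
                    m == ShadeMode::Full ? "full-rate shading" : "softened / cheap path");
    }
    return 0;
}
```

The only point is that the choice of full-rate vs. "softened" shading could be made per region, before the expensive per-pixel work happens.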
In support of this far-fetched vision of the GPU future, I'd point to the already existing move towards 'motion blurring' (which at this point costs performance rather than affording more of it), and also to the natural working of the eye and the cerebral structures that interpret its input: moving objects and immediately pertinent environmental features are what the eye and brain allocate their best resources to. So a smart way to make better-looking, faster-processed 3D simulations in a sub-retina-resolution future would perhaps be to cull resolution on the less prominent parts of the final 4K frame (based on object, distance, priority, whatever -- idk) and resolve those parts at some lower resolution instead, say 1/4 or 1/2 -- the latter of which still amounts to more pixels than the 2560x1600 with its pertinent ppi that you're comfortable with now, Mr. 640K! (Rough numbers in the sketch at the end of this post.)
This way, motion blur would be a feature you turn on to gain performance at the expense of ridiculously sharp, sub-retina resolution. And AA would be extinct as an interest to the 4K-level enthusiast.
Though I guess it could easily subsist as a general-hardware-based option for those at 1080P, playing with an APU on a budget.
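And since I threw fractions around: here's a back-of-envelope sketch of the shaded-pixel budget if the low-priority share of a 4K frame were resolved at 1/2 or 1/4 density. The 50% low-priority share is purely my assumption for illustration, and so are the density factors.

```cpp
// Back-of-envelope arithmetic for the "cull resolution on less prominent parts" idea.
// The 50% low-priority share of the frame and the 1/2 and 1/4 density factors are
// assumptions for illustration, not measurements from any real renderer.
#include <cstdio>
#include <initializer_list>

int main() {
    const double full4K  = 3840.0 * 2160.0;  // 8,294,400 px per frame
    const double wqxga   = 2560.0 * 1600.0;  // 4,096,000 px (the "sweet spot" above)
    const double lowFrac = 0.5;              // assumed share of the frame deemed low priority

    std::printf("full-rate 4K: %.0f px (%.2fx of 2560x1600)\n", full4K, full4K / wqxga);
    for (double density : {0.5, 0.25}) {     // shade low-priority regions at 1/2 or 1/4 density
        double shaded = full4K * (1.0 - lowFrac) + full4K * lowFrac * density;
        std::printf("low-priority at %.2fx density: %.0f px (%.2fx of 2560x1600)\n",
                    density, shaded, shaded / wqxga);
    }
    return 0;
}
```

Even with those generous assumptions you don't get all the way down to the 2560x1600 budget, but you claw back a decent chunk of the roughly 2x pixel cost of full-rate 4K without touching the parts of the frame you're actually looking at.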