I think what itsmydamnation and sushiwarrior are saying is this:
From my (very) limited understanding of game design, game artists typically first create a high-polygon-count model of a game prop. From that high-poly model, they extract displacement maps and then reduce the model to a much lower polygon count. They may even use displacement maps to sculpt the original high-poly model in the first place, but it seems like it would be easy for them to simply output a final high-poly model, without any displacement maps, if they wanted.
Now they send the low-poly model to the video card, along with the displacement map. The video card tessellates the model, raising its polygon count, and then applies the displacement map, making it look like the original high-polygon model they could have output directly.
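To make the tessellate-then-displace step concrete, here's a rough sketch in plain Python (not real shader code, and all function names here are made up for illustration): subdivide a low-poly edge into many vertices, then push each new vertex outward by a height sampled from the displacement map.

```python
def tessellate(p0, p1, level):
    """Subdivide the segment p0..p1 into `level` pieces (a stand-in
    for the GPU generating new vertices on a patch)."""
    return [p0 + (p1 - p0) * i / level for i in range(level + 1)]

def displace(xs, disp_map):
    """Offset each tessellated vertex along its normal (here simply
    'up') by a nearest-neighbour sample of a 1D displacement map."""
    n = len(disp_map)
    out = []
    for x in xs:
        t = min(int(x * (n - 1) + 0.5), n - 1)  # sample the map
        out.append((x, disp_map[t]))
    return out

# Low-poly input: one flat edge from x=0 to x=1, plus a tiny height map.
heights = [0.0, 0.2, 0.5, 0.2, 0.0]   # the "displacement map"
verts = tessellate(0.0, 1.0, 8)       # GPU-style subdivision
surface = displace(verts, heights)    # detail appears only after this step
```

The point of the sketch: only one edge and a small texture go in, but a detailed surface comes out, which is the trade being discussed here.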
Here's where the question comes in:
If we ignore LOD benefits (since most of these arguments haven't included LOD yet), then why does the artist even bother creating the low-poly model to begin with? Why not simply submit the high-poly model and be done with it? The video card will still need to render a high-poly model either way, so what benefit is there to doing tessellation + displacement instead of just sending all of the high-poly geometry? Piroko, in your last example you showed a guy's original low-poly sphere and then the smooth tessellated result. But wouldn't it be very easy for a modeler to simply apply the same smoothing algorithm the tessellator performs, bake out a high-poly model, and then render that?