Because you're just using an artificially chosen timeframe. Why go back to Fermi? Why not use Kepler and GCN 1.0?
That particular choice isn't essential. To get the best view you'd obviously want to graph things out across time and the whole product stack, but I doubt it would change the conclusion. I picked that particular card for a few reasons, none of which have anything to do with what you're describing, because none of that matters for the purpose of this comparison:
1) The two dies are about the same size. Since wafer sizes don't change, you get about the same number of chips per wafer (a rough version of that calculation is sketched just after this list).
2) Unless a country is undergoing serious economic turmoil, the inflation rate over the short term is small. A ~2% annual rate compounds to only about 22% over a decade, so prices don't move much until you start to get 10 years out.
3) A contemporary AMD card offered a convenient point of reference for the comparison.
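On point 1, here's a minimal Python sketch of the standard dies-per-wafer estimate. The die areas are my own illustrative assumptions (GF100, the big Fermi die, measures roughly 529 mm²; the second value is just a hypothetical modern die of similar size), not figures taken from the comparison above:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: gross wafer area divided by die area,
    minus an edge-loss term for partial dies along the circumference."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Two similarly sized dies on a standard 300 mm wafer give similar counts.
print(dies_per_wafer(300, 529))  # ~529 mm^2 (GF100 / Fermi): ~104 candidate dies
print(dies_per_wafer(300, 520))  # hypothetical modern die of similar size: ~106
```

This ignores defect density and scribe lines, but it's enough to show why two dies of nearly equal area land within a couple of candidates per wafer of each other.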
You keep missing the point of what everyone else is trying to tell you. No one is disputing that you can get more powerful cards today at a lower price; that much is obvious. The point is that NVidia has been raising prices across all of its performance tiers, and those increases outpace inflation. And since its margins are growing, the increases also outpace its rising expenses.
If you want performance that targets a particular resolution and frame rate (e.g. 1080p 60 FPS), that has gotten less expensive over time. If, however, you want NVidia's X80 card, the cost of buying into that tier has increased dramatically, and most of the increase can't be blamed on larger dies, inflation, etc.
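To put rough numbers on that (these are my own illustrative figures, assuming the Fermi X80 card is the GTX 480 at its $499 launch MSRP in 2010, compared against the RTX 4080's $1199 launch MSRP in 2022): compounding the 2010 price forward at an assumed flat ~2.5% annual inflation rate only gets you to about $670.

```python
def inflation_adjusted(price: float, annual_rate: float, years: int) -> float:
    """Compound a historical price forward at a flat annual inflation rate."""
    return price * (1 + annual_rate) ** years

# Assumed launch MSRPs: GTX 480 (2010) at $499, RTX 4080 (2022) at $1199.
gtx_480_in_2022_dollars = inflation_adjusted(499, 0.025, years=12)
print(f"GTX 480 MSRP in 2022 dollars: ~${gtx_480_in_2022_dollars:.0f}")  # ~$671
print("RTX 4080 launch MSRP: $1199")
# The remaining ~$500 gap is the part inflation alone can't account for.
```

That leaves roughly $500 of the tier-price increase unexplained by inflation, which is the whole argument in a nutshell.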