If the best GM204 card is less than 20% faster than the 780 Ti, then as a PC enthusiast/hobbyist that is a failure to me. If the improvement is so small it doesn't motivate a purchase, that is a fail. It will become like Intel and CPUs; no point upgrading more than every couple of generations because the performance improvement is so small. In Intel's case that is a deliberate choice not to chase major performance gains. Nvidia and AMD, on the other hand, are at the mercy of TSMC and have nowhere near the resources or market dominance Intel has. There is nothing exciting about perf/W in the GeForce/Radeon market unless it comes with big performance increases as well. The choice between saving a couple of dollars a month in power or a 400W monster that is 100% faster is obvious. Gaming GPUs have a huge impact on the gaming experience, and we need big jumps consistently.
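To put a rough number behind that "couple of dollars a month" figure, here is a minimal back-of-the-envelope sketch in Python. The 150 W draw difference, four hours of gaming per day, and $0.12/kWh electricity price are all assumptions picked for illustration, not measured figures:

```python
# Rough estimate of the extra monthly power cost of a hotter, faster card.
# All three inputs below are assumed values for illustration only.

extra_watts = 150        # assumed extra power draw vs. the efficient card
hours_per_day = 4        # assumed daily gaming time
price_per_kwh = 0.12     # assumed electricity price in USD

extra_kwh_per_month = extra_watts / 1000 * hours_per_day * 30
extra_cost_per_month = extra_kwh_per_month * price_per_kwh

print(f"Extra energy: {extra_kwh_per_month:.1f} kWh/month")
print(f"Extra cost:   ${extra_cost_per_month:.2f}/month")
# -> about 18 kWh and roughly $2 per month under these assumptions
```

Under those assumptions the difference really is in the $2/month range, which is why perf/W alone is a hard sell to enthusiasts.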
20-30% is decent and 40%+ is good. That is making allowances for the lack of a node shrink. Really, 70%+ is what you expect in performance increases from one flagship to the next. See the 580 to the 780, and the 6970 to the 290X; as much as 100% faster. Same story with the 285 to the 480 and the 4890 to the 5870, etc. If they're having trouble delivering solid improvements from one generation to the next, consistent with the past, then this is bad news for discrete GPUs.
The tactic now is to split the mid-range and high-end of each new generation and sell each as a generation unto itself, staggered apart. The best strategy is to buy them that way: skip the mid-range 'high-end' and wait for the real high-end, or wait for the mid-range 'high-end' to drop to a mid-range price once the real top part launches, if that is the card for you.
I think the high prices are here to stay. Slower release cycles with less exciting product line-ups, massive cost increases on the new nodes, and fewer discrete GPUs sold. $1000 could well become standard pricing on halo 20nm/16nm cards, whatever they turn out to be. Really, we are getting 4-5 years of nothing but 28nm GPUs if this is accurate?
People like to blame TSMC for the lack of performance increases between new GPU generations. But that misses the bigger picture, and perhaps an unwillingness to accept how much times have changed.
It all comes down to the financial situation and today's economy. The cost of developing/designing a new architecture is simply becoming much higher, with lower returns per generation. On top of GPUs getting more complicated every year, semiconductor process technology is getting much harder to polish/mature, and the market itself is shrinking due to newer markets/changing trends. The returns that many CPU/GPU manufacturers reaped in the early 00s are just not there anymore.
So from a business point of view, what they are doing is the right thing; otherwise AMD/Intel/nVIDIA won't survive. We as consumers will have to deal with it, because without sales/profits there won't be products to buy to begin with.
Lengthening GPU architecture cycles, using a tick-tock approach, raising prices (incl. inflation), and perhaps introducing new APIs like Mantle so that you don't need to rely on new architectures to bring performance improvements (just updating the GCN architecture little by little). It's going to get worse and worse from here on out.
I guess this process was accelerated by the rise of consoles and mobile technology, along with less and less competition, meaning launch dates are more relaxed than they were 5~10 years ago, when even a few months of delay would mean handing market share to your nearest competitor. Those days were good, but we have to move on. This doesn't just apply to dGPUs either.
I also happen to think that when games were going 3D and it was a real boom back then, the demand for 3D accelerators was huge. But nowadays not many games push the boundaries, due to consoles and the lack of return on PC games. Thinking about it, it's been a while since we had a really good game that also pushed graphical limits... something that could help revitalize the dGPU business.