Last time I checked, nVidia wasn't in the charity business. And it seems you value wishes over economics.
No one said anything about charity. I laid out a clear and specific process by which Nvidia would increase volume, and by doing so, increase profitability and pave the way for larger-die, even more profitable FinFET products later on.
You don't think this would work. It's at least possible that you are right. But unless you have some position inside the industry that you're not telling anyone about, you have no better insight into this matter than I do. Only time will tell what happens.
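To put toy numbers on that volume argument (every figure below is invented for illustration; nothing here is from Nvidia's actual books): fixed costs like R&D and mask sets are amortized over every unit sold, so higher volume lowers the effective per-unit cost and widens margin even at an unchanged selling price.

```python
def per_unit_profit(price, variable_cost, fixed_costs, volume):
    """Profit per unit after amortizing fixed costs over total volume."""
    return price - variable_cost - fixed_costs / volume

# Invented placeholder numbers: $80 selling price to board partners,
# $45 per unit in silicon/packaging/board costs, $50M in R&D and masks.
for volume in (1_000_000, 2_000_000, 4_000_000):
    p = per_unit_profit(price=80, variable_cost=45,
                        fixed_costs=50_000_000, volume=volume)
    print(f"{volume:>9,} units -> ${p:.2f} profit per unit")
```

At 1M units this toy product loses money; at 4M it clears over $20 per unit. That's the basic mechanism by which volume drives profitability.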
nVidia isn't going to discontinue GM206 just because you want a "big" low-end die. Also, the lineup would suddenly have an unwanted gap.
Any gap could easily be filled by using cut-down GM204 parts. Note that Nvidia has done this before: the GTX 700 series brought in the first appearance of Maxwell (GM107) and kicked out GK106; the low midrange was instead filled by the GTX 760, a GK104 salvage part.
GK106 had a short life but still earned its keep because it sold well. There's no reason the same can't be true of GM206.
And you forget how cost-prohibitive 14/16nm is.
You always manage to pull out these weird slides whenever FinFET comes up. Who the hell is IBS, and why should we care what they think? Especially since this slide dates to 2013, and I'm very skeptical that some investment pundit in 2013 could make accurate predictions at this level of detail about what TSMC, Samsung, and GloFo would be doing in 2016.
Have you considered that one reason there have been so many delays around FinFET might be precisely that the foundries are trying to get production to a price point that's appealing for mass adoption?
Therein also lies the key to why Intel is so far the only company getting lower transistor cost at 14nm.
That isn't because Intel has some kind of magical unicorn dust that no one else does. It's because Intel has a lead of several years on the other foundries, so they've already worked out the kinks in the process and gotten yields up.
There is no reason to think this process node will be any different from the others. Early adopters always face low yields, higher prices, and die size restrictions. Eventually the process matures, yields go up, larger dies become feasible, and price per transistor goes down. If Intel did it, so can the other foundries.
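To make the maturation point concrete, here is a toy sketch using the textbook Poisson yield model, Y = exp(-A * D0). The defect densities and die sizes below are round numbers I made up for illustration, not actual foundry data.

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Poisson yield model: Y = exp(-A * D0)."""
    area_cm2 = die_area_mm2 / 100.0  # mm^2 -> cm^2
    return math.exp(-area_cm2 * defects_per_cm2)

# Hypothetical defect densities: an immature FinFET process vs. the
# same process after a couple of years, for a midrange and a big die.
for d0 in (0.5, 0.1):          # defects per cm^2 (invented)
    for area in (200, 600):    # die area in mm^2
        y = poisson_yield(area, d0)
        print(f"D0={d0}/cm^2, {area} mm^2 die -> yield ~{y:.0%}")
```

With the made-up early defect density, a 600 mm^2 die yields around 5%, which is exactly why early adopters stick to small dies; once D0 drops, the same big die becomes economically sane.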
And there is plenty of room for 16nm/14nm FinFET to be viable in certain dGPU products even if per-transistor costs start out slightly above those of 28nm and initial die sizes are restricted.
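To show what "slightly above" could look like (again, every number below is an invented placeholder, not real foundry pricing): even with a much pricier wafer and worse early yields, roughly doubling transistor density can keep cost per transistor within a few percent of the mature planar node.

```python
def cost_per_mtransistor(wafer_cost, good_dies_per_wafer, mtransistors_per_die):
    """Cost per million transistors, given good (yielded) dies per wafer."""
    return wafer_cost / (good_dies_per_wafer * mtransistors_per_die)

# Invented placeholders for a ~200 mm^2 die on each node:
planar = cost_per_mtransistor(wafer_cost=4_500,            # mature 28nm wafer
                              good_dies_per_wafer=240,
                              mtransistors_per_die=3_000)
finfet = cost_per_mtransistor(wafer_cost=7_500,            # early FinFET wafer
                              good_dies_per_wafer=190,     # lower early yield
                              mtransistors_per_die=6_000)  # ~2x density
print(f"28nm: ${planar:.5f}/Mtx, 16nm FinFET: ${finfet:.5f}/Mtx")
```

Under these assumptions FinFET comes out around 5% more per transistor, and you'd eat that premium in the products where the power and clock gains matter most.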