So here are the facts:
* Nothing much has happened on desktop performance-wise since Sandy Bridge.
* 14 nm Broadwell brought no major IPC or frequency increase.
* 14 nm Skylake is not expected to do that either.
* The same was true for the 32 nm -> 22 nm transition.
So that leaves us with the question:
Is desktop CPU development more or less "completed" as far as performance goes?
Extrapolating from the previous five years of progress clearly indicates that. Is there anything on the horizon that may turn development in a different direction, or is this what can be expected going forward too?
PS. Note that I'm only talking about CPU performance increase, not lower TDP or iGPU performance improvements.
There are various technologies, already available to varying extents, which will (potentially) MASSIVELY improve the performance of desktop PCs.
I've used some of them, and they are amazing. When they come to the mainstream marketplace, it will be a big improvement.
Examples are:
Imagine having a 36-core, 72-thread computer. You can already get one (but you are talking about $15,000, fully configured, very approximately).
Although the server I was playing about with was NOT that powerful, it was AMAZINGLY faster than current desktop PCs. I ran more than 100 things at the same time (for the heck of it), and it was still happy to do more!
(Obviously NOT with single-core applications, but when you do stuff which can usefully utilize so many cores and threads.)
As time goes on, huge numbers of cores in a desktop PC, along with progress on software that can usefully employ them, are very likely to come along. (That's my opinion; some think there are fundamental limits to software's ability to usefully use a huge number of cores. Time will answer this question, I guess.) Possibly with many stacked dies in the same processor package, when/if the technical hurdles can be solved.
E.g. 100,000 cores in a $499 PC. (When? 2025?? 2125?? Year 9999?? etc.)
(A bit like how you can have some 20,000 threads (not sure of the exact number) in a modern, affordable graphics card.)
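To make the "usefully utilize so many cores" point concrete, here is a minimal sketch (standard library only) of the kind of embarrassingly parallel, CPU-bound job that scales with core count. The prime-counting task and function names are illustrative, not from any real benchmark:

```python
# A minimal sketch of a CPU-bound task split across all available cores,
# using only the Python standard library. The workload (trial-division
# prime counting) is deliberately CPU-heavy and illustrative.
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def parallel_prime_count(limit, workers=None):
    workers = workers or os.cpu_count()
    # Split [0, limit) into one chunk per worker process.
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # absorb rounding remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(parallel_prime_count(100_000))  # 9592 primes below 100,000
```

On a 36-core machine this sort of job can run dozens of times faster than a single-threaded loop; a single-core application, as noted above, gains nothing.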
Fully re-configurable (whatever it ends up being called) CPUs, whereby the internals of some/most/all of the CPU can be reprogrammed, at will, to suit what you want to do.
You can already get them (they are called FPGAs). One that I was playing with was so fast and re-programmable that I was able to "program" it to act as a very high-speed graphics generator, with the FPGA plugging straight into a monitor. I did NOT write the "program" for it (VHDL); it came from the internet and/or the FPGA manufacturer.
What was amazing about it is that if you wrote the same program on a PC, its loop time would be way too slow to send the image to a monitor in real time. The FPGA, on the other hand, happily sends the image in real time, because it can run even very complicated designs at many hundreds of megahertz, and because it essentially does most things in parallel.
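To put rough numbers on the "loop time would be way too slow" point: 1080p at 60 Hz needs about 124 million pixels per second, while an FPGA pixel pipeline emits one pixel every clock cycle. The 50 ns per-pixel software cost below is an assumed figure for illustration, not a measurement:

```python
# Back-of-the-envelope: pixel rate needed for real-time 1080p60 video
# vs. what a per-pixel software loop delivers vs. an FPGA pipeline.
width, height, refresh_hz = 1920, 1080, 60
required = width * height * refresh_hz
print(f"Required:      {required / 1e6:.0f} Mpixels/s")   # ~124 Mpixels/s

# Suppose a software loop spends ~50 ns per pixel (assumed, for illustration):
ns_per_pixel_sw = 50
sw_rate = 1e9 / ns_per_pixel_sw
print(f"Software loop: {sw_rate / 1e6:.0f} Mpixels/s")    # far short of 124

# An FPGA pipeline emits one pixel per clock; at a modest 150 MHz:
fpga_clock_hz = 150e6
print(f"FPGA pipeline: {fpga_clock_hz / 1e6:.0f} Mpixels/s")  # meets 1080p60
```

The FPGA wins not by clocking faster than the CPU (it clocks slower) but by doing the whole per-pixel computation in parallel hardware, one result per cycle.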
We sort of hit a laws-of-physics limit in the past, when we transitioned from tubes/valves to massively faster discrete transistors.
Similarly, there was a massive speedup going from discrete transistors to integrated circuits.
Potentially, other inventions may take place in the coming decades which will massively speed things up (even single-core stuff) even more.
Just before Intel started up, computers were either made out of discrete transistors, or made up of many different integrated circuits (usually hundreds, but not always; sometimes only a few). Intel created the world's first commercial single-chip microprocessor. (Some people disagree that Intel was first, because there was the odd similar chip here and there, such as one used in a secret fighter-jet project.)
So it could be a completely different company that creates the next great thing in the world of computing.
It's anyone's guess what it will be: nanotubes, graphene, light-based computing, quantum particles, non-silicon materials, etc.