At this point I'm most curious about what sorts of improvements are made at node shrinks to make the transistors better, outside of introducing new materials, geometries, etc., like HKMG, FinFET, and so on. Traditionally that's what comes every other node or so for Intel, like 65nm and 32nm, or even more so with TSMC's and Samsung's half-node steps, which I always figured were more or less straightforward shrinks. But now I'm led to believe they involve a lot of refinements we don't hear about. Probably some things that aren't really discussed or are buried in papers, and probably a lot of things that would be way over my head.
It's all about optimizing the electrical characteristics: implants with specific dopants, stress engineering, channel shaping, etc.
I couldn't possibly do it justice; there are teams of hundreds of rocket scientists at every IDM and foundry right now spending years optimizing this stuff. There's no way to capture it all in a forum post.
The materials engineering angle is going to figure prominently going forward: thin-film engineering, ALD (atomic layer deposition). The dielectrics precursor business is exploding from the opportunities.
There's a big difference between "possible" and "feasible for a sellable product". It does seem like the advantages of going to a smaller node are shrinking, while the costs and the heat density are only going to get worse. I'm sure Intel will want to keep pushing, but the risk of getting overexposed is there.
Heat density is the real killer.
Dark silicon is a big concern, one that doesn't get any better as we keep scaling to smaller and smaller nodes.
However, while power consumption per transistor may remain roughly constant going from 45 nm to 22 nm, at 11 nm it only drops to about 0.6x. All of this means that at a 45-nm power budget, only about 25% of the silicon is exploitable at 22 nm and only about 10% is usable at 11 nm. Clearly this isn't an acceptable trend line.
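To make the arithmetic behind those percentages explicit, here's a rough back-of-the-envelope sketch in Python. The ~4x density per two node steps and the power-scaling factors are just the numbers quoted above, treated as assumptions rather than measurements:

```python
# Back-of-the-envelope dark-silicon estimate using the numbers quoted above.
# Assumptions: transistor density roughly doubles per node step
# (45 -> 32 -> 22 -> 16 -> 11 nm), and power per transistor stays ~constant
# down to 22 nm, falling to ~0.6x of the 45 nm value at 11 nm.

def usable_fraction(density_scaling, power_per_transistor_scaling):
    """Fraction of the die you can power at a fixed 45nm-era power budget."""
    return 1.0 / (density_scaling * power_per_transistor_scaling)

# 45 nm -> 22 nm: two node steps, ~4x density, ~1.0x power per transistor
print(usable_fraction(4, 1.0))   # ~0.25 -> only ~25% of the silicon is exploitable

# 45 nm -> 11 nm: four node steps, ~16x density, ~0.6x power per transistor
print(usable_fraction(16, 0.6))  # ~0.10 -> only ~10% usable
```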
I find it amazing that we might be able to have a 1.2nm process. And don't get me wrong, I'm not doubting you one bit, I'm just amazed. If you consider that the van der Waals radius of a Si atom is 210 pm, the diameter is about 420 pm. Since 1.2 nm is 1200 pm, a feature of that dimension would only be about 3 atoms wide!
Even at 12 nm, which we know is doable, we're looking at a purpose-built structure that is only about 30 atoms wide. It's simply amazing when you consider that at this size scale, absolute position and velocity become very fuzzy parameters.
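Just as a quick sanity check on that atom-counting (using the ~420 pm Si diameter from above; take it as a rough figure, since the effective atomic spacing depends on which radius you pick):

```python
# Rough atom-count check: how many silicon atoms fit across a given feature,
# assuming a van der Waals diameter of ~0.42 nm per atom.
SI_DIAMETER_NM = 0.42

for feature_nm in (1.2, 12.0):
    atoms = feature_nm / SI_DIAMETER_NM
    print(f"{feature_nm} nm feature ~ {atoms:.0f} Si atoms wide")
# 1.2 nm -> ~3 atoms wide
# 12 nm  -> ~29 atoms wide (call it 30)
```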
But then again, if you can actually get the atoms into the correct positions (and this may be the best example of "easier said than done" in the world), then the things doing the work are the electrons, and they are much, much smaller than the atoms.
One day, while wandering around a part of the TI North Campus looking for a TEM lab located in an otherwise unremarkable building, I came across a museum of prototypes locked up behind glass in a high-security area. What blew me away was that they had a working prototype of a single-atom transistor in one of the cases.
Now this was some 20yrs ago. So even back then, getting to 1.2nm dimensions wasn't all that much of a brick wall. The problem was that the one prototype cost a few million dollars for a single xtor. No one is going to pay that price per transistor for a CPU built with a trillion or so of them. (It was developed in the military/defense division of TI, called DSEG at the time, which was eventually sold off to Raytheon.)
So what we see industrial research doing is less pure fundamental R&D and more economics-bound R&D. It's not enough to scale a transistor to 1.2nm; you need to figure out a way to do it such that when you attempt to manufacture a chip with a few trillion of them in it, the chip can still be sold for $150 with room for profits.
If you can't make the economics work then industry isn't going to pursue it. But they'll figure out a way to get there in 20yrs or so.
Since there are many knowledgeable forumers in this section, I would like to ask a question somewhat related to the subject of this thread. Since my field is not electrical engineering, this question might be irrelevant; please forgive me if that's the case. I read somewhere on this forum that the quantum tunneling effect starts to become meaningful at the current 22nm node, and that lowering the CPU's temperature has a more significant impact on stability and performance in this generation than in previous ones (somebody recommended reducing the temperature significantly for overclocking a 3770K). This probably applies to GPUs as well. Current GPUs (Kepler) are built on the 28nm node. Before Kepler there were the Fermi GPUs (40nm node), which were famous for high temperatures (90C or more); in fact, many believed then that GPUs were supposed to operate at such high temperatures. However, with Kepler, Nvidia seems to be vigorously trying to restrict power consumption and heat dissipation (e.g., the temperature limit settings), unlike with Fermi. Is it because Nvidia wants to keep the temperature lower to stabilize the GPUs because of the node shrink? If this is true, will the GPUs of the next generation (Maxwell; 20nm node) need even stricter temperature control?
Temperature is key not only to stability in terms of clockspeed and power consumption but it is also key to the lifetime reliability of the CPU.
Today's CPUs can be expected to work for a good 10yrs if kept at reasonable temperatures. But as we shrink the transistors and wires in those chips on future nodes it becomes all the more challenging to make those products reliable enough to last 5-10yrs if they are going to operate at elevated temperatures.
Because most degradation mechanisms in solid-state CMOS are kinetically activated, they adhere to Arrhenius-equation-type models.
So an easy rule of thumb to apply is that for every 10C cooler you can make the CPU operate, it will function roughly twice as long before it dies.
For example, let's say a CPU can be expected to last 3yrs if operated at 70C. Decrease the temperature to 60C and you can reasonably expect the operating lifetime to double, from 3yrs to 6yrs.
Decrease it another 10C, to 50C, and your chip can be expected to last 12yrs (another doubling).
It is pretty amazing how steep the temperature versus reliability curve is, but that is what exponentials do for you.
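For anyone who wants to play with those numbers, here is a minimal sketch of the Arrhenius math behind the rule of thumb. The ~0.7 eV activation energy is my own illustrative assumption, not a figure from this thread; real degradation mechanisms (electromigration, NBTI, TDDB, etc.) each have their own Ea, so treat the exact multipliers as ballpark:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K
EA_EV = 0.7                # assumed activation energy (illustrative, mechanism-dependent)

def lifetime_multiplier(t_hot_c, t_cool_c, ea_ev=EA_EV):
    """How many times longer a part lasts at t_cool_c than at t_hot_c (Arrhenius)."""
    t_hot_k = t_hot_c + 273.15
    t_cool_k = t_cool_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_cool_k - 1.0 / t_hot_k))

print(lifetime_multiplier(70, 60))  # ~2x: 3yrs at 70C -> roughly 6yrs at 60C
print(lifetime_multiplier(70, 50))  # ~4x: 3yrs at 70C -> roughly 12yrs at 50C
```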