AMD: Moore's Law's end is near


Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
If you don't change any of the properties of an xtor, that has to include speed, amperage, and voltage. That makes all of your arguments moot, since 50% more xtors at the same speed could not possibly use less power unless you also change what he originally said wouldn't be changed: the electrical properties (primarily amperage and voltage for any CPU).

You're correct; I wasn't making the argument that 50% more transistors at the same speed would use less power without improved electrical parametrics. I'm definitely not that dense :p

What I was saying was that for some applications, 50% more transistors in the same area could be more efficient, since they could complete the same tasks at enough of a lower clock speed (and therefore voltage) that they use less power; power consumption scales very non-linearly with clock speed. For an example, see Haswell GT3, which will achieve much higher perf/W than IB by using over twice as many execution units, despite otherwise being a similar uarch on the same process.
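As a back-of-the-envelope illustration of that argument (a sketch with made-up constants, not Haswell's actual numbers): dynamic power scales as C·V²·f, and voltage must rise roughly with target frequency, so doubling the hardware and halving the clock can deliver the same throughput for less power.

```python
# Toy model of the "wide and slow" tradeoff: dynamic power goes as
# P = C * V^2 * f, and voltage has to rise roughly with target
# frequency. All constants here are illustrative, not Haswell's.

def dynamic_power(units, freq_ghz, v_min=0.7, v_per_ghz=0.3, cap=1.0):
    """Dynamic power of `units` parallel blocks clocked at freq_ghz."""
    voltage = v_min + v_per_ghz * freq_ghz  # crude V-vs-f assumption
    return units * cap * voltage**2 * freq_ghz

# Equal throughput two ways: 1 block at 2 GHz vs 2 blocks at 1 GHz.
narrow_fast = dynamic_power(units=1, freq_ghz=2.0)
wide_slow = dynamic_power(units=2, freq_ghz=1.0)

print(f"1 block  @ 2 GHz: {narrow_fast:.2f} (arbitrary units)")
print(f"2 blocks @ 1 GHz: {wide_slow:.2f}")
# The wide/slow option wins because the V^2 savings outrun the
# doubled unit count -- the same logic behind a wide, low-clocked iGPU.
```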

What I was really missing was that "same electrical characteristics" doesn't mean transistors that run at the same clock speed and voltage and use the same power (things that someone like me, naively looking at a datasheet, would consider electrical characteristics), but rather the drive strength, impedance, current-vs-voltage curves, leakage, and other parameters of the transistors themselves. And a physically smaller transistor will suffer in these characteristics. I still didn't want to rule out the possibility that there could be some situation where the increased density of a smaller but lower-performing transistor still outweighs the decreased performance, but I can't provide any concrete example, and from what I can tell IDC's point is that if this sort of tradeoff made sense, fabs would already be providing it.

At this point I'm most curious about what sorts of improvements are made at node shrinks to make the transistors better, outside of introducing new materials and geometries like HKMG, FinFET, and so on. I mean the shrinks that traditionally come every other node or so for Intel, like 65nm and 32nm, or even more so TSMC's and Samsung's half-node steps, which I always figured were more or less straightforward shrinks, but which I'm now led to believe have a lot of refinements we don't know about. Probably some things that aren't really discussed or are buried in papers, and probably a lot of things that would be way over my head.

(also, to nitpick, I think IDC is a process engineer or something like that rather than a CPU architect, but I'll let him field that one..)
 

Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
Consider for a moment why Intel would bother spending the millions it spent in terms of salaries and project time just in creating the publicity documents (PR and videos) to hype the existence of its 22nm 3D xtor tech. That wasn't expensed for the benefit of the CPU consumer, it was for the benefit of the INTC owner.

I understand, and I did not mean to imply that Intel's 22nm process was smoke and mirrors or anything like that.

I guess my real question is this: where before the node labels were larger than the real minimum feature size, has this now swung around in the other direction? Based on what I've read, it seems likely that the 14nm node will have minimum feature sizes larger than 14nm.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
I understand, and I did not mean to imply that Intel's 22nm process was smoke and mirrors or anything like that.

I guess my real question is this: where before the node labels were larger than the real minimum feature size, has this now swung around in the other direction? Based on what I've read, it seems likely that the 14nm node will have minimum feature sizes larger than 14nm.

That's true, but it's acceptable as long as the original purpose of pursuing Moore's Law continues:

That is, increase density by 2x and improve performance/lower power. 22nm does exactly that over 32nm. It really doesn't matter whether the gate size changes, or whether the smallest feature is 30nm or whatever.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
At this point I'm most curious about what sorts of improvements are made at node shrinks to make the transistors better, outside of introducing new materials and geometries like HKMG, FinFET, and so on. I mean the shrinks that traditionally come every other node or so for Intel, like 65nm and 32nm, or even more so TSMC's and Samsung's half-node steps, which I always figured were more or less straightforward shrinks, but which I'm now led to believe have a lot of refinements we don't know about. Probably some things that aren't really discussed or are buried in papers, and probably a lot of things that would be way over my head.

It's all about optimizing the electrical characteristics: implants with specific dopants, stress engineering, channel shaping, etc.

I couldn't possibly do it justice; there are teams of hundreds of rocket scientists at every IDM and foundry right now spending years optimizing this stuff. There's no way to capture it all in a forum post.

The materials-engineering angle is going to figure prominently going forward: thin-film engineering, ALD (atomic layer deposition). The dielectrics precursor business is exploding from the opportunities.

There's a big difference between "possible" and "feasible for a sellable product". It does seem like the advantages of going to a smaller node are shrinking, while the costs and the heat density are only going to get worse. I'm sure Intel will want to keep pushing, but the risk of getting overexposed is there.

Heat density is the real killer. Dark silicon is a big concern, one that doesn't get any better as we keep scaling to smaller and smaller nodes.

However, while power consumption may remain constant at 22nm vs. 45nm, at 11nm it drops to 0.6x. All this means that at a 45nm power budget, only 25% of the silicon is exploitable at 22nm, and only 10% is usable at 11nm. Clearly this isn't an acceptable trend line.
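Plugging the quoted figures into a quick sketch makes the trend concrete (the density and per-transistor power factors come from the quote above; the fixed-budget framing and normalization are my own):

```python
# Usable-silicon fraction under a fixed power budget (the "dark
# silicon" squeeze). Density and per-xtor power factors follow the
# quoted figures; the normalization to 45nm is my own framing.

nodes = {
    # node: (xtor density vs 45nm, power per xtor vs 45nm)
    "45nm": (1, 1.0),
    "22nm": (4, 1.0),    # two full shrinks, per-xtor power roughly flat
    "11nm": (16, 0.6),   # per the quote, power drops to 0.6x
}

power_budget = 1.0  # whole-chip budget, normalized to the 45nm chip

for node, (density, xtor_power) in nodes.items():
    power_if_all_on = density * xtor_power
    usable_fraction = min(1.0, power_budget / power_if_all_on)
    print(f"{node}: ~{usable_fraction:.0%} of the silicon can switch at once")
# Prints ~100%, ~25%, and ~10% -- the trend line the quote calls out.
```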

I find it amazing that we might be able to have a 1.2nm process. And don't get me wrong, I'm not doubting you one bit; I'm just amazed. If you consider that the van der Waals radius of a Si atom is 210pm, the diameter is 420pm. 1200pm = 1.2nm, which means a part of the node at this dimension would be only about 3 atoms wide!

Even at 12nm, which we know is doable, we're looking at a purpose-built structure that is about 30 atoms wide. It's simply amazing when you consider that at this size scale, absolute position and velocity become very fuzzy parameters.

But then again, if you can actually get the atoms in the correct position (and this may be the best example of "easier said than done" in the world), then the things doing the work are the electrons, and they are much, much smaller than the atoms.

I was blown away one day while wandering around a part of the TI North Campus, looking for a TEM lab located in an otherwise unremarkable building, when I came across a museum of prototypes locked up behind glass in a high-security area. What blew me away was that they had a working prototype of a single-atom transistor in one of the cases :eek:

Now this was some 20yrs ago. So even back then it wasn't all that much of a brick wall getting to 1.2nm dimensions. The problem was that the prototype cost a few million dollars for a single xtor. No one is going to pay that price per transistor for a CPU built with a trillion or so of them. (It was developed in the military/defense division of TI, called DSEG at the time and eventually sold off to Raytheon.)

So what we see industrial research doing is less pure fundamental R&D and more economics-bound R&D. It's not enough to scale a transistor to 1.2nm; you need to figure out a way to do it such that when you attempt to manufacture a chip with a few trillion of them in it, the chip can still be sold for $150 with room for profits.

If you can't make the economics work then industry isn't going to pursue it. But they'll figure out a way to get there in 20yrs or so.

Since there are many knowledgeable forumers in this section, I would like to ask a question somewhat related to the subject of this thread. Since my field is not electrical engineering, this question might be irrelevant; please forgive me if so.

I read somewhere in this forum that the quantum tunneling effect starts to become meaningful at the current 22nm node, and that lowering the CPU's temperature has a more significant impact on performance stability in this generation than in previous ones (somebody recommended reducing the temperature significantly when overclocking a 3770K). Presumably this applies to GPUs as well. Current GPUs (Kepler) are built on a 28nm node. Before Kepler there were the Fermi GPUs (40nm node), which were famous for high temperatures (90C or more); in fact, many believed then that GPUs were simply supposed to operate at such high temperatures. With Kepler, however, Nvidia seems to be trying hard to restrict power consumption and heat dissipation (temperature limit settings, for example), unlike with Fermi. Is this because Nvidia wants to keep temperatures lower to stabilize the GPUs because of the node shrink? If so, will next-generation GPUs (Maxwell, 20nm node) need even stricter temperature control?

Temperature is key not only to stability in terms of clockspeed and power consumption, but also to the lifetime reliability of the CPU.

Today's CPUs can be expected to work for a good 10yrs if kept at reasonable temperatures. But as we shrink the transistors and wires in those chips on future nodes it becomes all the more challenging to make those products reliable enough to last 5-10yrs if they are going to operate at elevated temperatures.

Because most degradation mechanisms in solid-state CMOS are kinetically activated, they adhere to Arrhenius-type models.

So an easy rule of thumb is that for every 10C cooler you can make the CPU operate, it will function twice as long before it dies.

For example, let's say a CPU can be expected to last 3yrs if operated at 70C. Decrease the temperature to 60C and you can reasonably expect the operating lifetime to double from 3yrs to 6yrs.

Decrease it another 10C, to 50C, and your chip can be expected to last 12yrs (another doubling).

It is pretty amazing how steep the temperature versus reliability curve is, but that is what exponentials do for you.
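The doubling rule is easy to put into code (a sketch of the rule of thumb only; real reliability models also depend on voltage, activation energy, and the specific failure mechanism):

```python
# Rule of thumb from the post: CPU lifetime doubles for every 10C
# drop in operating temperature (an Arrhenius-style heuristic).

def expected_lifetime_years(temp_c, ref_temp_c=70.0, ref_life_years=3.0):
    """Double the reference lifetime for every 10C below ref_temp_c."""
    return ref_life_years * 2 ** ((ref_temp_c - temp_c) / 10.0)

for t in (70, 60, 50):
    print(f"{t}C -> ~{expected_lifetime_years(t):.0f} years")
# 70C -> ~3, 60C -> ~6, 50C -> ~12, matching the worked example above.
```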
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I understand, and I did not mean to imply that Intel's 22nm process was smoke and mirrors or anything like that.

I guess my real question is this: where before the node labels were larger than the real minimum feature size, has this now swung around in the other direction? Based on what I've read, it seems likely that the 14nm node will have minimum feature sizes larger than 14nm.

It has swung around in the other direction. Gate lengths rapidly fell to ~35nm at the 65nm node and haven't really gotten much smaller since.

So now with the 22nm node we see fins that are on the order of ~25nm wide.

Check out this presentation; it provides a very nice overview of the challenges and realities in today's node-scaling efforts. Specifically note pages 4 and 9.

[attached slides: node scaling, 90nm through 32nm]
 

Xpage

Senior member
Jun 22, 2005
459
15
81
Even in SRAM the smallest allowed xtors aren't routinely used. Intel's L1$ always uses larger xtors than their L3$, even though they could have a larger L1$ (in total bits) if they used the slower/denser L3$ SRAM in their L1$ design.

The trade-offs are all determined and calculated during the development phase of the node itself. It is one of the guiding aspects of node development; otherwise imbalances would be introduced by spending too much R&D effort optimizing one parameter at the expense of not improving another enough.

Maybe what is missing here is an understanding of how xtor dimensions factor into the design of a circuit, and why both dimensions (gate length and gate width) are variables to be optimized during design rather than fixed minimum values set by a given node.

So the L1$ issues AMD has (usually a slower L1$) could probably be fixed by, say, enlarging the transistors to allow more current and therefore faster switching. Of course, the L1$ would then be less dense.
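A crude first-order model shows the sizing tradeoff being described here (all constants are invented for illustration; this is nothing like a real SRAM design flow):

```python
# First-order view of sizing an SRAM transistor: on-current scales
# roughly with W/L, so a wider device drives its load faster but
# costs more area. Constants are invented for illustration.

def cell_metrics(width_nm, length_nm=30.0, load=2.0, i_per_wl=0.5):
    """Return (relative delay, relative area) for a given gate width."""
    drive = i_per_wl * width_nm / length_nm  # I_on ~ W/L
    delay = load / drive                     # t ~ C*V / I, V folded into C
    area = width_nm * length_nm
    return delay, area

for w in (60, 90, 120):  # dense L3$-style sizing up to fast L1$-style
    delay, area = cell_metrics(w)
    print(f"W={w}nm: delay {delay:.2f} (a.u.), cell area {area:.0f} nm^2")
# Wider xtor -> lower delay but a bigger, less dense cache: the
# L1$-versus-L3$ tradeoff in miniature.
```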



I find it amazing that we might be able to have a 1.2nm process. And don't get me wrong, I'm not doubting you one bit; I'm just amazed. If you consider that the van der Waals radius of a Si atom is 210pm, the diameter is 420pm. 1200pm = 1.2nm, which means a part of the node at this dimension would be only about 3 atoms wide!

Even at 12nm, which we know is doable, we're looking at a purpose-built structure that is about 30 atoms wide. It's simply amazing when you consider that at this size scale, absolute position and velocity become very fuzzy parameters.

But then again, if you can actually get the atoms in the correct position (and this may be the best example of "easier said than done" in the world), then the things doing the work are the electrons, and they are much, much smaller than the atoms.


I have trouble believing that major leakage won't occur with transistors three atoms wide or long; I don't think that will ever be feasible. Also, wouldn't you have to use the distance between Si atoms in a lattice as the minimum distance?
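The lattice-spacing point is worth quantifying. Here is a small sketch counting atoms across a feature two ways, using standard silicon constants (the arithmetic framing is my own illustration):

```python
# How many silicon atoms span a feature of a given width? Two
# yardsticks: the van der Waals diameter (~0.42 nm) and the Si-Si
# nearest-neighbor distance in the diamond lattice (~0.235 nm,
# derived from the 0.543 nm lattice constant).

VDW_DIAMETER_NM = 0.42
SI_SI_SPACING_NM = 0.543 * 3 ** 0.5 / 4  # ~0.235 nm

for feature_nm in (12.0, 1.2):
    by_vdw = feature_nm / VDW_DIAMETER_NM
    by_lattice = feature_nm / SI_SI_SPACING_NM
    print(f"{feature_nm} nm feature: ~{by_vdw:.0f} atoms by vdW diameter, "
          f"~{by_lattice:.0f} by lattice spacing")
# The two yardsticks disagree by nearly 2x, which is exactly why the
# in-lattice atomic spacing matters for any "N atoms wide" claim.
```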

Thus I am now curious how AMD will add 50% more L1$ without sacrificing speed/latency, unless the electrical characteristics are vastly improved on the 28nm node that Steamroller will be built on; although I root for AMD, I don't think GlobalFoundries is that competent.
 

Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
It has swung around in the other direction. Gate lengths rapidly fell to ~35nm at the 65nm node and haven't really gotten much smaller since.

So now with the 22nm node we see fins that are on the order of ~25nm wide.

Check out this presentation; it provides a very nice overview of the challenges and realities in today's node-scaling efforts. Specifically note pages 4 and 9.

Thanks for the confirmation.

Your link is a bit messed up; for anyone interested, try this.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I have trouble believing that major leakage won't occur with transistors three atoms wide or long; I don't think that will ever be feasible.

This is a basic, everyday reality in the field of electrochemistry, where you need single-electron transfers from one atom to another at the surface of the item you are plating.

Get the voltage too low (below the work potential of the atom involved) and electron transfer will not happen. Zero leakage.

This is what I meant in one of my posts above when I wrote that we don't need new physics or chemistry in order to work at those length scales. The innovation that is to come will be in the electrical nature of the compute devices themselves.

Look at what it takes to make an SLC NAND device function as an MLC or TLC device. The innovation there was not in making tiny flash cells but in making efficient ECC-type algorithms that can work with the reality that the device is a continually changing electrical beast.
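A toy model of why that is hard (hypothetical numbers; real NAND involves far more engineering than this): squeezing 2 bits into one cell means four threshold-voltage bins, and as the bins crowd together, drifting cells land in the wrong bin, producing the raw errors the ECC layer has to absorb.

```python
import random

# Toy MLC model: 2 bits per cell encoded as 4 nominal threshold
# voltages. Drift/noise is modeled as gaussian; values are invented.
LEVELS = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 2.0, (1, 1): 3.0}

def program(bits, sigma=0.3):
    """Write 2 bits; the stored voltage drifts around its target."""
    return LEVELS[bits] + random.gauss(0.0, sigma)

def read(voltage):
    """Read back by picking the nearest nominal level."""
    return min(LEVELS, key=lambda b: abs(LEVELS[b] - voltage))

# Count raw (pre-ECC) errors: with levels 1.0 apart and sigma=0.3,
# a noticeable fraction of reads land in the wrong bin.
trials = 100_000
errors = sum(read(program((1, 0))) != (1, 0) for _ in range(trials))
print(f"raw symbol error rate: {errors / trials:.2%}")
# SLC would space its 2 levels much further apart, which is why MLC
# and TLC lean so heavily on ECC.
```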
 

piasabird

Lifer
Feb 6, 2002
17,168
60
91
This whole premise is based on the idea that thinner silicon is the only way to improve a processor or a computer. Computers are more complex than that. For instance, a Blu-ray optical drive is a dinosaur: the discs are too easy to damage, the drives are too bulky, and the truth is that these drives exist to make the digital-rights groups happy, not the consumer. Bulky drives are the one thing, other than bulky video cards, that prohibits the manufacture of smaller computers. If you could make smaller computers, just imagine all the wasteful giant computer cases you could get rid of!
 
Apr 21, 2012
125
0
76
If AMD goes under, I don't think it's going to be simply because of Intel. I'm sure most people here read the article floating around a few days ago about tablets outselling PCs within a few years, so even if AMD does shut down, the PC market is going to be a lot smaller than it is today, and that's probably why Intel is focusing so heavily on reducing power usage. I'm guessing it'll be Nvidia vs. Intel in the PC market, and probably a bunch of competitors in the tablet market, though I think the next few Atom generations have the potential to upset that market as well.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
I don't think people realize how much the market has already consolidated.

Here's a list of who makes (or used to make) x86 chips:

Intel
AMD
VIA
Transmeta (discontinued its x86 line)
Rise Technology (acquired by SiS)
IDT (Centaur Technology x86 division acquired by VIA)
National Semiconductor (sold the x86 PC designs to VIA and later the x86 embedded designs to AMD)
Cyrix (acquired by National Semiconductor)
NexGen (acquired by AMD)
Chips and Technologies (acquired by Intel)
IBM (discontinued its own x86 line)
UMC (discontinued its x86 line)
NEC (discontinued its x86 line)

The only one left is VIA. They're shipping on 40nm process tech, mostly to cheap white-box companies for thin clients and embedded systems, and only have 0.2 - 0.3% of the PC market.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I don't think people realize how much the market has already consolidated.

Here's a list of who makes (or used to make) x86 chips:

Intel
AMD
VIA
Transmeta (discontinued its x86 line)
Rise Technology (acquired by SiS)
IDT (Centaur Technology x86 division acquired by VIA)
National Semiconductor (sold the x86 PC designs to VIA and later the x86 embedded designs to AMD)
Cyrix (acquired by National Semiconductor)
NexGen (acquired by AMD)
Chips and Technologies (acquired by Intel)
IBM (discontinued its own x86 line)
UMC (discontinued its x86 line)
NEC (discontinued its x86 line)

The only one left is VIA. They're shipping on 40nm process tech, mostly to cheap white-box companies for thin clients and embedded systems, and only have 0.2 - 0.3% of the PC market.

You can add Texas Instruments to that list.
 

lagokc

Senior member
Mar 27, 2013
808
1
41
I don't think people realize how much the market has already consolidated.

Here's a list of who makes (or used to make) x86 chips:

Transmeta (discontinued its x86 line)
NexGen (acquired by AMD)

Transmeta never made an x86 chip; they made VLIW chips designed to emulate x86 quickly in software, and it didn't really work out for them. Also, NexGen never made a chip; they merely had the design for one.
 

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0

A "lets wait and see" approach is recommended for GlobalFoundries.

On the whole I believe that what will end CMOS is the lack of gains for the cost involved. TSMC's 20nm is probably not that much better than their 28nm, maybe Intel's 14nm isn't much better than their 22nm. Sooner or later the increasing billions needed for transistor shrinking R&D will bring it to an end.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
TSMC's 20nm allows 90% more transistors or 30% lower power consumption. I think that is much better than their 28nm process...
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
On the whole I believe that what will end CMOS is the lack of gains for the cost involved. TSMC's 20nm is probably not that much better than their 28nm, and maybe Intel's 14nm isn't much better than their 22nm. Sooner or later the increasing billions needed for transistor-shrinking R&D will bring it to an end.
This :whistle:

Now, moving on to graphene, I can't wait for terahertz computing :p
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
A "lets wait and see" approach is recommended for GlobalFoundries.

On the whole I believe that what will end CMOS is the lack of gains for the cost involved. TSMC's 20nm is probably not that much better than their 28nm, maybe Intel's 14nm isn't much better than their 22nm. Sooner or later the increasing billions needed for transistor shrinking R&D will bring it to an end.
That has been the rate-limiting factor in node scaling since the '60s.

There is a reason newer nodes didn't come any faster than they did.

So I'd definitely say it is safe to conclude it will continue to be the primary limitation for node evolution going forward ;)