Originally posted by: Idontcare
Originally posted by: pm
Originally posted by: Idontcare
Nowadays it's just a label meant to imply the technology has iterated. It's not meant to be a mathematical measurement of anything.
I understand the difference between Leff and Ldrawn, but to say that process technology nodes are just labels without being a measure of anything is a bit farther than I would go. SRAM cell size has decreased by roughly 50% from generation to generation for as far back as I've been in the industry. The percentages bounce around from generation to generation, but the overall trend is a 50% reduction. Decreasing SRAM cell size implies that the primary drawn features are getting ~30% smaller. And clearly, if things are getting smaller on a consistent basis, then there is something measurable being reduced.
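A minimal sketch of that last step in Python, assuming only the ~50% per-node area reduction described above: if cell area halves, each linear drawn dimension scales by the square root of the area factor, which is where the ~30% figure comes from.

```python
import math

# Rule of thumb from the thread: SRAM cell *area* shrinks ~50% per node.
area_scale = 0.5

# If area halves, each linear (drawn) dimension scales by sqrt(0.5) ~ 0.71,
# i.e. drawn features end up roughly 30% smaller in each direction.
linear_scale = math.sqrt(area_scale)
print(f"linear scale factor per node: {linear_scale:.2f}")        # ~0.71
print(f"linear feature shrink: {(1 - linear_scale) * 100:.0f}%")  # ~29%
```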
I think I know you well enough to know you don't do it intentionally, but you've made a strawman argument there. You are asserting that because successive nodes deliver consistent improvements in their deliverables (SRAM density, etc.) over past nodes, and because there are certainly physically measurable features within a given process technology (all of which is true, and none of which I have claimed to be untrue), this somehow serves as proof that my assertion about node labels being simply labels is untrue.
I don't think it's a strawman argument. Because the progress is consistent at each point, the labels have a deterministic, measurable value; thus they are not meaningless labels, but notches on a measuring stick showing incremental progress over time. Changing the labels to meaningless numbers like "854, 856, 858", which have limited descriptive value, doesn't detract from the fact that these descriptors point to a measurable value on silicon. If you know where you started, and you know that SRAM area is reduced by ~50% each generation, then you can determine with some level of accuracy what the area of a given amount of memory will be at each point.
I make the statement based on being right there in the decision rooms as new process technology nodes (something that already existed on paper as a label) had virtually every single physical and electrical attribute jockeyed around for yield/manufacturability reasons as well as for risk to timeline and development cost. There is zero influence on these decisions stemming from the label given to the node.
Agreed.
We called the successor to the 45nm node the 32nm node for the simple reason that that is what marketing and the media expected us to call the successor to our 45nm node. Had we used a different label, it would have appeared we were diverging from the rest of the industry.
Yes. Agreed that everything gets moved around and nothing gets scaled by a fixed amount - well, not since ~350nm. But the end result is that certain devices - a latch, an SRAM cell - are basically 50% (+/- 10%) smaller than they were on the previous generation. And this is something specific and measurable. Even at Intel, where processes have generic names like 854 and 856, a new name isn't given to a new process unless it shows a substantial, close-to-50% size reduction for an SRAM cell over the previous process node. There are plenty of recipes at each process node - and device characteristics can change dramatically as the guys in Oregon tweak the recipe - but it doesn't get a new process node label unless the SRAM area is ~50% smaller than it was at the last process label. I have seen substantial changes in electrical characteristics for a given process node from one iteration (called a "rev") to the next, but it doesn't get a new process node label.
When a new node's specs are being "fleshed out" there is absolutely zero decision making going on to the tune of "well guys, this is going to be called the 32nm node so that means first and foremost we need 1/2 gate pitch to be 32nm" or "this is going to be called the 32nm node so that means first and foremost we need printed gate length, Ldrawn, to be 32nm".
Well, yes, but there are certain goals that need to be achieved in terms of device characteristics, and one of these is that the main circuitry is smaller than before - ~50% smaller in terms of SRAM area. And SRAM now accounts for the largest portion of the higher-end products, and thus its area has a substantial impact on yield. So while I agree that whatever is drawn can be whatever size you guys want it to be, the end result is ~50% smaller circuitry.
There are design targets, i.e. deliverables, for a new node...a 50% area reduction for some token circuit of merit (SRAM for some), a 5x decrease in leakage, a 20% increase in Idrive, etc. They are all based on successive iterative improvements from a pre-existing node or from pre-existing specs for a node under development (N+1 vs N+2), etc.
There is a reason Intel internally refers to their 32nm node as the P1268 and 22nm as the P1270. Whether you call it 32nm or P1268, both descriptors are merely labels to differentiate the underlying process tech from prior and successive generations. And if I wanted to do false (and silly) maths with those labels, taking ratios between them, I could say things like "CPUs on 22nm will be 22/32 = 69% the size of 32nm chips"...this is wronger than wrong, but no less correct than if I said "CPUs on P1270 will be P1268/P1270 = 99.8% the size of P1268 chips". In both cases I am treating a node label, a text item and not a mathematical quantity, as if it were indeed a mathematical quantity I can do maths with.
And this gets back to my point at the top - which is that you can, with a certain amount of error, do math with it.
Take an arbitrary point in time, say, 180nm. For Intel, this process had a 5.59um^2 SRAM cell size. Now let's go forward 3 process nodes in time, 180->130->90->65nm. If you divide 5.59 by 2 three times, you get 0.7um^2, which is not too far off from the real value of 0.57um^2.
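A minimal sketch of that extrapolation in Python - the 5.59um^2 and 0.57um^2 figures are the ones quoted above, and the halve-per-node factor is the rule of thumb under discussion, not a measured value:

```python
# Halve-the-area-per-node rule of thumb, applied to the figures quoted above:
# Intel's 180nm SRAM cell was 5.59 um^2, and 65nm is three nodes later.
nodes = ["180nm", "130nm", "90nm", "65nm"]
start_area = 5.59  # um^2 at 180nm (figure quoted above)

for steps, node in enumerate(nodes):
    predicted = start_area * 0.5 ** steps  # ~50% area reduction per node step
    print(f"{node}: ~{predicted:.2f} um^2")

# 65nm prediction: 5.59 / 2**3 ~= 0.70 um^2, versus the ~0.57 um^2 actual
# value quoted above -- same ballpark, which is the point being made.
```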
The basis on my thinking on the subject is pretty much slide 6 of this presentation:
http://download.intel.com/tech...m-logic-technology.pdf
The points along that line more or less line up with the extrapolated line, and the points on it are more or less equidistant. There are some error bars, but overall it's a steady progression, and if you know one point along it, you can extrapolate the other points.
You are making the argument that things get smaller on a consistent basis (i.e. node reduction results in step-wise linear changes in deliverables), but that has nothing to do with what I am saying about the node label being nothing more than simply a label.
I'm saying that the label has some mathematical meaning at the macroscopic level - maybe just at Intel, which is the only process technology that I am very familiar with. Not in terms of drawn values at the low levels, which have lots of limitations due to OPC and phase-shifted masks and other things that I know little about, and not in terms of the size of a CPU - because that's an arbitrary thing that is determined by the market as well as a lot of other variables - but in terms of the size of specific cells on the chip. Latches are problematic to use because it depends on whether it's a scan latch or not, whether it's a multithreaded cell or not, whether it's a master-slave or a pulse latch, etc. But SRAM is used all over the chip, and it's a good figure of merit for the size of a cache.
The gist of what you are saying - if I'm understanding correctly, and you will no doubt correct me if I'm not - is that making a process recipe is a messy business, that all sorts of things get changed, and that none of these are determined by the ITRS or by anything else that you can measure with a 0.7X measuring stick consistently from one generation to the next. And I'm not debating that - you are far more of an expert in these matters than I am. But I am saying that there are goals for each process in terms of speed, and especially - at least at Intel - in terms of density, and that these goals add up to a progression over time which has a concrete, measurable, mathematical value.