Why GT300 isn't going to make 2009


Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Zstream
Originally posted by: Idontcare
This is really like three articles rolled up into one.

First let me point out the obvious:
If you do the math, the shrink from 65nm to 55nm ((55 * 55) / (65 *65) ~= 0.72) saves you about 1/4 the area, that is, 55nm is 0.72 of the area of 65nm for the same transistor count. 55nm shrunk to 40nm gives you 0.53 of the area, and 65nm shrunk to 40nm gives you 0.38 of the area.

This is simply an embarrassment. I'm embarrassed for Charlie because he doesn't even know why he should feel embarrassed for publishing this "do the math".

Node labels are simply nothing more than that. They could call it the Apple node, the Pear node, and the Banana node for all the node label means anymore. A "55nm" node has nothing to do with the number "55" or the units "nm". Go to Home Depot and buy yourself a 2x4 and measure it, does it measure 2 inches by 4 inches? No, the label "2x4" is not meant to mean anything mathematical or numeric beyond the basic logic that "a 2x4 is smaller than a 4x4 is smaller than a 4x6" etc.

Umm, a 2x4 is two inches by four inches, or replace inches with units. All of the die sizes listed above serve as an average number.

I have no doubt Charlie thinks his "do the math" is absolutely correct, just as I am sure you are absolutely convinced a "2x4" physically measures 2 inches by 4 inches.

But you've never measured a 2x4 (have you?), and Charlie has never been involved in any of the hands-on aspects of the chip business, hence he has no clue that he has no clue about what he thinks he is expertly talking about.

I see BenSkywalker proved that out as well regarding the DX11 stuff. This guy Charlie is a legend in his own mind, the once and forever king.

It's only natural to be ignorant of stuff in life; when we are first born we are ignorant of all things in life. But Charlie's arrogance prevents him from extinguishing his ignorance in the very technological field in which he endeavors to be a technical writer...another thing Ben aptly hits on with the comment regarding whether or not Charlie actually understands the insider info his sources are trying to pass on to him. We've seen it time and time again from this guy, he just keeps dancing in the aisles.

The bump shear problem was perhaps one of the best trade-journal documentations of a legitimate stress-fatigue failure mechanism in modern IC's that I have seen outside of work. Charlie really did team up with the right insiders with the right background to nail that problem right down to the root cause. How much of it was Charlie and how much of it was ATI engineers doing everything they could to help Charlie publicize NV's problems (heck, I'd do it if I worked for ATI :laugh:) we will never know. But for sure Charlie has failed to replicate this solitary temporary phase of genius that he appeared to have for a short time.

Originally posted by: SlowSpyder
IDC, I don't suppose you have any insider info, or could give us an educated guess on the cost to build some of the current cards?

When the 8800GT was launched it was usually around $200 (a little more if I remember right) and I have no doubt Nvidia made a ton of money on them. The 4870 has a similarly sized GPU, uses a 256-bit memory connection, and with both being considered mid-to-lower parts of the high-end, I would imagine they use similar quality components.

Won't talk about the insider stuff for obvious reasons, but the educated guesses on costs are actually something we can ballpark based on conference-call admissions on gross margins. It is a crude estimation because GM comments apply to their entire product mix, whereas we know different SKU's will have differing GM's. But it's better than nothing.

Keep in mind making a ton of money on one SKU is not the same as selling enough product across the board at high-enough gross margins to generate a self-sustaining business going forward. There is "cost to manufacture" and then there is "cost to produce". Producing a part includes R&D investments, admin, sales overhead, distribution and marketing, etc. Manufacturing the part merely includes the bill of materials; gross margins usually only speak to the bill-of-materials aspects of the business (not exactly, but close enough for this level of detail in a forum conversation).

The incremental cost of producing one more successive unit is pretty low provided the production capacity already exists.

The BoM to manufacture the HD4770 for example is somewhere around $60, slightly higher if GPU yields are worse than my expectation at the moment or slightly lower if GPU yields are coming up as TSMC has said they would. The gross margins for this SKU have got to be fairly low (maybe 30-35%?) once the resellers take their profits from the markup.

Keep in mind I am not saying these guys need to double their prices so they stay in business. A 10% higher price at retail can translate into 20-30% higher GM's at AMD's and NV's end of the business.
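To put numbers on that ballpark, here is a minimal sketch of the arithmetic. Only the ~$60 BoM and the 30-35% GM guess come from this post; the retail price and reseller cut are hypothetical placeholders, not insider numbers.

```python
# Back-of-envelope gross-margin math for a single SKU. Only the ~$60 BoM
# estimate and the 30-35% GM guess come from the post above; the retail
# price and reseller cut are hypothetical placeholders.

def gross_margin(retail, reseller_cut, bom):
    """GM at the vendor's end, after the reseller takes its markup."""
    asp = retail * (1 - reseller_cut)  # what AMD/NV actually books
    return (asp - bom) / asp

bom = 60.0           # estimated HD4770 bill of materials, USD
retail = 105.0       # hypothetical shelf price
reseller_cut = 0.15  # hypothetical reseller markup fraction

gm = gross_margin(retail, reseller_cut, bom)
gm_up = gross_margin(retail * 1.10, reseller_cut, bom)   # +10% at retail
print(f"GM at ${retail:.0f} retail: {gm:.0%}")           # ~33%
print(f"GM at ${retail * 1.10:.0f} retail: {gm_up:.0%}") # ~39%
print(f"Relative GM increase: {(gm_up - gm) / gm:.0%}")  # ~19%
```

Note the leverage: the BoM stays fixed, so nearly all of a retail price bump flows straight through to the vendor's gross-margin line.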

There, did I sufficiently dodge the question? ;) :p

Originally posted by: Just learning
Do you think Nvidia having CUDA implemented in its hardware is significantly affecting their performance/$ ratio? (with respect to Games)

If this is true....then I hope someone is able to write some legitimately useful non-gaming software for CUDA.

It's a fair question. Presumably NV's gaming driver engineers actually program the drivers to take advantage of the general-purpose parts of the GPU to their maximum benefit. I would shudder at the idea that the GP part of the GPGPU is truly sitting idle while gaming; I doubt that is happening.

But a different question might be "had NV's chip architecture engineers been able to use the same xtor budget to craft the GT300 sans any and all CUDA general purpose support, would the resultant beast of a GPU deliver significantly better performance/$ in games?"

That answer has to be a yes; dedicated hardware will always trump general-purpose hardware when it comes specifically to the tasks the dedicated hardware is designed to do. If given the choice between computing the square root of Pi on an x86 CPU versus building a dedicated IC solely capable of computing SQRT(PI), the dedicated hardware will always win on performance...(not performance/$ though, in this case, as the volume of the general processor drives down the cost per unit, hence the entire reason the market exists).

Look at Nehalem vs. Penryn for gaming...your performance/$ when it comes to just gaming is better maximized with the tired old FSB and older architecture than with the new CPU. (Not talking about the extreme corners where tri-SLI edges out, etc.; performance/$ is the subject here.)
 

poohbear

Platinum Member
Mar 11, 2003
2,284
5
81
is it just me or has GPU development slowed to a crawl? i recall it was a refresh every 6 months or so & a new generation product every year which DOUBLED the fps. now we've seen refreshes of the same GPU architecture for over 2 years, and i have yet to see a vid card that doubles the fps of my 8800gt. i mean, the GTS 250 is essentially an 8800gt refreshed 2 times. :(

mind u, the price to performance ratio of current vid cards is the best its ever been, so i guess its a nice balance? if u told me 3 years ago that i could get a vid card that plays most games maxed @ 1280x1024 for $100 i would've laughed at ya, and now its reality.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: poohbear
is it just me or has GPU development slowed to a crawl? i recall it was a refresh every 6 months or so & a new generation product every year which DOUBLED the fps. now we've seen refreshes of the same GPU architecture for over 2 years, and i have yet to see a vid card that doubles the fps of my 8800gt. i mean, the GTS 250 is essentially an 8800gt refreshed 2 times. :(

mind u, the price to performance ratio of current vid cards is the best its ever been, so i guess its a nice balance? if u told me 3 years ago that i could get a vid card that plays most games maxed @ 1280x1024 for $100 i would've laughed at ya, and now its reality.

There was a considerable difference in the scope and complexity of the GPU's a decade ago versus what the engineers are challenged with nowadays.

http://pc.watch.impress.co.jp/...html/kaigai_1.jpg.html

Personally I am astounded at the torrid pace at which ATI and NV manage to still cycle thru new architectures and ISA's.
 

Leyawiin

Diamond Member
Nov 11, 2008
3,204
52
91
Originally posted by: OCguy
Someone posted this in the GT300 thread already, and it was discussed there.

Yes, but it must be assured that we won't miss it, and it has to be posted in its entirety (without quotes) just in case we don't want to click the link once we see the Inquirer URL. This - is - important.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
While the current AMD tessellator found in the 2000, 3000 and 4000 series cards may not be fully DX11 compatible, I was under the impression that they will be able to perform the majority of the functions of a full DX11 tessellator. Or is that not true?

It is true that it can perform most of the tasks of DX11, but making it an actual DX11 tessellator is going to take more than slight modifications. There are two new shader types that the DX11 tessellator requires that the current ATi parts don't support: Hull and Domain shaders. While I would wager ATi will save development time over designing a tessellator from the ground up, the functionality of these shaders built into the unit is going to require a rather heavy reworking of the current tessellator to be compliant.

He seems to have hit pretty close to the mark with his insider info regarding the failing laptops with Nvidia GPUs.

Court documents and SEC filings would have given people most of the same information Charlie had, but besides that, getting info from an OEM is quite different than getting it from an IHV.
 

amenx

Diamond Member
Dec 17, 2004
4,531
2,867
136
Originally posted by: Keysplayr
Originally posted by: Zstream
Originally posted by: Idontcare
This is really like three articles rolled up into one.

First let me point out the obvious:
If you do the math, the shrink from 65nm to 55nm ((55 * 55) / (65 *65) ~= 0.72) saves you about 1/4 the area, that is, 55nm is 0.72 of the area of 65nm for the same transistor count. 55nm shrunk to 40nm gives you 0.53 of the area, and 65nm shrunk to 40nm gives you 0.38 of the area.

This is simply an embarrassment. I'm embarrassed for Charlie because he doesn't even know why he should feel embarrassed for publishing this "do the math".

Node labels are simply nothing more than that. They could call it the Apple node, the Pear node, and the Banana node for all the node label means anymore. A "55nm" node has nothing to do with the number "55" or the units "nm". Go to Home Depot and buy yourself a 2x4 and measure it, does it measure 2 inches by 4 inches? No, the label "2x4" is not meant to mean anything mathematical or numeric beyond the basic logic that "a 2x4 is smaller than a 4x4 is smaller than a 4x6" etc.

Umm, a 2x4 is two inches by four inches, or replace inches with units. All of the die sizes listed above serve as an average number.

No, Zstream. A standard 2x4 (in the continental United States at least) is actually 1 1/2 to 1 5/8 inches by 3 1/2 to 3 5/8 inches, depending on the mill. No 2x4 is 2 inches by 4 inches, unless you go back to the late 1800's or early 1900's.

Yes, I was a carpenter. And yes, this is what Idontcare was referring to with his analogy.
Sorry, couldn't help responding here. What you're talking about is S4S lumber (smoothed 4 sides), where the smoothing/planing process shaves the size of the 2x4 down. But rough-cut lumber comes in actual 2x4 size and is sold to those who prefer it that way. ;)
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
And yet, what you end up with is a 1 5/8 x 3 1/2 piece of lumber called a 2x4.
What you mentioned could be an exception to the norm. I've never heard of anyone caring about specifically having raw-cut lumber anyway. They buy studs; they're called 2x4s, but aren't.
S4S lumber, if that's the "proper" name for it, is the norm nationwide. You go into Home Depot, that's what you get. A local family lumber yard, that's what you get.
So, the standard "S4S" lumber is what IDC was referring to.

And IDC, I'm going to beat you if you use another lumber analogy!!! :D
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Keysplayr
And yet, what you end up with is a 1 5/8 x 3 1/2 piece of lumber called a 2x4.
What you mentioned could be an exception to the norm. I've never heard of anyone caring about specifically having raw-cut lumber anyway. They buy studs; they're called 2x4s, but aren't.
S4S lumber, if that's the "proper" name for it, is the norm nationwide. You go into Home Depot, that's what you get. A local family lumber yard, that's what you get.
So, the standard "S4S" lumber is what IDC was referring to.

And IDC, I'm going to beat you if you use another lumber analogy!!! :D

Hey :laugh: I was very specific in my example...go to Home Depot and buy a "2x4". It's on the shelf and that is what it is labeled as.

I also bought a 1"x10" plank...it was labeled 1"x10" on Home Depot shelf...I can tell you it absolutely does not measure 1" by 10". (but you know that already)

I was using a real-world example that I have firsthand experience with, and that I knew just about any forum reader could relate to, as an example of what I was trying to say about node labels, because most people can go to a lumber reseller or Home Depot and see for themselves. Same with node labeling; I spent years working on it. I thought the 2x4 analogy was a good one.

How many people are going to go to a mill and request rough-cut lumber just so they can measure a 2x4 and call shens on IDC? Only one poster that I can think of would go to the trouble to antagonize me like that, but for everyone else that is interested in "getting to know your node labels" the Home Depot 2x4 analogy is a pretty good learning aid, I thought.

But yeah, lesson learned. After having taught chemistry for 5 yrs I should have expected the unavoidable "uh, Mr teacher sir, your information isn't quite exactly 100% technically correct" response from at least one person in the crowd. It is unavoidable in any group setting once there are more than about 15 people involved. (again speaking from personal experience) ;) :p
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
Originally posted by: Idontcare
But yeah, lesson learned. After having taught chemistry for 5 yrs I should have expected the unavoidable "uh, Mr teacher sir, your information isn't quite exactly 100% technically correct" response from at least one person in the crowd. It is unavoidable in any group setting once there are more than about 15 people involved. (again speaking from personal experience) ;) :p

Chemistry teacher? Seriously? Wow. Favorite subject in HS (had an awesome & inspiring teacher), so much so that I went on and got a degree in it.

And I love to see you guys rip apart a "Charlie" article like this. Good reading!
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Denithor
Originally posted by: Idontcare
But yeah, lesson learned. After having taught chemistry for 5 yrs I should have expected the unavoidable "uh, Mr teacher sir, your information isn't quite exactly 100% technically correct" response from at least one person in the crowd. It is unavoidable in any group setting once there are more than about 15 people involved. (again speaking from personal experience) ;) :p

Chemistry teacher? Seriously? Wow. Favorite subject in HS (had an awesome & inspiring teacher), so much so that I went on and got a degree in it.

And I love to see you guys rip apart a "Charlie" article like this. Good reading!

Oh yeah, 5 yrs teaching freshman chemistry at university, in my younger years. I seriously considered pursuing it to tenure, but I was lured away to industry by the promise of fat checks and an invigorating career. The checks were fat, but the career not so much :laugh: I enjoyed teaching very much, but it gave me an early mid-life crisis when I realized one fall semester that the "kids" I was teaching were born when I was in high school. Experiencing that first sensation of generation gap, ouch.

Being a chemist as you are, you know what I mean when I say that you never look at the world the way your non-chemist friends and family see it. You see molecules and reactions everywhere you look, from the soda you drink to the leaves on the trees. Sometimes it feels like a blessing; other times you wish life had remained simple, where leaves were just leaves (and not light harvesters for photochemical reactions) and soda was still just soda and not some unhealthy brew of phosphoric and carbonic acid :p ;)
 

yacoub

Golden Member
May 24, 2005
1,991
14
81
Originally posted by: Chaotic42
Now ATI just needs to get their drivers straight and we might be okay.

Hasn't happened in YEARS so I don't put much hope in that. I'd love to own an ATi card... would give me better bang-for-the-buck and help support the underdog. Alas, their drivers are crap and cause so many frustrations I don't wish to return to after enjoying stable NVidia drivers for the past couple years. I have to rule out ATi cards until they can get their drivers sorted, which hasn't happened yet.
Heck, just look at the 9.5 Catalyst thread on the front page of this forum. People advocate waiting another month for 9.6 because 9.5 appears worse than 9.4. It's the same cycle every month: "wait for next month's drivers". No thanks, I'd rather not have the added stress and frustration. I just want my GPU to play the games I bought it to be able to play. NVidia has managed to do that for me with 100% consistency and stability for the past three years (7900GT and 8800GT). ATi never managed to be completely stable for any of the generations of their cards I owned, including the 9600Pro, 9800Pro, and X800XL. They also mounted hairdryers on their cards, which was an added incentive to convert to NVidia. Hot and loud is a very close second to "able to actually play the games".
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
Originally posted by: yacoub
It's the same cycle every month: "wait for next month's drivers". No thanks, I'd rather not have the added stress and frustration. I just want my GPU to play the games I bought it to be able to play.

Easy solution: don't worry so much about performance/$ and just buy more card than you really need for a given game/resolution. Even with bad drivers a 4870X2 or GTX295 will cruise through practically anything you can throw at it.
 

magreen

Golden Member
Dec 27, 2006
1,309
1
81
Originally posted by: Keysplayr
Originally posted by: Zstream
Originally posted by: Idontcare
This is really like three articles rolled up into one.

First let me point out the obvious:
If you do the math, the shrink from 65nm to 55nm ((55 * 55) / (65 *65) ~= 0.72) saves you about 1/4 the area, that is, 55nm is 0.72 of the area of 65nm for the same transistor count. 55nm shrunk to 40nm gives you 0.53 of the area, and 65nm shrunk to 40nm gives you 0.38 of the area.

This is simply an embarrassment. I'm embarrassed for Charlie because he doesn't even know why he should feel embarrassed for publishing this "do the math".

Node labels are simply nothing more than that. They could call it the Apple node, the Pear node, and the Banana node for all the node label means anymore. A "55nm" node has nothing to do with the number "55" or the units "nm". Go to Home Depot and buy yourself a 2x4 and measure it, does it measure 2 inches by 4 inches? No, the label "2x4" is not meant to mean anything mathematical or numeric beyond the basic logic that "a 2x4 is smaller than a 4x4 is smaller than a 4x6" etc.

Umm, a 2x4 is two inches by four inches, or replace inches with units. All of the die sizes listed above serve as an average number.

No, Zstream. A standard 2x4 (in the continental United States at least) is actually 1 1/2 to 1 5/8 inches by 3 1/2 to 3 5/8 inches, depending on the mill. No 2x4 is 2 inches by 4 inches, unless you go back to the late 1800's or early 1900's.

Yes, I was a carpenter. And yes, this is what Idontcare was referring to with his analogy.

Not to beat a dead horse :p but...

I think Zstream didn't mean a 2x4 is actually 2 inches by 4 inches. Rather he meant that you could replace "inches" with some other unit (such as 3/4 of an inch) and the 2x4 would really be 2 units x 4 units, so you could take it to be an accurate relative measure of size. But even there it's not accurate, since 2x4 implies a 1:2 ratio, and 1.5:3.5 != 1:2.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: magreen
Not to beat a dead horse :p but...

I think Zstream didn't mean a 2x4 is actually 2 inches by 4 inches. Rather he meant that you could replace "inches" with some other unit (such as 3/4 of an inch) and the 2x4 would really be 2 units x 4 units, so you could take it to be an accurate relative measure of size. But even there it's not accurate, since 2x4 implies a 1:2 ratio, and 1.5:3.5 != 1:2.

The point is to know when a label means nothing more than a label.

If I said the RV740 was manufactured on TSMC's latest and greatest process node, dubbed the pear node, it would be just as useful to you as if I called it the 40nm node.

But because of legacy reasons nearly everyone runs around treating the node label as if it had some direct mathematical physical bearing to something about the device features.

40nm? That's a number right, so I cans divide 40/55 and haz charburger afterwards? The only physical significance of the label "40nm" is that some features (some, not all) are smaller than their equivalents on the preceding 55nm node. They aren't 40/55 = 0.727x smaller. In fact, if you happened to find a feature that scaled by that exact amount, it would have been by happenstance and not by some over-arching design requirement out of adherence to Moore's law or some such.
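For the curious, the quoted "do the math" is nothing more than squaring the label ratio. The arithmetic itself is fine; the broken assumption is that the labels are measured dimensions.

```python
# The article's "do the math": treat node labels as literal linear
# dimensions and square the ratio to get relative die area for a fixed
# transistor count. The arithmetic is trivial; the assumption that the
# labels are measured feature sizes is the part being disputed here.

def naive_area_ratio(new_label_nm, old_label_nm):
    """Relative area IF every feature shrank by the label ratio."""
    return (new_label_nm / old_label_nm) ** 2

print(f"{naive_area_ratio(55, 65):.2f}")  # 0.72 (the 65nm -> 55nm claim)
print(f"{naive_area_ratio(40, 55):.2f}")  # 0.53
print(f"{naive_area_ratio(40, 65):.2f}")  # 0.38
```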

At TI we internally tracked our nodes by the measurement of a physical feature...in our case it was the metal-1 pitch. What we publicly called our 130nm node was internally called C035 because the metal-1 pitch was 350nm (0.35um, hence the 035...the "C" was for CMOS). 90nm was C027 (270nm pitch), etc. I've seen other companies internally iterate their nodes by contact pitch, some by gate pitch, some by minimum printed linewidth for the gate ("as drawn"), others still by post-etch gate width, etc.
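As a toy illustration of that internal-vs-public split (only the 130nm/C035 and 90nm/C027 pairs come from the TI example above; the rest is just formatting):

```python
# Internal node names track a measured feature (here, metal-1 pitch, per
# the TI example above); the public label is just a name. Only the
# C035/C027 pairs come from the post; the printout is illustrative.

internal_names = {
    # public label: (internal name, metal-1 pitch in nm)
    "130nm": ("C035", 350),  # 0.35um pitch -> "035"; "C" for CMOS
    "90nm":  ("C027", 270),
}

for label, (name, pitch) in internal_names.items():
    print(f"{label}: internal {name}, metal-1 pitch {pitch}nm "
          f"(no feature on the chip actually measures {label})")
```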

But it is just a label; you can't treat it like a number (regardless of the units), perform mathematical functions with it, and come out with anything meaningful whatsoever. The label "2x4" was just an example of this that most people can experience themselves by going to Home Depot, whereas not many people can firsthand experience why a node label is just a label, hence the analogy.
 

magreen

Golden Member
Dec 27, 2006
1,309
1
81
Lol @charburger :laugh:

Right, I was actually agreeing with you, and justifying your 2x4 analogy, which I happen to think was a good one. Thanks for the infoz!
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
So IDC, what DOES TSMC, Intel, or w/e company mean by 65/55/40nm etc? What does Intel mean by 32nm? Nothing? Is there not some kind of relationship between how many transistors fit on a 100mm2 die with, let's say, 65nm versus 55nm? Also, most foundries seem to hang on to the same well-rounded numbers, but there are production nodes referred to as 48nm (RAM) for example. Could the 45nm and 48nm be the same, but with a different measurement used (contact pitch vs. gate pitch, for example)?
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
If GT300 is released in 2009, I am starting a motion to block links to that website.

Claiming NV was going to have partners hide the 55nm 216s should have been the last straw.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Originally posted by: OCguy
If GT300 is released in 2009, I am starting a motion to block links to that website.

Claiming NV was going to have partners hide the 55nm 216s should have been the last straw.

Did you know this is actually 'somewhat' happening? In Holland it's pretty hard to determine whether a GTX260 is an old 65nm one or a 55nm one. You'd have to go compare it with the AIB's website, if you get that far. On newegg.com, you can see whether it's a 192 or 216c version right away.
 

yacoub

Golden Member
May 24, 2005
1,991
14
81
Originally posted by: Denithor
Originally posted by: yacoub
It's the same cycle every month: "wait for next month's drivers". No thanks, I'd rather not have the added stress and frustration. I just want my GPU to play the games I bought it to be able to play.

Easy solution: don't worry so much about performance/$ and just buy more card than you really need for a given game/resolution. Even with bad drivers a 4870X2 or GTX295 will cruise through practically anything you can throw at it.

No, I mean play them without crashing. Flaky ATi drivers always caused instability in games.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Originally posted by: MarcVenice
Originally posted by: OCguy
If GT300 is released in 2009, I am starting a motion to block links to that website.

Claiming NV was going to have partners hide the 55nm 216s should have been the last straw.

Did you know this is actually 'somewhat' happening? In Holland it's pretty hard to determine whether a GTX260 is an old 65nm one or a 55nm one. You'd have to go compare it with the AIB's website, if you get that far. On newegg.com, you can see whether it's a 192 or 216c version right away.


Sorry, his article didn't say "In Holland it may be somewhat hard to tell for the average consumer."

It implied there would be a system in place to make sure they could clear the 65nm stock. I think EVGA put 55nm on the freaking box.
 

videogames101

Diamond Member
Aug 24, 2005
6,783
27
91
Originally posted by: yacoub
Originally posted by: Denithor
Originally posted by: yacoub
It's the same cycle every month: "wait for next month's drivers". No thanks, I'd rather not have the added stress and frustration. I just want my GPU to play the games I bought it to be able to play.

Easy solution: don't worry so much about performance/$ and just buy more card than you really need for a given game/resolution. Even with bad drivers a 4870X2 or GTX295 will cruise through practically anything you can throw at it.

No i mean play them without crashing. Flaky ATi drivers always caused instability in games.

Personally, I have never had a problem with either ATI or Nvidia drivers, and I have a feeling most other people haven't either. Unless you're actually going to cite some study about ATI vs. Nvidia average game stability, don't think your personal experience with a few ATI cards is enough to make a blanket claim like that. I hope you're not that ignorant.

On topic though, I have a question. If all these node sizes are relative, do companies just keep bullshitting us simply to keep up with competitors? Are AMD and Intel just claiming to be hitting 32nm for the marketing? I'd really like to know.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: MarcVenice
So IDC, what DOES TSMC, Intel, or w/e company mean by 65/55/40nm etc? What does Intel mean by 32nm? Nothing? Is there not some kind of relationship between how many transistors fit on a 100mm2 die with, let's say, 65nm versus 55nm? Also, most foundries seem to hang on to the same well-rounded numbers, but there are production nodes referred to as 48nm (RAM) for example. Could the 45nm and 48nm be the same, but with a different measurement used (contact pitch vs. gate pitch, for example)?

What they mean when they label their upcoming node the "32nm" node is a lot of things, but it doesn't mean it is 32/45 the size of the 45nm node.

Within a company there will be specific metrics of success, criteria if you will, for what successive nodes must accomplish. Some of the metrics relate to performance; for example, the xtors on 32nm must be capable of 20% higher Idrive than those on 45nm (just as an example). Other metrics will relate to area scaling for cost reasons, i.e. the sram cell size must be 50% smaller on the new node than on the old node. Still other metrics may simply be cost related: reduce average wafer cost by 5%, or some such.

But we "lay people" don't get this information. Its not typical marketing material and what we get is the marketing material. It has become sexy to market your sram size as a measure of establishing your technology prowess. This means little to the end-user but rather is meant to bolster confidence among shareholders.

There is no explicit relation between the node label (32nm, 55nm) and some measure of the xtor density. Once you know a company's sram cell size for whatever they call the node (i.e. let's say AMD told you their 90nm node had a 1um^2 cell size) then you could extrapolate (assume) that their next node will likely have a 0.5um^2 cell size, the node thereafter 0.25um^2, etc. But whether they call those nodes the 65nm and 45nm nodes or the pear and apple nodes doesn't really have any bearing on the sram cell size, or metal pitch, etc.
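A minimal sketch of that extrapolation, using the hypothetical numbers above (a 1um^2 cell at "90nm" and the customary ~2x area scaling per node; the labels could just as well be pear and apple):

```python
# Extrapolating sram cell size by the customary ~2x area scaling per
# node. The 1.0 um^2 starting point is the hypothetical from the post;
# the labels are interchangeable names, not inputs to the math.

cell_um2 = 1.0  # hypothetical cell size at the "90nm" node
for label in ["90nm", "65nm", "45nm", "32nm"]:  # or "pear", "apple", ...
    print(f"{label}: ~{cell_um2:.2f} um^2 sram cell")
    cell_um2 /= 2  # halve the cell area each successive node
```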

Cars are another example of this. I own a 2006 Toyota Sienna. I bought it in 2005. How was that possible? Did I time travel? It's called a 2006 after all. And I'm still trying to figure out how my neighbor's 2007 Honda Odyssey, which he bought "new" in late 2007, is not exactly 12 months newer than my van. 2007 - 2006 = 1 yr, does it not?

But you know why this "logic" is flawed: you are already conditioned to know the year designation for a car model has little to do with "time" beyond the understanding that a 2006 model car is older than a 2008 model car; beyond that linearity of the scale, the numbers themselves mean very little. You wouldn't try to do much math with the labels and expect the resultant answers to mean all that much.

Back to semiconductors: companies hold onto the idea of node labels because it is what their analysts understand, it's what their shareholders understand, and everyone "in the industry" implicitly understands what information a node label communicates and what information it does not.

The node labeling "convention" basically diverges between the logic guys and the memory guys. The logic guys adhere to the convention created decades ago. The memory guys diverged sometime after the 0.25um node (I forget exactly when) as the features of interest for them were not exactly the same as those of the logic guys.

Why the companies use the same label is purely for marketing reasons. AMD will call their 32nm node products 32nm node products so they are compared with Intel's 32nm products, etc. The same reason Toyota calls their 2010 Sienna the 2010 Sienna and Honda calls their Odyssey the 2010 Odyssey. Marketing knows how to communicate to the consumer that a 2010 anything means you are buying the latest and greatest, whereas 2009 (or 45nm) was last year's latest and greatest.

I have a similar relationship with my wife, I tell her it's a full 6 inches, she tells me she wears a size 3 dress. Neither questions the others numbers or units, it just works out :laugh: