
YAGFXT or What's the holdup in manufacturing NV30?

element

Diamond Member
Yet another geforceFX thread.

In this thread I'd like to know what the holdup is in manufacturing NV30? Intel's Northwood line uses the .13 micron process (do we need a mu symbol on these boards or what?)

So what is holding up NV30 when Intel is able to ship .13 parts already? Are they using different fab facilities? If so, why not use whichever fab facility has good yields on .13 parts?
 
That doesn't answer my question Deeko. Postcount-- for you.

We need a button to reduce the postcount of neffers when they post nothing of value just to postcount++. Who's with me?

I already know Nvidia uses TSMC. What I would like to know is why Intel is able to mass produce .13 parts and Nvidia is not. So who does Intel use for manufacturing?
 
Originally posted by: element®
That doesn't answer my question Deeko. Postcount-- for you.

We need a button to reduce the postcount of neffers when they post nothing of value just to postcount++. Who's with me?

I already know Nvidia uses TSMC. What I would like to know is why Intel is able to mass produce .13 parts and Nvidia is not. So who does Intel use for manufacturing?

I answered your question. I don't really know who Intel uses, maybe they make their own. TSMC is the reason that NV30 is delayed. TSMC was not ready for .13, so it was delayed.
 
Intel does all their stuff in-house. Everyone else is struggling to get to .13mu (AMD, nVidia, ATI, etc.) while Intel moves ahead with .09mu next year. Talk about leaving the competition in the dust!

TSMC is the only bottleneck here. If you want a more specific answer, you have to ask them. Do they have the capital for new equipment? The space? Problems with new equipment? Dunno...
 
See I was right, silly boy trying to discount my postcount
 
Originally posted by: JavaMomma
element®. Postcount -= 2; 😀

LOL!

I believe Intel has 8 fabs around the world. Also they have one or two 12-inch fabs.

One of the two Intel guys will know for sure.
 
Penstarsys - The hiccup heard 'round the world
The .13u fabrication process has been problematic for all companies that have transitioned to it. Intel was the first to make the jump successfully, but it did not disclose the cost nor trouble that it encountered in the conversion. Intel has millions and millions to invest in fabrication techniques and technology, and it used a total of 4 Fabs to transition to the .13u process, with each one doing it its own way instead of the copy exact technique Intel was famous for. This hints that the problems with the .13u process were so severe, that Intel was willing to use 4 of its mega-Fabs to test and troubleshoot the process, each in its own separate way. Every lesson learned from each Fab eventually made its way into the other Fabs. This is like using 4 separate minds to work on a problems from many angles, with each conferring with the other on what they have learned. Most semi-conductor companies do not have this luxury. AMD, Infineon, UMC, TSMC, and others had to do this piecemeal and figure out the problems one step at a time (as Intel is not going to help them out). Only now are we starting to see .13u parts from AMD, fully 9 months from when Intel released its .13u Pentium 4 part. Other companies like TSMC and UMC are only now getting their .13u lines up to mass production.

One of the main problems with the new process was that perfectly good dies would fail after around 300 hours of use. This perplexed engineers all over the world, as it should not have been happening. The failure was seen time and time again, and yields would eventually reach only around 10% after testing. The main culprit turned out to be voids in the copper that, under heat and pressure, would migrate to the interconnects. Once a void reached an interconnect, it would essentially dislodge the transistor, causing large portions of the chip to fail. There was no simple way around this problem, and in the end the .13u design libraries had to be changed to reflect it. The change in the design libraries seems to have helped, and yields on the .13u process are now becoming acceptable. The downside of changing the design libraries is that chips designed around the old libraries had to be redesigned to reflect the new ones.
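As a rough aside on why "around 10%" yield is so painful on a big die: the classic Poisson die-yield model makes the relationship between die area and defect density concrete. This is my own illustrative sketch, not anything from the article, and the numbers below are hypothetical, not actual TSMC or NV30 data:

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-A * D0).

    A  = die area in cm^2
    D0 = random defect density in defects per cm^2
    """
    return math.exp(-die_area_cm2 * defects_per_cm2)

# Hypothetical numbers for illustration only:
# a large ~2 cm^2 die at 1 defect/cm^2 yields ~13.5%, in the
# "around 10%" territory the article describes...
print(poisson_yield(2.0, 1.0))   # ~0.135

# ...while halving the defect density nearly triples the yield,
# which is why fixing the copper-void problem mattered so much.
print(poisson_yield(2.0, 0.5))   # ~0.368
```

The model also shows why a 130-million-transistor part like NV30 suffers more than a small die on the same line: yield falls off exponentially with area.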

This looks to be where the delay in the NV-30 stems from. TSMC had to change its design libraries for the .13u process, and in so doing has forced NVIDIA (and other companies) to redesign their chips to work on the process. This is especially painful for the NV-30, as it comprises nearly 130 million transistors. The redesign on such a part using the new libraries is a huge task. New simulations must be made, new testing, and new troubleshooting. While not exactly starting from scratch, it is much like deciding to add a second story to a new one story house. The foundation and first floor can remain intact, but a lot of extra work is required to get the building up and inhabitable.

Relying on the latest fabrication technology is a risky venture, but one that has paid off for NVIDIA time and again. Eventually their luck would run out, and the .13u process seems to have offered the opportunity for bad luck to come and visit. NVIDIA may or may not be partly to blame here, but the fact remains that NVIDIA is well behind schedule, and this has allowed the competition to catch up and release compelling products at both the high end and mainstream. Transitioning to the .13u process will eventually bear fruit, and NVIDIA is well ahead of the curve in terms of utilizing this process for products from top to bottom. This will eventually pay off, and their expertise in working with .13u products will be second to none. When future NV-30 variants start to get close to production, the experience NVIDIA has gained here will pay off in spades. The transition to these future products will most likely be a lot smoother, and the late spring introduction of DX9 mainstream parts will most certainly help NVIDIA to keep pressure on the industry.

TSMC 13 micron info

Greg
 
Too bad AMD, nVidia, ATi, Infineon, and others didn't form an über-coalition to get TSMC's .13mu process working sooner to benefit all. Whatever Intel learned, it's allowing them to get .09mu chips out sometime in H2 2003 or H1 2004!
 
Originally posted by: Gstanfor
Penstarsys - The hiccup heard 'round the world


This looks to be where the delay in the NV-30 stems from. TSMC had to change its design libraries for the .13u process, and in so doing has forced NVIDIA (and other companies) to redesign their chips to work on the process. This is especially painful for the NV-30, as it comprises nearly 130 million transistors. The redesign on such a part using the new libraries is a huge task. . . .

. . . .13u process will eventually bear fruit, and NVIDIA is well ahead of the curve in terms of utilizing this process for products from the top to bottom. This will eventually pay off, and their expertise in working with .13u products will be second to none. When future NV-30 variants start to get close to production, the experience NVIDIA has gained here will pay off in spades. The transition to these future products will most likely be a lot smoother, and the late spring introduction of DX9 mainstream parts will most certainly help NVIDIA to keep pressure on the industry.

TSMC 13 micron info

Greg

I edited the "article" so I could ask a question based on it's relevant (to my ?) parts:

Doesn't ATI also use TSMC? Won't Nvidia's struggle to transition to .13 microns also help ATI (since the new design libraries are set up by TSMC)?
 
Originally posted by: JavaMomma
element®. Postcount -= 2; 😀

Deeko, Javamomma and MotoAmd:
Unlike you little limp wristed, pimple farming retards I am not concerned with my postcount. Don't you 3 kids have something better to do like run down to your local pharmacy to stock up on pimple cream or something? Be careful not to mix it up with the hemorrhoid cream or you'll disappear ya little pain in the asses.

To everyone else, thanks for contributing to my thread in a mature, professional manner.
 
Not necessarily.

For one thing there would be a contract between TSMC and nVidia barring TSMC from sharing optimizations developed by nVidia with other fab clients.

What works well for one design may not work at all for other designs. All ATI will get is the basic design libraries applying to the 13 micron process - it is up to them to design a chip that works with them. TSMC may well assist with finetuning, but they can't tell ATI what nVidia did to solve a problem.

Greg
 
Originally posted by: Gstanfor
Not necessarily.

For one thing there would be a contract between TSMC and nVidia barring TSMC from sharing optimizations developed by nVidia with other fab clients.

What works well for one design may not work at all for other designs. All ATI will get is the basic design libraries applying to the 13 micron process - it is up to them to design a chip that works with them. TSMC may well assist with finetuning, but they can't tell ATI what nVidia did to solve a problem.

Greg

My whole point is that Nvidia pioneered the way for TSMC to transition to .13 micron - they had to do their basic design libraries TWICE (something ATI is not likely to have to do). The basic "problem-solving" of the "yields" is done ("Thanks, Nvidia"--ATI). 😀

 
Originally posted by: element®
Originally posted by: JavaMomma
element®. Postcount -= 2; 😀

Deeko, Javamomma and MotoAmd:
Unlike you little limp wristed, pimple farming retards I am not concerned with my postcount. Don't you 3 kids have something better to do like run down to your local pharmacy to stock up on pimple cream or something? Be careful not to mix it up with the hemorrhoid cream or you'll disappear ya little pain in the asses.

To everyone else, thanks for contributing to my thread in a mature, professional manner.

I did contribute in a mature, professional manner, until you got all whiney because I didn't use 100 words for something that required a 1 word answer.

fvcking retard.
 
Originally posted by: Gstanfor
For one thing there would be a contract between TSMC and nVidia barring TSMC from sharing optimizations developed by nVidia with other fab clients.

What works well for one design may not work at all for other designs. All ATI will get is the basic design libraries applying to the 13 micron process - it is up to them to design a chip that works with them. TSMC may well assist with finetuning, but they can't tell ATI what nVidia did to solve a problem.
Exactly. There's no "universal" formula for the .13µ (or any other) process. Not only that, but AMD is using SOI, which has a longer learning curve because it tends to have more native defects. How much has that decision hurt AMD up to this point? Is SOI a significant part of the problem? Could AMD have gotten parts out sooner if they decided to go with bulk silicon? I guess all of that could be a topic of debate.
Originally posted by: element®
(do we need a mu symbol on these boards or what?)
"µ" = Hold down Alt and type 0181 on your keypad. 🙂
Originally posted by: GTaudiophile
I bet Wingznut is smiling somewhere...he knows if others are on the right track or not...
True. But mostly I'm smiling when I read whacky articles like the one gstanfor posted. Intel DOES use "Copy Exactly" and trust me... It can be a royal PITA.

Of course there are going to be variances from Fab to Fab. Hell, there are variances in lithography from tool to tool. With the incredibly small size of circuits being printed, you can set up two different scanners the exact same way and get two different results. As much as they (Nikon, SVGL, Canon, etc.) would love to be able to make the projection optics (mirrors, lenses, splitters, etc.) exactly the same, they cannot at the dimensions we are talking about.

But I digress, because that's not really what "Copy Exactly" is about. I'm not sure what the writer of that article thinks "Copy Exactly" is, considering his statement "Every lesson learned from each Fab eventually made its way into the other Fabs" is pretty much CE in a nutshell.

And another part of that article, "Only now are we starting to see .13u parts from AMD, fully 9 months from when Intel released its .13u Pentium 4 part."... Keep in mind that the Northwood came out almost a year after Tualatin (Intel's .13µ P3). 🙂
 
Originally posted by: GTaudiophile
So, Wingz...when will we be able to call you ".09µ Lithography Technician, Intel Corp.?"
When I can say to myself, "My work here is done." (And maybe not even then.) 😉
 
So . . . won't ATI be able to take advantage of Nvidia's (& TSMC's) "mistakes" (or at least TSMC's "experience") to get to .13 micron (more quickly)?
 
Originally posted by: apoppin
So . . . won't ATI be able to take advantage of Nvidia's (& TSMC's) "mistakes" (or at least TSMC's "experience") to get to .13 micron (more quickly)?
Not really... Well sorta.

It depends on what the issue is. If it's because of a TSMC problem, then maybe. If it's because of a process or design problem, then certainly not.

For example (very hypothetically...)... Say, nvidia figures out that one source of poor yields is due to some sort of Cu polish issue. They tweak the process and underpolish a little. Now that part of the equation is fixed. This may or (probably) may not apply to whatever ATI is doing.

Two different processes may have two different polish "recipes". Like I said, there's no "universal" formula for all .13µ processes.

(Hope that made sense. I was trying to think of a very simple, yet somewhat realistic issue. I can think of many real-life issues, but obviously didn't want to go into those.)
 
Originally posted by: Wingznut
Originally posted by: apoppin
So . . . won't ATI be able to take advantage of Nvidia's (& TSMC's) "mistakes" (or at least TSMC's "experience") to get to .13 micron (more quickly)?
Not really... Well sorta.

(Hope that made sense. I was trying to think of a very simple, yet somewhat realistic issue. I can think of many real-life issues, but obviously didn't want to go into those.)


Um . . . sure . . . "not really . . . Well sorta". 😛

Maybe you should try for a job with Intel marketing. 😉


😀
 