
Wafer Yield Distributions

BrownTown

Diamond Member
So, I'm trying to research what types of statistical distributions can be used to calculate yields of chips on silicon wafers. I've been looking around, and different sites seem to suggest different probability distributions, so I thought I might see what insight y'all had into the situation. Basically I'm just trying to get an equation that will relate die size to expected yield. I've seen some, but mostly they make a lot of assumptions, so I was also looking for one based on real data and not just abstract models. It would also be helpful to find some real data on just what kind of yields companies are getting nowadays. Anyway, thanks for your help; I appreciate it.
 
Hmmm... that may be the hardest information to get, because yield is one of the most closely guarded numbers a company has. If anything, you'd probably have to find a university that does its own fabrication, because they're more likely to share the numbers. I know UC Berkeley has a tiny little fab (it makes like... one wafer at a time, someone has to babysit the whole process, and it makes junk more often than not).
 
Tux is right... yield rates are among the most closely guarded statistics in the industry. If you wanted to work with "real" numbers, companies **may** release yield rates for old processors (like the original Pentium or maybe the PII era, certainly not the PIII era or later). Even then, there are so many factors that make estimating current yields from old data next to impossible and very inaccurate - process size, process maturity, die complexity, die size, etc. have all changed since the PII days.

Side note: does anyone know why companies guard yield rates so closely? I can't see a real reason that would give the competition an advantage. The number of processors shipped is public knowledge.
 
Yield rates are actually secondary in my quest here; what I'm really looking for are statistical models that represent yield as a function of die size. For example, one place I looked modeled defects with a Poisson random variable, but another said that the assumption made by the Poisson model (that the defects are evenly distributed) is a worst-case scenario, and it postulated another distribution. Basically, I'm more interested in the theoretical equations for estimating yields based on defect density than in the actual numbers. The numbers themselves would be great, but as y'all have said, they are very closely guarded.
 
Originally posted by: byosys
Side note: does anyone know why companies guard yield rates so closely? I can't see a real reason that would give the competition an advantage. The number of processors shipped is public knowledge.

My guess would be that it gives them insight into the near-term availability of the product and an idea of the costs of production. Off the bat I can think of Nvidia and ATI as good examples, as the yield rates for their highest ueber-end cards are very low, and often after a short burst of cards to the market it can take weeks before another shipment is ready. That would allow the competition to plan their shipments to take advantage of droughts in a competitor's supply. And if your yields are down, your costs of production are higher. So if a competitor knows that your margin is probably lower than theirs, they can cut their prices to squeeze your profit margins. I have noticed that hardware companies are generally very concerned with overhead compared to software companies (damn Microsoft and their free drinks and food).

UIUC has a fab lab in the basement of my building. There is a laboratory class for undergrads where you get to fab simple ICs. I'm sure they would have data on yield rates (but given that it's students doing the fab work, it's going to be extremely low). Just anecdotally, I hear that the yield rate with the undergrads is like 50%.
 
I think that Poisson distribution models are the most common for forecasting yields in new/proposed processes. Once a process matures, the company can start to modify the distribution model based on collected data.

I know that there are other distribution models, but I think that they're tied fairly closely to individual processes. Different tools used in the manufacturing process are susceptible to leaving different defect signatures on the wafers. The specific tools that a process uses will impact defect density and density modeling.
 
Born2bwire: That makes sense, but I still don't see a reason for guarding yields of older chips (say PII, maybe even PIII or later), as they are no longer in production and are not cutting edge by any means. Though I guess releasing those numbers would give the competition more data to base their predictions about current yields on, I can't imagine it making a huge difference given the large number of factors that have changed.
 
From this book
Simplest model: Poisson statistics, assuming independent randomly distributed defects:
Y=exp(-A*D) where A is critical area, and D is defect density.
If not independent defects
Y=1/(1+A*D/C)^C
Where C is the clustering factor (infinity for independent)
They mention C=2 is consistent with NTRS roadmap.
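The two models above are easy to compare numerically. Here's a quick sketch (the numbers and function names are mine, purely illustrative, not real fab data):

```python
import math

def poisson_yield(area, defect_density):
    """Poisson model: independent, randomly distributed defects.
    Y = exp(-A*D), with A the critical area and D the defect density."""
    return math.exp(-area * defect_density)

def clustered_yield(area, defect_density, clustering=2.0):
    """Clustered-defect model: Y = 1/(1 + A*D/C)^C.
    As the clustering factor C -> infinity, this recovers the Poisson model."""
    ad = area * defect_density
    return (1.0 + ad / clustering) ** -clustering

# Example: 1 cm^2 critical area, 1 defect/cm^2 (illustrative only)
A, D = 1.0, 1.0
print(poisson_yield(A, D))        # ~0.368
print(clustered_yield(A, D, 2.0)) # ~0.444
```

Note the clustered model always predicts a higher yield than Poisson for the same A*D, which matches the earlier comment that evenly distributed defects are the worst case: clustering packs several defects onto the same dead die instead of spreading them over many dies.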
 
Originally posted by: senseamp
From this book
Simplest model: Poisson statistics, assuming independent randomly distributed defects:
Y=exp(-A*D) where A is critical area, and D is defect density.
If not independent defects
Y=1/(1+A*D/C)^C
Where C is the clustering factor (infinity for independent)
They mention C=2 is consistent with NTRS roadmap.

The first one is Poisson and the second one is known as Bose-Einstein.
 
Are you sure that the other distribution is called "Bose-Einstein"? I have never heard that name used for it before.

The name BE distribution is usually used for the distribution function

1/(exp(U/kbT) - 1)

which tells you the distribution of an ensemble of bosons (the other distribution is called the Fermi distribution and is valid for fermions; in the limit of high temperatures both reduce to the Boltzmann distribution exp(-U/kbT)).
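That high-temperature limit is easy to check numerically. A quick sketch, using the simplified forms with no chemical potential (all function names here are mine, for illustration):

```python
import math

def bose_einstein(x):
    """BE occupation number as a function of x = U / (kb*T)."""
    return 1.0 / (math.exp(x) - 1.0)

def fermi_dirac(x):
    """FD occupation number; only the sign of the 1 differs from BE."""
    return 1.0 / (math.exp(x) + 1.0)

def boltzmann(x):
    """Classical limit: exp(-U/(kb*T))."""
    return math.exp(-x)

# When exp(x) >> 1, the +/-1 in the denominator stops mattering and
# both quantum distributions collapse onto the classical Boltzmann factor.
for x in (0.5, 2.0, 5.0):
    print(x, bose_einstein(x), fermi_dirac(x), boltzmann(x))
```

Already at x = U/(kb*T) = 5, the three functions agree to within about one percent; BE sits slightly above Boltzmann and FD slightly below it.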
 
I am pretty sure since I look at the numbers at work 😉

However, each manufacturer may have its own factor, so a D at manufacturer A is not the same as the same D at manufacturer B.
 
OK, that IS actually a bit odd. AFAIK no "generalized" BE distribution function is used in statistical physics, simply because the form of the function is a consequence of how boson statistics work (it can be derived from simple physical arguments).

I wonder WHY they named it the BE distribution? I am quite sure that neither Einstein nor Bose had anything to do with it.

Well, I guess it is not really important...


 