Let's say the initial yields are at only 10-20%. What changes to the original process can be made to improve that? After all, process yields can eventually reach 50-80% or more with time, and that's a really big improvement.
Does the process equipment have to be changed or swapped out? Are there steps that can be fine-tuned to become more accurate? If someone could give us examples of what changes can be made to improve yield, I think that would be very interesting.
I've spent my fair share of time working on yield issues. To answer your question, the first thing to understand is that in the course of manufacturing, a chip is going to undergo somewhere in the neighborhood of 400 to 500 separate processing steps.
Every one of those steps can cause yield issues, ranging from nuisance problems that affect parametric yield, to bigger problems that affect functional yield, to critical problems that affect lifetime reliability.
The cumulative total of each individual process step's fail adders leads to the overall yield number you might see reported in the press.
Allow me to give a numerical example. Let's say you make 20nm semiconductor chips and your 20nm process entails 400 separate processing steps (deposition, litho, etch, cleans, CMP, doping, etc.).
Now let's say you've spent four years toiling away to optimize every process at every step in the entire flow, to the tune of 99.7% yield at each and every step. Your litho process gives you perfectly printed die with minimal misalignment 99.7% of the time, your deposition processes hit the desired film thickness and uniformity, particle- and defect-free, 99.7% of the time, and so on.
Now take that 99.7% perfection and apply it 400 times to your wafer while you build the entire IC. That 99.7% at every given process step will only result in 30% device yield (0.997^400 ≈ 30%).
So you have 30% device yield on your 20nm process flow. What do you do to push that 30% higher, say to a still-meager 45%? The answer, in short, is that you have to go back to each and every one of those 400 process steps in your flow (remember, the ones already optimized to give you 99.7% yield) and figure out a way to take that one step from 99.7% to 99.8%.
And then do the same for the remaining 399 steps as well, all so the aggregate effect is to raise yield from 0.997^400 to 0.998^400 (answer: roughly 45% device yield).
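To make the compounding concrete, here's a minimal back-of-the-envelope sketch in Python. The 400-step count and the per-step yields are just the illustrative numbers from above, not real process data, and the model assumes every step fails independently:

```python
# Toy model: overall device yield when every step independently hits the same per-step yield.

def device_yield(per_step_yield: float, num_steps: int = 400) -> float:
    """Cumulative device yield assuming independent, identical per-step yields."""
    return per_step_yield ** num_steps

print(f"{device_yield(0.997):.1%}")  # ~30.1% -- the 30% baseline above
print(f"{device_yield(0.998):.1%}")  # ~44.9% -- after a 0.1% bump at every one of the 400 steps
```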
Want to take a guess at how difficult, time-consuming, and expensive it is to take something that is already very nearly perfect (99.7% is pretty darn close to perfect) and make it just a teeny tiny bit closer to perfect? And then to hire enough engineers, and give them enough test wafers and access to enough analytical lab resources, to generate the test data that feeds back into a design of experiments from which a 99.7%-yielding process can be incrementally improved to deliver 99.8%...
In short, there is a darn good reason why billions of dollars can be spent, and years of development time afforded to thousands of R&D process engineers, and the device yields will still be a paltry 20%.
And improving that yield requires scrutinizing every single step in a multi-hundred-step process flow, looking for places where a 0.1% yield boost can be found and implemented here or there.