
How will Intel keep its process tech lead in the future?


DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
It seemed that it was only the movement of objects in relation to each other, or relative motion. If nothing moved in relation to anything else, time would stand still. Time is only the measurement of relative movement, not direct movement. Next I went about trying to figure out what level of relative movement affected time at our level. This was a very difficult question. It was obvious pretty quickly during my research that relative movement at a molecular level definitely affected our interpretation of time. However, certain relative motion below that level seemed to affect it as well, while other motion did not. The conclusion I came to was that our interpretation of time is controlled at the molecular level, and that the relative location, direction, and energy (speed/acceleration) of the molecules are what control time at our level.

Wow, relative to the Theory of Relativity, your theory isn't even close.
I call this the Theory of Relativity.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
Wow, relative to the Theory of Relativity, your theory isn't even close.
I call this the Theory of Relativity.


Don't bother. It's just the babble of someone trying really hard to sound intelligent, and not realizing that there will be people around that understand the topic that he claims to be addressing ;)
 

carop

Member
Jul 9, 2012
91
7
71
if EUV isn't ready at 10nm, why even bother with it, just so they can run it at 7nm and then 5/4nm? also, welcome

First, it is possible that EUV may never make it. The biggest problem is the light source: it is very hard to create photons with a wavelength of 13.5 nm. Lithography guru Chris Mack estimates that EUV lithography must be able to support the production of 150 wafers per hour, which will require a light source of 200 to 350 W. Sources are currently stuck at around 30 W. Mack likens EUV lithography to peeling an onion: manufacturers are just now peeling away its outer layers, and there are likely other problems lying ahead.

http://spectrum.ieee.org/semiconductors/devices/euv-faces-its-most-critical-test
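To put those source-power numbers in perspective, here is a rough back-of-the-envelope sketch (my own, not from Mack's article), assuming exposure throughput scales roughly linearly with source power:

```python
# Rough illustration only: EUV throughput vs. source power, under the
# (simplistic) assumption that throughput scales linearly with power.
target_wph = 150               # wafers/hour needed for production, per Mack
required_power_w = (200, 350)  # watts estimated for that throughput
current_power_w = 30           # roughly where sources are stuck today

# Implied throughput at today's power, under the linear assumption
implied_wph_low = target_wph * current_power_w / required_power_w[1]
implied_wph_high = target_wph * current_power_w / required_power_w[0]
print(f"~{implied_wph_low:.0f}-{implied_wph_high:.0f} wafers/hour at 30 W")
```

Under that crude linear assumption, a 30 W source supports only on the order of 13-22 wafers per hour, roughly 7-12x short of the 150 wph production target.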

Second, it seems that the foundries are good at coping with a couple of changes at a time. Skipping 10nm would introduce several changes. As such, it is probably a much bigger chunk than even Intel could swallow.

Third, several 7nm and 5nm transistor technologies are in development in the lab. Nanowire (gate-all-around), super-steep-subthreshold devices, and carbon-based structures are some of the technologies that will probably be used alongside bulk FinFETs at 7nm and 5nm. It has been shown that SiGe PMOS FinFET transistors perform better than bulk FinFET transistors, and they may be introduced as early as the 10nm node.
 

Sable

Golden Member
Jan 7, 2006
1,130
105
106
They won't.

They're barely there now: 22nm is a marginal improvement over 32nm, with 28nm easily matching or surpassing it. The perception is that it is much better, though, thanks to marketing and shills trolling the forums. There's big money being plowed into R&D from companies other than Intel.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
Don't bother. It's just the babble of someone trying really hard to sound intelligent, and not realizing that there will be people around that understand the topic that he claims to be addressing ;)

That was exactly why I was writing it. I haven't thought about this stuff in years, and it is nice to get into the discussion again. I wasn't trying to sound intelligent, and I don't really care if anyone thinks I am smart or dumb. However, I don't want to further derail this topic, and am more than willing to talk about it via PM. I would like to hear more about the topic, but so far no one has bothered to explain why they believe what they do.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
I'm going to be nice here, but saying you theorized Einstein's relativity independently as a youth without any ties to the speed of light is kind of like saying you theorized the entirety of baking, but didn't realize that heat is involved.

It's been ages since I've studied general relativity, so I'll just touch on the specifics of special relativity here.

The invariant speed of light *is* the basis for all of the relativistic effects, or at least is so heavily mated to them that one can't possibly grasp the effects without first realizing that all light will appear to travel at the same speed to every observer. In fact, knowing only that light's speed is invariant across all reference frames, one could derive most (if not all) relativistic effects: the Doppler shift of light, the changes in the rate of time between two locations, the change in lengths (these first three follow almost directly from the speed-of-light phenomenon), and the change in mass (which is slightly trickier, but also obvious once you know you can never put enough energy into an object to make it reach the speed of light. That energy has to go somewhere, and energy and mass are, to oversimplify a little, different forms of the same exact thing, so...).

You can see from even this one paragraph that the speed of light is such a huge part of it, that you can't possibly understand it without that fundamental fact of nature.
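To make the "rate of time" point concrete, here is a quick numerical sketch (my own illustration, not from the post above) of the Lorentz factor that falls out of the light-clock argument:

```python
import math

# Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2), which follows directly from
# the invariance of the speed of light (the light-clock thought experiment).
C = 299_792_458.0  # speed of light in m/s

def gamma(v: float) -> float:
    """Time-dilation factor for an object moving at speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A clock moving at 60% of c runs slow by a factor of 1.25 relative to us
print(round(gamma(0.6 * C), 3))  # 1.25
```

The same factor governs length contraction and (in its energy form) the growth of effective mass, which is the unity Ferzerp is describing.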
 

CHADBOGA

Platinum Member
Mar 31, 2009
2,135
833
136
They won't.

They're barely there now, 22nm is a marginal improvement over 32nm with 28nm easily matching or surpassing it. The perception is that it is much better though thanks to marketing and shills trolling the forums. There's big money being plowed into R&D from companies other than intel.

Sometimes I wonder if you have any clue as to how ridiculous your posts sound.

22nm is marginal to 32nm, but 28nm easily surpasses it?

lol. lolollllol. rly.

AMD are going to be stuck on 28nm, long after Intel moves to 14nm, so of course he has to tell himself these kind of fairy tales. :D
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
Sometimes I wonder if you have any clue as to how ridiculous your posts sound.

22nm is marginal to 32nm, but 28nm easily surpasses it?


Your words, actually; mine were "easily match or surpass". Now that we've got the reading comprehension out of the way: yes.
 

Hatisherrif

Senior member
May 10, 2009
226
0
0


MORE FABS! MORE FABS!
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Your words actually, mine were easily match or surpass. Now that we've got the reading comprehension out of the way, yes.

In your efforts to create a straw-man out of my post, you repeated the same silly nonsense of your own. That at least confirms your original posting of said nonsense was indeed intentional and no mere accident.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
In your efforts to create a straw-man out of my post, you repeated the same silly nonsense of your own. That at least confirms your original posting of said nonsense was indeed intentional and no mere accident.

ZING!!!
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
In your efforts to create a straw-man out of my post, you repeated the same silly nonsense of your own. That at least confirms your original posting of said nonsense was indeed intentional and no mere accident.
lmfao. Permission to put in sig?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
lmfao. Permission to put in sig?

Granted :D

What's even better is that's probably true...

Fab first

Copy exactly

????

Profit.

The economics of semiconductor manufacturing pretty much requires it.

Whether you are building more fabs, or just expanding the footprint of existing fabs, the bottom line is that the cost-structure of mass production requires upscaling the volume of production over time.

I'm sure there are exceptions to the observation, but I am bereft of examples where an industry survived without its own TAM growth exceeding that of inflation in the geomarkets it serviced.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I'm glad to see this thread is still alive :D

Some pages from the VLSI Fab 42 report:

http://electronics.wesrch.com/page-summary-pdf-EL1SE1KWROSKE-the-cooks-tour-intel-fab-42-21

Intel's Modular Approach

Looking at the photos, the site seems impressively large. But this building is only one module, sized the same as Intel's D1X. The module is designed to be cloned multiple times per finished fab, which takes copy-exactly to an extreme level. In doing this, Intel is breaking decades of common wisdom, which is what makes this particular fab so interesting. This is a bold approach because of the potential for inefficiency in a small modular design.

Normally fabs are built to a minimum size of 40K WSM, and Gigafabs are in the 100K-plus WSM range. This is done, in part, because efficiently sized fab support items have tended to come in big chunks of capacity. Since the module is on the small side, either Intel has made some significant breakthroughs in fab support systems or there's some level of built-in inefficiency when there are fewer completed modules.


http://electronics.wesrch.com/page-summary-pdf-EL1SE1KWROSKE-the-cooks-tour-intel-fab-42-22

Intel is fully aware of this historical trade-off and makes a convincing argument that they've planned around it as best anyone can. They have done some jazzy things with their approach to the support-system problem. Intel always cost-models the daylights out of things, so you can bet they have internal figures showing that the savings from a modular system outweigh the added expense.

One reason is that even when one builds a full-sized fab, the support systems are underutilized anyway until all the tools are put in place and its capacity is maxed out. In between, what you have is mostly an empty facility -- all stranded capital. Moreover, you have to pay overhead to keep it maintained and somewhat operational.

Stranded capital is a significant problem, since what typically happens in the industry is that new fabs get funded in an upturn and are finished in a downturn ... when they are not needed. Fabs have even been built that never went operational because of this problem. So the idea of a modular fab has always been a dream, since it would give capacity in smaller increments.


http://electronics.wesrch.com/page-summary-pdf-EL1SE1KWROSKE-the-cooks-tour-intel-fab-42-23

Why Copy-Exactly at the Fab Level

The need to transition a new fab in the future is something that is usually considered but seldom planned for. Building a new fab is enough of a job without having to plan for what it will need well out in the future. It's also not that important if all you need to do is build a single fab. But this time, Intel is building two: D1X and Fab 42. Brian Krzanich and his team have built plenty of fabs and transitioned even more. So that got them asking, "What would happen if we applied copy-exactly to the entire fab?" More importantly, with 450mm looming, "How do we make it easier to transition the fab with minimal disruption to future production?" A third question they sought to answer in the new design was, "How can we do all this with less stranded capital during expansion phases?" They have answered all three by taking a modular approach to the fab.

http://electronics.wesrch.com/page-summary-pdf-EL1SE1KWROSKE-the-cooks-tour-intel-fab-42-24

Applying copy-exactly to the entire fab: If you are building more than one fab and plan to build more, this makes absolute sense, and it's easy for Intel to do. There are many benefits:

· Minimizes Architectural and Engineering (A&E) costs with a single design
· Minimizes start-up costs because everything is the same
· Time to revenue is faster
· Easier to permit

There are also plenty of benefits to a standardized modular fab:

· 450mm transition can be done with more predictable results
· Capacity expansion can be better modulated to market demand
· Other modules will have no disruption from transition of one
· Empty floorspace minimized
· Training is faster and associated costs are lower
· Maintenance and support are more efficient
· Faster learning curves on preventative maintenance

All in all, it makes loads of sense.

Overall, I thought the VLSI Fab 42 report was extremely well done. The explanations and pictures were just great IMO.

The idea of a "modular fab" (which breaks conventional wisdom) also sounds extremely interesting. I only wish some kind of fab floor diagram (with arrows) had been provided, so tech outsiders (like myself) could more easily visualize the concept (which sounds like a pivotal fab development).
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
"Modular fab" is basically copy-exact of the fab itself, in contrast to copy-exact historically being a copy-exact of the contents of the fab (but not including the fab itself).

Think of it like this - a housing development composed of different cookie-cutter homes, the homes may look different on the outside but they are all spec'ed the same on the inside. The same number of bedrooms, the same number of bathrooms, same square footage assigned to the kitchen, to the master bedroom, etc.



So the homes may be copy-exact internally, but the actual configuration of the homes might not be the same (one home may have 4 bedrooms upstairs whereas another might be ranch-style with 4 bedrooms on the ground floor).

Contrast this to an apartment complex where not only is each apartment configured to be identical (copy-exact), but the apartment buildings themselves are architected to be identical. Apartment complexes are modular: want to add a 12th building to the existing 11? Just build another one; it's already permitted and approved for the zoning, etc.



In effect, Intel is making an exact copy of its Oregon development fab in Arizona and will then make multiple copies of that original plant on the same site.
source
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
"Modular fab" is basically copy-exact of the fab itself, in contrast to copy-exact historically being a copy-exact of the contents of the fab (but not including the fab itself).

Think of it like this - a housing development composed of different cookie-cutter homes, the homes may look different on the outside but they are all spec'ed the same on the inside. The same number of bedrooms, the same number of bathrooms, same square footage assigned to the kitchen, to the master bedroom, etc.



So the homes may be copy-exact internally, but the actual configuration of the homes might not be the same (one home may have 4 bedrooms upstairs whereas another might be ranch-style with 4 bedrooms on the ground floor).

Contrast this to an apartment complex where not only is each apartment configured to be identical (copy-exact), but the apartment buildings themselves are architected to be identical. Apartment complexes are modular: want to add a 12th building to the existing 11? Just build another one; it's already permitted and approved for the zoning, etc.


Thanks IDC.

Yes, I am very interested to learn how Intel is able to make these "small" (in a relative sense) D1X-sized fab modules efficient.

I even wonder if some simple diagrams could go a long way toward clearing things up. For example, if Intel transitions one module to a smaller node (leaving the rest of the modules on the larger node), how would that compare to a transition in a large (non-modular) fab?
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
Sometimes I wonder if you have any clue as to how ridiculous your posts sound.

22nm is marginal to 32nm, but 28nm easily surpasses it?
32-nm from Intel is marginal to 40-nm from TSMC.

Bobcat > Saltwell

Similar CPU performance, and Bobcat outclasses Saltwell by up to 6x in graphics performance. Sure, it's 18W TDP vs 6.5W TDP, but you have to point out that Bobcat is up to 6x faster in graphics.

Jaguar vs Silvermont looks even less pretty for Intel, since they are going to be late to the game with 22-nm Atom like they were with 32-nm Atom.

Cedar Trail-M: 65-72 mm² die size
Bobcat: 74-78 mm² die size
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
http://semimd.com/mack/2012/08/08/why-450-mm-wafers/

Why 450-mm wafers?

Why is 450-mm development so important to Intel (and Samsung and TSMC)?

A few years ago, Intel and TSMC began heavily promoting the need for a transition from the current standard silicon wafer size, 300 mm, to the new 450-mm wafers. While many have worked on 450-mm standards and technology for years, it is only recently that the larger wafer has received enough attention and support (not to mention government funding) to believe that it may actually become real. While there has been much talk about the need for a larger wafer, I’d like to put my spin on the whole debate.

First, a bit of history. Silicon wafer sizes have been growing gradually and steadily for the last 50 years, from half-inch and one-inch silicon to today’s 300-mm diameter wafers. The historical reasons for this wafer size growth were based on three related trends: growing chip size, growing demand for chips, and the greater chip throughput (and thus lower chip cost) that the larger wafer sizes enabled. And while chip sizes stopped increasing about 15 years ago, the other two factors have remained compelling. The last two wafer size transitions (6 inch to 8 inch/200 mm, and 200 mm to 300 mm) each resulted in about a 30% reduction in the cost per area of silicon (and thus cost per chip). And since our industry is enamored with the thought that the future will look like the past, we are hoping for a repeat performance with the transition to 450-mm wafers.

But a closer look at this history, and what we can expect from the future, reveals a more complicated picture.

First, how does increasing wafer size lower the cost per unit area of silicon? Consider one process step as an example – etch. Maximum throughput of an etch tool is governed by two basic factors: wafer load/unload time and etch time. With good engineering there is little reason why these two times won't remain the same as the wafer size increases. Thus, wafer throughput remains constant as a function of wafer size, so that chip throughput improves as the wafer size increases. But "good engineering" is not free, and it takes work to keep the etch uniformity the same for a larger wafer. The larger etch tools also cost more money to make. But if the tool cost does not increase as fast as the wafer area, the result is a lower cost per chip. This is the goal, and the reason why we pursue larger wafer sizes.

As a simplified example, consider a wafer diameter increase of 1.5X (say, from 200 mm to 300 mm). The wafer area (and thus the approximate number of chips) increases by 2.25. If the cost of the etcher, the amount of fab floor space, and the per-wafer cost of process chemicals all increase by 30% at 300 mm, the cost per chip will change by 1.3/2.25 = 0.58. Thus, the etch cost per chip will be 42% lower for 300-mm wafers compared to 200-mm wafers.
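Mack's arithmetic can be checked in a couple of lines (this just restates the article's numbers, nothing new):

```python
# Reproducing the article's simplified etch example: wafer diameter up 1.5x,
# so wafer area (and chip count) up 2.25x, while tool, floorspace, and
# chemical costs rise only 30%.
diameter_ratio = 300 / 200           # 200 mm -> 300 mm
area_ratio = diameter_ratio ** 2     # 2.25x more chips per wafer
cost_ratio = 1.30                    # per-wafer costs up 30%

per_chip_cost_ratio = cost_ratio / area_ratio
print(f"etch cost per chip: {per_chip_cost_ratio:.2f}x "
      f"({1 - per_chip_cost_ratio:.0%} lower)")
```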

While many process steps have the same fundamental scaling as etch – wafer throughput is almost independent of wafer size – some process steps do not. In particular, lithography does not scale this way. Lithography field size (the area of the wafer exposed at one time) has been the same for nearly 20 years (since the era of step-and-scan), and there is almost zero likelihood that it will increase in the near future. Further, the exposure time for a point on the wafer for most litho processes is limited by the speed with which the tool can step and scan the wafer (since the light source provides more than enough power).

Like etch, the total litho process time is the wafer load/unload time plus the exposure time. The load time can be kept constant as a function of wafer size, but the exposure time increases as the wafer size increases. In fact, it takes great effort to keep the scanning and stepping speed from slowing down for a larger wafer due to the greater wafer and wafer stage mass that must be moved. And since wafer load/unload time is a very small fraction of the total process time, the result for lithography is a near-constant wafer-area throughput (rather than the constant wafer throughput for etch) as wafer size is changed.

One important but frequently overlooked consequence of litho throughput scaling is that each change in wafer size results in an increase in the fraction of wafer costs caused by lithography. In the days of 6-inch wafers, lithography represented roughly 20 – 25% of the cost of making a chip. The transition to 200-mm (8-inch) wafers lowered the (per-chip) cost of all process steps except lithography. As a result, overall per-chip processing costs went down by about 25 – 30%, but the per-chip lithography costs remained constant and thus became 30 – 35% of the cost of making a chip.

The transition to 200-mm wafers increased the wafer area by 1.78. But since lithography accounted for only 25% of the chip cost at the smaller 6-inch wafer size, that area improvement affected 75% of the chip cost and gave a nice 25 – 30% drop in overall cost. The transition to 300-mm wafers gave a bigger 2.25X area advantage. However, that advantage could only be applied to the 65% of the costs that were non-litho. The result was again a 30% reduction in overall per-chip processing costs. But after the transition, with 300-mm wafers, lithography accounted for about 50% of the chip-making cost.
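The bookkeeping in the last two paragraphs can be sketched as a tiny model (my own simplification of the article's reasoning: per-chip litho cost held constant, non-litho costs cut by the etch-style scaling):

```python
def after_transition(litho_frac: float, non_litho_saving: float):
    """Return (new litho cost share, overall per-chip saving) after a wafer
    transition in which litho cost per chip is unchanged and non-litho
    per-chip costs drop by `non_litho_saving`."""
    non_litho = (1 - litho_frac) * (1 - non_litho_saving)
    total = litho_frac + non_litho        # new per-chip cost, old cost = 1
    return litho_frac / total, 1 - total

# 200 mm -> 300 mm: litho ~35% of cost beforehand; non-litho steps get the
# full 1.30/2.25 etch-style scaling (a ~42% per-chip drop)
frac, saving = after_transition(0.35, 1 - 1.30 / 2.25)
print(f"litho share ~{frac:.0%}, overall per-chip cost down ~{saving:.0%}")
```

This roughly reproduces the article's figures: an overall drop near 30% and a post-transition litho share near 50%.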

Every time wafer size increases, the importance of lithography to the overall cost of making a chip grows.

And there lies the big problem with the next wafer size transition. Each wafer size increase affects only the non-litho costs, but those non-litho costs are becoming a smaller fraction of the total because of wafer size increases. Even if we can achieve the same cost savings for the non-litho steps in the 300/450 transition as we did for the 200/300 transition, its overall impact will be less. Instead of the hoped-for 30% reduction in per-chip costs, we are likely to see only a 20% drop in costs, at best.

So we must set our sights lower: past wafer size transitions gave us a 30% cost advantage, but 450-mm wafers will only give us a 20% cost benefit over 300-mm wafers. Is that good enough? It might be, if all goes well. But the analysis above applies to a world that is quickly slipping away – the world of single-patterning lithography. If 450-mm wafer tools were here today, maybe this 20% cost savings could be had. But shrinking feature sizes are requiring the use of expensive double-patterning techniques, and as a result lithography costs are growing. They are growing on a per-chip basis, and as a fraction of the total costs. And as lithography costs go up, the benefits of a larger wafer size go down.

Consider a potential “worst-case” scenario: at the time of a transition to 450-mm wafers, lithography accounts for 75% of the cost of making a chip. Let’s also assume that switching to 450-mm wafers does not change the per-chip litho costs, but lowers the rest of the costs by 40%. The result? An overall 10% drop in the per-chip cost. Is the investment and effort involved in 450-mm development worth it for a 10% drop in manufacturing costs? And is that cost decrease enough to counter rising litho costs and keep Moore’s Law alive?
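The worst-case numbers check out (again, just the article's own figures):

```python
# The article's "worst-case" 450 mm scenario: litho is 75% of chip cost and
# unchanged per chip; the remaining 25% of costs drop by 40%.
litho_frac = 0.75
non_litho_saving = 0.40

new_total = litho_frac + (1 - litho_frac) * (1 - non_litho_saving)
print(f"overall per-chip cost drop: {1 - new_total:.0%}")  # 10%
```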

Maybe my worst-case scenario is too pessimistic. In five or six years, when a complete 450-mm tool set might be ready, what will lithography be like? In one scenario, we’ll be doing double patterning with EUV lithography. Does anyone really believe that this will cost the same as single-patterning 193-immersion? I don’t. And what if 193-immersion quadruple patterning is being used instead? Again, the only reasonable assumption will be that lithography accounts for much more than 50% of the cost of chip production.

So what can we conclude? A transition to 450-mm wafers, if all goes perfectly (and that’s a big if), will give us less than 20% cost improvement, and possibly as low as 10%. Still, the big guys (Intel, TSMC, IBM, etc.) keep saying that 450-mm wafers will deliver 30% cost improvements. Why? Next time, I’ll give my armchair-quarterback analysis as to what the big guys are up to.

Chris Mack

www.lithoguru.com

I thought this was some great info on the strategy involving 450mm wafers.

Lithography makes up an ever greater % of the cost as wafer size increases (since the field size remains constant, exposure time increases with wafer area).