Why not skip some process nodes and jump ahead e.g. to 5 nm directly?


norseamd

Lifer
Dec 13, 2013
13,990
180
106
The fabs want to avoid this situation, so they only make investments in new nodes when they see a clear near-term solution for all of the engineering hurdles.

seems they learned how to do things differently than the military
 

Mand

Senior member
Jan 13, 2014
664
0
0
Yes, but you still get the point, right? Regardless of whether the process nodes are numbered 14 or 16 nm, why not skip ahead some nodes?

Because developing a new node is really, really hard, and with each step it gets even harder. Even something as seemingly simple as the light used to expose the photoresist and form the patterns becomes a challenge. The light source planned for pushing nodes further involves spitting out a stream of molten tin droplets, hitting each one with a laser pulse to puff it up, then hitting it with a second laser that excites it further, after which it emits 13.5-nanometer light (for reference, visible light is 400-700nm, and the UVB that causes sunburn is around 280-315nm) that is then collected to do the exposure. The optical engineering involved in that process is, in a word, extreme. And that's just one system - skipping nodes would mean making that sort of major technological leap in a huge number of areas at once.
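
A rough sketch of why the wavelength matters so much: the smallest printable feature scales roughly as CD ≈ k1·λ/NA (the Rayleigh criterion). The Python below uses textbook ballpark values for k1 and NA; they're illustrative assumptions, not any fab's actual numbers.

```python
# Rayleigh criterion sketch: CD ~ k1 * wavelength / NA.
# k1 and NA below are textbook ballpark values, not any fab's real numbers.

def min_feature_nm(wavelength_nm, k1, numerical_aperture):
    """Smallest printable half-pitch for a given exposure wavelength."""
    return k1 * wavelength_nm / numerical_aperture

# 193nm ArF immersion lithography (assumed k1 ~ 0.3, NA ~ 1.35)
print(min_feature_nm(193, 0.3, 1.35))    # ~43nm per exposure -> multi-patterning below that

# 13.5nm EUV (assumed k1 ~ 0.4, NA ~ 0.33)
print(min_feature_nm(13.5, 0.4, 0.33))   # ~16nm in a single exposure
```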

Skipping just magnifies the challenge, which magnifies the cost and risk. It's just not worth it to even try to skip.
 

BUnit1701

Senior member
May 1, 2013
853
1
0
Hi,

...
Sure it would take more time to go from 22->5nm than going from e.g. 22->14 nm. But couldn't it be quicker than going through each of those individual process nodes (22->14->10->7->5nm)?

It wouldn't really be quicker than the small steps, with the added problem that in the meantime you aren't selling cutting-edge products to pay for the huge amount of R&D it takes to shrink the node.
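
To put the size of that jump in rough numbers (a sketch only: the ~0.7x linear shrink per full node is the classic rule of thumb, and node names are marketing labels, so treat this as order-of-magnitude):

```python
import math

# Rule of thumb: each full node shrinks linear dimensions by ~0.7x (halving area).
# Node names are marketing labels, so this is an order-of-magnitude sketch only.

start_nm, target_nm = 22, 5
linear_shrink = start_nm / target_nm                  # ~4.4x in one jump
area_shrink = linear_shrink ** 2                      # ~19x smaller area per transistor
full_nodes = math.log(linear_shrink) / math.log(1 / 0.7)

print(f"{linear_shrink:.1f}x linear, {area_shrink:.0f}x area, "
      f"~{full_nodes:.1f} full-node generations compressed into one step")
```

Four generations' worth of lithography, materials, and yield learning would all have to land at once.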
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,040
456
126
Well, the benefits of sidestepping the competitors and introducing e.g. 5 nm or below several years before anyone else would be huge.

If you went directly for some disruptive technology like graphene, one that doesn't depend directly on all the previous semiconductor process advancements, it ought to be possible.

Sure, the risks would be high, but so would the reward in case of success. Basically you could completely dominate the semiconductor industry. That's a win worth tens of billions of dollars.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Well, the benefits of sidestepping the competitors and introducing e.g. 5 nm or below several years before anyone else would be huge.

If you went directly for some disruptive technology like graphene, one that doesn't depend directly on all the previous semiconductor process advancements, it ought to be possible.

Sure, the risks would be high, but so would the reward in case of success. Basically you could completely dominate the semiconductor industry. That's a win worth tens of billions of dollars.

Well, I guess Intel nicely showed that you don't need to skip any nodes to get a huge, long leadership position on bleeding-edge nodes, just by following the roadmap (which doesn't carry massive risks), but at a pace that no one else can match. Like I already said, following the roadmap of Moore's law is already difficult enough. A lot of people have given good answers to your question; I'm not sure what you don't understand about the explanations.

But you also mention other technologies like graphene. Those things also get on the roadmap. The semiconductor industry really isn't a cheap business, so it's best to follow the most economical path, which happens to be pretty publicly available (I already explained that one company can't just develop a node that lies 6 or 8 years in the future). If a technology is available to one company, other companies can very likely also implement it, so it's really a question of money, of how fast you can implement technologies and execute the roadmap.

Example: As we near the end of Moore's law, more drastic changes are required to keep up with it and sort of continue Dennard scaling. In the relatively near future, one of the technologies that will have to be implemented happens to give a huge improvement in power consumption: III-V materials as a replacement for silicon. So while the other companies are still figuring out how SiGe or Ge works, or trying to build 450mm fabs, Intel with its ~4 year lead will have such a technology as you mention (if it indeed gives something like 10x lower power consumption and 1.5x higher clock speed).
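
As a rough illustration of why a lower-voltage channel material is so attractive (a Python sketch; the voltages and frequencies are made-up assumptions, not Intel data), dynamic CMOS power scales roughly as C·V²·f:

```python
# Dynamic CMOS power scales roughly as P ~ C * V^2 * f.
# The voltages and frequencies below are illustrative assumptions only.

def relative_dynamic_power(v, f, v_ref, f_ref):
    """Dynamic power relative to a reference design with the same switched capacitance."""
    return (v / v_ref) ** 2 * (f / f_ref)

# Hypothetical silicon baseline: 0.8 V at 3.5 GHz.
# Hypothetical lower-voltage III-V transistor at 0.5 V:
print(relative_dynamic_power(0.5, 3.5, 0.8, 3.5))  # ~0.39 -> roughly 2.5x lower power
print(relative_dynamic_power(0.5, 5.0, 0.8, 3.5))  # ~0.56 -> still lower power at ~1.4x the clock
```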

Maybe TSMC/Samsung/... will again do some magical marketing trick to try to implement this technology faster, but that will probably be at the cost of other things such as transistor size.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
Why don't babies come out of the womb running and talking? Why do they even take 9 months to be born?
 

norseamd

Lifer
Dec 13, 2013
13,990
180
106
Why don't babies come out of the womb running and talking? Why do they even take 9 months to be born?

off the subject, but there's a whole range of natural development times for animals, and reasons for them. deer are born nearly ready to run - they can usually run very fast within about 10 minutes. carnivores and primates often have long development times.

off topic but oh well
 

positivedoppler

Golden Member
Apr 30, 2012
1,144
236
116
Why do people wait for the thread starter to ask a question before answering? Why not just make a bunch of posts with the answers before anyone asks?
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
The engineers do learn things from node to node. Intel develops a new node, what, every 2 years or so? You can be pretty sure that the product you get after 6 years using this method is going to be a better product than if they did nothing for 6 years and came out with a CPU using that same node.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,040
456
126
Why do people wait for the thread starter to ask a question before answering? Why not just make a bunch of posts with the answers before anyone asks?

I think such general analogies to everyday life situations are quite pointless, actually. We've had several of them already, so can we please stop destroying the thread with more of them? Such meaningless general statements really are no technological proof that it should not be possible to skip ahead a couple of process nodes.

Disruptive technological advances happen in most other areas from time to time, so I don't see why they could not also happen in the semiconductor industry. If you've got some actual technological or physical reasons why it should or should not be possible to skip ahead some nodes, please share them. Because that's the kind of information I was looking for when starting this thread, if that was not obvious to everyone.
 

norseamd

Lifer
Dec 13, 2013
13,990
180
106
Disruptive technological advances happen in most other areas from time to time, so I don't see why they could not also happen in the semiconductor industry. If you've got some actual technological or physical reasons why it should or should not be possible to skip ahead some nodes, please share them. Because that's the kind of information I was looking for when starting this thread, if that was not obvious to everyone.

the computer industry actually has good technology gains from year to year

compare with battery technology or small arms technology
 

KingFatty

Diamond Member
Dec 29, 2010
3,034
1
81
I think such general analogies to everyday life situations are quite pointless, actually. We've had several of them already, so can we please stop destroying the thread with more of them? Such meaningless general statements really are no technological proof that it should not be possible to skip ahead a couple of process nodes.

Disruptive technological advances happen in most other areas from time to time, so I don't see why they could not also happen in the semiconductor industry. If you've got some actual technological or physical reasons why it should or should not be possible to skip ahead some nodes, please share them. Because that's the kind of information I was looking for when starting this thread, if that was not obvious to everyone.

I think that it might be backwards to think about this in terms of Intel/AMD "choosing a node."

Rather, I think it's more of "the node chooses you" where AMD/Intel just looks around at what industrial tools are available, maybe does some tweaking, and proceeds.

So while they are in the business of producing CPUs, they are not in the business of developing all the various new tools that incrementally enable a new node to be used. That's my armchair view of it, and hence the silly analogies we've been seeing. You can't just choose to change nodes, because the tools to do it don't exist yet. The tools get improved here and there, and eventually that coalesces into the ability to change nodes at Intel/AMD or whatever fab.
 

cubby1223

Lifer
May 24, 2004
13,518
42
86
Why were the first computers made with vacuum tubes? Damn fools should have been working on 5nm silicon manufacturing technology from the start. If they had, just think of the possibilities that could have been opened today - negative nanometer manufacturing!
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Why were the first computers made with vacuum tubes? Damn fools should have been working on 5nm silicon manufacturing technology from the start. If they had, just think of the possibilities that could have been opened today - negative nanometer manufacturing!

Your newsletter, I would like to subscribe to it. And if you happen to have a tiger-repelling rock, I would like to buy it.
 

cubby1223

Lifer
May 24, 2004
13,518
42
86
I think such general analogies to everyday life situations are quite pointless, actually. We've had several of them already, so can we please stop destroying the thread with more of them? Such meaningless general statements really are no technological proof that it should not be possible to skip ahead a couple of process nodes.

Disruptive technological advances happen in most other areas from time to time, so I don't see why they could not also happen in the semiconductor industry. If you've got some actual technological or physical reasons why it should or should not be possible to skip ahead some nodes, please share them. Because that's the kind of information I was looking for when starting this thread, if that was not obvious to everyone.

The width of an atom is anywhere from about 0.1nm to 0.5nm, depending on the element.

Advancing to a smaller manufacturing process requires tighter and tighter control over the placement of atoms on the silicon wafer.
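
A quick sanity check on that (a Python sketch; the Si-Si spacing is a real physical constant, but treating a node name as a literal drawn feature size is a simplifying assumption):

```python
# How many silicon atoms span a feature of a given width?
# Si-Si bond length is ~0.235 nm; treating the node name as a literal feature
# width is a simplification, since modern node names are marketing labels.

SI_ATOM_SPACING_NM = 0.235

for feature_nm in (22, 14, 10, 7, 5):
    atoms = feature_nm / SI_ATOM_SPACING_NM
    print(f"{feature_nm:>2} nm feature ~ {atoms:.0f} atomic spacings across")
```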
 

coercitiv

Diamond Member
Jan 24, 2014
7,170
16,729
136
Disruptive technological advances happen in most other areas from time to time, so I don't see why they could not also happen in the semiconductor industry. If you've got some actual technological or physical reasons why it should or should not be possible to skip ahead some nodes, please share them. Because that's the kind of information I was looking for when starting this thread, if that was not obvious to everyone.

Read below or jump to the article.

Ars Technica said:
The limiting factor is the size of pattern that can be created on the photoresist layer. Higher transistor densities require finer mask patterns and shorter light wavelengths. Here's the pressing issue: current photolithography uses ultraviolet light with a 193nm wavelength, but at some point in the near future, probably around the 10nm process, a switch to extreme UV (EUV) with a 13.5nm wavelength will be required.

In the late 1990s and early 2000s, there was confidence within the semiconductor industry that EUV equipment was coming soon. It has failed to materialize, however, due to the technical difficulties that EUV imposes. Optically, EUV is harder to work with. It precludes the use of lenses, as most optical materials strongly absorb EUV light. Instead only mirrors can be used.

Ars Technica said:
But good news could be on the horizon. ASML, the world's largest supplier of photolithographic equipment, has said (via HotHardware) that it could have production-ready commercial equipment by 2015, suitable for producing chips with 10nm features.

Now imagine if we had stopped developing iterative shrinks in the late 1990s and shot directly for 5nm: this post might have been written on a 130nm CPU.

PS: but that upgrade coming in the next few years... omfg :D
 

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
On the topic of skipping development, why did Intel bother to design all of the "intermediate" steps toward Haswell when they could have just designed Haswell in much less time?
 

norseamd

Lifer
Dec 13, 2013
13,990
180
106
so how are these optical computing parts going along? they should be easier than graphene right?
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
On the topic of skipping development, why did Intel bother to design all of the "intermediate" steps toward Haswell when they could have just designed Haswell in much less time?

Haswell has a lot more transistors than earlier microarchitectures, so it only becomes economically viable at a certain node. And I think you underestimate the cost of designing something like Haswell.
 

norseamd

Lifer
Dec 13, 2013
13,990
180
106
Haswell has a lot more transistors than earlier microarchitectures, so it only becomes economically viable at a certain node. And I think you underestimate the cost of designing something like Haswell.

think he was being sarcastic
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
Haswell has a lot more transistors than earlier microarchitectures, so it only becomes economically viable at a certain node. And I think you underestimate the cost of designing something like Haswell.

Another thing to consider is that the heavy move to on-die GPUs means a huge % of the increased transistor count goes to non-CPU functions. This has a stagnating effect on processor performance. Without the iGPU to account for, something with the transistor count of a Haswell i7 could be an 8-core 16-thread CPU w/16MB cache.
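
As a very crude budget sketch of that point (Python; the die-area split and per-block sizes below are loose guesses, not Intel figures):

```python
# Rough "where does the die area go" sketch for a Haswell-class quad-core part.
# Every number below is a loose guess for illustration, not an Intel figure.

igpu_mm2         = 50  # assumed area of the integrated GPU block
core_plus_l2_mm2 = 14  # assumed area of one CPU core with its private L2
l3_mm2_per_mb    = 5   # assumed area per MB of shared L3 cache

# If the iGPU area were spent on CPU resources instead:
extra_cores = igpu_mm2 // core_plus_l2_mm2
extra_l3_mb = (igpu_mm2 - extra_cores * core_plus_l2_mm2) // l3_mm2_per_mb
print(f"~{extra_cores} extra cores and ~{extra_l3_mb} MB more L3 in the same silicon")
```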

In my enthusiast's mind, I would prefer that the GPU be an on-package option rather than integrated into the CPU die itself, in a modular fashion. That way you could get very high end processors with no GPU at all, or on the flip side, a very decent one should you need it. But of course from a production perspective that's not all that practical.

The downward price pressure on average PCs makes pragmatism extremely important, and it's a perfect explanation of why a SB from 2011 is still 80-90% as good as a brand-new Haswell for 99% of CPU tasks. At the same time, the iGPU leaps from SB to IB to Haswell are very large.

If this were the 90s, when ASPs were dramatically higher (even more so when you consider inflation), a greater emphasis on raw performance increases across the board would be feasible. But the trend now emphasizes mobile platforms, low power use, low heat output, small package sizes, and increased GPU capability as display tech finally improves (think of how common ultra-high-resolution displays are getting on phones and tablets).

We *will* finally get beyond SB/IB/Haswell in CPU performance, but it will take those leaps down to 14nm/10nm/beyond to get there.
 

norseamd

Lifer
Dec 13, 2013
13,990
180
106
Another thing to consider is that the heavy move to on-die GPUs means a huge % of the increased transistor count goes to non-CPU functions. This has a stagnating effect on processor performance. Without the iGPU to account for, something with the transistor count of a Haswell i7 could be an 8-core 16-thread CPU w/16MB cache.

In my enthusiast's mind, I would prefer that the GPU be an on-package option rather than integrated into the CPU die itself, in a modular fashion. That way you could get very high end processors with no GPU at all, or on the flip side, a very decent one should you need it. But of course from a production perspective that's not all that practical.

The downward price pressure on average PCs makes pragmatism extremely important, and it's a perfect explanation of why a SB from 2011 is still 80-90% as good as a brand-new Haswell for 99% of CPU tasks. At the same time, the iGPU leaps from SB to IB to Haswell are very large.

If this were the 90s, when ASPs were dramatically higher (even more so when you consider inflation), a greater emphasis on raw performance increases across the board would be feasible. But the trend now emphasizes mobile platforms, low power use, low heat output, small package sizes, and increased GPU capability as display tech finally improves (think of how common ultra-high-resolution displays are getting on phones and tablets).

We *will* finally get beyond SB/IB/Haswell in CPU performance, but it will take those leaps down to 14nm/10nm/beyond to get there.

when did they start putting more effort into heat output?

thought everything was running with excessive heat now
 

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
when did they start putting more effort into heat output?

thought everything was running with excessive heat now

"Low heat output" is not the same as "low temperature". Ivy Bridge produces significantly less heat than Sandy Bridge even though it generally runs at higher temperatures.
 

norseamd

Lifer
Dec 13, 2013
13,990
180
106
"Low heat output" is not the same as "low temperature". Ivy Bridge produces significantly less heat than Sandy Bridge even though it generally runs at higher temperatures. [/quote

was actually a sarcastic comment but alright
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
when did they start putting more effort into heat output?

thought everything was running with excessive heat now

It's not so much excessive heat as it is a design consideration to save money.

SB had soldered IHS, which meant that heat transfer from the die into the IHS and into the heatsink itself was very good.

IB and Haswell went to a cheaper process that replaced the solder with a thermal paste (TIM) between the die and the IHS, and this has been proven exhaustively (IDC on our forum is a GOD in this area) to be a dramatic step back in efficiency. So what you get is less efficient heat transfer from the die to the IHS and out to the heatsink. And this is precisely why you often see crazy high temps with IB/Haswell CPUs, yet the heatsinks themselves are cool to the touch, and extreme air cooling/watercooling don't seem to help that much. The TIM is the limiting factor.
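
A simple steady-state model of why that interface dominates (a Python sketch; the thermal-resistance and power numbers are illustrative assumptions, not measurements of these CPUs):

```python
# Steady-state die temperature: T_die ~ T_ambient + P * (R_die_to_ihs + R_ihs_to_air).
# All resistance and power values below are illustrative assumptions.

def die_temp_c(power_w, r_die_to_ihs, r_ihs_to_air, t_ambient_c=25.0):
    """Approximate die temperature for a given power and thermal-resistance stack."""
    return t_ambient_c + power_w * (r_die_to_ihs + r_ihs_to_air)

power_w  = 90.0   # assumed package power under load, in watts
r_cooler = 0.25   # assumed IHS-to-ambient resistance of a decent air cooler, K/W

print(die_temp_c(power_w, r_die_to_ihs=0.05, r_ihs_to_air=r_cooler))  # soldered IHS (assumed): ~52 C
print(die_temp_c(power_w, r_die_to_ihs=0.25, r_ihs_to_air=r_cooler))  # paste TIM (assumed):    ~70 C
```

A better cooler only shrinks the second term, which is why exotic cooling helps so little once the die-to-IHS interface is the bottleneck.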

And all of that only matters to the supernerds like us that OC :) The average user doesn't know or care, and even the slightly more advanced users who know a little bit might just assume that IB/Haswell are hotter by nature (there is some truth to the density and certainly with regards to the new instruction sets), but the truth is far more complex.

At the end of the day it was a financial decision. Intel's change from solder to TIM didn't have any real negative effect on non-OC systems (the vast bulk of their sales of course), and offered a cheaper way to manufacture the product. So from their perspective it made perfect sense.