Is 7nm the practical limit?

whm1974

Diamond Member
Jul 24, 2016
7,447
483
96
#1
I'm starting to wonder what the practical limit to shrinking process nodes is. Will it be so costly and difficult to get to 5nm that fabs will simply spend their resources on improving 7nm, and chip designers will put more effort into improving the designs of their products instead of depending on ever-shrinking node sizes?

Just curious... I just think we'll be stuck at 7nm for a really long while.
 

Yotsugi

Senior member
Oct 16, 2017
794
196
96
#2
There are 5nm and 3nm nodes coming.
3nm brings a device change.
 

whm1974

Diamond Member
Jul 24, 2016
7,447
483
96
#3
There are 5nm and 3nm nodes coming.
3nm brings a device change.
The last I read about 3nm was that researchers in South Korea only managed to reach it by using carbon nanotubes, which no one has figured out how to mass-produce yet.
 

maddie

Platinum Member
Jul 18, 2010
2,512
418
136
#4
The last I read about 3nm was that researchers in South Korea only managed to reach it by using carbon nanotubes, which no one has figured out how to mass-produce yet.
The thing with breakthroughs is that they're unpredictable. Trying to force one is often an exercise in frustration. Witness fusion energy.
 

whm1974

Diamond Member
Jul 24, 2016
7,447
483
96
#5
The thing with breakthroughs is that they're unpredictable. Trying to force one is often an exercise in frustration. Witness fusion energy.
Very unpredictable. It requires long, hard work and a fair amount of luck.
 
Mar 11, 2004
18,213
608
126
#6
It depends on whether you're talking about the actual nm or the marketing name. From what I've gathered, 5nm (the marketing name) should be possible with EUV, but beyond that is anyone's guess. I don't know how much improvement they'll keep getting.

Ten years from now, I think we'll still be on 5nm. 7nm seems to be off to a good start, and I think it'll be viable for cutting-edge processors for five years. I think 7nm EUV will be largely a trial (some chips will be made on it, but I think it'll be too costly and limited, especially since foundries will be keen on pushing forward, so they'll move to 5nm, which from what I've seen should be pretty easy once they get EUV up and running, and they'll need to claim 5nm for marketing purposes to justify the costs), and then in about five years 5nm EUV will get into production.

Er, I need to correct myself. Samsung's 7nm is apparently developed for EUV, but I think it'll take longer for that to be affordable for mass production, so they'll get a non-EUV 7nm process going first. And I believe Intel's 7nm is also built around EUV, but it's also, I think, more advanced (as far as actual nm) than the other foundries' nodes, so it'd probably be more comparable to TSMC's and Samsung's 5nm processes. But Intel still has its 10nm process before then, and I think it'll be a while before we see Intel mass-producing 7nm, just because they're still waiting on EUV equipment (so even once they get it up and running, the number of chips they can produce is going to be somewhat limited).

I think 7nm will improve as well, but I actually think a lot of chips are going to be developed in stages, where they tweak parts of the chip to get better density and performance. I'm not even talking about modules/chiplets like AMD's, but rather how they'll hand-tune certain parts and automate the layout of other parts of the chip (I forget what it's called, but AMD started using it back with the Construction cores, and I think it's common in the industry now) until the next version, where they'll put more work into tweaking those parts.

At these density levels it takes a lot of work (and cost) to design, engineer, and produce chips, and since process improvements will be drawn out as well, I think they'll take a measured approach.
 

whm1974

Diamond Member
Jul 24, 2016
7,447
483
96
#7
@darkswordsman17 Yes, I mean actual nm. I've read for a long time that 5nm is the limit for silicon and that we'd have to use more exotic materials to get any smaller.

Of course, I'm likely to die of old age before the latter happens....:eek:
 

maddie

Platinum Member
Jul 18, 2010
2,512
418
136
#8
7nm with EUV is supposed to be cheaper than the present 7nm from TSMC. Fewer masks, fewer defects, higher yields.
 

Yotsugi

Senior member
Oct 16, 2017
794
196
96
#9
7nm with EUV is supposed to be cheaper than the present 7nm from TSMC. Fewer masks, fewer defects, higher yields.
And every problem EUV brings.
 

Xpage

Senior member
Jun 22, 2005
457
5
81
www.riseofkingdoms.com
#10
Years ago I recall seeing this discussion, but we were at 28nm then, and maybe Intel was just ramping 22nm.

IIRC, I thought around 10nm was the limit: around 50 silicon atoms wide between features. That would be about a 7nm node from Samsung or TSMC, or 10nm from Intel. I think a 5nm technology is getting close to the limit; that would be around 35 Si atoms of width between features. Quantum tunneling of electrons is what I'm worried about for leakage; I'm not sure how often that happens across a span of 1 to 35 atoms.

I think 7nm and 5nm will be very long-lived nodes. The only difference is that 5nm might get non-silicon materials. I think that could give us a large one-time leap in clock speed; I'm not sure how that will work out for leakage and cost. I only see CPU speeds above 5GHz becoming common if III-V materials work wonders for leakage and speed gains. Otherwise I think <5GHz stays the norm except for high-end SKUs.

Though CPUs tend to be fast enough today, I worry more about GPU usage. **insert chart of GPU rendering with pictures of rendering quality with Dawn and Adrianne Curry**

I still think we need another 5 doublings of power in GPUs for good VR: one doubling for increased resolution, two doublings for decent FPS at that resolution, and two more doublings for that amount of GPU power to become affordable. Assuming two doublings going from the current 14nm gen -> 7nm (should be 10nm) -> 5nm (marketing BS, more like 7nm), I think we're a bit short. Hopefully we'll get a 2x GPU boost from switching silicon to III-V materials, though that still leaves a lot of GPU processing power to be made up somehow, and software optimization won't get us a 4x increase in speed. But those are my thoughts.
 

whm1974

Diamond Member
Jul 24, 2016
7,447
483
96
#11
Personally, I'm thinking 5,000MHz will be the limit for CPU clock speeds, and that's for high-end products with no iGPU.

Processors with a decent iGPU? Well, we could be stuck at 3,600MHz/1,200MHz, since both parts have to share resources.
 

jpiniero

Diamond Member
Oct 1, 2010
6,305
208
126
#12
Actually physically shrinking the transistor... yeah, that's coming to an end soon. But there might be room for more tricks in manipulating the transistor to work in 3D space. That's what GAAFET sounds like to me, anyway. Now, that might get awfully toasty.
 

whm1974

Diamond Member
Jul 24, 2016
7,447
483
96
#13
Actually physically shrinking the transistor... yeah, that's coming to an end soon. But there might be room for more tricks in manipulating the transistor to work in 3D space. That's what GAAFET sounds like to me, anyway. Now, that might get awfully toasty.
It would be really ironic if it turns out that the x86 ISA has finally reached its limits at 7nm and we have to use designs based on RISC-like ISAs to get any further.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,415
8
91
#14
It would be really ironic if it turns out that the x86 ISA has finally reached its limits at 7nm and we have to use designs based on RISC-like ISAs to get any further.
Intel processors have been RISC internally** for a rather long time now.

**They basically convert the x86 instructions into the RISC-like micro-ops of that particular Intel CPU architecture, before/during execution of the machine code, i.e. on the fly.

There is more information about it here:
https://stackoverflow.com/questions...l-hide-internal-risc-core-in-their-processors
 
Last edited:

whm1974

Diamond Member
Jul 24, 2016
7,447
483
96
#15
Intel processors have been RISC internally** for a rather long time now.

**They basically convert the x86 instructions into the RISC-like micro-ops of that particular Intel CPU architecture, before/during execution of the machine code, i.e. on the fly.

There is more information about it here:
https://stackoverflow.com/questions...l-hide-internal-risc-core-in-their-processors
Yes, I'm aware of that, but I was referring to pure RISC designs such as RISC-V, OpenRISC, and the like.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,415
8
91
#16
Yes, I'm aware of that, but I was referring to pure RISC designs such as RISC-V, OpenRISC, and the like.
OK, I'm glad you realize that Intel already uses RISC-like micro-ops internally.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,415
8
91
#18
If memory serves, Intel has had this since the Pentium Pro, and AMD since the K6?
So, since they are already using a RISC-like system internally (and ignoring the small chip area used for the extra x86 translation units), there is no real room for improvement as regards RISC.
My understanding is that, speed-wise, removing the x86 translation would not help, because the translations occur in parallel with the other execution units.
Also, the extra chip area is fairly insignificant compared to the overall chip size.

Anyway, Intel likes to use micro-ops, which would still need to be translated from even a RISC instruction set, I believe, because the micro-ops are at an even lower level. They also give Intel the ability to make major changes between chip generations (tick/tock) without needing to rewrite the instruction set each time, which would really annoy the PC community (I would expect).
 
Mar 11, 2004
18,213
608
126
#19
7nm with EUV is supposed to be cheaper than the present 7nm from TSMC. Less masks, less defects, higher yields.
Right, but the shrink to 5nm once they get EUV going will allegedly be pretty easy. Because of the costs and the limited amount of EUV equipment (and the premium price of that equipment; I believe that stuff is not cheap even by chip-manufacturing-equipment standards), I think they'll move to 5nm quickly on it, so as to make 5nm and EUV more premium and worth the cost, while leaving the 7nm production lines EUV-free until there's enough equipment. That's just my random guess (not based on anything more than the information I mentioned).

I think for the first couple of years of EUV/5nm it'll be relegated to a few products. Apple can probably afford it, and I think AMD could for EPYC chips, and Nvidia for their larger chips.

I just realized: has anyone remarked on the prices of modern ARM chips? I haven't really seen a breakdown of that (but haven't really looked either; I thought Anandtech used to comment on the volume pricing, but I don't think I've seen that in the past few years). Phone prices have continued to climb (even relatively stripped-down stuff like OnePlus phones has climbed in price quite a lot; how much of that is because of the increased cost of the higher-end ARM SoCs they use?). 7nm is going to be more expensive, and I think 5nm/EUV will be too (though it should eventually get cheaper once there's enough equipment). I know not all of the price increase is due to that (for Apple, the OLED displays are part of it), but it makes me wonder how long the market will support the costs, especially if we see an economic downturn (which seems very possible) and issues over tariffs and trade.

Honestly, it might be a good thing that we probably have a bit of a slowdown coming, as I have a hunch we'll start to see a pretty significant shrink in consumer demand (between how powerful these mobile SoCs are getting and how many cores AMD seems intent on giving people, I can see people feeling like they've got good enough for a while). But I worry that it might happen before we really transition to EUV and beyond, and that EUV will end up relegated to enterprise-level stuff.
 

whm1974

Diamond Member
Jul 24, 2016
7,447
483
96
#20
So, since they are already using a RISC-like system internally (and ignoring the small chip area used for the extra x86 translation units), there is no real room for improvement as regards RISC.
My understanding is that, speed-wise, removing the x86 translation would not help, because the translations occur in parallel with the other execution units.
Also, the extra chip area is fairly insignificant compared to the overall chip size.

Anyway, Intel likes to use micro-ops, which would still need to be translated from even a RISC instruction set, I believe, because the micro-ops are at an even lower level. They also give Intel the ability to make major changes between chip generations (tick/tock) without needing to rewrite the instruction set each time, which would really annoy the PC community (I would expect).
Yeah, I would be very upset if I had to give up all my games in order to get a huge improvement in performance and whatnot. And that's assuming I would even notice the improvement.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,415
8
91
#21
Yeah, I would be very upset if I had to give up all my games in order to get a huge improvement in performance and whatnot. And that's assuming I would even notice the improvement.
Exactly! :)
If Intel suddenly, overnight, changed 100% to a completely new RISC instruction set and simultaneously dropped all x86 backward compatibility, then for a long while, very approximately 12 months (it would vary, maybe 3 to 36+), Windows 10, the Windows Server editions, most Linuxes, the BSDs, and the Mac operating systems would all refuse to work on the new Intel CPUs.

Not to mention 99.99% (an estimate) of the world's application/user software.

Also, AMD would then be the only major player selling x86-compatible CPUs (as regards the latest CPUs).

Also, Intel would then be competing directly against ARM, RISC-V, etc., but without the massive x86 advantage.

tl;dr
I think it would be a dangerous thing for Intel to do, especially in the current climate of a very strong, resurgent AMD and its ever more capable latest CPU offerings.

One final note: the Intel x86 of today is often, in practice, NOT really x86, in that the old/original 8086/8088 instructions are largely unused.
What you really see (don't get me wrong, there are still plenty of non-SSE/AVX instructions in programs, as the newer sets can't do everything) is a lot of the much more modern instruction sets, such as SSE2, AVX, etc. Many of the newer instructions handle 2, 4, or more data values automatically within the same instruction, at essentially no speed penalty compared to handling a single value, which is a significant speedup wherever the SIMD concept applies.
tl;dr
I think most compilers these days, especially with the optimizer enabled, will use the later instruction sets. Although in the strictest sense that is still CISC, it is also comparable to modern RISC implementations, because some/many modern RISC instruction sets (e.g. ARM) also have SIMD extensions, such as NEON.

https://en.wikipedia.org/wiki/ARM_architecture#Advanced_SIMD_(NEON)

That is, in a sense, the latest CISC and RISC designs have partially merged (become similar), because modern CPU architectures tend to favor doing multiple things at the same time: superscalar execution plus SIMD (if you are not including SIMD in the definition of superscalar).

https://en.wikipedia.org/wiki/Superscalar_processor
 
Last edited:

Despoiler

Golden Member
Nov 10, 2007
1,824
69
136
#22
My understanding is that 3nm is the limit of semiconductor manufacturing in terms of our having at least an idea of how to do it; 5nm is doable. It's both a materials issue and a transistor-design issue. We have to rethink transistors to keep going. The metal-air-gap transistor is the kind of design that might replace what we currently use.

https://spectrum.ieee.org/nanoclast...w-metalair-transistor-replaces-semiconductors
 

whm1974

Diamond Member
Jul 24, 2016
7,447
483
96
#23
My understanding is that 3nm is the limit of semiconductor manufacturing in terms of our having at least an idea of how to do it; 5nm is doable. It's both a materials issue and a transistor-design issue. We have to rethink transistors to keep going. The metal-air-gap transistor is the kind of design that might replace what we currently use.

https://spectrum.ieee.org/nanoclast...w-metalair-transistor-replaces-semiconductors
Stuff like this is always "five years away" from working, let alone actually making it into real-world products. Just remember how long it took to get OLEDs working in large displays; I can remember reading about them back in 2000.
 