[Fool] Speculation: 7nm in 2021

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
http://www.fool.com/investing/2016/09/02/intel-corporation-may-have-pushed-7-nanometer-tech.aspx

In my opinion, this article completely misses the mark. For those who (rightly so) don't want to read it: apparently an earlier version of some job listing said 2020 with "sub-10nm" (most likely just referring to the next decade), while it has now been updated to 2022.

Now, who cares about job listings? It would be crazy to expect to find unannounced product roadmaps in there, right? And even if you could, who cares? This job is talking about 6 years into the future. You can't predict the future with 3-sigma certainty in tech. Job listings can't, neither can CEOs, neither can roadmaps, neither can investor meetings.

The job listing is talking about architecture, so that could be a Tock or an optimization on a 7nm+ or 7nm++ node.

So here is my own prediction for the 7nm timeframe, and I hope it also gets made into an article :D.

Intel is now on a 3-year node cadence. According to Mark Bohr (Intel's manufacturing Fellow), as of August, 10nm will be released in H2'17 (most likely at IDF of course, while that article by Ashraf says 2018 :rolleyes:). Count 3 years further: 10nm+, 10nm++, and then in H2'2020 you have 7nm. Wasn't too hard.
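For what it's worth, here is that arithmetic as a few lines of Python. It's a minimal sketch of my own: the H2'17 start and the 3-year cadence come from the paragraph above; the yearly 10nm+/10nm++ steps in between are my assumption.

```python
# Back-of-the-envelope sketch only, not an Intel roadmap: assume 10nm ships in
# H2'17 (per Bohr) and a 3-year full-node cadence, with the "+" and "++"
# optimization steps filling the years in between (my assumption).
START_YEAR = 2017                 # H2'17: 10nm
CADENCE_YEARS = 3                 # one full node every 3 years

steps = ["10nm", "10nm+", "10nm++", "7nm"]   # one step per year
for i, step in enumerate(steps):
    print(f"H2'{START_YEAR + i}: {step}")

# 7nm lands in H2'(START_YEAR + CADENCE_YEARS) = H2'2020.
```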

If Intel can't hold even a 3-year cadence, then I don't know how they managed Tick-Tock for such a long time or why they have a 2-year lead right now.
 

mikk

Diamond Member
May 15, 2012
4,293
2,382
136
I believe 2021 is a more realistic scenario, considering that delays are business as usual. Furthermore, with 10nm+ and 10nm++ it wouldn't surprise me if we saw a 12-15 month timeframe between those updates; the lifetime of 10nm should be longer than 14nm's.
 
  • Like
Reactions: CHADBOGA

nismotigerwvu

Golden Member
May 13, 2004
1,568
33
91
We're well and truly past "business as usual" on advancing node shrinks. As we approach the physical limits of the materials, the degree of difficulty ramps up exponentially and, with it, the time required to make the whole thing work. Also, where are we setting the bar in terms of availability? A single product, a reasonable portion of the stack, or top to bottom?
 

jpiniero

Lifer
Oct 1, 2010
16,761
7,216
136
2021 is possible. I don't think Intel is going to shrink beyond 10 nm without EUV but I imagine they would want to go as soon as realistically possible.
 
Mar 10, 2006
11,715
2,012
126
2021 is possible. I don't think Intel is going to shrink beyond 10 nm without EUV but I imagine they would want to go as soon as realistically possible.

Mark Bohr said:
Bohr: We are well down the path of developing our 7nm technology today on an all-immersion process. We are closely monitoring the health progress of EUV tools. But again, they are not yet at that maturity level that we could say we’ll be committing them for 7nm.
 

ConsoleLover

Member
Aug 28, 2016
137
43
56
It would actually be 2019, since now that FinFETs are a known quantity it should be easier to produce 10nm than it was to produce 16nm or 14nm. Coming from 28nm they didn't know how to do 16nm; FinFETs were a new technology and that discovery took as much time as the actual shrink design. So now that everyone has the blueprint, it should be much easier to go from 16/14nm to 10/7nm.

In fact, GF announced with AMD that they are skipping 10nm and going straight to 7nm. This means they can start researching 7nm from 2017, and I believe, with FinFETs being a known quantity, it will be 2019 when they physically have 7nm ready for mass production, with REAL availability arriving in 2020.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,582
10,221
126
A realistic question - will this matter at all for the consumer?

Intel has shown a severe reluctance to give the consumer "more" with every shrink (more transistors, more cores, more cache, whatever). We've gone from 45nm down to 14nm and it's still four cores max on the mainstream platform, actually LESS cache on the mainstream platform (total effective cache decreased after the Core 2 Quads), and Intel has been shrinking their die sizes with each shrink rather than adding more transistors and more performance and/or cores.

So, really, maybe shrinks are good for shareholders (greater profits for shareholders), but given Intel's apparent greed, they mean virtually nothing for consumers. (Oh, the new chips might consume 10% less power. Big whoop, especially on desktop.)

Edit: 8C/16T should be standard on desktop machines by now, given all the shrinks we've been through since 4C debuted. (Zen should be interesting in that regard.)
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
If I may insert a small comment (whoops, it didn't turn out so small).

I do think Bohr may be putting up a bit of a smokescreen there, because surely Intel has very close contact with ASML and knows how the technology is progressing and what the goals are. I think by now Intel, just like the others, has some EUV systems it is doing research on. So from that I conclude that 7nm will be EUV-ready from the start, and the only question will be how mature the EUV ecosystem is (power output, masks, pellicles, reticles, availability, WPH).

A few milestones (some napkin math on these after the list):

Q1'15: peak of 1,000 WPD with a 90W source
Mid '15: Intel presentation on the results of a 4-week demo (ended late Q1'15) at 40W: 280 WPD and 64% availability, which exceeded expectations/goals by a large margin (but is certainly not even close to production ready): http://euvlitho.com/2015/P1.pdf
Mid '15: 70% uptime over a week at multiple customers; 130W source

Q2'16: 1,488 WPD *peak* (1,200 WPD at a customer)
2016 goal: 1,500 WPD, availability 80-89% (don't know the running time, though)
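As a rough illustration of where those numbers put the tools, here is the napkin math (my own crude estimate, not ASML's methodology; how exactly they define peak WPD and availability isn't clear from the slides, so treat the outputs as illustrative only):

```python
# Crude estimate only: derate the peak wafers-per-day figure by the availability
# fraction to get a rough long-run number.
def derated_wpd(peak_wpd: float, availability: float) -> float:
    return peak_wpd * availability

print(derated_wpd(1488, 0.80))   # Q2'16 peak at ~80% availability -> ~1190 WPD
print(derated_wpd(1500, 0.85))   # 2016 goal, midpoint of 80-89% -> ~1275 WPD
```

Interestingly, 1,488 x 0.80 is about 1,190, which is at least in the same ballpark as the 1,200 WPD reported at a customer.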

For sure I understand that Intel isn't yet going all-in on EUV. In 2002, Intel's roadmap stated EUV insertion at the 45nm node. In 2010, ASML thought they would be at 250W-350W within a few years (~2013-14). You can see those roadmaps on slide 4 of the previous link. They look the same as those infamous Itanium roadmaps.

BTW, while reading up on EUV, I wonder what they mean by the 10/7nm node in 2017 (link below); they view that as one thing. What sort of fancy things will those foundries do now? I can't believe they will go from 10nm to 7nm like a walk in the park, the way they did with 14nm, 'cause there *will* be a shrink both times.

http://seekingalpha.com/article/398...-results-earnings-call-transcript?part=single

Anyway, at least it seems there is a more modest roadmap right now: http://www.anandtech.com/show/10097...-good-progress-still-not-ready-for-prime-time. The 3400 already has 7 orders according to the link above; those will be delivered in 2018. If they can continue to improve yields and availability over the next couple of years, it should be doable. I mean, not long ago people were thinking EUV would be ready for 10nm insertion. According to ASML, the time from shipment to production is 12-18 months. They're aiming for 2019-2020 HVM.

And finally you have those prices that are just exploding :D. Enjoy the following quote.

List price: the 3300 was somewhere between €60 million and €65 million. The 3350 is mid-90s. And then the 3400 is about €20 million higher than that [≈ €115 million].
 
Mar 10, 2006
11,715
2,012
126
A realistic question - will this matter at all for the consumer?

All right, fine, let's dance :)

Intel has shown a severe reluctance to give the consumer "more" with every shrink (more transistors, more cores, more cache, whatever). We've gone from 45nm down to 14nm and it's still four cores max on the mainstream platform, actually LESS cache on the mainstream platform (total effective cache decreased after the Core 2 Quads), and Intel has been shrinking their die sizes with each shrink rather than adding more transistors and more performance and/or cores.

For client workloads, a highly clocked 4C/8T part is going to be better than a lower-clocked 6C/12T or 8C/16T part and the "mainstream" CPUs reflect that. Also note that for client workloads, especially in notebooks (>50% of the PC market), GPU is important. Intel has been spending a lot more transistors on the iGPU for this very reason.

Anyway, the die size argument is also bunk because cost/mm^2 of silicon is going up, so it would be uneconomical for Intel to keep die sizes flat-to-up generation over generation. Die sizes need to come down.

Transistor counts have gone up, but I don't see why this matters so much. All that matters is delivered performance. Take the cache example... the reason that Core 2 Quad/Penryn had very large caches is that Intel hadn't yet integrated the memory controller onto the CPU die. By moving the memory controller onto the die and moving to a three-level cache structure, Intel could include smaller but much faster caches (remember, caches get slower the larger you make them).

So, really, maybe shrinks are good for shareholders (greater profits for shareholders), but given Intel's apparent greed, they mean virtually nothing for consumers. (Oh, the new chips might consume 10% less power. Big whoop, especially on desktop.)

Edit: 8C/16T should be standard on desktop machines by now, given all the shrinks we've been through since 4C debuted. (Zen should be interesting in that regard.)

Ah, this old chestnut. Look, Intel's margins are mainly correlated to two things:

1. Factory utilization rate
2. Product competitiveness

If Intel doesn't make products that people want to buy, then people won't buy them, and thus factory utilization rates come down. In this case though, Intel's major customers are system vendors. Those system vendors tell Intel what they want from future processors and then Intel does its best to deliver.

In terms of said product competitiveness, Intel is doing well.

Factory utilization rate depends on the health of the overall PC market and Intel's ability to forecast future PC demand when it is putting in capacity (since capacity doesn't just get put in overnight).

As far as 8C/16T being standard on desktops, I disagree. Most desktop users wouldn't benefit from those additional cores, and it would even hurt product desirability if those cores were clocked lower in order to fit them into the proper power envelope.

The only reason AMD is bothering with an 8C/16T desktop part is that the die itself was built for server applications and they think they can bring in some extra cash by selling it as a "high end desktop" part. Don't think for one second that the Zeppelin 8C/16T die was created as some sort of special love-letter to enthusiasts.

There is literally zero difference between AMD packaging up 8C/16T Zen as a HEDT part and Intel packaging up a cut-down 10C/20T BDW-E as an 8C/16T HEDT part.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136

Not going to be surprised if that's true. It takes somewhat more time than 12 months for an "annual" refresh. 32nm Sandy Bridge was early 2011. 22nm should have been early 2012, and 14nm early 2014. With Ivy Bridge there was a delay, and with Haswell there was a delay, and with Broadwell there was a delay.

Early 2018 - 10nm: Process
Early Q2 2019 - Architecture
Early Q3 2020 - Optimization
Early 2022 - 7nm

In fact GF announced with AMD that they are skipping 10nm and going straight to 7nm.

I've read such rumors since 45nm, that AMD will "skip" a node. There's no such thing; people who have no clue are claiming that. The guys doing the research themselves are finding solutions as they go. It's not like the roadmap to the future is already paved.

I guess that doesn't prevent them from slapping on whatever node designations they deem necessary. :rolleyes: Intel changed the naming of 16nm to 14nm because they said the gains would be greater than normal, but historically the gains after 90nm were smaller than ever, so not really. TSMC/GloFo/Samsung took a mediocre half node ("20nm") and an above-average half node ("14/16nm") and called it a two-node jump.
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Not going to be surprised if that's true. It takes somewhat more time than 12 months for an "annual" refresh. 32nm Sandy Bridge was early 2011. 22nm should have been early 2012, and 14nm early 2014. With Ivy Bridge there was a delay, and with Haswell there was a delay, and with Broadwell there was a delay.

Early 2018 - 10nm: Process
Early Q2 2019 - Architecture
Early Q3 2020 - Optimization
Early 2022 - 7nm
That was because Tick-Tock was a bit more than 24 months and 22nm was where things were becoming more complex, but the delta between nodes was not a lot more than 24 months (http://images.anandtech.com/doci/10504/Node.png). The big problem was the transition to 14nm, which was not even close to 24 months.

But in the last few years you can really see the 1 year time between products. Haswell: Computex (June, mobile a bit later), Haswell Refresh: Computex, Broadwell: IDF (sort of, that was of course in the wake of all the yield problems), Skylake: IDF (sort of), Kaby Lake: IDF (sort of), Cannonlake: IDF (hopefully those millions of units that BK once talked about that would be available).

So in 4 years, an average delta of just a tiny bit more than 1 year. Of course we don't know exactly how things are going internally with the process nodes, but BK and Bohr are talking about 2.5 years, so 3 years is overshooting the mark a little, which they do so they have a predictable cadence.
 

Eug

Lifer
Mar 11, 2000
24,090
1,732
126
A realistic question - will this matter at all for the consumer?

Intel has shown a severe reluctance to give the consumer "more" with every shrink (more transistors, more cores, more cache, whatever). We've gone from 45nm down to 14nm and it's still four cores max on the mainstream platform, actually LESS cache on the mainstream platform (total effective cache decreased after the Core 2 Quads), and Intel has been shrinking their die sizes with each shrink rather than adding more transistors and more performance and/or cores.

So, really, maybe shrinks are good for shareholders (greater profits for shareholders), but given Intel's apparent greed, they mean virtually nothing for consumers. (Oh, the new chips might consume 10% less power. Big whoop, especially on desktop.)

Edit: 8C/16T should be standard on desktop machines by now, given all the shrinks we've been through since 4C debuted. (Zen should be interesting in that regard.)
From my perspective, we have gotten way more, in terms of performance per watt.

I'm not a hardcore computer user compared to some in this forum, but relatively speaking I'm a power user compared to most consumers, yet middling performance on the desktop is all I need. OTOH, we've now got these gorgeous ultra portable machines that have decent performance, something that was just not possible a decade ago.

These days there just aren't that many people in the mainstream, in the greater scheme of things, who need hardcore top-tier desktop performance, and Intel knows that. Hell, these days people don't even buy that many desktops anymore. The forecast for 2017 is over 400 million tablets shipped worldwide, and about 200 million laptops. Add those together and you've got over 600 million portable devices, not even counting phones. In contrast, the prediction is that about 120 million desktops will be shipped worldwide.

Given the direction of the market, it's no surprise that Intel has been focusing more on performance per watt and lower wattage chips than absolute performance.
 
Last edited:
  • Like
Reactions: zentan

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
But in the last few years you can really see the 1 year time between products. Haswell: Computex (June, mobile a bit later), Haswell Refresh: Computex, Broadwell: IDF (sort of, that was of course in the wake of all the yield problems), Skylake: IDF (sort of), Kaby Lake: IDF (sort of), Cannonlake: IDF (hopefully those millions of units that BK once talked about that would be available).

Haswell was still later than Ivy Bridge. Ivy Bridge was more early Q2. And the launches became more staggered, even though the differences were not immediately obvious. Even amongst the U chips, which I was tracking extremely closely, Haswell was 1-2 months late compared to Ivy Bridge. I am talking about devices and CPUs you can actually buy. In 2014 you couldn't buy anything other than the crippled Core M on Broadwell. That was another staggered launch, and the majority came in early Q1. From early Q2 with 22nm to early Q1 with 14nm, that's 30 months. From early January with 32nm to early Q2 with 22nm, that's 27-28 months.

The 18-24 month figure for a new node that was bandied about until very recently (hint: PAO) was not met. 18 months was a lie; Intel wasn't going to give you two products in 1.5 years. OEMs/ODMs wouldn't allow it either, since they wanted a year or more for their products to sell. 12 months minimum. In practice it was 24 months most of the time in the first half of the 2000s, and more than 24 months in the latter part of that decade.

If it really was 24 months:
0.13u Pentium III-M: July 2001

Exact 24 months every process generation would mean (sketched in the code below):
32nm Clarkdale: July 2009
22nm Ivy Bridge: July 2011
14nm Broadwell: July 2013
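For what it's worth, here is that projection as a few lines of Python (a minimal sketch; the July 2001 anchor and the resulting dates come straight from the lists above, the code itself is mine):

```python
# Toy projection: exactly 24 months per process generation, starting from the
# 0.13u Pentium III-M in July 2001.
generations = ["0.13u Pentium III-M", "90nm", "65nm", "45nm",
               "32nm Clarkdale", "22nm Ivy Bridge", "14nm Broadwell"]
year = 2001
for gen in generations:
    print(f"July {year}: {gen}")
    year += 2                     # exactly 24 months per generation

# -> 32nm in July 2009, 22nm in July 2011, 14nm in July 2013, as listed above.
# Actual launches (32nm in early January 2010, 22nm in early Q2 2012, Broadwell
# staggered across late 2014/early 2015) show the real cadence was 24+ months.
```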

In fact, Intel got very fortunate with 32nm, which became an excellent process, one that's still mostly unmatched in its characteristics, with Sandy Bridge as the product. Around that time, rumors started surfacing (they were not widespread) that Intel might need another bridge after Ivy, because 22nm might have issues. Other than the delay, 22nm went along fine; 14nm became the one that needed an extra year.
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
@IntelUser2000

I don't know why you come up with this 18-month number. Moore's Law has *never* been 18 months. It was 12 months when he published his paper in 1965, and he revised it to every 2 years 10 years later. In any case, it's already truly remarkable that a number from 1975 for transistor scaling has held up for so long.

I don't know why you dwell on 14nm. We all know what happened: bad yield problems.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
There is another way to look at Intel's new 3-year cadence. If they really rushed it, they could probably have done it in 2.5 years. So they have given themselves an extra 6 months to improve yields, in which case 7nm yields in 2H 2020 could end up being very good.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
What is $11B R&D per year worth?

* Non-leading modem
* Almost nonexistent foundry business
* Tick-Tock cadence delayed by 1 year per shrink
* Architecture improvements that are within the margin of error (5%)
* Many key products like silicon photonics, 3D NAND, 3D XPoint, Knights Landing, 2.5D interconnect, Stratix 10, etc. not delivered on time
* An awful mobile failure. (If Intel were actually executing on the things above, we could forgive them.)

They must do something exciting and unexpected. Deliver 2x efficiency without a shrink (Maxwell), or a graphics processor with a gazillion EUs (GT9) and competitive performance per area, or a quantum computer, or a biological DNA computer. Or "simply" a stunning 10nm process + yields (like 22nm's "highest yields ever") that enables a super-duper-fast rollout (because of the margin improvement vs. the poor 14nm process).

I guess for those things you'd need a $100B budget. But hey, they could cut the nonsensical share buybacks and dividend :).

Can't wait for Zen to eat into Intel's market share. That will make them stop worrying like "oh no, with subpar 14nm yields our gross margin would drop from 65% to 63%, so better delay the s*** out of it", and hurry up because of market share loss paranoia. (With 14nm, yields probably dropped from Intel-like to TSMC-like, and I don't see Apple having difficulties with those and it doesn't stop Nvidia from producing monster Pascal GPU dies.)

Something cheap Intel could do is just give some in-depth presentations or measuring apparatus to AnandTech. If you don't have nice products, at least you can talk about them, right :).

(I'd love to be CFO of this company (or Apple xD) and look at all those numbers.)