In-Depth: Intel's 10nm was definitely NOT too ambitious


dullard

Elite Member
May 21, 2001
25,907
4,494
126
Thanks for quoting me out of context! If you had used the full statement, it would have read as below... apparently you saw a need to make up an argument.



Reading the full statement makes it clear to everyone that I am referring to delays and not to the theoretical impossibility.
No, it does not make it clear. You specifically said it is impossible. No amount of context can turn a statement that something is impossible into saying it is possible.

You could have said that you think it is unlikely they will launch 10nm+ without mass production of 10nm. Then context would matter.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Oh, we continue with hairsplitting? Let's make it short: you are right and I am wrong. Hopefully this ends the pitiful discussion.

EDIT: Yes, I could have said that... but I did not... my fault... but do we have to discuss semantics eternally?
 

jpiniero

Lifer
Oct 1, 2010
16,491
6,983
136
I never implied that it is technically impossible - just highly stupid - because this essentially means not going into 10nm mass production despite being ready.

Well, Intel has already missed the window for mass-producing Cannon Lake, and I don't know if you would want to bet they will make the window for Ice Lake, given the Comet Lake rumors. So there may very well be a gap where they could in theory go to HVM, but there's no point since they missed the release window.
 

ehume

Golden Member
Nov 6, 2009
1,511
73
91
They cannot possibly ramp 10+ before 10. So are you implying they will delay 10nm even further until 10+ is ready? I do not think Intel can afford additional delays; they will bring 10nm into mass production as soon as possible, while 10+ will come later.
I have to agree with dullard, here. You appeared to me at the time -- and still do -- to be saying "They cannot possibly ramp 10+ before 10." Further, you are saying that 10+ will be delayed while Intel fixes 10nm, even though it is a delay they cannot afford. In the end you predict they will fix 10nm, and we will see it, late if necessary.

My guess is that they should abandon 10nm and move on to an alternate design, whatever they choose to call it (10+? The Don't-Look-At-The-Man-Behind-The-Curtain chip?). But that is just a guess. In the end I will wait and see what happens.
 

Vattila

Senior member
Oct 22, 2004
817
1,450
136
Regarding 10nm, 10+ and further refinements — I think the key thing for any evaluation of their schedule is to assess the features added by the refinements in terms of readiness, complexity and yield implications.

Regarding readiness: if the feature required for the refinement is simply not ready, they cannot jump to that refinement. If it is now ready, it may make sense; however, that depends on the complexity and yield implications of adding it.

Regarding complexity, if the feature adds complexity that makes solving the fundamental process problems harder, it hardly makes sense to introduce it until the primary problems are solved. On the other hand, if the feature decreases the complexity and makes it easier to solve the primary problems, then it makes a lot of sense.

Regarding yields, if the feature decreases yield, it makes little sense to introduce it while you struggle to get the primary problems solved and achieve acceptable yields on the primary process. On the other hand, if it increases yield, it makes sense.

(For example, EUV is planned for GlobalFoundries' 7LP+ process and will eliminate a lot of multi-patterning in the 7LP process, reducing the number of masks and processing steps, and hence potentially increasing yields. However, EUV has its own set of problems that need to be solved before it is ready for prime time. As another example, a taller transistor fin increases performance, but is more difficult to produce and hence yields worse than a shorter fin. It makes little sense to move to the taller, more difficult, fin until you master the shorter, and simpler, one.)
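To make the yield arithmetic concrete, here is a toy sketch (the per-step yield below is a made-up illustrative number, not real fab data): if each exposure/etch step succeeds independently, layer yield compounds multiplicatively across steps, which is why replacing a multi-patterned layer with a single EUV exposure can lift yield.

```python
# Toy model of how yield compounds across patterning steps.
# All numbers are invented for illustration; they are not
# real Intel or GlobalFoundries data.

def layer_yield(per_step_yield: float, steps: int) -> float:
    """Yield of one layer if each exposure/etch step succeeds independently."""
    return per_step_yield ** steps

Y_STEP = 0.995     # hypothetical yield of a single exposure step
SAQP_STEPS = 4     # quad patterning: four exposures per critical layer
EUV_STEPS = 1      # a single EUV exposure replaces them

print(f"SAQP layer yield: {layer_yield(Y_STEP, SAQP_STEPS):.4f}")  # 0.9801
print(f"EUV  layer yield: {layer_yield(Y_STEP, EUV_STEPS):.4f}")   # 0.9950
```

Multiply that small per-layer loss across the dozens of multi-patterned layers in a real process and the gap grows quickly, which is the intuition behind the "fewer masks, potentially higher yield" point above.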
All that said, it is also a question of process naming and definition (features). Intel is of course free to rename and redefine their 10nm process and refinements, which may confuse the whole discussion. For this reason, a reference to a clear definition is needed, so that it is clear exactly what is meant. E.g. by "10nm+", do you just mean the process refinement following the initial "10nm" process, or do you refer to explicit features, e.g. those listed on Intel's slides?
 

John Carmack

Member
Sep 10, 2016
160
268
136
This thread feels a lot like the Intel 2017 Year in Review thread where any negative events were left out.
 

Spartak

Senior member
Jul 4, 2015
353
266
136
Why? Why can they not find out that process A is not working, then release process B? Why must process A be released in mass production before B can be manufactured? I honestly do not understand the point that you and Lodix are trying to make.

This whole thread is completely ridiculous. Toddlers trying to understand relativity theory before they can even read a clock.

People, don't try to debate something you have zero knowledge about.

FWIW, there do indeed seem to be fundamental problems with Intel's 10nm process. Whether they will be able to fix them or not is the core issue. They might call the 'fix' 10nm+, or they might still call it 10nm; that's semantics, so leave that discussion. The real discussion is whether fixing 10nm is economically viable for Intel. Cobalt and quad patterning seem like a marriage made in hell.

Intel is at the back of the line for EUV orders, and removing cobalt from the interconnects would mean an entirely new process; they might as well move to 7nm and cut their losses.

They might pull a rabbit out of their hats, but truthfully, I wouldn't place my bet on that.
 

jpiniero

Lifer
Oct 1, 2010
16,491
6,983
136
You know, I was thinking about it: I think Intel might have thought the only issue was the cobalt and not 36/SAQP. So their backup plan, which was to go straight to 10++, was toast, since they had only worked around the cobalt. I don't know if reworking and validating the layers to drop to 40/SADP would take two years (maybe it does), but they may have blown a bunch of time screwing around before deciding on a plan.

Intel is at the back of the line for EUV orders, and removing cobalt from the interconnects would mean an entirely new process; they might as well move to 7nm and cut their losses.

Well, for one, the Arizona fab isn't supposed to be online until 2020, and who knows how long it will take to get it up to speed. And the 7nm products are likely dependent on 10nm being there.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
I remember reading somewhere that Intel was the largest purchaser of EUV equipment a year or so ago?
 
May 11, 2008
22,211
1,405
126
I remember reading somewhere that Intel was the largest purchaser of EUV equipment a year or so ago?

That was in this post:
https://forums.anandtech.com/threads/asml-claims-major-euv-milestone-a-250w-euv-source.2511470/

https://www.eetimes.com/document.asp?doc_id=1332012&
ASML has 14 development tools already in the field which have now exposed more than 1 million wafers, including more than 500,000 wafers in just the past 12 months, according to Lercel. The first shipments of ASML’s NXE:3400B production EUV tool began earlier this year.

As of April, ASML had a backlog of 21 EUV systems awaiting delivery, the majority of which are reportedly ticketed for Intel. The company is expected to provide an update of its EUV backlog when it announces its second quarter results next week.
 
May 11, 2008
22,211
1,405
126
The problem is that ASML does not manufacture all parts of the EUV equipment themselves.
Parts such as lenses, for example, are bought from specialists in the field. If there is a delay somewhere in the chain, the backlog increases.


https://www.eetimes.com/document.asp?doc_id=1332420&page_number=2

Hutcheson and other sources contacted said they were not aware of any delay implementing EUV at Intel. The x86 giant itself declined to comment on its EUV timeline.

“We are committed to bringing EUV into production as soon as the technology is ready at an effective cost. The road to EUV lithography production is a long one. While there has been great progress, much work remains. It’s important that our industry partners and suppliers are engaged and pushing as hard as we are to meet the requirements for high volume manufacturing,” an Intel spokesman said via email.

In June Globalfoundries CTO Gary Patton said engineers still have to resolve several issues with EUV. Most importantly, defects in masks need to be reduced and protective pellicles for EUV wafers still need to be designed, he said.

“Intel’s probably not going to be the first to implement EUV, but they [initially] bought more EUV tools than anyone…[and] Intel will do a lot more pre-qualifications because they want the highest performing chips,” said Hutcheson. “I’m sure next year some people will announce manufacturing with EUV, but the ramp will really be in 2019,” he added.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,809
1,289
136
I looked through a couple of Intel EUV slides for SPIE etc.

Intel isn't looking at true EUV HVM until 0.55 NA, which might be a while:
His colleague Roderik van Es showed an ASML product roadmap indicating that the high-NA generation of systems could be expected in late 2023 or 2024, by which time productivity is slated to have reached 185 wafers per hour. That should come after the company’s “NXE:3400C” tool appears, in around two years from now.
- http://optics.org/news/9/2/40

https://i.imgur.com/nE1GNTA.png
185 wafers per hour with EUV SADP/SAQP
or
200 wafers per hour with Direct Imprint

Also, if you check some LinkedIn profiles... for 1x masks => that is JFIL.

The NZ2C in 2018, compared to that image, is here:
Throughputs of up to 90 wafers per hour were achieved by applying a multi-field dispense method. Mask life of up to 81 lots was demonstrated using a contact test mask. The status of the tool overlay is discussed. Application of a High Order Distortion Correction (HODC) system to the existing magnification actuator has enabled correction of high-order distortion terms up to K30. A mix-and-match overlay of 3.4 nm has been demonstrated, and single-machine overlay across the wafer was 2.5 nm.

So the NZ3C might exceed expectations and challenge the EUV numbers in the previous post.
 

bobhumplick

Junior Member
Jun 29, 2018
8
1
41
I didn't say they would release Cannon Lake for desktop. I am just saying that if they launch something manufactured on 10nm next year, it will most likely just be 10nm without the "+". They can call the product what they want. They are shipping broken tiny SoCs just for marketing. But 10nm is not in mass production.

Intel aren't shipping Cannon Lake parts for marketing. Intel always sneaks in a current-gen model-number SKU that isn't on the same process while working on a new process. For example, 4th-gen desktop and mobile chips were 22nm, but if you look really closely and actually check the whole lineup, there might be a CPUID string that says Broadwell or Skylake on one of them, and if you check the die size or power usage it will be 14nm.

I can't name any examples off the top of my head, but I remember running across a couple of these.

And Intel didn't really try to publicize it, as far as I'm aware; somebody found it by checking CPUs, I'm pretty sure. The reason it made such big news, when it's a nothing chip and Intel do this all the time, is because EVERYBODY is losing their mind because 10nm isn't here. The world will end.

And it was too aggressive: maybe not in any one metric, but in everything it tried to deliver. The density for Intel 10nm is the same as the density TSMC uses for cell-phone SoCs, and they get about the same clocks.


Intel's 10nm+ is the same density as the 7nm AMD uses for Ryzen, except that when AMD's custom libraries are taken into account, it's actually even less than 7nm HPC is listed as. But it's not a bad idea to get clocks up a bit.

I'm not sure that THE PROBLEM WITH 10nm was BEING late. It's more that it wasn't as useful as it needed to be to make the transition worth it: giving up too much for very little. Server might be an exception, but those are coming soon. People overlook the main reason for new nodes from the business's point of view, and that is more transistors per litho machine. It takes double the litho steps to get half the transistors.
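As a rough sketch of that transistors-per-machine argument (round hypothetical numbers, not real scanner throughput data): if a node doubles density but also doubles the exposure passes per wafer, the transistors produced per scanner-hour come out flat, so the economic gain a new node is supposed to deliver largely disappears.

```python
# Back-of-the-envelope arithmetic for "transistors per litho machine".
# All figures are round hypothetical numbers for illustration only.

def transistors_per_scanner_hour(wafers_per_hour: float,
                                 passes_per_wafer: int,
                                 relative_density: float) -> float:
    # Each extra exposure pass eats scanner time, dividing the
    # effective wafer output of one machine.
    return (wafers_per_hour / passes_per_wafer) * relative_density

old_node = transistors_per_scanner_hour(100, passes_per_wafer=2, relative_density=1.0)
new_node = transistors_per_scanner_hour(100, passes_per_wafer=4, relative_density=2.0)
print(old_node, new_node)  # 50.0 50.0: doubled passes cancel doubled density
```

On these assumed numbers the output per machine is a wash rather than an improvement, which is the point being argued: the fab has to buy more equipment just to stand still.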

That is not an improvement. You can use that extra space to add more IPC, but now clock yields are a lot more widely spread, with a lot of extreme outliers that were rare before. Honestly, I wouldn't doubt it if 80% of culled Ryzen 2000 cores were fully working and within 100 MHz of the spec they were meant to meet.

The average tested per-core speed for Zen 2 dies before binning probably has double the range of variance (let's just use 200 MHz for Ryzen 2000 and 400-500 MHz for Zen 2). And it's pretty obvious, because this is exactly why some Zen 2 dies might have cores that can't hit the single-core turbo. And don't say the AGESA update fixed it. No, the update fixed NEVER seeing the single-core boost on ANY core; now you will see a single core at the single-core boost... on one core at least. And the second die on the two-die CPUs might not even be able to hit the single-core boost.

And not all cores being able to hit the single-core boost is just one sign of this. Look at the INSANE voltages that are pumped into these cores to get what we are getting. Some people are going to say it's just to get the max boost at any given time for more performance. But 40-50C idle temps, real idle where nothing is happening; boost voltages that would be enough to get a 9900K to 5.5 GHz single-core; volts hitting 1.5 V or 1.6 V left and right; convoluted temperature-range charts showing what voltage will be used at what temp for what speed. This wasn't to get a bit more speed. This was to narrow the spread of per-core clocks per die.

Where... I mean, they set a deadline, I guess, and all that. But I don't think it was really supposed to even be used, at least not the non-EUV version. This was a goal, an idea; the metrics listed and everything are the implementation of that idea.

But mainly, what 10nm was way back when it was first announced was simply the density they would need to make quad patterning worth going for over dual patterning. And I think in the back of their minds they hoped they wouldn't have to use it at all, because EUV would be ready before 10nm was really worth the compromises it brought.

I think they were hoping the work with quad patterning and other new techniques would just be a practice run for the end of the "honeymoon" stage of EUV, when you'd be working your way back up to higher pattern counts.

But if EUV didn't come on time, then this process would be taken off the backburner, and the slowdown in production from doubling all those stages would just have to be overcome by buying more equipment.
 

bobhumplick

Junior Member
Jun 29, 2018
8
1
41


Dude, you skimmed through just to find a single misspoken or mistyped error to point out, when it was obvious I wasn't trying to say 10nm wasn't late. And then you just linked a site without even a single word. redacted

Did I know that the deadline for 10nm was past? Yes. Yes, I did.

So why did I say it then?

Part of it was poor phrasing on my part and leaving out some words. I was trying to look past a binary state of being either late or on time. I was trying to say that THE PROBLEM with 10nm wasn't being late, but that it had only about half the normal benefits of a new node, while at the same time lacking the most important one from the chipmaker's standpoint: the number of transistors produced per litho machine.

Same with TSMC 7nm. AMD needed the performance more than Intel did. They were doing well but really needed an advantage, and all they had to trade was production numbers, which AMD had too much of anyway, if the prices of last year's models are any indication.

AMD didn't have large numbers of customers with huge quotas to fill for CPUs. If AMD made and sold a million chips, then AMD made and sold a million chips. Great! But if AMD made and sold 2 million 12nm chips, and they were just a bit behind Intel, and they had half of them left over to sell at clearance, then how does that help AMD?

But Intel makes way over 90% of the cheap business-class OEM desktops, and around the same share of simple to high-end laptops. At light to moderate workloads (most of the time spent on a laptop), Intel 14nm CPUs actually use less power than AMD 7nm, because of all the power-saving features built in. For Chrome, YouTube, etc., the iGPU does most of the work and uses 1.5 watts doing it.

The consistency of 14nm (and dual patterning) also means those low-end i3 and i5 systems, say an i5-9400 system, will have higher clocks, all-core and otherwise, because these low-end chips are where most of the worst dies go.

Quad patterning, and specifically 7nm, is half the reason AMD went with chiplets; the other half is the ability to make HEDT chips from desktop dies. With 14nm and 12nm, desktop parts only needed to be a single die; with 7nm it had to be three. Oh, and by the way, all that stuff about things that don't scale well being better off on 14nm: maybe there's some truth to it, but that's not the real reason. The real reasons are that they still had to make something at GloFo because of wafer supply agreements, and that they wanted to use as little as possible of this expensive, slow-to-make, inconsistent silicon. Oh, and for the same reason they stayed with off-die memory controllers way back in the day, after AMD had already put them on the die: so they could use different memory controllers (or IO dies, in AMD's case).

They made a great product from it; I'm not saying they didn't. It was a great use of what they had, and it showed a lot of forethought. But mostly it was a very broke company that had been bleeding for years, throwing a few million shares of AMD stock worth a dollar each at a genius of the CPU world and saying: if you get the company to 30 dollars a share, you've made 30 times what we paid you. First-gen Ryzen was just two dies, the 8-core and the APU. Same with second gen. They did a whole lineup with those two dies, and really they only needed the one die. All the costs of making masks and this and that: just one die.

And with Zen 2 it's just one core die and one IO die that also doubles as a chipset. Brilliant.

But they traded off more for that than people realize: extra latencies, and an L3 cache pool that is functionally much smaller than it looks. I mean, a 3700X has double the cache of the 2700X, but the way the cache is laid out, it's really more like two pools of four, and even those two pools are subdivided into two subsets. To get performance similar to a 9900K at 5 GHz or even 4.9 GHz in most things, it took double the cache. And now L3 cache alone takes up two-thirds to three-quarters of the core die, to overcome this layout and the lower speed of the L3.

Oh, and that pesky consistency spread of clocks probably affects the speed the L3 cache will run at, since the slowest cache slice sets the cache speed, and now that's way more variable from chip to chip. Just pump a bit more voltage into it when under load, and cache uses a lot of power as well.

Being on 7nm while Intel is on 14nm means that, for now, AMD won't suffer from the transistors that had to be thrown at doubling the L3 cache. But once Intel gets 10nm and beyond moving, all those character points they put into cache, Intel will be able to put into something else. L2 is the CPU's local cache; L3 is there to tie the chip together. But instead it's all so inconsistent.




Insulting members in the tech forums is not allowed.


esquared
Anandtech Forum Director
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
tl;dr


The problem was that Intel was unable to deliver any of its core or extended benefits at all, for at least 4 years.
You can't spin that in any logical way.
Any company only needs to release a new product if their old product stops making them (enough) money. AMD has already released three different Zen spins while Intel still makes more money with their 14nm CPUs; Intel just had their best profit quarter since forever...
Revenue for the quarter came in at $19.2 billion, beating Q3 2018 by $27 million, which results in a mere 0.14% growth over last year, but enough to make this the highest revenue ever for the company.
 

Nothingness

Diamond Member
Jul 3, 2013
3,292
2,360
136
Any company only needs to release a new product if their old product stops making them (enough) money. AMD has already released three different Zen spins while Intel still makes more money with their 14nm CPUs; Intel just had their best profit quarter since forever...
Huh, no: their income (and their gross margin) went down! But their revenues were up.
 

Gideon

Platinum Member
Nov 27, 2007
2,013
4,992
136
Any company only needs to release a new product if their old product stops making them (enough) money. AMD has already released three different Zen spins while Intel still makes more money with their 14nm CPUs; Intel just had their best profit quarter since forever...

So Intel was run by imbeciles from 1979-1982 and 1987-2011, releasing new nodes every two years despite being comfortably ahead multiple times in that timeframe?
And delaying 10nm from 2015 to 2019 is the best thing possible: an absolute genius move that nobody on earth could ever improve upon!

I shudder to think of the terrible dystopian future we would be living in if Intel had released 10nm in 2015, 7nm in 2017, and 5nm next year, and had actually made use of the R&D dollars they have poured into process tech.

I'm glad that the financial geniuses finally figured out the way to success and that that terrible, terrible reality didn't come to pass...

/s

 

DrMrLordX

Lifer
Apr 27, 2000
22,696
12,649
136
Tech companies can't afford to slow down their pace of innovation; someone is always waiting in the wings to upend their apple carts. Hector Ruiz made the mistake of letting AMD rest on their laurels, and look where it got them. It didn't even take very long for AMD to go down the toilet. Intel is lucky that they'll have more time to recover.