Here comes Intel 11nm Skymont... in 2015

Status
Not open for further replies.

zebrax2

Senior member
Nov 18, 2007
977
70
91
Really interesting article that seems to take the Intel roadmaps (at least the ones I've seen) just a step further.

http://www.h-online.com/newsticker/...rs-Of-dear-friends-and-engravers-1263024.html

And how do they do 11nm lithography with 193nm light anyway? Just another mystery! :eek:

BTW, as the OP I can state categorically that anyone stating Bulldozer will be faster than Skymont will be knighted as Sir Troll. :D

Aren't they concentrating light like a magnifier, only in a different manner (immersion, I believe)?
 

bridito

Senior member
Jun 2, 2011
350
0
0
Aren't they concentrating light like a magnifier, only in a different manner (immersion, I believe)?

I couldn't tell you but I'm sure that there are some people on this forum who can answer that. When I went to school I must have learned different physics as 11nm would be darn close to X-rays and to perform a function on that scale with UV-C would be exactly as the article stated: "that's like using a sledgehammer and stone chisel for fine engraving!"
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
the Haswell successors have already been named: Broadwell, a reduction (shrink) to 14 nm (P1272), and then Skylake with a new microarchitecture – probably again from Haifa – followed by a shrink to 11 nm (P1274), which is scheduled to roll out as Skymont in 2015.

The node-cadence timing seems just a bit off.

22nm will come out 1H 2012.

Intel has mostly stuck with a 2.1-2.2 yr/node cadence.

So we should not be expecting 14nm until mid-2014.

And 11nm would not be expected until 2H 2016.
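That cadence arithmetic can be sketched as a quick back-of-the-envelope projection (the ~2.2 yr/node figure and the 1H 2012 starting point are the assumptions from this post, not official Intel guidance):

```python
# Rough node-cadence projection: start from 22nm around early-mid 2012 and
# add ~2.2 years per full node shrink (assumptions from the post above).
NODE_CADENCE_YEARS = 2.2
START_YEAR = 2012.3  # 22nm, 1H 2012

for step, node in enumerate([22, 14, 11]):
    year = START_YEAR + step * NODE_CADENCE_YEARS
    print(f"{node}nm -> ~{year:.1f}")  # 22nm ~2012.3, 14nm ~2014.5, 11nm ~2016.7
```

Which lands 14nm in mid-2014 and 11nm in the second half of 2016, as stated.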
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
Can't wait... Another batch of rumorware...

Gotta love that rumorware.

Most of it has come true, but not the naming schemes – like the GTX 595, or how about that E8700 everyone yakked about that only had a few made.

Betting Intel was feeding the rumorware trolls with that by making just a couple.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Really interesting article that seems to take the Intel roadmaps (at least the ones I've seen) just a step further.

http://www.h-online.com/newsticker/...rs-Of-dear-friends-and-engravers-1263024.html

And how do they do 11nm lithography with 193nm light anyway? Just another mystery! :eek:

BTW, as the OP I can state categorically that anyone stating Bulldozer will be faster than Skymont will be knighted as Sir Troll. :D

Handsome son you have there.

I tried to find it on Intel's home page but I couldn't find what I was looking for. It came out with the Intel 3D gate tech. It described Intel's possible next move at 15nm-11nm. IDC and I have different views on this. But the fact Intel had it posted on their site makes me believe they will use that method. IDC is thinking nanowire, I believe, and Intel's site stated that after 3D tri-gate comes something about quantum wells. This is the way I think Intel will go at 11nm. It's pretty neat stuff either way.
 

bridito

Senior member
Jun 2, 2011
350
0
0
Idontcare: Then it looks like I'll have more time to save my money!

Nemesis 1: He is sweet. His mom is one of the nicest Borgs you'll ever meet outside of a Cube. What I know about quantum wells is very limited but do you think that they'll be able to master that sort of tech in 5 years?
 
Dec 30, 2004
12,553
2
76
the sooner they hit 5nm the sooner they can lay off 80-90% of their engineers and reap some sweet, delicious profit.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
LOL, Intel might not be the company doing the layoffs, but to be sure, R&D jobs in CMOS process development are not a "growth" opportunity.

Software yes. IC layout/design/test/validate yes. Process tech development no.
 

smartpatrol

Senior member
Mar 8, 2006
870
0
0
BTW, as the OP I can state categorically that anyone stating Bulldozer will be faster than Skymont will be knighted as Sir Troll.

Is it too early to start the official "RUMOR: Bulldozer 2 to be 50% faster than Skymont" thread?
 

gevorg

Diamond Member
Nov 3, 2004
5,070
1
0
the sooner they hit 5nm the sooner they can lay off 80-90% of their engineers and reap some sweet, delicious profit.

Not really, there are always new frontiers to conquer, such as organic computing, which might still mean they lay off 80%+ of their engineers and hire different engineers for a massive R&D project. :)
 

khon

Golden Member
Jun 8, 2010
1,318
124
106
Really interesting article that seems to take the Intel roadmaps (at least the ones I've seen) just a step further.

http://www.h-online.com/newsticker/...rs-Of-dear-friends-and-engravers-1263024.html

And how do they do 11nm lithography with 193nm light anyway? Just another mystery! :eek:

It's not a mystery, just damn hard.

To get to 11nm with 193nm light you need to do multiple exposures for each individual layer, which means your throughput and yield are going to take a major hit.

In any case I doubt it will happen as soon as 2015.
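A rough sense of why multiple exposures are needed: the Rayleigh criterion puts the single-exposure resolution limit at k1 * wavelength / NA, and pitch-splitting divides the printed pitch by the number of exposures. A minimal sketch (the k1 = 0.28 and NA = 1.35 values are typical 193nm-immersion assumptions, not Intel's actual numbers):

```python
import math

# Rayleigh criterion: minimum printable half-pitch ~ k1 * wavelength / NA.
# k1 = 0.28 and NA = 1.35 are illustrative immersion-litho assumptions.
def min_half_pitch_nm(wavelength_nm=193.0, na=1.35, k1=0.28):
    return k1 * wavelength_nm / na

def exposures_needed(target_half_pitch_nm):
    # Pitch-splitting: n exposures divide the effective pitch by n.
    return math.ceil(min_half_pitch_nm() / target_half_pitch_nm)

print(f"single-exposure limit: ~{min_half_pitch_nm():.0f}nm half-pitch")  # ~40nm
print(f"exposures for 11nm-class features: {exposures_needed(11)}")       # 4
```

So an 11nm-class half-pitch would need roughly quadruple patterning per critical layer with 193nm light, which is exactly where the throughput and yield hit comes from.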
 

bridito

Senior member
Jun 2, 2011
350
0
0
It's not a mystery, just damn hard.

To get to 11nm with 193nm light you need to do multiple exposures for each individual layer, which means your throughput and yield are going to take a major hit.

In any case I doubt it will happen as soon as 2015.

All CPU launch dates have been sliding (from Intel & AMD to be fair) so it's reasonable to expect that it will be later, maybe much later than this 2015 date or even the 2016 date Idontcare stated. I just can't picture how electrons would bounce around in an 11nm "pipe". All I'm imagining is huge boulders rolling down a tight lava tube. The quantum interactions at that scale are probably amazing, with electrons popping out of their pathways on a regular basis! But what do I know... I mostly slept through my physics classes.
 

lol123

Member
May 18, 2011
162
0
0
All CPU launch dates have been sliding (from Intel & AMD to be fair) so it's reasonable to expect that it will be later, maybe much later than this 2015 date or even the 2016 date Idontcare stated. I just can't picture how electrons would bounce around in an 11nm "pipe". All I'm imagining is huge boulders rolling down a tight lava tube. The quantum interactions at that scale are probably amazing, with electrons popping out of their pathways on a regular basis! But what do I know... I mostly slept through my physics classes.
Delaying CPU launches and delaying process nodes are two very different things. If Intel says that they will deliver 11nm in 2015, then they probably already have working silicon (just at very low yields).
 

bridito

Senior member
Jun 2, 2011
350
0
0
Delaying CPU launches and delaying process nodes are two very different things. If Intel says that they will deliver 11nm in 2015, then they probably already have working silicon (just at very low yields).

What I'm wondering is if we may have reached that theoretical limit where silicon is just too darn complex to advance any further. Let's face it... Cougar Point and Bulldozer are two examples of rather serious problems which both Intel & AMD by all rights should have been able to nip in the bud with their multi-billion dollar R&D and fabs. My i7 940 is three years old and although it's not OCd it will still pretty well hold its own against the fastest CPUs from either Intel or AMD. It may be slower than a superexpensive 990X, but heck, core for core it's not that much slower. I remember when the GHz wars were raging and everyone expected for stock CPUs to run at 10GHz or more by now. It seems stock silicon hit a brick wall at around 4GHz so the onus shifted to multiple cores rather than making a single process run faster (which is generally "easier and better"). So are we seeing the end of the line in silicon design? If so what's next?
 

GammaLaser

Member
May 31, 2011
173
0
0
What I'm wondering is if we may have reached that theoretical limit where silicon is just too darn complex to advance any further. Let's face it... Cougar Point and Bulldozer are two examples of rather serious problems which both Intel & AMD by all rights should have been able to nip in the bud with their multi-billion dollar R&D and fabs. My i7 940 is three years old and although it's not OCd it will still pretty well hold its own against the fastest CPUs from either Intel or AMD. It may be slower than a superexpensive 990X, but heck, core for core it's not that much slower. I remember when the GHz wars were raging and everyone expected for stock CPUs to run at 10GHz or more by now. It seems stock silicon hit a brick wall at around 4GHz so the onus shifted to multiple cores rather than making a single process run faster (which is generally "easier and better"). So are we seeing the end of the line in silicon design? If so what's next?

Most likely chip stacking using through-silicon vias (TSV), first bringing more memory close to the die and perhaps later having multiple logic dies stacked on top of each other. The big showstopper for doing this has been in the realm of reliability, testability and thermal dissipation issues, but there's been a lot of research going into solving all these problems. In the longer term, I've seen mentions of alternative materials like III-V semiconductors and even graphene that would use transistor technology different from MOSFETs.

And to be clear, the Cougar Point problem wasn't a process scaling issue, but it is a testament to the difficulty of thoroughly testing the amazingly and increasingly complex system that goes into a typical PC.
 

bridito

Senior member
Jun 2, 2011
350
0
0
From what I understand a graphene single-atom process would be way too expensive (at least now) for consumer CPU use. I agree that these systems may be reaching the level of complexity where testing is becoming too befuddling for any corporation to undertake!
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
All CPU launch dates have been sliding (from Intel & AMD to be fair) so it's reasonable to expect that it will be later, maybe much later than this 2015 date or even the 2016 date Idontcare stated. I just can't picture how electrons would bounce around in an 11nm "pipe". All I'm imagining is huge boulders rolling down a tight lava tube. The quantum interactions at that scale are probably amazing, with electrons popping out of their pathways on a regular basis! But what do I know... I mostly slept through my physics classes.

Understand that the node label (11nm node, 14nm node, 22nm node) is just a label, intended merely to indicate that the majority of active components that comprise the IC (the xtors, the contacts, the wires) have shrunk, but not necessarily by any factor that is mathematically computed from the numbers in the node labels themselves.

"11nm node" does not mean the wires are 11nm wide. Nor does an 11nm node mean the wires have been shrunk by 11/14 of the wire diameter present in a 14nm node.

These are more just guidelines, rough boundary conditions intended to differentiate two nodes within any given company.

That said, when operating in DC mode and using electrons as the charge carriers, the Compton wavelength is about 2.4 pm (that's picometers, much much smaller than a nanometer).

We have a long ways to go before the conduction limitations of electrons as our charge carrier of choice become problematic.
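That Compton-wavelength figure is easy to sanity-check from the fundamental constants, lambda_C = h / (m_e * c):

```python
# Electron Compton wavelength: lambda_C = h / (m_e * c).
H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # electron rest mass, kg
C = 2.998e8      # speed of light, m/s

compton_pm = H / (M_E * C) * 1e12  # convert meters -> picometers
print(f"electron Compton wavelength: ~{compton_pm:.1f} pm")  # ~2.4 pm
```

Roughly three orders of magnitude below even an 11nm feature size.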

And to be clear, the Cougar Point problem wasn't a process scaling issue, but it is a testament to the difficulty of thoroughly testing the amazingly and increasingly complex system that goes into a typical PC.

Yeah, the issue with Cougar Point would be akin to building the 747 only to find out the legacy ashtrays that you included in the design (because, heck, they are ashtrays, so why spend time re-inventing them) are a serious fire hazard, and the FAA grounds your fleet of 747s until you retrofit safety-approved ashtrays into all the seats.
 

bridito

Senior member
Jun 2, 2011
350
0
0
Thanks for the clarification Idontcare, and I love the 747 ashtray analogy! Good one! +1
 

GammaLaser

Member
May 31, 2011
173
0
0
I'd say the big problem with thin wires is electromigration – electrons literally knocking metal atoms around due to the high current densities. Once some damage occurs, the current densities in the wire get even higher, leading to further damage.
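That runaway is usually modeled with Black's equation, MTTF = A * J^-n * exp(Ea / kT): median time-to-failure drops as a power of current density, so a wire that loses cross-section (raising J) ages even faster. A minimal sketch (A = 1, n = 2, and Ea = 0.7 eV are illustrative textbook-style values, not data for any real process):

```python
import math

# Black's equation for electromigration lifetime:
#   MTTF = A * J**-n * exp(Ea / (k_B * T))
# A, n and Ea are illustrative textbook-style values, not real process data.
K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def mttf(j_amp_per_cm2, temp_k, a=1.0, n=2.0, ea_ev=0.7):
    return a * j_amp_per_cm2 ** -n * math.exp(ea_ev / (K_B_EV * temp_k))

# Halving the wire cross-section doubles J and (with n = 2) quarters lifetime:
ratio = mttf(2e6, 350) / mttf(1e6, 350)
print(f"lifetime ratio after losing half the cross-section: {ratio:.2f}")  # 0.25
```

The positive feedback is visible in the exponent: each bit of damage raises J, which shortens the remaining lifetime superlinearly.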
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
What I'm wondering is if we may have reached that theoretical limit where silicon is just too darn complex to advance any further. Let's face it... Cougar Point and Bulldozer are two examples of rather serious problems which both Intel & AMD by all rights should have been able to nip in the bud with their multi-billion dollar R&D and fabs. My i7 940 is three years old and although it's not OCd it will still pretty well hold its own against the fastest CPUs from either Intel or AMD. It may be slower than a superexpensive 990X, but heck, core for core it's not that much slower. I remember when the GHz wars were raging and everyone expected for stock CPUs to run at 10GHz or more by now. It seems stock silicon hit a brick wall at around 4GHz so the onus shifted to multiple cores rather than making a single process run faster (which is generally "easier and better"). So are we seeing the end of the line in silicon design? If so what's next?

I believe it is more lack of competition than anything that has kept clock speeds relatively constant and # of cores where they are at. Plus, the push to keep power manageable has been a factor too. Sure, Intel could release a higher-TDP 2800K @ 4.2GHz, but the power usage would be very high.
 