Here comes Intel 11nm Skymont... in 2015


bridito

Senior member
Jun 2, 2011
350
0
0
I believe it is more lack of competition than anything that has kept clock speeds relatively constant and # of cores where they are at. Plus, the push to keep power manageable has been a factor too. Sure, Intel could release a high-TDP 2800K @ 4.2GHz, but the power usage would be very high.

I agree that AMD has really not put up much of a fight on the high end in several years to push Intel at all... and I'm not overly happy about the rumors flying around about BD's apparent lack of competitiveness. I think that performance/watt is primarily a server farm concern. I have electric everything here, from central air/heat to dryer, stove, etc. I'd gladly buy a CPU that used up 300 or even 400 watts without blinking an eye or even noticing it much on my electric bill. If you didn't OC it, I'm sure something along the lines of a Noctua NH-D14 would be able to keep it nice and cool. And if you did OC it, then H2O would take care of it.
 

khon

Golden Member
Jun 8, 2010
1,318
124
106
In case anyone is interested, it's actually fairly easy to calculate how small a pattern you can make with a given lithography method.

It goes like this:

Pitch = (1/n) * (1/(1+sigma)) * (wavelength/NA)

Where the pitch is the minimum distance between two features

n is the number of patterning steps for each layer (must be an integer)

sigma is a number that indicates the shape of the incoming light; it can be between 0 and 1 (the highest realistic value is ~0.9)

wavelength for DUV is 193nm and for EUV it's 13.5nm

NA is a number that indicates the performance of the lens. Highest for dry lithography is ~0.93, highest for immersion is ~1.35, and highest for EUV is ~0.33 (thus far).

So for example DUV double patterning with max sigma and NA:

pitch = (1/2) * (1/1.9) * 193nm/1.35 ≈ 38nm

The linewidth is not strictly limited by anything, but a good rule of thumb is that the limit is pitch/2. So for DUV double patterning the best possible linewidth would be 19nm.
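A quick Python sketch of the formula above, for anyone who wants to plug in other numbers (the function name and the chosen values are just illustrative, mirroring the DUV double-patterning example):

Code:
def min_pitch(wavelength_nm, na, sigma, n_steps=1):
    # pitch = (1/n) * (1/(1+sigma)) * (wavelength/NA)
    return (1.0 / n_steps) * wavelength_nm / ((1.0 + sigma) * na)

duv_dp = min_pitch(193.0, na=1.35, sigma=0.9, n_steps=2)  # DUV immersion, double patterning
print(round(duv_dp, 1))      # ~37.6 nm pitch (the ~38 nm figure above)
print(round(duv_dp / 2, 1))  # ~18.8 nm linewidth via the pitch/2 rule of thumb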

That also happens to be the best anyone has done so far: http://www.electronicsweekly.com/Articles/2011/04/21/50946/toshiba-sandisk-sampling-19nm-nand.htm
 
Last edited:

firewolfsm

Golden Member
Oct 16, 2005
1,848
29
91
The node-cadence timing seems just a bit off.

22nm will come out 1H 2012.

Intel has mostly stuck with a 2.1-2.2 yr/node cadence.

So we should not be expecting 14nm until mid-2014.

And 11nm would not be expected until 2H 2016.

Progress itself progresses with time. The doubling time will continue to decrease at an exponential rate. Die shrinks used to take up to 4 years before the 130nm process.
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
Progress itself progresses with time. The doubling time will continue to decrease at an exponential rate.

I do not think that will happen. There are many more factors that go into die shrinks other than "can we do it for progress's sake". The current 2-year model is quite aggressive and should remain so for the near future.

I am sure that if the world were going to end in 2 years unless Intel released an 11nm chip, we would have an 11nm chip in 2 years. But for a business trying to maximize profits from each node, that would never happen. Big difference between "can they" and "would they".
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Progress itself progresses with time. The doubling time will continue to decrease at an exponential rate. Die shrinks used to take up to 4 years before the 130nm process.

Die shrinks still take 4yrs.

Everyone runs two R&D teams in parallel, offset in project milestones by 2yrs, so they can deliver on a ~2yr node cadence.

Latency hiding: it's not just for CPU cache fetches, ya know :D
 

PlasmaBomb

Lifer
Nov 19, 2004
11,636
2
81
Progress itself progresses with time. The doubling time will continue to decrease at an exponential rate. Die shrinks used to take up to 4 years before the 130nm process.
Die shrinks still take 4yrs.

Everyone runs two R&D teams in parallel, offset in project milestones by 2yrs, so they can deliver on a ~2yr node cadence.

Latency hiding: it's not just for CPU cache fetches, ya know :D

and it still takes a set length of time to build a fab for a new process tech or refurb an old fab with the latest technology...
 

firewolfsm

Golden Member
Oct 16, 2005
1,848
29
91
http://upload.wikimedia.org/wikipedia/commons/c/c5/PPTMooresLawai.jpg

If you check that graph, you'll clearly see that even on a logarithmic plot the curve still has the elbow of an exponential. That means progress in the rate of progress itself.

None of this changes the fact that commercially available transistor density is increasing at an increasing rate. Automated algorithmic development of die shrinks has helped this along in the last few years. We are currently improving the methods of progress about as much as we directly progress.

They may soon step up to four concurrent R&D teams and change nodes every year, if competitive pressure is strong enough. If IBM can push graphene transistors into production fast enough, this may be the case.
 

GammaLaser

Member
May 31, 2011
173
0
0
They may soon step up to four concurrent R&D teams and change nodes every year, if competitive pressure is strong enough. If IBM can push graphene transistors into production fast enough, this may be the case.

I don't see this happening. Even at the current node shrink cadence, scaling with respect to feature size will very likely hit a wall before graphene can be commercialized.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
http://upload.wikimedia.org/wikipedia/commons/c/c5/PPTMooresLawai.jpg

If you check that graph, you'll clearly see that even on a logarithmic plot the curve still has the elbow of an exponential. That means progress in the rate of progress itself.

None of this changes the fact that commercially available transistor density is increasing at an increasing rate. Automated algorithmic development of die shrinks has helped this along in the last few years. We are currently improving the methods of progress about as much as we directly progress.

They may soon step up to four concurrent R&D teams and change nodes every year, if competitive pressure is strong enough. If IBM can push graphene transistors into production fast enough, this may be the case.

You are conflating Moore's Law with node cadence. This sets up an obvious fallacy from the outset.

Show a timeline of node release date versus node label and you'll see what I mean, because that is exactly what I was talking about in the post of mine you quoted above.

If you want to talk about Moore's Law then that is a whole other subject, not to be confused with node cadence. Node cadence is but one aspect of enabling Moore's Law, but there is a whole host of other factors that contribute to Moore's Law above and beyond the underlying node cadence.

Regardless, the reason we are talking node cadence in this thread and not Moore's Law is that Skymont and 11nm will be here as dictated by node-cadence and not as dictated by Moore's Law.

Moore's Law gives us some idea of the expected performance capabilities of Skymont, but does nothing to tell us when to expect Skymont or when to expect 11nm. Ergo we are not talking about Moore's Law in this thread; this thread is about the timeline for 11nm and Skymont.
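A quick sketch of what that cadence arithmetic looks like, using the 22nm-in-1H-2012 start and ~2.2 yr cadence mentioned earlier in the thread (illustrative arithmetic only, not a roadmap):

Code:
def node_dates(start_year, cadence_years, nodes):
    # Project node introduction dates from a fixed cadence.
    return {node: round(start_year + i * cadence_years, 1)
            for i, node in enumerate(nodes)}

print(node_dates(2012.3, 2.2, ["22nm", "14nm", "11nm"]))
# {'22nm': 2012.3, '14nm': 2014.5, '11nm': 2016.7}
# i.e. 14nm around mid-2014 and 11nm toward late 2016, not 2015.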
 

Mopetar

Diamond Member
Jan 31, 2011
8,529
7,795
136
Moore's Law gives us some idea of the expected performance capabilities of Skymont . . .

Not really. Moore's Law is just an observation that approximately every two years, the number of transistors that can be packed into an area will roughly double. Based on our knowledge of physics we can gain some understanding of the physical properties of chips fabricated on such a process, as well as the technical hurdles that will need to be overcome in order to get there, but beyond that, we don't know how architectures will evolve.

By the time anyone reaches 11 nm, CPU architectures will have significantly changed, possibly going beyond their current form. The early APUs from both Intel and AMD are just a glimpse of the future. If either company is able to truly meld the CPU and GPU into one entity, the performance implications for some applications would be staggeringly different than what we could predict given current CPU architectures.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Not really. Moore's Law is just an observation that approximately every two years, the number of transistors that can be packed into an area will roughly double. Based on our knowledge of physics we can gain some understanding of the physical properties of chips fabricated on such a process, as well as the technical hurdles that will need to be overcome in order to get there, but beyond that, we don't know how architectures will evolve.

By the time anyone reaches 11 nm, CPU architectures will have significantly changed, possibly going beyond their current form. The early APUs from both Intel and AMD are just a glimpse of the future. If either company is able to truly meld the CPU and GPU into one entity, the performance implications for some applications would be staggeringly different than what we could predict given current CPU architectures.

If this were the year 1985 and you asked me "IDC, assuming Moore's Law continues to hold, what CPU performance can I reasonably expect to be able to purchase for my computer come 1995?" then I would have given you an answer that, sure enough, would have turned out to be more or less true.

Figure6.png


(^ I love this graph: if you extrapolate Moore's Law for Intel's processors back to the date it predicts for the invention of the transistor, it is only off by some 3 months!)

Don't get wrapped up in delineating the cause and effect; we all get that one does not cause the other. But within the context and framework of a set of conditions (assuming Moore's Law continues to hold, etc.) we can make projections.

If Moore's Law continues to hold then there are reasonable upper and lower bounds we can envision for the expected performance capabilities of whatever CPU is coming to market in 2016. If you tell me the CPU coming in 2016 is called Skymont then that merely puts a name to the performance.

Figure5.png


It's all casual inference, not meant to be academically provable. We can ballpark these things and not be so dreadfully off-target as to have rendered the entire effort invalid.
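To put some hedged numbers on that ballpark: if the "doubles roughly every two years" trend keeps holding, a 2016 part gets something like 5-7x the transistor budget of a 2011 part. The ~1 billion starting figure below is just a round Sandy-Bridge-era assumption of mine, not a quoted spec:

Code:
def projected_transistors(base_count, base_year, target_year, doubling_years):
    # Ballpark transistor budget if Moore's Law keeps holding (illustration only).
    return base_count * 2 ** ((target_year - base_year) / doubling_years)

base = 1.0e9  # assumed ~1 billion transistors for a 2011 quad-core die
for doubling in (1.8, 2.0, 2.2):  # loose bounds on the doubling time
    est = projected_transistors(base, 2011, 2016, doubling)
    print("every %.1f yrs -> ~%.1fB transistors in 2016" % (doubling, est / 1e9))
# roughly 4.8B to 6.9B: wide bounds, but nowhere near open-ended.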
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Moore's Law is just an observation that approximately every two years, the number of transistors that can be packed into an area will roughly double.

Moore's law says nothing about area. Moore's law is about cost.

It says the number of devices that can be placed in an IC inexpensively will double about every two years.

It's just a happy coincidence that increasing the number of transistors in a CPU has a pretty linear relationship to performance.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Moore's law says nothing about area. Moore's law is about cost.

It says the number of transistors that can be placed in an IC inexpensively will double about every two years.

You are right, and more specifically it had to do with the minimum of the cost-curve itself:

Graph1.png


Which in turn is the basis for maximizing gross margins at every point in the curve (itself an evolving function of time):
Graph3.png


(I spent a little time intimately involved in the economic cause-and-effect dynamics that underlie Moore's Law ;) Sort of a passion of mine, one that turned out to pay well too :p)
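If you want a feel for why there is a minimum at all, here is a toy model (my own illustration, not Moore's actual 1965 data): per-die fixed costs get amortized as you integrate more components, while yield losses grow with complexity, so cost per component traces a U-shaped curve:

Code:
def cost_per_component(n, fixed_cost=100.0, yield_penalty=0.01):
    # Amortization pulls cost down with n; yield loss pushes it back up.
    return fixed_cost / n + yield_penalty * n

candidates = [10, 50, 100, 200, 500, 1000]
best = min(candidates, key=cost_per_component)
print(best, round(cost_per_component(best), 2))  # sweet spot at ~100 components, cost 2.0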
 

Mopetar

Diamond Member
Jan 31, 2011
8,529
7,795
136
Probably a poor choice of words on my part, but your definition is definitely more precise.
 

ed29a

Senior member
Mar 15, 2011
212
0
0
I believe it is more lack of competition than anything that has kept clock speeds relatively constant and # of cores where they are at. Plus, the push to keep power manageable has been a factor too. Sure, Intel could release a high-TDP 2800K @ 4.2GHz, but the power usage would be very high.

The problem with core count is that writing good multithreaded software to actually use those cores is extremely difficult. Since desktop/consumer software still uses only a small number of cores, there is no real motivation for either AMD or Intel to push more and more cores.
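Amdahl's law is the classic way to put numbers on this: if only part of a program runs in parallel, extra cores stop paying off very quickly. A tiny sketch (the 80% parallel fraction is just an example value):

Code:
def speedup(cores, parallel_fraction):
    # Amdahl's law: speedup = 1 / ((1 - p) + p / cores)
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (2, 4, 8, 16, 32):
    print(cores, round(speedup(cores, 0.80), 2))
# 1.67x, 2.5x, 3.33x, 4.0x, 4.44x: with 80% parallel code you never beat 5x.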
 

Mopetar

Diamond Member
Jan 31, 2011
8,529
7,795
136
It's all casual inference, not meant to be academically provable. We can ballpark these things and not be so dreadfully off-target as to have rendered the entire effort invalid.

I only take issue with the fact that this line of thought assumes that CPUs generally stay the same. Even though we've moved to multiple cores, we haven't seen a fundamental change in the CPU itself.

Right now APUs have moved the graphics onto the same die, whereas before they were merely on-package, and before that entirely discrete. Moving forward I think they're going to become increasingly intermingled, possibly to the point that it's no longer possible to easily distinguish where one ends and the other begins. This is the type of change that is going to result in a significant deviation from expected results.

It may not happen by the time Intel hits 11 nm, but I think it's fairly inevitable eventually.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I only take issue with the fact that this line of thought assumes that CPUs generally stay the same. Even though we've moved to multiple cores, we haven't seen a fundamental change in the CPU itself.

Right now APUs have moved the graphics onto the same die, whereas before they were merely on-package, and before that entirely discrete. Moving forward I think they're going to become increasingly intermingled, possibly to the point that it's no longer possible to easily distinguish where one ends and the other begins. This is the type of change that is going to result in a significant deviation from expected results.

It may not happen by the time Intel hits 11 nm, but I think it's fairly inevitable eventually.

If you replace "GPU" with "FPU" or "SSE" or anything else that generally falls under what are generically referred to as "ISA extensions", then I think you'll see, and possibly agree, that the special case you aim to make for the future of APUs is really no more or less special than the same old expansion of CPU capabilities that has been in play for decades... and whose existence is in fact partly responsible for Moore's Law staying on track.

I don't see APUs as being revolutionary to Moore's Law; I see them as being necessary to its continuation.

x86ISAovertime.jpg


Without APU the curve flatlines, and with it the pace of Moore's Law. IMHO.
 

Mopetar

Diamond Member
Jan 31, 2011
8,529
7,795
136
Perhaps I'm just expecting an explosion rather than steady growth. It's unlikely that APUs will become some kind of magical hybrid combination overnight. The resulting changes may be more gradual, but I think that the end result will be an upward trend in performance (at least for some workflows) not anticipated or predicted by historical results.

On the other hand, there are people who argue that most people already have more computational power than they need, so eventually we'll be able to pack this kind of performance into smaller devices such as phones. The SoC model also presents an interesting alternative where adding specialized hardware to solve common problems is also worthwhile. Either way, I see x86 changing a lot over the next five years.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Perhaps I'm just expecting an explosion rather than steady growth. It's unlikely that APUs will become some kind of magical hybrid combination overnight. The resulting changes may be more gradual, but I think that the end result will be an upward trend in performance (at least for some workflows) not anticipated or predicted by historical results.

On the other hand, there are people who argue that most people already have more computational power than they need, so eventually we'll be able to pack this kind of performance into smaller devices such as phones. The SoC model also presents an interesting alternative where adding specialized hardware to solve common problems is also worthwhile. Either way, I see x86 changing a lot over the next five years.

Damn dude! You drive a hard bargain. You want an "explosion" in already otherwise exponential growth rate of improvements?

A continuation of the existing "it doubles every 2+ yrs" exponential rate is not enough :( LOL :D

I hope for all our sakes that the industry steps up and somehow delivers super-linear exponential growth! :thumbsup:

But seriously, wow, talk about having high expectations :eek: Status quo must be challenged, and you sir are certainly dropping the gauntlet :eek: Let's see if the challenge goes met or unmet :sneaky:
 

Mopetar

Diamond Member
Jan 31, 2011
8,529
7,795
136
It probably won't happen in every area. Some tasks are completely or largely linear and the best way to improve performance is to increase the clock speed. That's not something that I see improving beyond the current pace. That said, if or when such a bump in growth occurs, I don't expect that it will cause a shift in the rate of growth. It's much like graphing a linear equation, but introducing a shift at some point.

Consider the equations:

f(x) = 1.5x and g(x) = {1.5x, for x < 10; 1.5x + 5, for x >= 10;}

Both see similar rates of growth over a long period of time, but for the second, there's a nice, one-time bump in performance. For large enough values of x it's insignificant, but when it first occurs it's meaningful. Perhaps you're right in thinking that such increases are more gradual than sudden, but either way, I think that future APUs are going to change the game in a way that we can't fully appreciate at this point in time.
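A tiny sketch of that shape, using the same toy numbers as above:

Code:
def f(x):
    return 1.5 * x          # the steady trend

def g(x):
    return 1.5 * x + (5 if x >= 10 else 0)   # same trend plus a one-time bump at x = 10

for x in (10, 20, 100):
    bump = g(x) - f(x)
    print(x, bump, round(bump / f(x), 3))
# the absolute bump stays at 5, but its share of the total shrinks (33% -> 3.3%)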

Whether a continuation of exponential growth is enough or not is irrelevant. If someone is capable of exceeding that, they're going to do it to get a leg up on the competition. Whether or not anyone is capable of doing so remains to be seen.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Die shrinks still take 4yrs.

Everyone runs two R&D teams in parallel, offset in project milestones by 2yrs, so they can deliver on a ~2yr node cadence.

Latency hiding: it's not just for CPU cache fetches, ya know :D

Well, that's not painting a correct picture. The way you phrase it, you make it sound like this:

Time 0: Team A starts Tock
Time 2: Team B starts Tick
Time 4: Team A finishes Tock, starts next Tock
Time 6: Team B finishes Tick, starts next Tick

Although there is some reshuffling of responsibilities, the "main" design team owns the Tock and the next immediate Tick.
 

GammaLaser

Member
May 31, 2011
173
0
0
I have to agree with IDC on this one.

The way I see it, these underlying architectural changes have just been new ways to "implement" the transistor growth provided by Moore's Law.

It's diminishing returns to keep dumping transistors towards one particular performance feature. First it was no longer worth it to keep boosting ILP by making long pipelines or super wide issue CPUs, so the extra transistor budget got put towards multi-core for TLP. Soon multi-core will have diminishing returns (if not already), so now we put our extra transistors towards very wide vector processors (AVX/AVX2) akin to what's been used in graphics chips for years.

If you don't make these transitions, Moore's Law won't afford you the performance gains you would otherwise expect.
 

GWilkiejr

Junior Member
Nov 30, 2011
1
0
0
Guys,

Not that I am a giant conspiracy theorist or anything, but as is the case with most of us here, we've all built computers from scratch, right? I can't possibly be the only one who's looked at the manufacture date that is stamped on the top of every processor. The most recent example: the Core i7-2600K in the machine I built earlier this year was manufactured 2 years ago...
The stuff that is released to us as "cutting edge" is years old. Intel creates for the US gov't first and for the rest of us second. When we discuss quantum wells or 11nm technology we are merely speculating about what scraps they will throw off their table to us. Take cell technology, for example. CDMA technology, which is what carriers like Sprint use today, was around and in use during WWII. When they got something better they discarded it and it was then commercialized.
Let us dispel the question of "can they do it" and replace it with "what outdated technology will they allow us to use next."

-GW
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
The node-cadence timing seems just a bit off.

22nm will come out 1H 2012.

Intel has mostly stuck with a 2.1-2.2 yr/node cadence.

So we should not be expecting 14nm until mid-2014.

And 11nm would not be expected until 2H 2016.


I totally agree with everything Idontcare said.

11nm in 2015 is incorrect. 2H 2016 if not 2017.

How many cores are these 11nm processors going to have for desktop versions?

Gosh, step it up Intel, make 8-core and 16-core desktops; with HT that's 32 threads, wow...
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
If you replace "GPU" with "FPU" or "SSE" or anything else that generally falls under what are generically referred to as "ISA extensions", then I think you'll see, and possibly agree, that the special case you aim to make for the future of APUs is really no more or less special than the same old expansion of CPU capabilities that has been in play for decades... and whose existence is in fact partly responsible for Moore's Law staying on track.

I don't see APUs as being revolutionary to Moore's Law; I see them as being necessary to its continuation.

x86ISAovertime.jpg


Without APU the curve flatlines, and with it the pace of Moore's Law. IMHO.

Thanks for the pic. I've been around since the '80s. A P1 486 DX at 100MHz with Turbo, 256KB RAM and a 50MB hard drive, lol. I think there was a Turbo button on the case next to the on/off switch and whatnot... :( But those days were still cool: we used XTree Gold, 4DOS, Descent 1 and 2, and if you had 8MB of RAM you were the king of the neighborhood. lol
 