Intel roadmap promises 10nm by 2018


SHAQ

Senior member
Aug 5, 2002
738
0
76
Corporations have to milk every mini improvement in tech. They are actually anti-innovation. That's why all this future talk will be delayed 10x longer than it really needs to be. It would be nice to have real competition where one company goes straight for graphene, nanotubes, etc. Maybe Google, Facebook or Apple will run with it, huh? lol
 
Dec 30, 2004
12,553
2
76
Corporations have to milk every mini improvement in tech. They are actually anti-innovation. That's why all this future talk will be delayed 10x longer than it really needs to be. It would be nice to have real competition where one company goes straight for graphene, nanotubes, etc. Maybe Google, Facebook or Apple will run with it, huh? lol

Meh, there's no point in that now though. We'll move when we have to; until then, these faster processors are good enough for us :)
 
Dec 30, 2004
12,553
2
76
I got a friend who swears it's going into the hand or forehead. I loved your response so much I'll pay him a visit today to tell him how stupid he is for thinking the devil is gonna do what everyone knows he will do.

;) Love messing with this guy, and you're my new best friend.

Revelation is a spiritual book. We got sidetracked when we interpreted it literally. It should be interpreted spiritually. The mark of the beast is already here alive and well; the beast is greed.
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
I remember once Gordon Moore gave a speech and he said that at any point in his career he would look forward 10 years and think "I don't see how it's possible for scaling to continue", and then 10 years later we would find a way forward and things would move on. When he gave one of his early "Moore's Law" speeches, he used to say that he thought we could double for 10 years plus or minus two years, which would mean the end of Moore's Law in 1975 or thereabouts. At some point, quite a few people in the industry thought it was impossible to make transistors with a minimum feature size smaller than 1um.

Clearly you can't keep shrinking transistor sizes forever, but as I see it, we have a path 10 years forward, which is normal. Once we hit the scaling limit, I have no doubt things will continue - either with new materials or we'll move to 3D with TSV or something else. If it all goes horribly wrong, I'm pretty good at gardening... which is always a useful skill.
 

PreferLinux

Senior member
Dec 29, 2010
420
0
0
LOL, Intel also predicted ten years ago we'd have 10GHz chips by now.
Yes, on Netburst. If they gave us a single core Netburst chip now, it probably would have that sort of speed.

Revelation is a spiritual book. We got sidetracked when we interpreted it literally. It should be interpreted spiritually. The mark of the beast is already here alive and well; the beast is greed.
If that were the case, it would be written as such. But it isn't written like that.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Yes, on Netburst. If they gave us a single core Netburst chip now, it probably would have that sort of speed.

Given the parametric transistor improvements Intel has made on each successive node since 90nm, I would not be surprised at all if a single-core netburst CPU could be clocked at 10GHz (or more) on 32nm.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
I thought 11nm was the limit for silicon ...

No, No, that was 220nm! :p

The only absolute limit is the size of a single silicon atom. And even then, they might be able to get multiple transistors out of a single atom through some method not currently known.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Quantum computers aren't some kind of magic bullet that will replace all other computers. That's a Star Trek-type fantasy. They are unbeatable for processing some types of programs, and slow for others.

Theoretically, spintronics can combine the best of both worlds in hybrid computers using both classical and quantum circuitry. Instead of pushing current through circuits, they use the spin of the electron. In this way they can approach the speed of light using very small amounts of energy without frying everything in sight, or the spins can be entangled for quantum computing.

The trick is engineering precision. Just as shrinking circuits or making them faster requires greater precision, so does designing spintronic devices. Modern hard disk drives are spintronic devices, but the ability to make complex circuits is still some years away. Room temperature superconductors, carbon nanotubes, graphene, plasmonics, etc. all have potential, but have theoretical limitations that spintronics doesn't.

If room temperature superconductors are ever created, then heat should cease to be a critical limiting parameter for ultra high end computing platforms. What will be the next limiting factor? Are we going to 60 THz optical computers?
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
I remember once Gordon Moore gave a speech and he said that at any point in his career he would look forward 10 years and think "I don't see how it's possible for scaling to continue", and then 10 years later we would find a way forward and things would move on. When he gave one of his early "Moore's Law" speeches, he used to say that he thought we could double for 10 years plus or minus two years, which would mean the end of Moore's Law in 1975 or thereabouts. At some point, quite a few people in the industry thought it was impossible to make transistors with a minimum feature size smaller than 1um.

Clearly you can't keep shrinking transistor sizes forever, but as I see it, we have a path 10 years forward, which is normal. Once we hit the scaling limit, I have no doubt things will continue - either with new materials or we'll move to 3D with TSV or something else. If it all goes horribly wrong, I'm pretty good at gardening... which is always a useful skill.

:) I'm pretty sure the lack of scaling won't put engineers totally out of a job (I hope it doesn't, at least :D). I would predict that if we hit a wall with scaling, the future of engineering work would be implementing application logic in hardware. We already have MPEG4 decoders in hardware; I could see us starting to take that to the extreme. For example, an email hardware component or an HTML5 translation component.

That would give us jobs forever.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Thank god, because I suck at gardening! Well, I'm not much of an engineer either, for that matter...
 

IntelEnthusiast

Intel Representative
Feb 10, 2011
582
2
0
Given the parametric transistor improvements Intel has made on each successive node since 90nm, I would not be surprised at all if a single-core netburst CPU could be clocked at 10GHz (or more) on 32nm.

The big problem is how would you keep it from burning a hole down to the center of the earth with the heat that it would generate? While Netburst wasn't a great performance processor, the thing that really got me with it was the heat it would generate. So thank god for the Intel® Core™ 2 Duo processors.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
If room temperature superconductors are ever created, then heat should cease to be a critical limiting parameter for ultra high end computing platforms. What will be the next limiting factor? Are we going to 60 THz optical computers?

Superconductors only address power consumption attributable to resistive heating (so-called Joule heating) of the component they directly replace.

They do nothing to address power consumption attributable to leakage, nor do they change the power-consumption attributes of the active components in the circuit, which are still resistive.

Superconductive circuits will help, if they can ever become practical for ubiquitous computing purposes, but they are not a holy grail.
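
To put rough numbers on that point, here is a minimal sketch; every figure in it is made up purely for illustration, just to show which terms a superconductor would and wouldn't touch:

```cpp
// Rough illustration with made-up numbers: zeroing out the resistive (Joule)
// losses in the wiring - the only thing a superconductor replaces - still
// leaves switching and leakage power untouched.
#include <iostream>

int main() {
    // Hypothetical breakdown for a ~100 W chip (illustrative only).
    double dynamic_switching_w  = 55.0; // CV^2*f switching power in the transistors
    double leakage_w            = 30.0; // subthreshold + gate leakage
    double interconnect_joule_w = 15.0; // resistive (Joule) heating in the wiring

    double conventional = dynamic_switching_w + leakage_w + interconnect_joule_w;

    // Swap the wiring for a room-temperature superconductor: only the
    // Joule-heating term of the replaced component goes to zero.
    double superconducting_wires = dynamic_switching_w + leakage_w;

    std::cout << "Conventional wiring:    " << conventional << " W\n";
    std::cout << "Superconducting wiring: " << superconducting_wires << " W\n";
    return 0;
}
```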

:) I'm pretty sure the lack of scaling won't put engineers totally out of a job (I hope it doesn't, at least :D). I would predict that if we hit a wall with scaling, the future of engineering work would be implementing application logic in hardware. We already have MPEG4 decoders in hardware; I could see us starting to take that to the extreme. For example, an email hardware component or an HTML5 translation component.

That would give us jobs forever.

From a macro-economic perspective this position certainly holds true, but at the micro-economic level (think Detroit) or the individual level (the laid-off CMOS process engineer) this provides no job security or comfort that a changing future will hold a place for them in the job market.

This is the downside to having specialized skills and industries while not having much in the way of a social safety net. Look at the auto industry, or, perhaps even more poignant, the NASA space shuttle program and its ex-employees.

A few percent of them will transition well into the private sector, but how many thousands of refueling technicians and launch-pad specialists does the private sector really need?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
The big problem is how would you keep it from burning a hole down to the center of the earth with the heat that it would generate? While Netburst wasn't a great performance processor, the thing that really got me with it was the heat it would generate. So thank god for the Intel® Core™ 2 Duo processors.

True, true...on 90nm, Netburst was a very efficient device for converting electricity into heat :D

But as 65nm Cedar Mill showed, as did 130nm Northwood, the problems with Prescott and 90nm were a combination of both design AND process complementing each other in the most undesirable ways imaginable.

The problem with Netburst was not the heat or power-consumption per se, but rather the lowly IPC performance of the microarchitecture itself which then necessitated all the higher clockspeeds to make it competitive at the price-points which AMD was defining for the market at the time.

On 32nm, a single-core netburst chip operating at 10GHz would probably not require more than 50W TDP.

The engineering challenge though would be the die size: 50W coming from a die that is a mere 10mm² in area would be really challenging to keep temperatures down (heat "density" drives temperature), but if they spread out the active components and made the chip largish - say 50-75mm² - then temperatures would be more manageable.
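
For a back-of-the-envelope feel for that, just divide the same hypothetical 50W figure by the two die areas mentioned above:

```cpp
// Back-of-the-envelope power-density comparison for a hypothetical 50 W part
// on the die areas mentioned above. Heat density, not total power, is what
// drives peak temperature.
#include <iostream>

int main() {
    const double tdp_w = 50.0;          // hypothetical 10 GHz single-core Netburst on 32nm
    const double small_die_mm2 = 10.0;  // tightly packed die
    const double large_die_mm2 = 75.0;  // active components spread out

    std::cout << "Small die: " << tdp_w / small_die_mm2 << " W/mm^2\n"; // 5.0 W/mm^2
    std::cout << "Large die: " << tdp_w / large_die_mm2 << " W/mm^2\n"; // ~0.67 W/mm^2
    return 0;
}
```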

None of that would address the lackluster IPC though, so even at 10GHz the chip would simply be uncompetitive - no sense bothering to create it in the first place.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
From a macro-economic perspective this position certainly holds true, but at the micro-economic level (think Detroit) or the individual level (the laid-off CMOS process engineer) this provides no job security or comfort that a changing future will hold a place for them in the job market.

This is the downside to having specialized skills and industries while not having much in the way of a social safety net. Look at the auto industry, or, perhaps even more poignant, the NASA space shuttle program and its ex-employees.

A few percent of them will transition well into the private sector, but how many thousands of refueling technicians and launch-pad specialists does the private sector really need?

The older, more specialized engineers will be the ones that suffer. There is no doubt about that. However, in general, I could see it being a good thing for people in my generation. I could see it creating more jobs and perhaps making the total cost of personal chip manufacturing plummet. (After all, if we stagnate at a process node for a long period, the cost of making things at that node will undoubtedly fall through the floor.)

The jobs created from it, I think, would be more than the jobs that would be lost.
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
It's going to have 50MB of L3 cache; I don't know if that's Ivy Bridge, but the new 3850xx (Intel's new one, sorry for the spelling) has 12MB of L3. I read Intel talked about 50MB of L3 cache, so watch out for that - it will be a seriously hardcore chip.

Do you buy
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
If room temperature superconductors are ever created, then heat should cease to be a critical limiting parameter for ultra high end computing platforms. What will be the next limiting factor? Are we going to 60 THz optical computers?

The biggest technical problem with room temperature superconductors is that they don't exist and no one knows if it is possible to make them at all, much less turn them into viable commercial products with billions of complex parts crammed into a few millimeters of space. Likewise, there are other ways around heat issues, so it isn't an advantage exclusive to room temperature superconductors.

Circuits that approach the speed of light and quantum computing are already being developed by countless laboratories around the world, and it is inevitable we will have computers that use them at some point. Exactly what might be the next big limiting factor, I have no clue. It's a bit like asking who will win the World Series in 2040.

Right now the biggest limiting factor, and the darkest horse in this race, is heterogeneous computing. The traditional CPU is being replaced with something more like the human brain, with all its specialized parts and yet massive parallel processing capacity. The architectural theory for the CPU goes back to WWII, but we don't even have a basic theory for heterogeneous computing yet. People are rushing to develop all the specialized parts and parallel processors required, but otherwise blindly feeling their way forward as to how to combine them all in the most effective and efficient manner.

Thankfully the current research into the brain is progressing by leaps and bounds, but the mathematics behind this kind of computing are mind-numbingly complex and related to physical theories of everything. I'm reminded of Einstein drawing the first diagram of a laser on a napkin and then telling his dinner companion it would be another fifty years before it could be built. We don't even have the theory yet, much less any way of telling how long it might take to implement.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
The biggest technical problem with room temperature superconductors is that they don't exist and no one knows if it is possible to make them at all, much less turn them into viable commercial products with billions of complex parts crammed into a few millimeters of space. Likewise, there are other ways around heat issues, so it isn't an advantage exclusive to room temperature superconductors.

Circuits that approach the speed of light and quantum computing are already being developed by countless laboratories around the world, and it is inevitable we will have computers that use them at some point. Exactly what might be the next big limiting factor, I have no clue. It's a bit like asking who will win the World Series in 2040.

Right now the biggest limiting factor, and the darkest horse in this race, is heterogeneous computing. The traditional CPU is being replaced with something more like the human brain, with all its specialized parts and yet massive parallel processing capacity. The architectural theory for the CPU goes back to WWII, but we don't even have a basic theory for heterogeneous computing yet. People are rushing to develop all the specialized parts and parallel processors required, but otherwise blindly feeling their way forward as to how to combine them all in the most effective and efficient manner.

Thankfully the current research into the brain is progressing by leaps and bounds, but the mathematics behind this kind of computing are mind-numbingly complex and related to physical theories of everything. I'm reminded of Einstein drawing the first diagram of a laser on a napkin and then telling his dinner companion it would be another fifty years before it could be built. We don't even have the theory yet, much less any way of telling how long it might take to implement.

I might be alone on this, but I don't see heterogeneous computing as being any more complex or challenging than the circuit-balancing efforts that go into managing microarchitectures that already have >700 instructions in the ISA.

[Attached image: x86ISAovertime.jpg - x86 ISA instruction count over time]

Implementing the fetch, decoders, schedulers, pipeline, etc. as needed to manage the fact that code can be requesting the CPU to issue any number of the instructions in the ISA within a given core is, itself, a rather heterogeneous computing environment.

The same goes for the programmers and the compilers; they somehow already manage a world that offers the flexibility of these ridiculously large instruction sets.

The challenge I see facing the industry is not "how do we do heterogeneous computing?" but rather "how do we convince people to want to pay for our heterogeneous computing solution?".
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
I might be alone on this, but I don't see heterogeneous computing as being any more complex or challenging than the circuit-balancing efforts that go into managing microarchitectures that already have >700 instructions in the ISA.

[Attached image: x86ISAovertime.jpg - x86 ISA instruction count over time]

Implementing the fetch, decoders, schedulers, pipeline, etc. as needed to manage the fact that code can be requesting the CPU to issue any number of the instructions in the ISA within a given core is, itself, a rather heterogeneous computing environment.

The same goes for the programmers and the compilers; they somehow already manage a world that offers the flexibility of these ridiculously large instruction sets.

The challenge I see facing the industry is not "how do we do heterogeneous computing?" but rather "how do we convince people to want to pay for our heterogeneous computing solution?".
:) Not only that, but if you think the large instruction sets of CPUs are bad, wait until you see the stuff that goes on in GPU land. There, almost nothing is really standard, yet somehow games are still being produced, even with platform-specific code.

Though I would say that current compilers for the CPU are somewhat disappointing in their use of CPU extensions. The Intel compiler is pretty much the only one out there that really makes good use of the SSE instruction set; GCC is particularly bad at this.

Though, the actual need for SIMD really is overstated. There aren't a WHOLE lot of places where the need for SIMD is really large. That being said, things like AES on chip can be pretty useful for corporate environments.
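
As a concrete illustration of what "making good use of SSE" means, here is a minimal hand-written sketch (the function name and shape are mine, purely for illustration) - the kind of code one hopes an auto-vectorizing compiler would generate from the plain scalar loop by itself:

```cpp
// Minimal hand-written SSE sketch: add two float arrays four lanes at a time.
#include <xmmintrin.h>  // SSE intrinsics
#include <cstddef>

void add_arrays_sse(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);             // load 4 floats from a
        __m128 vb = _mm_loadu_ps(b + i);             // load 4 floats from b
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));  // 4 additions in one instruction
    }
    for (; i < n; ++i)                               // scalar tail for leftover elements
        out[i] = a[i] + b[i];
}
```

Whether a given compiler emits something equivalent from the scalar version is exactly the hit-or-miss part being complained about above.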
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
I might be alone on this, but I don't see heterogeneous computing as being any more complex or challenging than the circuit-balancing efforts that go into managing microarchitectures that already have >700 instructions in the ISA.

Implementing the fetch, decoders, schedulers, pipeline, etc. as needed to manage the fact that code can be requesting the CPU to issue any number of the instructions in the ISA within a given core is, itself, a rather heterogeneous computing environment.

The same goes for the programmers and the compilers; they somehow already manage a world that offers the flexibility of these ridiculously large instruction sets.

The challenge I see facing the industry is not "how do we do heterogeneous computing?" but rather "how do we convince people to want to pay for our heterogeneous computing solution?".

That's the challenge for any new industry. When the laser was finally developed people said, "That's nice, but what can you do with it?" It took decades to figure out widespread commercial applications, but now they are ubiquitous and we are still rapidly finding new commercial applications for the damned things.

The first widespread commercial applications for heterogeneous computing are likely to be in video games and other consumer products. The ability to run physics and AI on the CPU without loading down the graphics card is one possible application. Gamers want that extra eye candy, and if they don't have to pay an arm and a leg to get it they'll demand it. It's only been in the last few weeks that MS released its new C++ AMP, so the idea that multicore computing and parallel processing programming are somehow mature sciences is laughable. So far the question has been how to justify developing the technology for specific applications, but soon it may be whether we can justify ignoring the possibilities.
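
For reference, the programming model being referred to looks roughly like this - a trivial sketch rather than production code (the function name is mine, and it needs a compiler with C++ AMP support, i.e. Microsoft's at the time of writing):

```cpp
// Trivial C++ AMP sketch: offload an element-wise add to whatever accelerator
// (typically the GPU) the runtime selects.
#include <amp.h>
#include <vector>

void vector_add(std::vector<float>& a, const std::vector<float>& b) {
    using namespace concurrency;
    array_view<float, 1>       av(static_cast<int>(a.size()), a);
    array_view<const float, 1> bv(static_cast<int>(b.size()), b);

    parallel_for_each(av.extent, [=](index<1> i) restrict(amp) {
        av[i] += bv[i];   // runs on the accelerator, one thread per element
    });
    av.synchronize();     // copy results back to the host vector
}
```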
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
That's the challenge for any new industry. When the laser was finally developed people said, "That's nice, but what can you do with it?" It took decades to figure out widespread commercial applications, but now they are ubiquitous and we are still rapidly finding new commercial applications for the damned things.

The first widespread commercial applications for heterogeneous computing are likely to be in video games and other consumer products. The ability to run physics and AI on the CPU without loading down the graphics card is one possible application. Gamers want that extra eye candy, and if they don't have to pay an arm and a leg to get it they'll demand it. It's only been in the last few weeks that MS released its new C++ AMP, so the idea that multicore computing and parallel processing programming are somehow mature sciences is laughable. So far the question has been how to justify developing the technology for specific applications, but soon it may be whether we can justify ignoring the possibilities.

Sorry, but the bolded part is false. Multicore and parallel processing have been around for a LONG time - longer than most might expect. Most of the research into the subject happened in the 1960s! (See http://en.wikipedia.org/wiki/Dining_philosophers_problem )

Supercomputer programmers have been dealing with this stuff for a LONG time; it is only recently that consumer applications have started to seriously use it. To say it is in its infancy because a new tool comes out is laughable.
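
For anyone who hasn't run into it, the dining philosophers exercise linked above is exactly this kind of 1960s-era concurrency problem. A bare-bones sketch using one of the standard deadlock-avoidance strategies (always acquire the lower-numbered fork first) might look like this:

```cpp
// Dining philosophers: N threads each need two shared "forks" (mutexes).
// Deadlock is avoided by imposing a global lock order (lower index first).
#include <algorithm>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    const int N = 5;
    std::vector<std::mutex> forks(N);
    std::vector<std::thread> philosophers;

    for (int p = 0; p < N; ++p) {
        philosophers.emplace_back([&forks, p, N] {
            int first  = std::min(p, (p + 1) % N);  // lower-numbered fork first
            int second = std::max(p, (p + 1) % N);
            for (int meal = 0; meal < 3; ++meal) {
                std::lock_guard<std::mutex> left(forks[first]);
                std::lock_guard<std::mutex> right(forks[second]);
                std::printf("philosopher %d eats meal %d\n", p, meal);
            }
        });
    }
    for (auto& t : philosophers) t.join();
    return 0;
}
```

None of this is new; it is just newly relevant to consumer software.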
 

SHAQ

Senior member
Aug 5, 2002
738
0
76

They have 8 years while Intel milks silicon, and where money and opportunity collide, anything can happen. Too many conservative stuffed shirts running companies. Apple brought just a tiny bit of innovation and their stock is through the roof.