20nm SoCs in 2015, GPU wait could be a while!


know of fence

Senior member
May 28, 2009
555
2
71
As the process nodes get smaller and smaller, what happens when they get down to zero? o_O
Will they get down to 1 nm or so, then fractions of a nanometer?

We've got 6-7 shrinks left until the circuits consist of wires and gates only a few atoms wide; it will likely take decades to get there.
It should go 14 or 16 nm - 10 nm - 7 nm - 5 nm - 3.5 nm - 2.5 nm, though there is no research past 7 nm, from what I understand.
A single Si atom is about 0.22 nm across, just for reference.
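The shrink sequence above can be sanity-checked with a quick script that expresses each node in Si atom widths, using the ~0.22 nm atom size quoted in the post (a rough sketch; real feature sizes don't map one-to-one onto node names):

```python
# Express each node in the shrink sequence above in Si atom widths,
# using the ~0.22 nm Si atom size quoted in the post.
SI_ATOM_NM = 0.22
nodes_nm = [14, 10, 7, 5, 3.5, 2.5]

for n in nodes_nm:
    print(f"{n:>4} nm ~ {round(n / SI_ATOM_NM)} Si atoms wide")
```

By the last step in the sequence, a minimum feature is only about a dozen atoms across, which is why the post talks in decades rather than node cadences.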
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
They have faced challenges getting smaller before, and they don't typically have research telling them they can do it more than 6 years out anyway. That isn't to say it can continue forever; the size of the atom will presumably be a permanent barrier to this approach. But I don't bet against Intel, they have been doing this longer than I have been alive.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Moore's "law" (more like observation) is dead. It is becoming increasingly uneconomical to push through node shrinks to improve transistor density.

We need to get off silicon, and fast. Samsung's recent research on graphene was encouraging, and IBM has come far on carbon nanotubes. But both are still some years off, especially graphene. On the other hand, once the world devotes itself with singular focus to a technological challenge, rapid progress almost always follows. However, the R&D money going into established materials is more than an order of magnitude larger, and that is a conservative estimate, which is the main stumbling block.

I'm guessing we'll get to 10 nm at the end of this decade before it stops going further on silicon. Not that we can't get to 7 or even 5 technologically; it is more a matter of cost.
 

Mand

Senior member
Jan 13, 2014
664
0
0
As the process nodes get smaller and smaller, what happens when they get down to zero? o_O
Will they get down to 1 nm or so, then fractions of a nanometer?
Or in other words... How low can Joe go? :p

There is a limit, yes. And that limit will be because of quantum tunneling. The reason wires work is that the insulator around the wire prevents the electrons from moving through it. When you get small enough, though, even a very good insulator no longer stops electrons from passing through it. It's a quantum mechanics thing: the electron will just end up on the other side of the insulator - which is another wire. It's not supposed to be there, and ALL of the wires are doing this, spilling their electrons to their neighbors.

The result is that the tightly controlled voltages and currents that processors rely on to do their work become significantly less controlled. And there's ABSOLUTELY no way around it. That is the limit of Moore's Law, and it is a hard limit. We may be able to get to the high single digits of nanometers for process width, but not much lower.

Just so you have some appreciation of the scope involved, the typical separation between atoms in most things, including the materials we make processors out of, is 4 angstroms, or 0.4 nanometers. An 8 nanometer process width therefore makes a strip of metal 20 atoms wide. Eventually it stops being a wire, and starts being a cluster of atoms, and the physics that govern how things work changes radically - from bulk properties to quantum properties.
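The tunneling argument above can be made concrete with a back-of-the-envelope WKB estimate for a rectangular barrier, T ≈ exp(−2κd) with κ = √(2mΦ)/ħ. The 1 eV barrier height here is an illustrative assumption, not a real gate-oxide number; the point is just the exponential blow-up as the insulator thins:

```python
# Rough WKB sketch: tunneling probability through a rectangular barrier
# of height phi (eV) and width d (nm), T ~ exp(-2 * kappa * d).
# phi = 1 eV is an illustrative assumption, not a real oxide value.
import math

HBAR = 1.0545718e-34      # reduced Planck constant, J*s
M_E  = 9.1093837e-31      # electron mass, kg
EV   = 1.602176634e-19    # joules per eV

def tunneling_probability(phi_ev, d_nm):
    kappa = math.sqrt(2 * M_E * phi_ev * EV) / HBAR   # decay constant, 1/m
    return math.exp(-2 * kappa * d_nm * 1e-9)

for d in (5.0, 2.0, 1.0, 0.5):
    print(f"{d} nm barrier: T ~ {tunneling_probability(1.0, d):.1e}")
```

Halving the barrier width doesn't halve the leakage, it multiplies it by orders of magnitude, which is why the limit arrives abruptly rather than gradually.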
 

realibrad

Lifer
Oct 18, 2013
12,337
898
126
Which is why quantum computing will eventually take over, and we will all connect through what are essentially dumb terminals. The terminal will be leaps ahead of today's, but relatively weak, as most of the computational work will be done off-site. We won't be stuck in binary, and so much more can be done. Why live in a 0/1 world when you can do so much more?
 

Bubbleawsome

Diamond Member
Apr 14, 2013
4,834
1,204
146
How does optical computing fit in here? I understand graphene and nanotubes, but is optical computing pretty much tiny fiber optics? I read somewhere that optical tech at today's process node would give us the equivalent of something like a 30,000,000 GHz processor. o_O
If that's anywhere near true, and anywhere near the next 20 years, that would be awesome.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Why do we believe this, exactly?

Why can't they do both?

It's a question worth asking. My gut feeling is that it's an assumption made by those who aren't technically fluent in the matter, and it spreads across the internet like wildfire. Not that I'm technically fluent in all things related to node design, but I've seen similar claims made in the past which didn't necessarily turn out to be the case. I'm really curious about this as well. I don't see any reason to believe they can't do both; unless someone is an expert in the field, nobody can really state that definitively either.

I dunno. We'll see I guess.
 
Feb 4, 2009
35,706
17,248
136
As the process nodes get smaller and smaller, what happens when they get down to zero? o_O
Will they get down to 1 nm or so, then fractions of a nanometer?
Or in other words... How low can Joe go? :p

**I think** around .09 they will need to figure out a better way of moving electricity through them. Something about the circuits being too close and interfering with each other. Can someone more knowledgeable explain better, please?
 

Mand

Senior member
Jan 13, 2014
664
0
0
How does optical computing fit in here? I understand graphene and nanotubes, but is optical computing pretty much tiny fiber optics? I read somewhere that optical tech at today's process node would give us the equivalent of something like a 30,000,000 GHz processor. o_O
If that's anywhere near true, and anywhere near the next 20 years, that would be awesome.

Optical computing has a number of distinct advantages, but also some drawbacks.

The biggest advantage is bandwidth. In an electronic circuit, you can only have one signal in it at a time. The voltage and current have one value at any given point in a wire. If you want another signal, you need another wire.

In an optical system, you can have a HUGE number of signals in the same physical space. Typical counts for fiber telecom systems are 40x multiplexing, where each individual piece of glass carries 40 distinct, separable, independently switchable signals through it. The same sort of thing can theoretically be applied to an optical processor setup, which can get around some of the size issues involved. If you can carry 40 signals in one channel, that channel can be 40 times the size and still have the same overall throughput.

The downside is that electronics is a very well-developed system at this point, from design to manufacture. There's a reason we use it. It's easy to make switches, transistors, etc, and we know how to set them up to do what we want. Optical computing not only requires new manufacturing processes, it requires new individual components, design architectures, everything. What does a de-multiplexer on the scale of an integrated circuit look like? How do we leverage the advantages of optical components to take advantage of their strengths and mitigate their weaknesses? There's a lot of engineering to be done before we even approach the level of existing semiconductor-based electronics.
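The wavelength-multiplexing argument above is simple arithmetic; a minimal sketch, where the 40-channel count comes from the post and the 10 Gb/s per-channel line rate is an illustrative assumption:

```python
# Aggregate throughput of one WDM fiber vs. one electrical wire.
# 40 channels is the typical DWDM count cited in the post;
# the 10 Gb/s per-channel rate is an assumed illustrative figure.
CHANNELS = 40
PER_CHANNEL_GBPS = 10.0

wire_gbps = PER_CHANNEL_GBPS                 # one signal per wire
fiber_gbps = CHANNELS * PER_CHANNEL_GBPS     # 40 signals share one fiber

print(f"wire:  {wire_gbps} Gb/s")
print(f"fiber: {fiber_gbps} Gb/s ({CHANNELS}x the wire)")
```

That 40x factor is what lets an optical channel be physically larger than a wire while matching its total throughput, as the post notes.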
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
As the process nodes get smaller and smaller, what happens when they get down to zero? o_O
Will they get down to 1 nm or so, then fractions of a nanometer?
Or in other words... How low can Joe go? :p

We don't know; no one knows. My current bet is that it will stop somewhere around 3nm. What I know for sure, though, is that TSMC won't reach 3nm any time soon, nor will any other dedicated foundry. The costs will be astronomical.

Fortunately, transistor size isn't the only thing that matters for performance. Maybe one day we'll buy a carbon nanotube processor with clock speeds of 1 THz.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
We've got 6-7 shrinks left until the circuits consist of wires and gates only a few atoms wide; it will likely take decades to get there.
It should go 14 or 16 nm - 10 nm - 7 nm - 5 nm - 3.5 nm - 2.5 nm, though there is no research past 7 nm, from what I understand.
A single Si atom is about 0.22 nm across, just for reference.

Intel's roadmap:

intel_rd.jpg
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
They have faced challenges getting smaller before, and they don't typically have research telling them they can do it more than 6 years out anyway. That isn't to say it can continue forever; the size of the atom will presumably be a permanent barrier to this approach. But I don't bet against Intel, they have been doing this longer than I have been alive.

The biggest problem is leakage. Leakage/quantum tunneling caused the end of Dennard scaling. But I think, even after transistor shrinking has stopped, there will still be a lot of innovation left. I'm not sure if people notice, but this 2-year cadence is actually quite fast, so that's been the biggest focus for semiconductor companies.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Moore's "law" (more like observation) is dead. It is becoming increasingly uneconomical to push through node shrinks to improve transistor density.

We need to get off silicon, and fast. Samsung's recent research on graphene was encouraging, and IBM has come far on carbon nanotubes. But both are still some years off, especially graphene. On the other hand, once the world devotes itself with singular focus to a technological challenge, rapid progress almost always follows. However, the R&D money going into established materials is more than an order of magnitude larger, and that is a conservative estimate, which is the main stumbling block.

I'm guessing we'll get to 10 nm at the end of this decade before it stops going further on silicon. Not that we can't get to 7 or even 5 technologically; it is more a matter of cost.
Moore's law isn't dead, not for Intel at least. For other companies it isn't looking as good. Don't worry about materials; Intel is obviously researching those things. For example, HKMG was in development for half a decade before a product was released. I'm in fact amazed by how well this article from 12 years ago predicted the future: The Amazing Vanishing Transistor Act. Post-silicon materials have been under research at Intel since at least 2009. Also:

different-transistor-topologies.jpg


Source: 7nm, 5nm, 3nm: The new materials and transistors that will take us to the limits of Moore’s law
 

tolis626

Senior member
Aug 25, 2013
399
0
76
Yield issues come from node issues . . . unless you mean something else?

Nope, different things. Yield issues mean not enough usable transistors are being produced; the usable ones are those that would function properly in a product. Not the best description, but that's about it.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Yield issues come from node issues . . . unless you mean something else?

If they didn't have those yield issues, there wouldn't be any problems with 14nm. Unlike other foundries' 14/16nm, there are no other problems such as a lack of density improvement.

9402d1386033562-intel-yield.jpg


I would call a node issue something like Global Foundries' gate-first.
 

Fire&Blood

Platinum Member
Jan 13, 2009
2,333
17
81
Moore's law lives a little longer?
http://global.samsungtomorrow.com/?p=35576

Graphene has one hundred times greater electron mobility than silicon, the most widely used material in semiconductors today. It is more durable than steel and has high heat conductivity as well as flexibility, which makes it the perfect material for use in flexible displays, wearables and other next generation electronic devices.

The new method developed by SAIT and Sungkyunkwan University synthesizes large-area graphene into a single crystal on a semiconductor, maintaining its electric and mechanical properties. The new method repeatedly synthesizes single crystal graphene on the current semiconductor wafer scale.
 

Hitman928

Diamond Member
Apr 15, 2012
6,599
12,071
136
Nope, different things. Yield issues mean not enough usable transistors are being produced; the usable ones are those that would function properly in a product. Not the best description, but that's about it.

If they didn't have those yield issues, there wouldn't be any problems with 14nm. Unlike other foundries' 14/16nm, there are no other problems such as a lack of density improvement.

9402d1386033562-intel-yield.jpg


I would call a node issue something like Global Foundries' gate-first.

I guess it's just semantics at this point, but I see what you mean. I view it differently, I guess. It doesn't matter if you can get working transistors out of a node; you can do that with crazy exotic stuff. What proves a node is whether it gets high enough yields to be viable. The other stuff (density and such) comes down to how you define the node, which at this point is becoming pretty arbitrary. That's how I was looking at it, anyway.
 

know of fence

Senior member
May 28, 2009
555
2
71
I'm not sure if people notice, but this 2-year cadence is actually quite fast, so that's been the biggest focus for semiconductor companies.

2007, 2009, 2011 - 2014: with the delays of 22nm and now 14nm, Intel is already off cadence, along with everyone else. Though their 14 nm SoCs are about to shake up the mobile market, in earthquake-like fashion.

Nvdia's 750 Maxwell was released as a 28nm chip and so was the Snapdragon 801 shipping in the flagship phones on the front page. The weird 20 nm half step isn't really reducing cost per transistor, so it's at best an option for the big TDP limited 250 W monstrosities, yet both graphics makers just announced their dual GPU cards again in 28 nm. At least everything is out there right now, the next thing to be released will have to be something new, right?