I'm confused about Intel's 14nm process lead


SAAA

Senior member
May 14, 2014
541
126
116
Actually, if you compare the SRAM parts of the chips, the 4MB L3 of Broadwell and the same amount on the A8, they are very close in size and density.
But the Broadwell cache runs at almost 3 GHz in the highest-end Core M SKUs, and the exact same design is used in 15-28W parts that run up to 3.5 GHz... vs 1.5 GHz for Cyclone+? (Yeah, lower TDP, but by how much? And can it really reach those speeds?)

OK. So much noise for a process that is light years ahead of any other current competitor, and it's not like the FinFET versions from the latter will magically catch up to this.
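
Back-of-the-envelope on that SRAM point, with placeholder bit-cell areas (the cell sizes below are my own assumptions for illustration, not figures from this thread):

Code:
# Raw array area for a 4 MB L3 built from 6T SRAM bit cells.
# Bit-cell areas are assumed for illustration only.
MB = 4
bits = MB * 1024 * 1024 * 8                 # data bits only; ignores tags/ECC/redundancy

cell_um2 = {
    "Intel 14 nm (assumed)": 0.0588,        # assumed bit-cell area in um^2
    "TSMC 20 nm (assumed)":  0.0810,        # assumed bit-cell area in um^2
}

for node, cell in cell_um2.items():
    area_mm2 = bits * cell / 1e6            # um^2 -> mm^2
    print(f"{node}: ~{area_mm2:.1f} mm^2 of raw cell area")

Real arrays add decoders, sense amps and redundancy on top of the raw cells, so what you see in a die photo is bigger than this in both cases.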
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Well, only 20% of Broadwell-U is CPU + L3 running at high frequencies.
The other 80%, mostly iGPU, runs at much lower frequencies, and yet
the average transistor density is still lower.

[Image: Intel-5th-Gen-Core-Die-Map.jpg]

Look at all that dead space on that die.
 

jdubs03

Golden Member
Oct 1, 2013
1,497
1,086
136
Look at all that dead space on that die.

Good spot, there is easily enough space to increase the CPU/GPU size. For the CPU core, the dead space accounts for ~20% of the total area. Any reason why they wouldn't utilize all of the space?
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Good spot, there is easily enough space to increase the CPU/GPU size. For the CPU core, the dead space accounts for ~20% of the total area. Any reason why they wouldn't utilize all of the space?

I can't speak for the graphics cores, but it's more like they took a weird die photo (not really sure what that top strip is). Maybe it's the clearance to cut the die. The box for the CPU is much larger than it should be. Refer to this Core M die photo for a better diagram.

http://www.extremetech.com/wp-content/uploads/2014/09/intel-core-m-broadwell-y-die-diagram-map.jpg
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,622
745
126
Or get a trusted, independent third party to independently measure/determine/decide the true nm dimensions. Like IEEE standards, and similar. Or at least create a framework for strictly defining how nm size is determined.

Yes, I agree. And I find it very strange that this isn't already in place. Can anyone from the semiconductor industry explain why a common, official and independent definition hasn't been agreed upon?

Anyway, looking at the info provided by Hans de Vries, it sure looks like TSMC 20 nm has higher density than Intel 14 nm. And that still holds even when taking into consideration the differences in what types of blocks (iGPU/CPU/...) make up most of the dies of Apple A8/A8X vs Broadwell U/Y.
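
For a crude sanity check, here's the kind of math I mean, using transistor counts and die sizes as I recall them being reported (treat the numbers as approximate):

Code:
# Average transistor density from reported transistor counts and die sizes.
# Figures are approximate, quoted from memory; correct them if you have better ones.
chips = {
    # name: (transistors in billions, die area in mm^2)
    "Apple A8 (TSMC 20 nm)":            (2.0,  89.0),
    "Broadwell-U 2+3 (Intel 14 nm)":    (1.9, 133.0),
    "Core M Broadwell-Y (Intel 14 nm)": (1.3,  82.0),
}

for name, (billions, area) in chips.items():
    density = billions * 1000 / area        # million transistors per mm^2
    print(f"{name}: ~{density:.1f} Mtr/mm^2")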

Judging by this, maybe Intel should change the name of its 14 nm process back to 22 nm or similar, and TSMC could keep calling theirs 20 nm? :p
 

NTMBK

Lifer
Nov 14, 2011
10,524
6,050
136
The performance of a chip is (roughly speaking) determined by its architecture, while its power consumption is determined by the process node.

Not at all true! Just compare 65nm Pentium D with the 65nm Conroe chips made on the exact same process.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Yes, I agree. And I find it very strange that this isn't already in place. Can anyone from the semiconductor industry explain why a common, official and independent definition hasn't been agreed upon?

There is no official or industry-wide standard definition for process nodes because it would serve no purpose for any company that wasn't #1.

Businesses aren't in the habit of hiring people and paying them an annual salary just so those people can while away their days and months arguing with other people over a standardized process node definition.

A standardized definition would not generate revenue, it would not sell more products, it would make no money. But it would cost money.

So your question is backwards: instead of asking "why haven't they?" you should be asking "why would they?"

It should be "why would a common official definition be agreed upon?", because the only time standards are agreed upon is when doing so lowers the cost of doing business (making it easier to make money).
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,622
745
126
There is no official or industry-wide standard definition for process nodes because it would serve no purpose for any company that wasn't #1.

Businesses aren't in the habit of hiring people and paying them an annual salary just so those people can while away their days and months arguing with other people over a standardized process node definition.

A standardized definition would not generate revenue, it would not sell more products, it would make no money. But it would cost money.

So your question is backwards: instead of asking "why haven't they?" you should be asking "why would they?"

It should be "why would a common official definition be agreed upon?", because the only time standards are agreed upon is when doing so lowers the cost of doing business (making it easier to make money).

I'm not sure I agree. We have common standards and definitions in many (actually most) other technical areas, even though they do not strictly reduce cost.

Just to name a few: SPECint, GFLOPS, display contrast and brightness, display color space, disk IOPS... the list could be made huge.
 

dahorns

Senior member
Sep 13, 2013
550
83
91
I'm not sure I agree. We have common standards and definitions in many (actually most) other technical areas, even though they do not strictly reduce cost.

Just to name a few: SPECint, GFLOPS, display contrast and brightness, display color space, disk IOPS... the list could be made huge.

It isn't about reducing cost, it is about creating additional value. All of the things you mentioned can change from product to product. Not only can a company use them to compare its products to its competitors', but it can also use them to justify product segmentation (i.e., higher prices for better products). The same wouldn't be true for the manufacturing process.

Anyway, looking at the info provided by Hans de Vries, it sure looks like TSMC 20 nm has higher density than Intel 14 nm. And that still holds even when taking into consideration the differences in what types of blocks (iGPU/CPU/...) make up most of the dies of Apple A8/A8X vs Broadwell U/Y.
I guess if you ignore everything else others have said?
 

III-V

Senior member
Oct 12, 2014
678
1
41
Yes, I agree. And I find it very strange that this isn't already in place.
Again, it used to be. It used to be defined by the minimum feature size, i.e. the size of the smallest thing you could print on that node. The measuring stick was the metal 1 half pitch.

Things started to diverge, though. Foundries started tailoring various dimensions of their transistors and metal layers to what would benefit them best, rather than every foundry having identical dimensions. Gate length started scaling faster -- Intel's 130 nm had a gate length of 60 nm. Metal pitch scaling has slowed down.

Now, some things are smaller (e.g. fin width) and some larger (gate length), depending on where you look. Overall, dimensions have still scaled by about 0.7x each node, a bit more aggressively with Intel's 14 nm (hence Intel calling it 14 nm rather than 16 nm), and up until TSMC et al. started calling 20FF "16FF" or "14FF" for kicks, foundries at least stuck loosely to the ITRS roadmap's naming scheme.
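
To put numbers on that 0.7x rule of thumb (this is pure arithmetic on the node names, so take it as an illustration rather than a statement about actual pitches):

Code:
# A 0.7x linear shrink per node gives ~0.7^2 ~= 0.49x the area, i.e. ~2x the density.
# Intel's 22 nm -> 14 nm naming implies a slightly more aggressive step than that:
prev_nm, new_nm = 22.0, 14.0
linear = new_nm / prev_nm                   # ~0.64x vs the classic ~0.71x
area = linear ** 2                          # ~0.40x area, if the names were literal
print(f"linear: {linear:.2f}x, area: {area:.2f}x, density: {1 / area:.1f}x")
# Whether a real layout shrinks that much depends on the actual pitches, not the name.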

Here's an article explaining everything a bit better:
http://spectrum.ieee.org/semiconductors/devices/the-status-of-moores-law-its-complicated
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,622
745
126
It isn't about reducing cost, it is about creating additional value. All of the things you mentioned can change from product to product. Not only can a company use them to compare its products to its competitors', but it can also use them to justify product segmentation (i.e., higher prices for better products). The same wouldn't be true for the manufacturing process.
So you mean a foundry cannot charge more for dies made on a later process node (assuming the common definition of the process node number is relevant)?
I guess if you ignore everything else others have said?
If we take density as the primary metric to define the process node number, then I believe it is correct. If we also consider other metrics, most likely not. But isn't the process node number (XX nm) just that, a definition of the finest features that could be drawn on the chip? I.e. it is actually not taking max frequency or similar into account.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,622
745
126
I found this article on the subject:

http://spectrum.ieee.org/semiconductors/devices/the-status-of-moores-law-its-complicated

An interesting read. Here's an excerpt:

In the mid-1990s, when such chips were the state of the art, 0.35 µm was an accurate measure of the finest features that could be drawn on the chip. This determined dimensions such as the length of the transistor gate, the electrode responsible for switching the device on and off. Because gate length is directly linked to switching speed, you’d have a pretty good sense of the performance boost you’d get by switching from an older-generation chip to a 0.35-µm processor. The term “0.35-µm node” actually meant something.

But around that same time, the link between performance and node name began to break down. In pursuit of ever-higher clock speeds, chipmakers expanded their tool kit. They continued to use lithography to pattern circuit components and wires on the chip, as they always had. But they also began etching away the ends of the transistor gate to make the devices shorter, and thus faster.

After a while, “there was no one design rule that people could point to and say, ‘That defines the node name,’” says Mark Bohr, a senior fellow at Intel. The company’s 0.13-µm chips, which debuted in 2001, had transistor gates that were actually just 70 nm long. Nevertheless, Intel called them 0.13-µm chips because they were the next in line.
So it seems that up until 0.35µm the definition was quite clear and common.
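
Plugging the article's own numbers in, just to see how far the drawn gate length drifted from the node name (the 0.35 µm gate figure is implied by the article, the 70 nm one is stated):

Code:
# Ratio of gate length to node name, using figures from the quoted article:
# gates roughly equal to the node name at 0.35 um, but only 70 nm at Intel's 0.13 um.
nodes = {
    "0.35 um (mid-1990s)":   (350, 350),    # (node name in nm, approx. gate length in nm)
    "0.13 um (Intel, 2001)": (130,  70),
}

for name, (node_nm, gate_nm) in nodes.items():
    print(f"{name}: gate length / node name = {gate_nm / node_nm:.2f}")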
 

dahorns

Senior member
Sep 13, 2013
550
83
91
So you mean a foundry cannot charge more for dies made on a later process node (assuming the common definition of the process node number is relevant)?

They can and do, but they don't need a standardized metric to do it. It would be a pain in the ass to create and wouldn't add any additional value.

If we take density as the primary metric to define the process node number, then I believe it is correct. If we also consider other metrics, most likely not. But isn't the process node number (XX nm) just that, a definition of the finest features that could be drawn on the chip? I.e. it is actually not taking max frequency or similar into account.
Haven't we already established that the theoretical SRAM density of Intel's process is higher than the competitors'? Remember, I was responding to your comment suggesting that the comparison of transistor densities between two different designs was somehow indicative of the denser node. We know that is not the case, because design concerns (e.g., targeted performance) affect the density of the design.
 

III-V

Senior member
Oct 12, 2014
678
1
41
They can and do, but they don't need a standardized metric to do it. It would be a pain in the ass to create and wouldn't add any additional value.
Right. If one's interested in what the actual dimensional parameters of a node are, they are publicly available in most cases.
So the amount of silicon die area that's actually used is less than the reported number (possibly deflating Intel's 14nm density even more)?
Yeah, transistors/mm2 is rather meaningless, as we hopefully all know.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
There is tons of dead space above the cores + IGP, beside the memory controller, and between the two slices of the IGP.

Just because the picture is intentionally blurred and/or tampered with doesn't mean those are dead space areas.
 

III-V

Senior member
Oct 12, 2014
678
1
41
Just because the picture is intentionally blurred and/or tampered with doesn't mean those are dead space areas.
I don't think it's been tampered with, other than colorizing it.

The dead space is largely a side-effect of IP reuse for their different die variants. The time spent optimizing that space is better spent creating additional variants.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,622
745
126
They can and do, but they don't need a standardized metric to do it. It would be a pain in the ass to create and wouldn't add any additional value.
You could argue the same for standardized metrics in most other areas too, but there are reasons we still have them.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,622
745
126
Haven't we already established that the theoretical SRAM density of Intel's process is higher than the competitors'? Remember, I was responding to your comment suggesting that the comparison of transistor densities between two different designs was somehow indicative of the denser node. We know that is not the case, because design concerns (e.g., targeted performance) affect the density of the design.

In that case, measuring theoretical SRAM cell size is not sufficient either, because one process tech could have a very small theoretical SRAM cell size, but it may still be impossible to make use of that high density in practice, since the clock speed might have to be kept too low (otherwise it would overheat) or the chip would consume too much power.

In light of this, transistor density on actual chips with similar functional blocks and clock speeds ought to be more interesting to compare. TSMC/Apple A8/A8X vs the Intel Broadwell U/Y series should be a fairly close comparison then? Or do you have some other chips that would result in a more accurate comparison? Maybe AMD Zen on Samsung 14 nm vs Intel desktop Broadwell/Skylake on Intel 14 nm will be better to compare, but they are not out yet, unfortunately.
 