Transistor Density

LegionZeta

Junior Member
Jan 28, 2015
3
0
16
Is it just me, or is nobody talking about the nice little ancillary benefit AMD got from ATI? Specifically, the high-density design libraries appear to have paid off.

Transistor count / die size (mm^2) = transistor density (transistors per mm^2)

Carrizo = 3.1 billion transistors / 250mm^2 = 12,400,000 transistors per mm^2

Broadwell (Large die) = 1.9 billion transistors / 133mm^2 = 14,285,714 transistors per mm^2

Broadwell (Small die) = 1.3 billion transistors / 82mm^2 = 15,853,659 transistors per mm^2
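If anyone wants to sanity-check the arithmetic, here's a quick Python snippet (my own, just plugging in the transistor counts and die sizes from the AnandTech articles quoted below):

Code:
# Quick sanity check of the density figures above, using the transistor
# counts and die areas from the AnandTech articles quoted further down.
chips = {
    "Carrizo (28nm)":         (3.1e9, 250),  # (transistors, die area in mm^2)
    "Broadwell-U GT3 (14nm)": (1.9e9, 133),
    "Broadwell-U GT2 (14nm)": (1.3e9, 82),
}

for name, (transistors, area_mm2) in chips.items():
    print(f"{name}: {transistors / area_mm2:,.0f} transistors/mm^2")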

The reason I bring this up is that Intel has a node-and-a-half "advantage" (14nm vs. 28nm), yet AMD has achieved a transistor density remarkably close to Intel's. I know this doesn't take the layer count into account, but it's still a tremendous accomplishment. Makes me wonder what Zen is going to boast at 14nm/10nm?!?


HTML:
http://www.anandtech.com/show/8995/amd-at-isscc-2015-carrizo-and-excavator-details

“AMD is also disclosing the die size and transistor counts for Carrizo. Whereas Kaveri weighed in at 2.3 billion transistors in a 245mm2 die, Carrizo will come in at a much larger 3.1 billion transistors in a 250mm2 die. “

HTML:
http://www.anandtech.com/show/8814/intel-releases-broadwell-u-new-skus-up-to-48-eus-and-iris-6100

“Broadwell-U will be derived from two main dies. The larger design contains the full 48 EU (two common slices with 6x8 EU sub-slices all in) configuration for 1.9 billion transistors in 133 mm2, while the 24 EU design (one common slice, 3x8 sub-slices) will measure 1.3 billion transistors in 82 mm2.”
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
This shows that Intel is still first and foremost a CPU company, even though the IGP may be 80% of the die area, while Carrizo is a GPU with an integrated CPU; you won't buy an AMD APU for its (lack of) leading-edge CPU performance.

Density may be improved, even competitive with 14nm, but how about frequency?
 

LegionZeta

Junior Member
Jan 28, 2015
3
0
16
Design philosophy, frequency, and process node are talked about all the time in forums.

I just think it's amazing that the transistor density per mm^2 of a 28nm processor is almost as high as that of a 14nm processor. I would think that Intel would have at least twice the density, since they are a node and a half ahead in manufacturing.
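As a rough back-of-the-envelope (my own idealized assumption that density scales with the square of the node name, which real processes don't strictly follow), a full 28nm → 14nm shrink would naively give about 4x the density, while the numbers in the first post show only ~1.3x:

Code:
# Idealized scaling: area per transistor shrinks with the square of the
# linear feature size, so 28nm -> 14nm would naively quadruple density.
ideal_gain = (28 / 14) ** 2
print(f"Ideal 28nm -> 14nm density gain: {ideal_gain:.1f}x")  # 4.0x

# Observed gain from the figures in the first post (Broadwell-U GT2 vs Carrizo).
observed_gain = (1.3e9 / 82) / (3.1e9 / 250)
print(f"Observed gain: {observed_gain:.2f}x")  # ~1.28x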

Would you happen to know how many layers Broadwell/Carrizo use? I haven't found that yet…
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The design libraries used didn't come from ATI, and they are common tools.

It's simply a trade-off. It's the same reason why Carrizo doesn't go higher than 35W and completely avoids the desktop, while the Kaveri refresh takes over there.

[Attached image: CS794_Fig2.jpg]
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
The design libraries used didn't come from ATI, and they are common tools.

It's simply a trade-off. It's the same reason why Carrizo doesn't go higher than 35W and completely avoids the desktop, while the Kaveri refresh takes over there.
I think the 4.5W BDW-Y core is very similar to the one in a 140W Xeon; it seems there is quite a bit of untapped potential, because the same design also has to serve high-end desktops and the data center, which creates a bottleneck for area efficiency. I think an SoC like the 5Y10 wouldn't be hurt much by a much higher density, but this is where Atom comes into play, which has a higher density than Core.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,009
442
126
The design libraries used didn't come from ATI, and they are common tools.

It's simply a trade-off. It's the same reason why Carrizo doesn't go higher than 35W and completely avoids the desktop, while the Kaveri refresh takes over there.

[Attached image: CS794_Fig2.jpg]

Shouldn't Intel have been able to design Broadwell-Y Core M at a huge density then, given that it's only 4.5 W?

Currently I believe the cores are reused for high-frequency, high-TDP desktop CPUs too. But why not "recompile" the cores with high-density libraries when they are intended for ultra-low-power CPUs like Core M? I know it would likely take some effort, but given the volumes that Intel's CPUs sell at, maybe it would be worth considering.
 
Last edited:

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
If I were to guess, it's because they have plenty of extra capacity. I assume there are tradeoffs other than frequency for increasing density?
 

el etro

Golden Member
Jul 21, 2013
1,584
14
81
If I were to guess, it's because they have plenty of extra capacity. I assume there are tradeoffs other than frequency for increasing density?

No.

Of course you can make the denser chip clock as high as the less dense chip, but you will surely end up with a hotter, more power-hungry processor.
 

LegionZeta

Junior Member
Jan 28, 2015
3
0
16
I should have put this in my original post.

HTML:
http://www.anandtech.com/show/6201/amd-details-its-3rd-gen-steamroller-architecture/2

"This one falls into the reasons-we-bought-ATI column: future AMD CPU architectures will employ higher levels of design automation and new high density cell libraries, both heavily influenced by AMD’s GPU group. "

So, from the posts so far, it appears AMD sacrificed frequency for density (and power savings). However, the Broadwell designs I mentioned in my original post are "U", i.e. mobile, designs, so I would expect Intel to make the same tradeoff and have significantly higher densities, especially at 14nm.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Shouldn't Intel have been able to design Broadwell-Y Core M at a huge density then, given that it's only 4.5 W?

Currently I believe the cores are reused for high-frequency, high-TDP desktop CPUs too. But why not "recompile" the cores with high-density libraries when they are intended for ultra-low-power CPUs like Core M? I know it would likely take some effort, but given the volumes that Intel's CPUs sell at, maybe it would be worth considering.

Core M uses the same die as the 15W Broadwell GT2 U chips.
 

jj109

Senior member
Dec 17, 2013
391
59
91
Shouldn't Intel have been able to design Broadwell-Y Core M at a huge density then, given that it's only 4.5 W?

Currently I believe the cores are reused for high-frequency, high-TDP desktop CPUs too. But why not "recompile" the cores with high-density libraries when they are intended for ultra-low-power CPUs like Core M? I know it would likely take some effort, but given the volumes that Intel's CPUs sell at, maybe it would be worth considering.

You can't simply "recompile" a design. There's a large personnel and time cost involved in redesigning a chip to run off of slower cells, sending the design to the fab, manufacturing it, and powering it on. It's pretty much spinning up a new chip, because only a portion of the RTL can be shared.

So why would Intel design two chips when they can stuff a generic design into two power envelopes and still come out ahead?
 

Abwx

Lifer
Apr 2, 2011
11,783
4,691
136
This shows that Intel is still first and foremost a CPU company, even though the IGP may be 80% of the die area, while Carrizo is a GPU with an integrated CPU; you won't buy an AMD APU for its (lack of) leading-edge CPU performance.

Density may be improved, even competitive with 14nm, but how about frequency?

FUD in the very first answer, and completely off topic. You really should abstain from bringing in your viral marketing every time you get the occasion; all AMD-related threads are already filled with this kind of pollution, and it would be great for the sanity of the forum if these bad habits were put to rest for good.

On topic, and this is addressed to the OP: the two competing chips are comparable layout-wise, since Core M devotes a vast area to the IGP, comparable to what is devoted to it in AMD's Carrizo, so the usual excuse that GPUs have higher density doesn't hold in this case; even worse, FinFETs should allow for higher density than planar transistors.




You need to stop insulting the members. Now you're accusing one of being a shill.

At the rate you are going, you will be into permaban discussion territory if you do not stop.




esquared
Anandtech Forum Director
 
Last edited by a moderator:

Abwx

Lifer
Apr 2, 2011
11,783
4,691
136
Currently I believe the cores are reused for high-frequency, high-TDP desktop CPUs too. But why not "recompile" the cores with high-density libraries when they are intended for ultra-low-power CPUs like Core M? I know it would likely take some effort, but given the volumes that Intel's CPUs sell at, maybe it would be worth considering.

I guess you're implying that high-frequency design guidelines yield a lower density, i.e. that there is a trade-off between density and performance.

That's quite possible, but then how much higher is Core M clocked compared to Carrizo, which is 28nm and has a density that is supposedly a disadvantage for high frequencies?
 

Abwx

Lifer
Apr 2, 2011
11,783
4,691
136
The design libraries used didn't come from ATI, and they are common tools.

It's simply a trade-off. It's the same reason why Carrizo doesn't go higher than 35W and completely avoids the desktop, while the Kaveri refresh takes over there.

What is the area of a BDW core?
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Shouldn't Intel have been able to design Broadwell-Y Core M at a huge density then, given that it's only 4.5 W?

Currently I believe the cores are reused for high-frequency, high-TDP desktop CPUs too. But why not "recompile" the cores with high-density libraries when they are intended for ultra-low-power CPUs like Core M? I know it would likely take some effort, but given the volumes that Intel's CPUs sell at, maybe it would be worth considering.

Note that to "recompile" the physical design of a Core M would probably require Intel to hire well north of several hundred engineers... but sure, what's your hand-wavy reason why it's worth it?

Have you weighed the cost of fabbing out a couple of new chips + steppings, plus all those engineers, against the die area savings and better bins, to see if this is worth the investment?
 

videogames101

Diamond Member
Aug 24, 2005
6,783
27
91
Shouldn't Intel have been able to design Broadwell-Y Core M at a huge density then, given that it's only 4.5 W?

Currently I believe the cores are reused for high-frequency, high-TDP desktop CPUs too. But why not "recompile" the cores with high-density libraries when they are intended for ultra-low-power CPUs like Core M? I know it would likely take some effort, but given the volumes that Intel's CPUs sell at, maybe it would be worth considering.

Just for some background: within a single cell library, just because I can use a very small transistor doesn't mean I should. In many cases you'll use wider standard cells (transistors) in order to drive larger capacitances at high frequencies, e.g. when a gate has a large fanout or a large interconnect capacitance. These cells will be larger than the process's nominally "small" standard cells. High-density cell libraries typically have lower cell heights and lower drive currents than the high-performance libraries for the same cell width, so if you're running slower you can use them and pack more cells in because of the height difference; they'll also use less power.

I can assure you that if Intel did not re-run their back-end implementation flows targeting lower frequency and power, someone would be fired. You can't vary your targets that much and get any sort of reasonable results out of synthesis and APR tools. Whether or not they used different cell libraries likely depends on whether they had them available, not on whether it was worth the effort. Usually the nominal libraries are released before the HD libs.

As for the discrepancy between the measured densities and the quoted process density advantage, the explanation is likely a combination of several factors, but Intel won't be telling us. No one runs at 100% cell density, so it could simply be that AMD is running at, say, 80% cell utilization and Intel at 75%; maybe some of Intel's designs are interconnect-limited, or they were scared of hold fixing blowing up their density. Or, like I said, maybe Intel's high-density standard cell library wasn't ready for Core M. Or maybe something completely different; I'm just guessing.
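To put some toy numbers on the cell-height/utilization effect described above (all track heights and utilization figures here are invented for illustration, not real 14nm or 28nm library values):

Code:
# Toy model: transistors/mm^2 for a standard-cell block, assuming made-up
# library parameters. Shorter (HD) cells plus higher placement utilization
# can swing the achieved density by a large factor on the same node.

def density_per_mm2(height_tracks, metal_pitch_nm, avg_width_nm,
                    transistors_per_cell, utilization):
    cell_area_nm2 = (height_tracks * metal_pitch_nm) * avg_width_nm
    cells_per_mm2 = utilization * 1e12 / cell_area_nm2  # 1 mm^2 = 1e12 nm^2
    return cells_per_mm2 * transistors_per_cell

# Hypothetical high-performance library: tall cells, conservative utilization.
hp = density_per_mm2(12, 64, 400, 6, 0.70)
# Hypothetical high-density library: short cells, tighter packing.
hd = density_per_mm2(9, 64, 400, 6, 0.80)

print(f"HP library: {hp / 1e6:.1f}M transistors/mm^2")
print(f"HD library: {hd / 1e6:.1f}M transistors/mm^2")
print(f"HD vs HP: {hd / hp:.2f}x")  # ~1.5x from cell height + utilization alone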

EDIT:

Note that to "recompile" the physical design of a Core M would probably require Intel to hire well north of several hundred engineers... but sure, what's your hand-wavy reason why it's worth it?

Have you weighed the cost of fabbing out a couple of new chips + steppings, plus all those engineers, against the die area savings and better bins, to see if this is worth the investment?

Dropping the frequency target by over 1 GHz? I have to imagine they re-ran their physical design flows. I mean, you work at Intel so I defer to you, but I can't believe that's how it goes. If so, well, I guess I know one reason Core M didn't deliver as expected compared to ARM designs, which are always implemented for particular targets. I mean, you get an extra ~170ps going from 3 to 2 GHz; does process variation eat all of that up in binning? Maybe it does, I guess I don't know the 14nm parameters. I am completely aware of the cost in man-hours, but the cost of releasing sub-par products seems awfully high - well, something to the tune of $4.21 billion, I think :D
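For reference, the ~170ps figure is just the difference in clock periods between a 3 GHz and a 2 GHz target (my arithmetic, not something from the thread's sources):

Code:
# Extra timing budget per cycle when relaxing the target from 3 GHz to 2 GHz.
period_at_3ghz_ps = 1e12 / 3e9   # ~333 ps per cycle
period_at_2ghz_ps = 1e12 / 2e9   # 500 ps per cycle
print(f"Extra slack: {period_at_2ghz_ps - period_at_3ghz_ps:.0f} ps")  # ~167 ps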
 
Last edited:

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
You bet you would have to rerun the physical design flow and rebuild an incredible number of custom designs again, using whatever new library cells, design rules, and frequency/voltage targets you want. It can be done, but it's incredibly expensive and the return is questionable. 170ps is a monstrous amount of time, and it'll get eaten up by lower voltages, higher-density cells/low-power libraries, and eventually the rest by the design.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I find it very unusual that Broadwell 2+3 (GT3) with a huge iGPU has lower density than the smaller 2+2 die.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,009
442
126
This shows that Intel is still first and foremost a CPU company, even though the IGP may be 80% of the die area, while Carrizo is a GPU with an integrated CPU; you won't buy an AMD APU for its (lack of) leading-edge CPU performance.

Density may be improved, even competitive with 14nm, but how about frequency?

Well, both AMD and Intel dedicate more and more die area to the iGPU. As you say, many APUs nowadays are more to be considered GPUs with integrated CPUs. So having an efficient, high-density GPU core design will be ever more important going forward, and AMD has an edge here.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,009
442
126
Note that to "recompile" the physical design of a Core M would probably require Intel to hire well north of several hundred engineers... but sure, what's your hand-wavy reason why it's worth it?

Have you weighed the cost of fabbing out a couple of new chips + steppings, plus all those engineers, against the die area savings and better bins, to see if this is worth the investment?

Yes, I know you don't just recompile it. That's why I put "recompile" in quotes, and also said that it would require work. But I hoped you would all get the idea I was hinting at anyway.

Also, apparently AMD will do it with Carrizo. So why shouldn't Intel be able to do it too, given that they have a much larger R&D budget and higher volumes? I.e. have one design for low-power, low-frequency, high-density CPUs such as the U/Y and Core M models, and another one for the opposite, i.e. the high-performance CPUs.

Ideally Intel should also have different process tech variants for those, just like TSMC has High Performance vs. Low Power process variants. As I understand it, Intel will e.g. only have one common 14nm process, unless I've missed something. I'm surprised that this is the case, given the vast range of CPUs that Intel produces: anything from the 4.5W Core M to 100W+ Xeon CPUs, plus dies with varying amounts of area allocated to the iGPU, which benefits from high density.
 
Last edited:

Nothingness

Diamond Member
Jul 3, 2013
3,292
2,357
136
Ideally Intel should also have different process tech variants for those, just like TSMC has High Performance vs. Low Power process variants. As I understand it, Intel will e.g. only have one common 14nm process, unless I've missed something. I'm surprised that this is the case, given the vast range of CPUs that Intel produces.
According to Bohr, there should be two different 14nm processes (P1272 for CPU and P1273 for SoC), as is the case at 22nm (P1270/P1271).
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,009
442
126
The design libraries used didn't come from ATI, and they are common tools.

It's simply a trade-off. It's the same reason why Carrizo doesn't go higher than 35W and completely avoids the desktop.

http://wccftech.com/amd-godavri-apu...june-2015-carrizo-desktop-apu-arrive-1h-2016/

So the first new piece of information we got was the fact that AMD actually had a desktop Carrizo planned, something that was later canceled due to management’s decision. It was originally designed to be put onto the FM2+ socket and FP4 packaged and relieve the Kaveri, being a true successor to the older APU. However, to cut costs and following the profit motive (something rather understandable) they decided to reiterate Kaveri with a refresh named Godavri.
[...]
Interestingly he also mentioned that Carrizo still might end up on the desktop segment and AMD is currently debating this very question.
So it seems that Carrizo might end up on the desktop after all. And the reason AMD has so far decided not to provide that is cost, not that it wouldn't be able to clock high enough, etc.