Intel 10nm and GF 7nm at IEDM 2017


french toast

Senior member
Feb 22, 2017
988
825
136
Fin shape and height both affect performance. Shape determines how the device will behave: the more rectangular the fin, the better it will function. Fin height does affect frequency, and taller fins also improve area efficiency, meaning you can pack more transistors closer together.
hQ3waGU.png
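For intuition on the fin-height point, here is a minimal sketch assuming the standard FinFET effective-width relation W_eff = 2·H_fin + W_fin. The 8nm fin width is an illustrative assumption; the 42nm and 46nm fin heights are the figures discussed later in this thread.

```python
# Why taller fins help drive current: each fin contributes an effective
# channel width of roughly 2*H_fin + W_fin from the same footprint.
def effective_width(h_fin_nm: float, w_fin_nm: float = 8.0) -> float:
    """Effective channel width of one fin, in nm (W_fin is assumed)."""
    return 2 * h_fin_nm + w_fin_nm

for h in (42, 46):
    print(f"H_fin = {h}nm -> W_eff ~ {effective_width(h):.0f}nm per fin")
# 92nm vs 100nm: ~9% more effective width from the same footprint,
# roughly in line with the ~10% drive-current gain claimed for the
# taller 14nm++ fin discussed later in the thread.
```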
Thanks, that explains it well.
That GloFo 7nm really is a quantum leap over 14nm.
By the time 7nm EUV lands in 2019, GloFo will be ahead of Intel by the looks of it.
Pretty amazing really; it also uses cobalt like Intel, just not for the interconnect lines themselves.
All in all, GloFo will essentially have a leading-edge Intel-class process.
 

goldstone77

Senior member
Dec 12, 2017
217
93
61
Thanks, that explains it well.
That GloFo 7nm really is a quantum leap over 14nm.
By the time 7nm EUV lands in 2019, GloFo will be ahead of Intel by the looks of it.
Pretty amazing really; it also uses cobalt like Intel, just not for the interconnect lines themselves.
All in all, GloFo will essentially have a leading-edge Intel-class process.

GlobalFoundries will have the smaller cell, but Intel will still have about 13.8-19.6% better logic density (published estimates: Intel's ~102.9-103 MTr/mm² vs. GloFo's ~86-90.5 MTr/mm²), because Intel is using contact over active gate. That being said, GlobalFoundries has made significant improvements to its transistors! The two will be on as close to a level playing field as we have seen, and it will end up being a contest mainly over who has the best design.

Edit: Intel reports 100.8 MTr/mm² for its own process, making the logic density gap only about 11%.
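As a quick check of those percentages, here is the arithmetic using the published figures quoted above; which endpoints you pair up shifts the result slightly.

```python
# Back-of-the-envelope check of the density gap, in published MTr/mm^2.
intel_low, intel_high = 102.9, 103.0   # published estimates for Intel 10nm
intel_own = 100.8                       # Intel's self-reported figure
glofo_low, glofo_high = 86.0, 90.5      # published estimates for GF 7nm

def gap_pct(a: float, b: float) -> float:
    """Percent logic-density advantage of a over b."""
    return (a / b - 1) * 100

print(f"low end : {gap_pct(intel_low, glofo_high):.1f}%")            # ~13.7%
print(f"high end: {gap_pct(intel_high, glofo_low):.1f}%")            # ~19.8%
print(f"vs Intel's own figure: {gap_pct(intel_own, glofo_high):.1f}%")  # ~11.4%
```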
 

goldstone77

Senior member
Dec 12, 2017
217
93
61
Dayman1225 originally posted this article on page 7, which explains the contact over active gate.
IEDM 2017: Intel’s 10nm Platform Process
By Dick James
The dummy gates at the cell boundaries have gone, replaced by a single gate spacing; and the gate contact is now over the active gate, ending the need for isolation space to fit in the contact.

The 14-nm process had a dummy gate at the edge of each cell, on the end of adjacent fins, similar to this image of a 22-nm device:

The 10-nm cell instead uses a single dummy-gate spacing between fin ends, which saves a gate pitch when packing two cells together, for a claimed 20% cell-area saving.

In actual fact there is no dummy gate in the finished product, just the fin etched where a single dummy gate would be. This was shown in the presentation but is not in the paper; Samsung did something very similar in their 10-nm offering:

In fact, a dummy polySi gate is used, allowing source/drain formation without risking the fin edge; but for these particular gates the polySi removal etch goes a bit further, and etches the fin to separate the cells.

The second layout change is to shift the gate contact into the active transistor area, over the functional part of the gate (see below).

Such tight alignment with the source/drain (diffusion) contacts requires the development of self-aligned contacts to the gate, and modification of the self-aligned diffusion contacts that were already in use at 14-nm and 22-nm.

Diffusion contacts (left) and gate contacts

To do this, two etch-stop materials and two selective etches are used. After gate formation the gate is etched back and the cavity filled with silicon nitride, as in earlier generations; the contact is then put in, also etched back, and that cavity filled with silicon carbide. A first selective etch then opens the gate contact without touching the SiC in the contact cavity, and a second selective etch removes the SiC from the contact cavity without affecting the gate contact periphery. Clearly this sequence relies on excellent etch selectivity between the different materials.
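To make the sequence concrete, here is an illustrative sketch (my own framing, not from the article) of the mutual-selectivity requirement that makes the two-cap scheme work:

```python
# Illustrative sketch (not a process recipe): each contact-opening etch
# must clear its own etch-stop cap while leaving the other cap intact.
CAPS = {"gate": "SiN", "diffusion contact": "SiC"}

def open_contact(target: str) -> None:
    other = next(k for k in CAPS if k != target)
    # The whole scheme hinges on high selectivity between the two caps.
    print(f"open {target}: etch {CAPS[target]}, selective to {CAPS[other]}")

open_contact("gate")                # etch SiN, must not touch the SiC cap
open_contact("diffusion contact")   # etch SiC, must not touch the SiN cap
```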
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
So Intel finally had to concede that 10nm HVM is now only in 2019. What's scary is they did not want to commit to H1 2019. This makes it look like a nightmare scenario. If Intel cannot ship 100-150mm² client chips by mid-2019, the outlook for a 400mm² ICL-SP in late 2019 is really bad. BK finally accepted that Intel went too aggressive on density.

https://www.cnbc.com/2018/04/26/intel-execs-on-10nm-chip-delays-bit-off-a-little-too-much.html

I never understood what Intel gained by going from 40nm MMP to 36nm MMP. A 10% scaling gain was not really worth the added risk, given that it forced SAQP on the lowest two metal layers. I also think the decision to go cobalt was linked to the same decision to go with MMP = 36nm.
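For reference, the arithmetic behind that 10% figure; the cell-height assumption in the comments is mine, not established fact.

```python
# The scaling gain from tightening minimum metal pitch (MMP):
# shrinking one pitch by 10% buys at most ~10% in that dimension.
mmp_old, mmp_new = 40, 36   # nm
print(f"linear gain: {1 - mmp_new / mmp_old:.0%}")   # 10%
# If standard-cell height tracks the metal pitch while gate pitch is
# unchanged, cell area also drops ~10% -- the trade being questioned
# here: ~10% area for the cost and risk of SAQP plus cobalt fill.
```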

https://www.semiwiki.com/forum/content/7191-iedm-2017-intel-versus-globalfoundries-leading-edge.html

"The specific linewidth where cobalt becomes a lower resistance interconnect solution depends on several factors but is right around the linewidths being utilized here. My belief is that Intel used cobalt because they have a 36nm MMP and it made sense for them to do so. GF published a paper on 7nm process development with IBM and Samsung at IEDM in 2016 and that process had 36nm MMP and used cobalt for one level of interconnect. My belief is that with a 40nm MMP in the GF 7nm process cobalt wasn't needed and it is more expensive than copper, so GF didn't use it. Cobalt also offers higher electromigration resistance than copper and GF did use cobalt liners and caps around their copper lines to meet their electromigration goals.

The bottom line is Intel used cobalt because it makes sense for their process and GF didn't because it didn't make sense for their process. As we move to foundry 5nm and below processes I do expect to see more cobalt use and eventually ruthenium."

Interestingly, we have TSMC and GF going with a DUV-only first-gen 7nm process, 40nm MMP, and SADP for the metal layers, while Samsung went with 7nm EUV, which avoided SAQP for the lowest metal layers, at 36nm MMP. Samsung is brute-forcing its way to EUV on the strength of its DRAM/NAND cash machine. Intel is caught in no man's land, having chosen what in hindsight now seems the riskiest route, and it backfired badly. This experience should be a hard lesson, not only for Intel but for all foundries, in how important it is to manage risk when developing a leading-edge process node by making pragmatic technology choices and decisions.
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
Maybe Intel was betting on their smartphone CPUs and IoT, but since all of that failed really hard, it backfired.
 

goldstone77

Senior member
Dec 12, 2017
217
93
61
So Intel finally had to concede that 10nm HVM is now only in 2019. What's scary is they did not want to commit to H1 2019. This makes it look like a nightmare scenario. If Intel cannot ship 100-150mm² client chips by mid-2019, the outlook for a 400mm² ICL-SP in late 2019 is really bad. BK finally accepted that Intel went too aggressive on density.

https://www.cnbc.com/2018/04/26/intel-execs-on-10nm-chip-delays-bit-off-a-little-too-much.html
Intel (INTC) Q1 2018 Results - Earnings Call Transcript
Apr. 26, 2018 8:59 PM ET

So I'm just going to correct you. You said that supposedly we have the solutions. We do understand these, and so we do have confidence that we can go and work these issues, Stacy. Right now, like I said, we are shipping. We're going to start that ramp as soon as we think the yields are in line. So I said 2019. We didn't say first or second half, but we'll do it as quickly as we can based on the yield.
https://seekingalpha.com/article/4166652-intel-intc-q1-2018-results-earnings-call-transcript

Sounds pretty unsure about when they are going to have yields fixed. This could just be more song and dance!
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Sounds pretty unsure about when they are going to have yields fixed. This could just be more song and dance!

Agree. Sounds like they have no clue. If you don't know whether it's the first or second half, you can't really know it's 2019 either....
 

jpiniero

Lifer
Oct 1, 2010
14,584
5,206
136
I never understood what Intel gained by going from 40nm MMP to 36nm MMP. A 10% scaling gain was not really worth the added risk, given that it forced SAQP on the lowest two metal layers. I also think the decision to go cobalt was linked to the same decision to go with MMP = 36nm.

Hmm, Intel did say they are going to re-do some of the layers at 10++, although I don't remember the specifics. I almost wonder if they might regress the density and reduce the cobalt.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Hmm, Intel did say they are going to re-do some of the layers at 10++, although I don't remember the specifics. I almost wonder if they might regress the density and reduce the cobalt.

Yeah, Intel says 10++ will have a rearchitected metal stack, so maybe (I am just guessing here) they are backing off from aggressive cobalt use along with a slight density reduction to MMP = 40nm.

0vAz547.png
 

Beemster

Member
May 7, 2018
34
30
51
It does look like AMD will do a 64C / 16MB/core / ~225mm² chip and a 4-chip MCM at less than 175W; their power vs. frequency curves suggest as much. What is Intel's best response to this in a monolithic die at 10nm?

Since active power goes as ½·f·C·V², and I don't see Intel dropping the voltage or wanting to regress on frequency, they need all the required power scaling to come from the die shrink. But 10/14 is just a 29% linear shrink, so C will only go down by about that much, and only if they achieve full 10/14 scaling; if they relax the ground rules, they won't. And they need at least a 50% power reduction per core to double the number of cores while keeping the power and frequency about the same. After two device iterations to 14nm++, I don't believe they have anything left from a device point of view at 10nm that would allow lower-voltage operation. So I think power-per-core scaling is a bigger issue for them at 10nm than die size.

Consider this: the Skylake I7 8180M, 28 cores at 2.5/3.8 on 14nm+, is about 400mm² and already 205W. So power per core limits the number of cores here, not necessarily the die size, as 14nm yields are reportedly very good. I think the key will be how the 14nm++ iteration looks in August; we might then be able to tell what scaling is required at 10nm to compete with a 64-core EPYC 2. AMD gained much more moving from 14nm Samsung to 7nm GF. Go AMD! No prisoners!

Your thoughts??
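To put numbers on that argument, a minimal sketch assuming per-core capacitance tracks the linear shrink, with V and f held flat (the optimistic case):

```python
# Power per core under P_dyn ~ 1/2*f*C*V^2, relative to the old node.
def rel_power_per_core(c_scale: float, f_scale: float = 1.0,
                       v_scale: float = 1.0) -> float:
    return c_scale * f_scale * v_scale**2

c_scale = 10 / 14   # ~0.71: assume C tracks the 10/14 linear shrink
print(f"{rel_power_per_core(c_scale):.0%} of 14nm power per core")  # ~71%
# Doubling cores at the same total power needs ~50% power per core,
# so the ~29% C reduction alone falls short; the rest must come from
# voltage (note the V^2 term) or from giving up frequency.
```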





So Intel finally had to concede that 10nm HVM is now only in 2019. What's scary is they did not want to commit to H1 2019. This makes it look like a nightmare scenario. If Intel cannot ship 100-150mm² client chips by mid-2019, the outlook for a 400mm² ICL-SP in late 2019 is really bad. BK finally accepted that Intel went too aggressive on density.

https://www.cnbc.com/2018/04/26/intel-execs-on-10nm-chip-delays-bit-off-a-little-too-much.html

I never understood what Intel gained by going from 40nm MMP to 36nm MMP. A 10% scaling gain was not really worth the added risk, given that it forced SAQP on the lowest two metal layers. I also think the decision to go cobalt was linked to the same decision to go with MMP = 36nm.

https://www.semiwiki.com/forum/content/7191-iedm-2017-intel-versus-globalfoundries-leading-edge.html

"The specific linewidth where cobalt becomes a lower resistance interconnect solution depends on several factors but is right around the linewidths being utilized here. My belief is that Intel used cobalt because they have a 36nm MMP and it made sense for them to do so. GF published a paper on 7nm process development with IBM and Samsung at IEDM in 2016 and that process had 36nm MMP and used cobalt for one level of interconnect. My belief is that with a 40nm MMP in the GF 7nm process cobalt wasn't needed and it is more expensive than copper, so GF didn't use it. Cobalt also offers higher electromigration resistance than copper and GF did use cobalt liners and caps around their copper lines to meet their electromigration goals.

The bottom line is Intel used cobalt because it makes sense for their process and GF didn't because it didn't make sense for their process. As we move to foundry 5nm and below processes I do expect to see more cobalt use and eventually ruthenium."

Interestingly, we have TSMC and GF going with a DUV-only first-gen 7nm process, 40nm MMP, and SADP for the metal layers, while Samsung went with 7nm EUV, which avoided SAQP for the lowest metal layers, at 36nm MMP. Samsung is brute-forcing its way to EUV on the strength of its DRAM/NAND cash machine. Intel is caught in no man's land, having chosen what in hindsight now seems the riskiest route, and it backfired badly. This experience should be a hard lesson, not only for Intel but for all foundries, in how important it is to manage risk when developing a leading-edge process node by making pragmatic technology choices and decisions.
 
Last edited:

Beemster

Member
May 7, 2018
34
30
51
That's a lotta words.

CPCH leak sez' it's 225-240W for 64C MCM.


Of course you could be right, but is the ~235W at the same frequencies (nominal and turbo) as EPYC 1? It could well be higher, for competitive reasons or because the process sweet spot is at a somewhat higher frequency than Samsung 14nm. I'm just guessing.
 
  • Like
Reactions: raghu78


Beemster

Member
May 7, 2018
34
30
51
FWIW, more things to consider. Intel's Coffee Lake i7 8700K (95W, 6 cores, 3.7/4.7, ~150mm²) gives us the first look at 14nm++. We can compare it with the Ryzen 2700X (8 cores, 3.7/4.3, 105W, 210mm²). The i7 is still a bit faster in single thread but trails in multi-thread performance: "In AMD's internal tests, the new top-end $330 Ryzen 7 2700X chip was 1% slower than a comparable Intel Core i7 8700K chip, which costs $370, in a benchmark of 12 video games. And on a benchmark of apps commonly used in creative industries the AMD chip was 21% faster than the Intel chip." So these two are very close. The i7 is a very dense chip, probably 15% denser than the 2700X, and the best Intel can do at 14nm. Now, can they scale 10/14 from here to achieve 30% lower C, and thus a 30% power reduction, from scaling alone? It does not appear they can. And will they get any power benefit from an improved 10nm device? I doubt that either. So how can they increase the core count enough on a large monolithic die to compete with a 64-core EPYC 2?

Now look at the 2700X. I'm assuming what GF calls 12nm uses the same criteria as what they call 7nm. If that's correct, the scaling is approximately 7/12, or 58%, which is about what they claim (>55%). So scaling alone should allow the doubling in core count at approximately the same power and frequency, without any significant device improvements. The 12nm device has seen significant improvements at GF; I recall somewhere they claimed improved DIBL allowing a 4nm reduction in minimum channel length at high drain bias. However, we don't know if it got all the device improvements detailed in the GF 7nm IEDM paper, and those were significant. I'm assuming the power vs. frequency curves from AMD compared 14nm Samsung to 7nm GF. If that's correct, the device benefit AMD will get going from 12nm to 7nm (from the paper) is less than expected, because some or most of those device benefits went into the 12nm process; the 12nm gains came from process improvement alone, as there was no shrink.

The Intel comparisons really have to be made between 14nm++ and 10nm. If they are made against the original 14nm (Broadwell), I think they are meaningless; I can't even rationalize Broadwell being a 14nm chip in terms of density and power scaling from 22nm. Note that Intel in 2013 claimed they had four (4) different 14nm processes, all of which differed in density. How does one square that one?

A final question for anyone: what is the chance that Intel realized, as soon as 14nm Ryzen IPC leaked (2016), that the MCM approach with Infinity Fabric would bite them hard at 10nm? And what is the chance that we do not see 10nm from Intel until they copy the AMD approach in late 2019 or 2020? Final note: I'm an AMD fan and investor.
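The shrink arithmetic in that claim, in one line, using the same ½fCV² framing as above (C assumed to track the linear shrink):

```python
# GF 12nm -> 7nm: if per-core C tracks the ~7/12 linear shrink at fixed
# V and f, power per core lands near the ~50% needed to double cores.
c_scale = 7 / 12
print(f"power per core: ~{c_scale:.0%} of 12nm")   # ~58%
```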
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
the Skylake I7 8180M, 28 cores at 2.5/3.8 on 14nm+, is about 400mm² and already 205W.

What are you talking about? First, it isn't an i7 but a Xeon, and second, the die size is in the 600mm² range, not 400mm².

AnandTech has estimated the 18-core version closes in at 500mm², and the 28-core Xeon is close to 700mm².

That's a lotta words.

It's a lot of unorganized words.
 
  • Like
Reactions: pcp7

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
AnandTech has estimated the 18-core version closes in at 500mm², and the 28-core Xeon is close to 700mm².
Qualcomm (heh) also estimated XCC at 698mm^2.
Aaaaaaaaaaanyway, with 10 still being deader than dead, Intel's prospects going into 2019 are grim.
Like, really grim.
 
  • Like
Reactions: raghu78

Beemster

Member
May 7, 2018
34
30
51
Yeah, Intel says 10++ will have a rearchitected metal stack, so maybe (I am just guessing here) they are backing off from aggressive cobalt use along with a slight density reduction to MMP = 40nm.

0vAz547.png



OK, I looked at all this again and want to update my guess, FWIW. I'm thinking now AMD plays it safe on Zen 2: goes to 12 cores, doubles the L3 cache, and adds the AVX-512 extensions. I didn't realize Zen 3 is only a year behind, so now I think they go 16 cores on Zen 3; I doubt they had enough firm conviction in the 7nm process (to risk 16 cores) when they committed to the 12-core design. Again, starting from P(total) ≈ ½·f·C·V² + V·I(leak):

I'm assuming no voltage change at 7nm. I'm also assuming they want to increase f, as the new design is probably built for higher-frequency nominal operation. Their power vs. frequency curves for 7nm would indicate they could double the core count to 12 at the same V and f and keep the power about the same. But if they want to increase f at least 15% and add AVX, I think they can only go to 12 cores safely and maintain a power level around 200W in a 4-chip MCM, at perhaps 2.6/3.1/3.7 base / all-core / max for the 48-core flagship MCM. I don't know how much area the AVX-512 extensions take (anyone?), but without them, and if they only go 12 cores, I would expect a smaller die of maybe 180mm² each, so the AVX extensions will add to that. I also have no feel for the added power when running AVX (anyone?)
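Reading that guess through the same formula (dynamic term only, V fixed; all ratios are the assumptions above, not measured values):

```python
# Relative dynamic power for a per-die core-count and frequency change,
# with per-core C scaled by the ~7/12 linear shrink and V held fixed.
def rel_dynamic_power(core_ratio: float, c_scale: float,
                      f_ratio: float) -> float:
    return core_ratio * c_scale * f_ratio

base = rel_dynamic_power(core_ratio=12/8, c_scale=7/12, f_ratio=1.00)
fast = rel_dynamic_power(core_ratio=12/8, c_scale=7/12, f_ratio=1.15)
print(f"12C, same f : {base:.2f}x")   # ~0.88x -> some headroom left
print(f"12C, f +15% : {fast:.2f}x")   # ~1.01x -> roughly power-neutral
# Pushing to 16 cores with the same +15% f would land at ~1.34x, which
# is why 16C looks risky without a voltage drop.
```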
Anyway, that's my best guess and my reasons. Comments and your predictions, anyone? And in regards to Intel, I think these are the relevant comparisons to make my point:


Core i7 8700K: 6C/12T, 3.7GHz base / 4.7GHz turbo, 12MB L3, 95W, 14nm++
Core i7 7700K: 4C/8T, 4.2GHz base / 4.5GHz turbo, 8MB L3, 91W, 14nm+
R7 2700X: 8C/16T, 3.7GHz base / 4.3GHz turbo, 16MB L3, 105W, 12nm
R5 1600X: 6C/12T, 3.6GHz base / 4.1GHz turbo, 16MB L3, 95W, 14nm




Let's look at what they did going from 14nm+ to 14nm++. Intel claims another 10% additional drive current with 14nm++, from a purported 4nm increase in fin height (42nm to 46nm) and maybe small further optimizations of short-channel effects or strain. And they increased contacted poly pitch from 70nm to 84nm. Why the increase? For one thing (among others), it decreases the PC-to-contact capacitance in all the logic blocks, which apparently is significant.

So, from active power ≈ ½·f·C·V²: they added a lot of C by adding two more cores (maybe 30% more on a total-chip basis) but reduced C in all the logic blocks a bit by relaxing the contacted poly pitch. They dropped all-core frequency SIGNIFICANTLY, from 4.2GHz to 3.7GHz (a 12% decrease), even though all-core frequency from the device improvement alone could have gone up perhaps 6% had they not added two more cores (it went up 4.5% on single core). Finally, they increased power to 95W (5%). Don't get hung up on the exact numbers, but everything they did goes in the right direction to support my contention that power per core is the REAL issue. It is not yield alone, because at 14nm, where yield is very good, Intel could easily have gone with a larger chip, 8 cores instead of 6, IF they could have maintained 3.7GHz all-core at about 100W. They could not. AMD does 8 cores at 3.7GHz all-core while Intel can only do 6 at 3.7GHz all-core, and we know that if they increased the core count to 8, the all-core frequency would have to come down significantly AGAIN, as it did from 4 cores to 6. Also note that the power-per-core issue bites Intel hard in the high-core-count / high-frequency / high-performance regime, but is masked for low power and mobile, where I bet they drop the supply voltage significantly (remember the V² dependence of power).

This 14nm++ device IS the 10nm device; there will be NO additional device improvement going to 10nm. So the power-per-core reduction must come from scaling alone. But 10/14 linear scaling is only 71%, so ~30% off C does not allow them to double the number of cores, because C must be cut in half at the same f and voltage to maintain approximately the same power. BUT they scale the logic quite a bit more by adding the two process features at 10nm, the single dummy gate and contact over active gate, which could improve logic density by maybe 15% beyond what they would get from the 10/14 linear scaling alone (roughly equivalent to a further 8% linear reduction in the logic). Even if EVERYTHING worked at 10nm, Intel can't get close to doubling the number of cores, while, based on the power vs. frequency curves AMD showed, AMD at least could get close. But I think now they play it safe and only go to 12 cores per chip, with a basic design (Zen 2) aimed at higher frequency, increasing f perhaps 15%. Those power vs. frequency curves must have had Intel dropping turds everywhere.

Does some/all/any of this make sense to you guys? Comments please. You guys hiding??
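A quick sanity check of that reasoning with the same ½fCV² model; the +30% total-C figure is the estimate above, not a measurement:

```python
# 8700K (6C, 14nm++) vs 7700K (4C, 14nm+) under P ~ f*C*V^2, V fixed.
c_ratio = 1.30             # guess: two extra cores ~ +30% switched C
f_ratio = 3.7 / 4.2        # all-core frequency drop
predicted = c_ratio * f_ratio
actual = 95 / 91           # TDP ratio
print(f"predicted {predicted:.2f}x vs actual {actual:.2f}x")
# Predicted ~1.15x vs actual ~1.04x: the shortfall is what the relaxed
# contacted poly pitch (lower C per block) and the 14nm++ device
# improvements have to account for, consistent with the argument above.
```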
 
Last edited:

Beemster

Member
May 7, 2018
34
30
51
youtube.com

AMD's "Starship". It's also known as Rome.

It's 48C / 96T on 7nm technology. It's coming to vendors near you in early 2019.

Nothing's Gonna Stop Us Now.
 
Last edited:

TheGiant

Senior member
Jun 12, 2017
748
353
106
I wonder about the performance penalty of those moar coarz....

As I understand it, Intel is going with a monolithic die and the mesh because at 20+ cores, things like latencies etc. are going to show more than pure core performance.

Are there any benchmarks that show the monolithic-die advantage over the AMD lego style? The direct question is: are AMD's 48 cores more than Intel's 3x cores?

Whatever the answer is, AMD went the right marketing way: moar coarz means "more powerful" even if it's not, especially from the corporate-manager approval helicopter view (where everything other than the result is unimportant).

Let's see how it ends
 

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
How would monolithic help in any way vs. MCM with regard to the scalability of core performance? It all depends on the interconnects in both cases. And if nothing else, AMD has shown that, relative to Intel so far, their higher-core-count chips sustain all-core frequencies closer to those of their lower-core-count chips, even if the absolute frequencies are behind.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
I wonder about the performance penalty of those moar coarz....

As I understand it, Intel is going with a monolithic die and the mesh because at 20+ cores, things like latencies etc. are going to show more than pure core performance.

Are there any benchmarks that show the monolithic-die advantage over the AMD lego style? The direct question is: are AMD's 48 cores more than Intel's 3x cores?

Whatever the answer is, AMD went the right marketing way: moar coarz means "more powerful" even if it's not, especially from the corporate-manager approval helicopter view (where everything other than the result is unimportant).

Let's see how it ends

Almost certainly depends on the use case. If you want a server for heavy multi-threaded computations, where all n cores work on the same task? Then yes, inter-core latency probably matters, and a monolithic die probably has an advantage.

However, if your server is just a run-of-the-mill application server that will host a lot of VMs, each running a different application server, then it doesn't matter, as the cores don't really need to exchange data across the different dies.
 

Gideon

Golden Member
Nov 27, 2007
1,625
3,649
136
Almost certainly depends on the use case. If you want a server for heavy multi-threaded computations, where all n cores work on the same task? Then yes, inter-core latency probably matters, and a monolithic die probably has an advantage.

However, if your server is just a run-of-the-mill application server that will host a lot of VMs, each running a different application server, then it doesn't matter, as the cores don't really need to exchange data across the different dies.
Yeah, I still find it ridiculous that at least two guys here were seriously arguing that EPYC will gain 0% server market share (it will remain at 0%, really?!) and that there are zero use cases for the additional PCIe lanes or memory bandwidth on offer, or at the very least that all of that is countered by the additional latency :p

By the same logic, all dual-socket Intel Xeon systems should be useless as well, because of the terrible latency between the two sockets. The fact is that for caching servers (memory bound) or storage (PCIe bound) there is no real alternative to EPYC, at least if you're buying new servers anyway. The same is true for most virtual machines running simple web services (very few of those even need 4 cores per instance, which is just as fast on EPYC).

Obviously there are many use cases that do still work better with Xeon (many transactional databases, for instance), but many of the arguments here were totally delusional.

If AMD continues to build up the ecosystem as they are currently, and manages to deliver on 7nm EPYC, Intel will lose considerable server market share.
 
  • Like
Reactions: raghu78

Beemster

Member
May 7, 2018
34
30
51
LOOK AT THIS. This is the NEW EPYC-based 2RU Cisco box. It contains (4) 2-socket nodes, or 4 × 64 cores/node = 256 cores.

The corresponding Intel 2RU configuration contains (2) 2-socket nodes, or 2 × 56 cores/node = 112 cores.

The corresponding Intel 4RU configuration contains (4) 2-socket nodes, or 4 × 56 cores/node = 224 cores.

The EPYC chip is 32 cores with a 180W TDP.
The Intel chip is 28 cores with a 205W TDP.

EPYC has 14% more cores per chip. EPYC has 128% more cores per 2RU configuration.

This is EXACTLY how Cisco advertises the EPYC-based C4200 rack server against their own Intel-based C240 M5 rack server. WOW!

Both chips are at 14nm. It's roadkill now. But wait till 7nm. AMD takes NO PRISONERS!
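A quick check of the core-count math in that comparison:

```python
# Cores per 2RU box: nodes x sockets/node x cores/socket.
epyc_2ru  = 4 * 2 * 32    # 4 nodes x 2 sockets x 32-core EPYC = 256 cores
intel_2ru = 2 * 2 * 28    # 2 nodes x 2 sockets x 28-core Xeon = 112 cores
print(f"per chip: {(32 / 28 - 1) * 100:.1f}% more cores")          # ~14.3%
print(f"per 2RU : {(epyc_2ru / intel_2ru - 1) * 100:.1f}% more")   # ~128.6%
```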


AMD's 2RU box
1527498747591.png



datasheet-c78-739279_0.jpg

Intel 2RU rack


datasheet-c78-739291_0.jpg


Intel 4RU rack
 
Last edited: