How is AMD releasing 7nm CPUs next year and Intel's still stuck on 14nm?

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

epsilon84

Senior member
Aug 29, 2010
932
105
136
#26
Intel got overconfident with their ability to deliver 10nm and are now left with no choice but to milk everything out of 14nm until they sort out this mess.

In fairness to Intel, they have done well with optimising 14nm to this point, and the 9900K (or its 10th gen derivatives) should prove competitive with 8C Ryzen 3000, but AMD would win the 'moar corez' game for sure if they wanted to go down that route for desktop.
 

TheELF

Platinum Member
Dec 22, 2012
2,817
110
126
#27
That's an awfully rosy picture of how things are going at Intel these days.
That's how it is, though. All Intel had to do was add two cores to the 7700K to make the 8700K, then add two more cores to that to make the 9700K. They keep selling the same tech, which keeps getting cheaper and cheaper to produce, for the same price and a bit more each time.
No matter how big their problems with 10nm really are (or aren't), they don't seem to care much because 14nm still sells like hot cakes.
 

NTMBK

Diamond Member
Nov 14, 2011
8,281
247
126
#28
That's how it is, though. All Intel had to do was add two cores to the 7700K to make the 8700K, then add two more cores to that to make the 9700K. They keep selling the same tech, which keeps getting cheaper and cheaper to produce, for the same price and a bit more each time.
No matter how big their problems with 10nm really are (or aren't), they don't seem to care much because 14nm still sells like hot cakes.
When they are competing against rivals on 14/12nm technology, sure. Let's see how well that goes when their rivals have a manufacturing advantage.
 

epsilon84

Senior member
Aug 29, 2010
932
105
136
#29
When they are competing against rivals on 14/12nm technology, sure. Let's see how well that goes when their rivals have a manufacturing advantage.
Ignoring price points for a second, on a strictly equal core/thread level Intel is ahead of AMD by 15-20%:
https://www.techpowerup.com/reviews/Intel/Core_i9_9900K/19.html

In 'relative' terms, with the 9900K as 100% baseline:

6C/12T
2600X - 72.8%
8700K - 83.9%
Difference = 15.25%

8C/16T
2700X - 83.2%
9900K - 100%
Difference = 20.2%
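For what it's worth, those deltas fall straight out of the review's relative scores (a quick sanity check, treating the 9900K as the 100% baseline):

```python
# Relative performance figures from the TPU review, 9900K = 100% baseline.
scores = {"2600X": 72.8, "8700K": 83.9, "2700X": 83.2, "9900K": 100.0}

def lead(faster: str, slower: str) -> float:
    """Percentage by which `faster` outperforms `slower`."""
    return (scores[faster] / scores[slower] - 1) * 100

print(f"6C/12T: 8700K leads 2600X by {lead('8700K', '2600X'):.2f}%")
print(f"8C/16T: 9900K leads 2700X by {lead('9900K', '2700X'):.2f}%")
```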

Now, it is entirely possible that Zen 2 could completely wipe out this advantage. But even if Intel is merely competitive with Zen 2 on a core/thread level, wouldn't you call that impressive, considering we are talking about a (by next year) four year old architecture stuck on the 14nm process compared to a refreshed Zen architecture on a brand new 7nm process?

Sure, AMD can definitely take a competitive advantage by upping the core count to 12 or even 16 cores. But that is of questionable benefit to the average desktop consumer. I've said it before, that if the rumoured leaks are true (or at least partially true in terms of core count) then I am most excited by the $100 6C/12T Zen 2 chips because that will bring an unprecedented level of value to the desktop market that we haven't seen for a very long time.
 

Qwertilot

Golden Member
Nov 28, 2013
1,411
37
106
#30
Quite a power draw gap isn't there? Intel can do that well but the 9900k is driven wayyy past that.

I've got the impression that Zen was quite efficient.
 

epsilon84

Senior member
Aug 29, 2010
932
105
136
#31
Quite a power draw gap isn't there? Intel can do that well but the 9900k is driven wayyy past that.

I've got the impression that Zen was quite efficient.
Not in TPU's testing:
https://www.techpowerup.com/reviews/Intel/Core_i9_9900K/16.html

Of course, power figures vary depending on the review. In general the 8700K is more efficient than the 2600X, and the 9900K trades blows with the 2700X, but both are clocked above their optimal efficiency window, especially the 'unlimited TDP' 9900K @ 4.7GHz.

Zen is most efficient at low-to-mid 3GHz clocks, hence the 65W 1700 and 2700 chips. Push these to 4GHz+ (3.7 - 3.8GHz on original Zen) and power consumption quickly climbs into 100W+ territory, i.e. the 1700X/1800X and 2700X chips.

Of course, Intel 14nm also has an optimal efficiency window, and that is in the low 4GHz range. Think 65W i7 8700 or '95W' 9900K under strict TDP limits: https://www.gamersnexus.net/guides/3389-intel-tdp-investigation-9900k-violating-turbo-duration-z390
 
Apr 27, 2000
11,480
818
126
#32
No matter how big their problems with 10nm really are (or aren't) they don't seem to care much because 14nm still sells like hot cakes.
That worked out great for Krzanich. Oh wait, no it didn't.

Ditto for many of his former lackeys.

Heads are rolling at Intel. Coincidence? I think not.
 

ozzy702

Senior member
Nov 1, 2011
936
166
136
#33
Quite a power draw gap isn't there? Intel can do that well but the 9900k is driven wayyy past that.

I've got the impression that Zen was quite efficient.
Not when looked at from a performance/power perspective. The AMD fanatics seem to think that somehow Zen 2 will clock past 5GHz but sip power. Guess what: even on a cutting edge process, it's going to suck down juice clocked that high. It may suck a great deal less than the 9900K (I believe it will be stellar), but that remains to be seen.

The 9900K is actually extremely power efficient in terms of work done when power consumption is limited to reasonable levels. Yes, that means lower clocks that put it in a more efficient range, but we're still talking mid-to-high 4GHz on all eight cores. Pretty amazing considering how old the architecture and fab are.
 

ozzy702

Senior member
Nov 1, 2011
936
166
136
#34
That worked out great for Krzanich. Oh wait, no it didn't.

Ditto for many of his former lackeys.

Heads are rolling at Intel. Coincidence? I think not.
Hopefully enough heads roll that Intel comes back with a vengeance, becomes leaner, more focused and more innovative. I want to see 10+ cores, new architectures with larger L2, L3 and maybe even L4 paired with DDR5 and PCIE 5.0 in 2021/2022 when I'm due for my next upgrade.

If AMD does the above, or something similar, they will of course be an option assuming they get their latency problems fixed and gaming performance is at parity or above Intel's.
 

Atari2600

Senior member
Nov 22, 2016
731
171
106
#35
Ignoring price points for a second, on a strictly equal core/thread level Intel is ahead of AMD by 15-20%:
https://www.techpowerup.com/reviews/Intel/Core_i9_9900K/19.html

In 'relative' terms, with the 9900K as 100% baseline:
Compare something that matters - Xeon vs. EPYC.

Enthusiasts rattling on about Core X vs. Ryzen7/Threadripper are like folks on the Titanic arguing about rearranging the deckchairs after the iceberg has ripped the side out of the ship.

Profit (not revenue) from professional lines dwarfs all others.


When EPYC is run at ~3.0 GHz, Intel really struggle to get near in terms of perf/power in non AVX workloads.
 

TheGiant

Senior member
Jun 12, 2017
348
35
76
#36
Not when looked at from a performance/power perspective. The AMD fanatics seem to think that somehow Zen2 will clock past 5ghz but sip power. Guess what, even on a cutting edge process, it's going to suck down juice clocked that high. It may suck a great deal less than the 9900k (I believe it will be stellar), but that remains to be seen.

The 9900k is actually extremely power efficient in terms of work done when limiting power consumption to reasonable levels. Yes that means lower clocks that put it in a more efficient range but we're still talking mid to high 4ghz on all eight cores. Pretty amazing considering how old the architecture and fab is.
Well, there are no believable leaks about the Zen 2 7nm parts yet.
And I agree with you on the 9900K, best chip in years, but in my country it still sells way above MSRP, so I'm waiting to get one; the Christmas bundles don't seem to cover this chip ATM :)
 

epsilon84

Senior member
Aug 29, 2010
932
105
136
#37
Compare something that matters - Xeon vs. EPYC.

Enthusiasts rattling on about Core X vs. Ryzen7/Threadripper are like folks on the Titanic arguing about rearranging the deckchairs after the iceberg has ripped the side out of the ship.

Profit (not revenue) from professional lines dwarfs all others.


When EPYC is run at ~3.0 GHz, Intel really struggle to get near in terms of perf/power in non AVX workloads.
High margin / low volume vs low margin / high volume. Interesting to see what the profit breakdown of that is, do you have stats proving enterprise profits eclipse everything else AMD makes?

Regardless, the majority of forum members here are as you rightly called it, enthusiasts. I make no apologies for caring more for Zen 2 than EPYC, since the latter means absolutely nothing for my own personal computing. I would be happy for AMD to make more profits from enterprise, they deserve all the success they will get there. Nevertheless, my own purchasing decisions will not be based on Xeon vs EPYC comparisons, it will be based on how Zen 2 compares with my 8700K and whether it would be worthwhile for me to upgrade next year.
 

Atari2600

Senior member
Nov 22, 2016
731
171
106
#38
High margin / low volume vs low margin / high volume. Interesting to see what the profit breakdown of that is, do you have stats proving enterprise profits eclipse everything else AMD makes?
Actually, I have to correct myself!


For Intel:
In Q2 2018, client had a revenue of $8.7B, with operating income of $3.2B (margin = 37%).
In Q2 2018, enterprise had a revenue of $5.5B with operating income of $2.7B (margin = 49%).
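Those margin figures check out against the revenue and operating income numbers (a quick back-of-the-envelope verification):

```python
# Operating margin = operating income / revenue (Intel Q2 2018 figures above).
segments = {
    "client":     {"revenue": 8.7, "op_income": 3.2},  # $B
    "enterprise": {"revenue": 5.5, "op_income": 2.7},  # $B
}

for name, seg in segments.items():
    margin = seg["op_income"] / seg["revenue"] * 100
    print(f"{name}: {margin:.0f}% operating margin")  # client ~37%, enterprise ~49%
```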

I'm shocked that the enterprise margin isn't something like 3-5x that of client - does this point toward surprisingly crap yield rates for the LCC Xeons?

With EPYC having much less market recognition, no doubt AMD's results will be worse.


I'll go sit in the corner and keep quiet for a while :oops::D
 
Jun 23, 2004
27,463
598
126
#39
while trying to dial in 7nm...when they couldn't even dial up 10nm.
On that note, it is possible that underlying tech / production is different for 10nm and 7nm. A failure in one line does not necessarily mean a delay or failure in the other.
 

Mopetar

Diamond Member
Jan 31, 2011
4,465
390
126
#40
On that note, it is possible that underlying tech / production is different for 10nm and 7nm. A failure in one line does not necessarily mean a delay or failure in the other.
If this were completely true, Intel would probably have a 7nm rollout very soon based on their initial schedule for 10nm. I do believe that they always planned on using EUV for 7nm though, so there are some differences.
 
Apr 27, 2000
11,480
818
126
#41
Hopefully enough heads roll that Intel comes back with a vengeance, becomes leaner, more focused and more innovative. I want to see 10+ cores, new architectures with larger L2, L3 and maybe even L4 paired with DDR5 and PCIE 5.0 in 2021/2022 when I'm due for my next upgrade.

If AMD does the above, or something similar, they will of course be an option assuming they get their latency problems fixed and gaming performance is at parity or above Intel's.
I agree that Intel needs to step up their game. The PC world has been a lot less interesting over the last few years, thanks to some of their problems.

Not when looked at from a performance/power perspective. The AMD fanatics seem to think that somehow Zen2 will clock past 5ghz but sip power.
5 GHz keeps showing up in leaks, but I'll believe it when I see it. 4.6 GHz seems more rational as a boost cap for normal cooling (read: not LN2/LHe/phase change)

Actually, I have to correct myself!


For Intel:
In Q2 2018, client had a revenue of $8.7B, with operating income of $3.2B (margin = 37%).
In Q2 2018, enterprise had a revenue of $5.5B with operating income of $2.7B (margin = 49%).

I'm shocked that the enterprise margin isn't something like 3-5x that of client - does this point toward surprisingly crap yield rates for the LCC Xeons?

With EPYC having much less market recognition, no doubt AMD's results will be worse.


I'll go sit in the corner and keep quiet for a while :oops::D
It's hard to get an exact read on how well Intel is doing due to the way they report unit sales. Or rather, due to the fact that they often obscure unit sales for anything other than the client computing group. I don't know that Intel would have to have poor yields on their larger dice to explain their discrepancy (though it might explain it). If you harvest nothing but massive 28-core dice off a wafer, one would think that you'd get more waste on the edges, even with perfect yields.
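The edge-waste intuition can be sketched with the classic dies-per-wafer approximation; the die areas below are rough illustrative guesses (a chiplet-sized ~75mm² die vs. a large ~700mm² monolithic die), not official figures:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300):
    """Classic dies-per-wafer approximation: a gross area term minus an
    edge-loss term that grows with die size. Returns (gross, net)."""
    d = wafer_diameter_mm
    gross = math.pi * d**2 / (4 * die_area_mm2)
    edge_loss = math.pi * d / math.sqrt(2 * die_area_mm2)
    return gross, gross - edge_loss

for area in (75, 700):  # small chiplet vs. large monolithic die (illustrative)
    gross, net = dies_per_wafer(area)
    print(f"{area:4d} mm^2: ~{net:.0f} dies/wafer, edge loss ~{1 - net/gross:.0%}")
```

Even with perfect defect yields, the fraction of the wafer lost at the edge roughly triples going from the small die to the large one, which supports the point above.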
 

poohbear

Platinum Member
Mar 11, 2003
2,285
0
81
#42
Interesting, thanks for the insight guys.

AMD does not do much work in foundry processes if at all; any out-engineering AMD did is all in the chip architecture, and how it fits into the process TSMC developed.

Another point that I think many have failed to realize is that Intel measures their processes differently than TSMC, Samsung, and GF. At this point 7nm, 10nm, 12nm, and 14nm are just marketing terms. It's been said that TSMC 12nm or something is comparable to Intel's 14nm processes (or something like that).

Who knows how TSMC 7nm will compare with Intel's 7nm?
How is it a marketing term? For consumers? Because they barely know the difference between a CPU & GPU, let alone 7nm & 10nm. Samsung & Qualcomm are already creating 7nm CPUs for mobile; are those also different?
 

Thunder 57

Senior member
Aug 19, 2007
648
117
136
#43
Interesting, thanks for the insight guys.


How is it a marketing term? For consumers? Because they barely know the difference between a CPU & GPU, let alone 7nm & 10nm. Samsung & Qualcomm are already creating 7nm CPUs for mobile; are those also different?
My guess? Investors, and power users that people go to for computer advice. Your average Joe doesn't have to know what it means, just that people they trust said it's better.
 

TheELF

Platinum Member
Dec 22, 2012
2,817
110
126
#44
Interesting, thanks for the insight guys.


How is it a marketing term? For consumers? cause they barely know the difference between a CPU & GPU let alone 7nm & 10nm. Samsung & Qualcomm are already creating 7nm CPUs for mobile, those are also different?
Consumers are being told that fewer nm is better, and that's the only thing they need to know.

For anybody else: Intel keeps using the established definition of node size, while other manufacturers attach smaller node names than what they actually produce (by that established meaning) as a marketing gimmick aimed at chip designers.
https://semiengineering.com/nodes-vs-node-lets/
Things began to fall apart after 28nm, however. Intel continues to follow the 0.7X scaling trend. But at 16nm/14nm, others deviated from the traditional equation and relaxed the metal pitch. “Node names used to mean something. They used to be pinned to metal pitches,” Wei said. “At some point, we started to drift away from the pitch, focusing more on the next node and features.”
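As a quick illustration of that 0.7X cadence: the familiar node names fall straight out of repeated 0.7x linear shrinks starting from 28nm (each step roughly halving area, since 0.7² ≈ 0.49):

```python
# Ideal node cadence: each generation shrinks linear dimensions by ~0.7x.
node = 28.0
for _ in range(4):
    node *= 0.7
    print(f"~{round(node)}nm")  # yields the familiar 20/14/10/7 sequence
```

The naming dispute is precisely that, past 28nm, the names kept following this sequence while the actual pitches at some foundries stopped shrinking by the full 0.7x.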
 

CatMerc

Golden Member
Jul 16, 2016
1,111
39
106
#46
It's not really a story of AMD vs Intel in the nanometer war, though obviously that's the most direct comparison we see. The one who really outdid Intel was TSMC, through a combination of reasonable and flexible goals on TSMC's side and outlandish ones from Intel, on top of Intel not having any backup plan. TSMC's 7nm is about the same as Intel's 10nm, but one is actually working while the other is... something?

As for whether Intel will be fine against 7nm chips... Have you seen the size of the 8C 7nm chiplet in Rome? Have you considered what kind of insane efficiency they're going to get out of it on 7nm?
 
Apr 27, 2000
11,480
818
126
#48
Consumers are being told that less nm is better and that's the only thing they need to know.

For anybody else,intel keeps using the definition of node size that is established while other manufacturers use smaller node names than they actually produce -in accordance to the established meaning- as a marketing gimmick to chip designers.
https://semiengineering.com/nodes-vs-node-lets/
That used to actually be a problem for anyone that was not Intel. Now TSMC and Samsung are achieving actual shrinks (even if they don't follow the 0.7x rule) while GF and Intel are not. So the real test of what is or is not good for the consumer is, "can the foundry make progress?" I don't think anyone really cares now that TSMC is fudging a little when they call their process 7nm (versus Intel's 10nm). As long as it is a shrink and as long as it works, they can call it whatever they want.
 
