Question How ahead is Intel in CPU design compared to AMD?

Status
Not open for further replies.

Adonisds

Member
Oct 27, 2019
98
33
51
If we could get both companies to design a CPU core now on the same process and using the same number of transistors, who would be ahead and by how much? I'm assuming Intel would be ahead because Skylake has a similar IPC to Zen 2, but Zen 2 uses a more dense node, and Skylake is from 2015 while Zen 2 is from 2019.

The both companies using the same process scenario is just to illustrate the problem and I know it will never happen, but maybe there is a smart way to compare current CPU designs and find out who is ahead and by how much. Wikichip has die sizes and die pictures of Sunny Cove and Zen 2, but I'm not sure how to interpret and compare them.

Another question: If Intel had to design future cores using the same process and the same die size as Skylake, how much further could the Skylake design be improved? Also assuming it would have to be x86 and have all the same functions.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Games are not common workloads; you can't design a CPU to be good at all games. Each game has its own characteristics, and their workloads are effectively random.

As Zucker2K said, a low latency memory subsystem is very beneficial to gaming. Actually, low latency memory access is extremely beneficial to the vast majority of end consumer applications in general.

That's why the majority of mainstream CPUs use dual-channel memory. More memory channels are almost useless for the average consumer, but for servers and databases bandwidth is also immensely important, which is why those CPUs come with lots of memory channels.

Contrast that with categories of applications: all the file archiving software, all the image processing software, all the video processing software, all the CAD software, all the database software, and so on. Within each group, the applications share the same kind of data workload.

You can predict those better when designing a CPU. But you want Intel to waste time making a CPU that works very well with Doom and is also good with The Sims; they don't do that.

All I said is that Intel's design decisions have been very beneficial for gaming and that they do application testing during development. There's nothing controversial about this statement. Do you think these engineers are stupid or something, that they don't know what works? But if you won't take my word for it, take the word of Ronak Singhal, the lead architect for Sunny Cove. In this interview with Gamers Nexus, he SPECIFICALLY says, and I quote:

"When we're doing our analysis, what kind of workloads do I look at to say how do I tell if I'm actually increasing performance? So we look at a lot of GAMES, a lot of multimedia applications, we look at a lot of things also on the server side."

Starts at 4:00


Now you are confusing bad marketing with CPU design.

You might consider it to be bad marketing, but the intention is clear on AMD's part. They are obviously catering to the gaming crowd.

Skylake-X is asking about you.

I already answered that several pages ago. Skylake-X is slower in games for specific reasons, namely the mesh interconnect, the smaller (and slower) L3 cache per core, and a non-inclusive L3 cache. However, these can be overcome somewhat with tweaking, and Skylake-X can become a really good gaming CPU.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
What if Zen 3 CPUs give AMD the same gaming performance increase over Zen 2 that Zen 2 gave them over Zen+?

How will Intel respond to that, considering that in that scenario they would be 100% behind AMD in all cases?

I've been saying all along in this thread and others that Zen 3 will be AMD's moment to truly break Intel. Zen 2 brought them to parity for the most part, but Zen 3 is the real shell shocker.

As for what Intel will do, I haven't the slightest clue. Intel will survive and thrive though no matter how hard of a bitch slap Zen 3 gives them because of their size, wealth and influence.

Whatever happens, you can't count on Intel to stay down for too long.
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
All I said is that Intel's design decisions have been very beneficial for gaming and that they do application testing during development. There's nothing controversial about this statement. You think these engineers are stupid or something, that they don't know what works. But if you won't take my word, take the word of Ronak Singhal, the lead architect for Sunny Cove.

That statement is not controversial, it is just wrong because it confuses intent with coincidence. The only thing gamers get from Intel and AMD both is overclocking. Every other bit of gaming performance is just spillover from design decisions made for actual big money customers.

As for why an Intel honcho isn't going to spill that cold hard truth to gamers while being interviewed by "Gamers Nexus", well like you said, he isn't stupid.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
AHAHAHAHAHAHAHAHAHA

Even ignoring the faith statement on inevitable process leadership (which is 100% delusional), everything you said is utter bunk.

If you're going to quote me, at least do it properly. I said process parity or superiority. Given Intel's historic leadership in process node, I can't see how you think I'm being delusional for expecting them to at least return to being competitive on that front.

High clock speed and high IPC work against each other, your statement is akin to saying the goal is "high performance", i.e. it is nonsense.

Yes, high clock speed and high IPC work against each other, but they're not mutually exclusive. There's a sweet spot, and past that diminishing returns.

An integrated memory controller and SMT both hide memory latency, which from what I understand is the main reason higher clock speeds lower IPC and vice versa. Those are just two examples, and I'm sure there are many other techniques already in use, patented, or being researched.
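The clock-vs-IPC tension can be put into a toy model (all numbers below are illustrative assumptions, not measurements of any real core): with DRAM latency fixed in nanoseconds, the stall penalty measured in cycles grows with clock frequency, so effective IPC falls even while total throughput still inches up.

```python
# Toy model: effective IPC falls as the clock rises while DRAM latency stays fixed.
# base_ipc, miss rate, and latency are illustrative assumptions, not real measurements.
def effective_ipc(base_ipc, freq_ghz, misses_per_instr, mem_latency_ns):
    compute_cpi = 1.0 / base_ipc                               # cycles per instruction with no stalls
    stall_cpi = misses_per_instr * mem_latency_ns * freq_ghz   # fixed ns latency costs more cycles at higher clocks
    return 1.0 / (compute_cpi + stall_cpi)

for f in (3.0, 4.0, 5.0):
    ipc = effective_ipc(base_ipc=2.0, freq_ghz=f, misses_per_instr=0.005, mem_latency_ns=80)
    print(f"{f:.1f} GHz: effective IPC {ipc:.2f}, throughput {ipc * f:.2f} giga-instr/s")
```

With these assumed numbers the throughput gain from 4 GHz to 5 GHz is only about 5%, the diminishing-returns region described above; SMT and an integrated memory controller attack the stall term rather than the compute term.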

- Large monolithic dies are dead for obvious reasons.

Does this go for desktop CPUs as well?

Gaming performance is at best an afterthought, if it is even considered at all.

See the YouTube video I posted above. It's not an afterthought at all, according to one of Intel's lead architects.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
That statement is not controversial, it is just wrong because it confuses intent with coincidence. The only thing gamers get from Intel and AMD both is overclocking. Every other bit of gaming performance is just spillover from design decisions made for actual big money customers.

As for why an Intel honcho isn't going to spill that cold hard truth to gamers while being interviewed by "Gamers Nexus", well like you said, he isn't stupid.

Right, so I should trust the word of a nameless forumite armchair CPU architect over an actual CPU architect who has been in the profession for over two decades. Not happening. :cool:

And why would he be stupid to tell the truth? It's not some marketing interview. He's just giving the viewers some technical insight into how the design and optimization process occurs.
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
If you're going to quote me, at least do it properly. I said process parity or superiority. Given Intel's historic leadership in process node, I can't see how you think I'm being delusional for expecting them to at least return to being competitive on that front.

History is meaningless. Intel is behind. I have my reasons to believe they will stay behind in the foreseeable future. And in my humble opinion, Intel's historic process supremacy is the single most important factor in its competitiveness in this market. You can extrapolate their lack of supremacy to the market landscape in the next five years.

Yes, high clock speed and high IPC work against each other, but they're not mutually exclusive. There's a sweet spot, and past that diminishing returns.

Then why did you say you can have both without qualifications?

SMT hides memory latency

No, it doesn't.

Does this go for desktop CPUs as well?

It's possible. Given a choice between more parts to sell versus anything else, I know what Intel and AMD will pick.
 
Last edited:

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
Right, so I should trust the word of a nameless forumite armchair CPU architect over an actual real CPU architect that has been in that profession for over two decades. Not happening. :cool:

Well dude, I sat in an office chair and worked in CPU design for a decade and a half before moving on to better things. But you can believe whatever you want.

It's not some marketing interview.

I beg to differ. My take is that you are reading into that single sentence way too much.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
And why would he be stupid to tell the truth? It's not some marketing interview. He's just giving the viewers some technical insight into how the design and optimization process occurs.
He said multiple times that Sunny Cove will go into a wide range of products, including standard PCs (he mentioned desktops separately too, for some reason), among a lot of other things. That right there was the first straight-up lie, in so many ways.
The vulnerability mitigations part is a pure gem too. After staying deliberately silent and building cores on degenerate speculation for 5-10 years, suddenly a pretty, professional-looking, deep-blue .ppt slide that says 'ongoing research to help us stay ahead of the evolving security landscape' is enough for you?
Should we really pick apart the whole interview?
 
Last edited:

mikegg

Golden Member
Jan 30, 2010
1,740
406
136
One of the interesting things that happened over the last 10 years is TSMC utilizing the capital made from mobile to pull ahead of Intel in process development.

In other words, AMD's recent success owes a lot to Apple pumping money into TSMC over the last 10 years.

As a pure chip manufacturer, TSMC has the same market cap as Intel. All of TSMC's R&D dollars go into manufacturing chips while Intel spends a significant portion of its budget on chip design.

TSMC is the great equalizer that AMD needed against Intel. Intel will have to win purely on chip architectural design from now on. They might have to compete from a lesser process node for a long time.

I don't see Intel ever having a node advantage over AMD again. TSMC will continue to have an advantage from now on because of how much money they get from mobile.

It won't surprise me if, in 5 years, Intel uses TSMC to manufacture their top chips.
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,682
136
I don't see Intel ever having a node advantage ever again over AMD. TSMC will continue to have an advantage from now on because of how much money they get from mobile.

It won't surprise me if, in 5 years, Intel uses TSMC to manufacture their top chips.
Beware of these prophetic temptations, we had people thinking the same thing about Intel back when they had the node advantage and other foundries simply couldn't keep up anymore. There could be only one and Intel was bound for the finals.

What we know so far (good & bad):
  • Few foundries manage to make a come-back once they fall behind. (history)
  • Pressure on Intel's business will only increase, from all vectors. (competition)
  • Intel may yet have good short term momentum with their 7nm node to get back into the game. (capabilities)
  • Intel's overall business is very profitable (including their 14nm foundries), so they do not lack resources. (funding)
I would wager we have better chances of predicting the next POTUS than the rise or fall of Intel Foundry Group in the next 5 years.
 

scannall

Golden Member
Jan 1, 2012
1,944
1,638
136
One of the interesting things that happened over the last 10 years is TSMC utilizing the capital made from mobile to pull ahead of Intel in process development.

In other words, AMD's recent success owes a lot to Apple pumping money into TSMC over the last 10 years.

All that was because Intel turned down Apple when they were putting together the iPhone. Seems like a particularly bad decision 10 years down the road.
 

moinmoin

Diamond Member
Jun 1, 2017
4,933
7,619
136
A couple days ago IC Insights published its numbers about the amount of wafers foundries handle.
[IC Insights chart: monthly wafer capacity of the top 5 companies]


These top 5 companies cover 53% of the global wafer capacity. The rest of the numbers are available in their paid report, but some have been quoted by the press elsewhere (these are from the German ComputerBase; if you know other sources adding more numbers, please share!): Intel does 817,000 wafers per month, UMC 753,000 w/m. TSMC, GlobalFoundries, UMC, SMIC and Powerchip (incl. Nexchip) as pure-play foundries cover 24% of the global wafer capacity.

----

Why I mention this here: with only a third of TSMC's wafer capacity, Intel is at a huge disadvantage. TSMC takes small node steps, which both helps ensure every step actually works and gives them a predictable yearly cadence. Intel, with its much smaller wafer capacity, still tries to go all in on major new nodes (or, as it turned out with 10nm, essentially nothing), since minor node steps just aren't financially feasible for them. But such smaller steps would be an "easy" way out of the node troubles Intel currently faces. TSMC, by all accounts, is continuing this approach, ensuring that they can keep offering improved nodes at a fairly predictable cadence, which in turn ensures the improvements add up over time and their bleeding-edge node more than likely stays ahead of the competition. While Intel's position is questionable, Samsung should still be able to compete with that, though they seem less stringent with their roadmap, leaving questions about their exact timetable.
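The cadence argument can be reduced to a toy calculation (the step gains and the slip length below are made-up assumptions, not actual foundry figures): rarer big jumps can nominally match yearly small steps, but a single schedule slip on a big jump, 10nm-style, wipes out much of the accumulated gain.

```python
# Toy comparison of density scaling: small yearly steps vs. big steps every 3 years.
# Step gains and the two-year slip are illustrative assumptions.
def density_gain(years, gain_per_step, years_per_step):
    steps = years // years_per_step          # completed node steps in the window
    return (1 + gain_per_step) ** steps

small_steps = density_gain(6, 0.20, 1)       # +20% density every year
big_steps   = density_gain(6, 0.90, 3)       # +90% density every three years
big_slipped = density_gain(6 - 2, 0.90, 3)   # same plan, but one step slips by two years

print(f"small yearly steps: {small_steps:.2f}x")
print(f"big 3-year steps:   {big_steps:.2f}x")
print(f"big steps, slipped: {big_slipped:.2f}x")
```

With these assumed numbers, the big-step plan nominally wins over six years (3.61x vs 2.99x), but a single two-year slip drops it to 1.90x, which is roughly the risk profile the post describes.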
 

awesomedeluxe

Member
Feb 12, 2020
69
23
41
Beware of these prophetic temptations, we had people thinking the same thing about Intel back when they had the node advantage and other foundries simply couldn't keep up anymore. There could be only one and Intel was bound for the finals.

What we know so far (good & bad):
  • Few foundries manage to make a come-back once they fall behind. (history)
  • Pressure on Intel's business will only increase, from all vectors. (competition)
  • Intel may yet have good short term momentum with their 7nm node to get back into the game. (capabilities)
  • Intel's overall business is very profitable (including their 14nm foundries), so they do not lack resources. (funding)
I would wager we have better chances of predicting the next POTUS than the rise or fall of Intel Foundry Group in the next 5 years.

This is so true. Intel was once an infallible titan. Now that's TSMC.

Whether or not Intel manages to get their manufacturing roadmap together does seem to be the big question for x86 processor wars. It's fun to speculate.

I think Intel took the extraordinary step of bringing Tiger Lake to CES because they know people have lost confidence in their ability to deliver. I actually am optimistic we will see Tiger Lake this year, but I don't think Intel's problems getting 10nm to scale to higher frequencies will go away.

As for 7nm... Intel claims they have EUV working. I guess they are testing it first on a GPU, which makes sense because those are a little more resistant to yield issues. I really hope it comes together for them. Like you say, there are just not that many fabs with the resources Intel has. There are bound to be serious obstacles on the road to ever-smaller transistors, but I don't want this ride to end.
 

awesomedeluxe

Member
Feb 12, 2020
69
23
41
IIRC Intel showed a Broadwell chip in action one full year before launch. Showing TGL in early 2020 while getting ready for a mid 2020 launch doesn't seem that much of an extraordinary step.
I take your point. Maybe "extraordinary" is a bit strong. But to me, Intel working with partners (under "Project Athena") culminating in an announcement at CES of 50 Tiger Lake laptops this year is pretty aggressive. The presence of an actual wafer on the floor underscores the message that Tiger Lake is here and it's happening. I think it's a positive sign that they've found their footing, at least for parts aimed at the ultrabook space.

Anecdotally, it certainly changed my perception. Pre-CES, I would not have expected to actually see much of Tiger Lake this year.
 

uzzi38

Platinum Member
Oct 16, 2019
2,565
5,572
146
I take your point. Maybe "extraordinary" is a bit strong. But to me, Intel working with partners (under "Project Athena") culminating in an announcement at CES of 50 Tiger Lake laptops this year is pretty aggressive. The presence of an actual wafer on the floor underscores the message that Tiger Lake is here and it's happening. I think it's a positive sign that they've found their footing, at least for parts aimed at the ultrabook space.
I disagree for a rather simple reason.

Renoir has over 100 designs releasing over the course of the year.

Granted, there's a difference in time-frame between the two, but it's still a big sign that - while yes, Tiger Lake is an improvement on the 35-ish devices of Ice Lake - it's not a sign of a big enough improvement in 10nm IMO. To me it says that improvements are there, they're coming, but still a little on the slow side. Certainly not what I'd call Intel finding their footing though. In the end, TGL-U will still be sharing the mobile space with CML-U and RKL-U from Intel's side, and there's a good chance that - like with Ice Lake - they'll make up a large portion of the volume.
 
  • Like
Reactions: awesomedeluxe

awesomedeluxe

Member
Feb 12, 2020
69
23
41
I disagree for a rather simple reason.

Renoir has over 100 designs releasing over the course of the year.

Granted, there's a difference in time-frame between the two, but it's still a big sign that - while yes, Tiger Lake is an improvement on the 35-ish devices of Ice Lake - it's not a sign of a big enough improvement in 10nm IMO. To me it says that improvements are there, they're coming, but still a little on the slow side. Certainly not what I'd call Intel finding their footing though. In the end, TGL-U will still be sharing the mobile space with CML-U and RKL-U from Intel's side, and there's a good chance that - like with Ice Lake - they'll make up a large portion of the volume.

Remember too that Renoir also spans higher frequencies, which count for an unknown portion of those wins. I think we agree that Intel's problems scaling cove cores on 10nm to higher core counts / frequencies are not going away -- I am only crediting Intel for their progress in the 15-28W ultrabook space, where I think they have responded to AMD effectively. A 50% increase in shader cores on your APU and direct assistance to manufacturers to make sure their ultrabooks are shipping with that new APU this year is a pretty good response, and will cement Intel's lead over AMD in terms of real-world performance for that segment.

I don't really think I substantially disagree with you or others though; you are right to be skeptical of Intel after so many years of disappointment. I may be just slightly more bullish than others on Intel's offerings this year.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,683
1,218
136
A hybrid CMP/SMT architecture is more efficient than both alone.
CMT without fixed assignment is more efficient than CMP+SMT, CMP, etc. The only issue with fixed-assignment CMT is that it doesn't utilize the whole architecture the way SMT does. So a thread that uses three ALUs under SMT is stuck with two ALUs on FA-CMT. However, that isn't an issue in a UA-CMT architecture.

Clustered architectures bypass the problem of a 2-issue (4-issue/8-issue) design being twice as slow as a 1-issue (2-issue/4-issue) design on the same technology.

Zen is 10-issue and Excavator is 11-issue. Yet on the older 28nm node, Excavator achieves a much higher Fnom than Zen while having even more resources. If Excavator weren't capped by cores being fixed to threads, a single logical thread on an unassigned Excavator could potentially execute better than on a Zen core.

CMT also has the benefit of moving the long-wire northbridge inter-core interconnect into an architectural inter-core interconnect, speeding up inter-core communication without increasing I/O power as core counts rise.
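The fixed-assignment penalty described above boils down to a one-line toy model (the ALU counts and ILP values are assumptions for illustration): per-cycle throughput is capped by whichever is smaller, the thread's instruction-level parallelism or the execution resources the thread is allowed to reach.

```python
# Toy model of issue throughput under fixed vs. unassigned resource partitioning.
# ALU counts and the thread's ILP are illustrative assumptions.
def issue_rate(thread_ilp, reachable_alus):
    # A thread can issue no more ops per cycle than it has parallelism for,
    # and no more than the ALUs its partitioning scheme lets it reach.
    return min(thread_ilp, reachable_alus)

thread_ilp = 3                        # thread could issue 3 ALU ops per cycle
print(issue_rate(thread_ilp, 2))      # FA-CMT: locked to one 2-ALU cluster -> 2 ops/cycle
print(issue_rate(thread_ilp, 4))      # unassigned/SMT-like: whole 4-ALU module -> 3 ops/cycle
```

The second case is the "unassigned" scenario in the post: the same thread recovers its full ILP once it can reach the whole module's resources.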
 
Last edited:

nicalandia

Diamond Member
Jan 10, 2019
3,330
5,281
136
CMT without fixed assignment is more efficient than CMP+SMT, CMP, etc. The only issue with fixed-assignment CMT is that it doesn't utilize the whole architecture the way SMT does. So a thread that uses three ALUs under SMT is stuck with two ALUs on FA-CMT. However, that isn't an issue in a UA-CMT architecture.

Clustered architectures bypass the problem of a 2-issue (4-issue/8-issue) design being twice as slow as a 1-issue (2-issue/4-issue) design on the same technology.

Zen is 10-issue and Excavator is 11-issue. Yet on the older 28nm node, Excavator achieves a much higher Fnom than Zen while having even more resources. If Excavator weren't capped by cores being fixed to threads, a single logical thread on an unassigned Excavator could potentially execute better than on a Zen core.

CMT also has the benefit of moving the long-wire northbridge inter-core interconnect into an architectural inter-core interconnect, speeding up inter-core communication without increasing I/O power as core counts rise.
Thanks, I am still learning about CPU design.

Here is a nice read about CMP, SMP and hybrid systems (proposed; I have yet to see one).
 

zinfamous

No Lifer
Jul 12, 2006
110,511
29,092
146
That statement is not controversial, it is just wrong because it confuses intent with coincidence. The only thing gamers get from Intel and AMD both is overclocking. Every other bit of gaming performance is just spillover from design decisions made for actual big money customers.

As for why an Intel honcho isn't going to spill that cold hard truth to gamers while being interviewed by "Gamers Nexus", well like you said, he isn't stupid.

well, the gaming industry is expected to reach ~$300 billion in value by 2025


It's already quite valuable and has been for some time now. ....so regardless of who your major client is, you don't survive as any size of a company if you ignore a $100-300 billion market when you are effectively 1 of 2 possible companies in the world with any chance to own an indispensable and determinant piece of the hardware package that is absolutely required to power that market.

The market is still primarily southeast-Asia/Asia, and a considerable amount of that market is tied to the type of games that still demand low power, single core monsters for those low-res competitive FPS, and of course serious parallel performance for the competitive RTS (which is seriously growing, I think).

Neither Intel nor AMD would scoff at this market, and not organize at least a small team of engineers dedicated to that kind of design, right? Obviously there's a lot of overlap, but architecture from the ground up already requires multiple teams involved in specific packages of each design (After listening to that fascinating Keller interview linked in the other thread), so it makes sense that there is overlap and, at least, some time dedicated for those considerations among all the teams that already have to make sure their section is going to work well with the others.

I mean....300 billion dollars. Just for games.
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
well, the gaming industry is expected to reach ~$300 billion in value by 2025


It's already quite valuable and has been for some time now. ....so regardless of who your major client is, you don't survive as any size of a company if you ignore a $100-300 billion market when you are effectively 1 of 2 possible companies in the world with any chance to own an indispensable and determinant piece of the hardware package that is absolutely required to power that market.

The market is still primarily southeast-Asia/Asia, and a considerable amount of that market is tied to the type of games that still demand low power, single core monsters for those low-res competitive FPS, and of course serious parallel performance for the competitive RTS (which is seriously growing, I think).

Neither Intel nor AMD would scoff at this market, and not organize at least a small team of engineers dedicated to that kind of design, right? Obviously there's a lot of overlap, but architecture from the ground up already requires multiple teams involved in specific packages of each design (After listening to that fascinating Keller interview linked in the other thread), so it makes sense that there is overlap and, at least, some time dedicated for those considerations among all the teams that already have to make sure their section is going to work well with the others.

I mean....300 billion dollars. Just for games.

Your numbers are way off. Go look at channel sales for DIY gamers versus data center revenue, it's orders of magnitude in favor of the latter.
 

maddie

Diamond Member
Jul 18, 2010
4,722
4,626
136
well, the gaming industry is expected to reach ~$300 billion in value by 2025


It's already quite valuable and has been for some time now. ....so regardless of who your major client is, you don't survive as any size of a company if you ignore a $100-300 billion market when you are effectively 1 of 2 possible companies in the world with any chance to own an indispensable and determinant piece of the hardware package that is absolutely required to power that market.

The market is still primarily southeast-Asia/Asia, and a considerable amount of that market is tied to the type of games that still demand low power, single core monsters for those low-res competitive FPS, and of course serious parallel performance for the competitive RTS (which is seriously growing, I think).

Neither Intel nor AMD would scoff at this market, and not organize at least a small team of engineers dedicated to that kind of design, right? Obviously there's a lot of overlap, but architecture from the ground up already requires multiple teams involved in specific packages of each design (After listening to that fascinating Keller interview linked in the other thread), so it makes sense that there is overlap and, at least, some time dedicated for those considerations among all the teams that already have to make sure their section is going to work well with the others.

I mean....300 billion dollars. Just for games.
The problem with just saying gaming is like saying productivity. Too broad a brush. Individual programs are just that, individual, with specific needs. Some games run faster with more threads, others don't.
 

mikegg

Golden Member
Jan 30, 2010
1,740
406
136
Beware of these prophetic temptations, we had people thinking the same thing about Intel back when they had the node advantage and other foundries simply couldn't keep up anymore. There could be only one and Intel was bound for the finals.

What we know so far (good & bad):
  • Few foundries manage to make a come-back once they fall behind. (history)
  • Pressure on Intel's business will only increase, from all vectors. (competition)
  • Intel may yet have good short term momentum with their 7nm node to get back into the game. (capabilities)
  • Intel's overall business is very profitable (including their 14nm foundries), so they do not lack resources. (funding)
I would wager we have better chances of predicting the next POTUS than the rise or fall of Intel Foundry Group in the next 5 years.
Sure, nothing ever happens for certain. But we know these facts:
  • TSMC is as big as Intel in market cap today
  • TSMC's whole focus is on manufacturing chips
  • TSMC manufactures more chips, which means better economies of scale for every process node they invest in
We just can't overstate the importance of an entire company, from the CEO down to the intern, being focused on manufacturing and process node improvements.

Meanwhile, Intel has its hands in many cookie jars. It lacks focus. Intel's CEO might delay a meeting on a process node so he can meet with Tim Cook to try to convince him not to go ARM for MacBooks. Boom, a few weeks' delay. A process node engineer might be pulled off to help out with chip design. Boom, one week's delay. Over time, this lack of focus adds up.

I expect TSMC's advantage over Intel to grow even more in the future - to the point where Intel cannot compete with AMD, Apple and Qualcomm chips unless they switch to TSMC themselves.

Apple is about to unleash a 5nm iPhone this year. TSMC's execution seems unstoppable.
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,682
136
Intel's CEO might delay a meeting on a process node so he can meet with Tim Cook to try to convince him not to go ARM for MacBooks. Boom, a few weeks' delay. A process node engineer might be pulled off to help out with chip design. Boom, one week's delay. Over time, this lack of focus adds up.
The entire Intel Custom Foundry management waiting for manufacturing decisions because the CEO is having tea with Cook?! Sounds like a fantasy corporate MMORPG. All major corporate CEOs juggle their time between numerous tasks. All foundries need to allocate time and focus to make sure their clients implement designs properly (more clients means more time spent with each on their products). Arguing that a standalone foundry will inherently succeed over an integrated one amounts to dismissing vertical integration as a successful business strategy, yet time and time again it has been shown that vertical integration can bring great advantage to those able to master it properly.

Nobody is unstoppable, whether they seem like that or not. In movies we hack and defeat alien battleships, in reality a virus from our own planet might push us back into recession or worse.
 
  • Haha
Reactions: lobz