Coffeelake thread, benchmarks, reviews, input, everything.


jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
I am not downplaying Intel's huge initial lead, and no doubt we are in an era of diminishing returns.

But Intel has pretty much had no recent improvements in architecture. Has the architecture team been doing nothing in the *Lake era?

They're busy fixing 10nm.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Which I noted. It's nice to get more cores, but it doesn't hide how stagnant Intel has been on IPC/process.

Well considering their 10nm struggles that shouldn't come as a surprise to anyone unless you expected a new architecture on 14nm as a Plan B which was never going to happen.

Despite this stagnation, Intel still holds a sizeable ST advantage due to higher clocks and IPC. ST was never their weakness, so the fact that it hasn't increased matters less than being behind in core count and falling behind in MT tasks, IMO.

Basically, Intel has addressed its major weakness rather than building on a strength, which I'm happy about.
 
  • Like
Reactions: frozentundra123456


PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Despite this stagnation, Intel still holds a sizeable ST advantage due to higher clocks and IPC. ST was never their weakness, so the fact that it hasn't increased matters less than being behind in core count and falling behind in MT tasks, IMO.

Unlike most people, I am not viewing everything through AMD vs Intel lenses.

I am merely pointing out the stagnation which is a very real problem, it would still be a problem if there were no AMD at all.

There is a very real question about WTF the architecture team has been doing all this time. They essentially have been running identical architecture for years.
 
  • Like
Reactions: pcp7

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
As I was saying, trying to fix 10 nm.

That's true, but this isn't a process problem per se. You need the chip basically redone.

Just to be clear, by architecture, I mean core design. Pipelines, instruction decoders, execution units, etc...

Unless a new architecture design needs a MASSIVE increase in transistors, there is no reason they couldn't implement new architecture in 14nm, and a massive increase in transistors/core is VERY unlikely, given the mobile focus of the market.

So there is really no excuse for the architecture stagnation, except that Intel is resting on its laurels.

They have essentially had the same 14-stage pipeline since Sandy Bridge; it's mostly been small tweaks since then. That in itself is not unreasonable.

But with Skylake even tweaks stopped, because all *Lake chips have essentially the exact same core design with the exact same IPC.

It looks like Intel Generation 9 will mostly be generation 8 chips rebranded, and some of those are actually generation 7 chips rebranded. But I guess since there have been no real core improvements since Skylake, that hardly matters.
 
  • Like
Reactions: elpokor

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
Unless a new architecture design needs a MASSIVE increase in transistors, there is no reason they couldn't implement new architecture in 14nm, and a massive increase in transistors/core is VERY unlikely, given the mobile focus of the market.

By the time Intel would have had to make the decision to, say, backport Icelake to 14 nm, they were still under the impression 10 nm would be fixed in a timely fashion. I don't think you quite appreciate the lead time for this stuff.
 
  • Like
Reactions: mikk

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
By the time Intel would have had to make the decision to, say, backport Icelake to 14 nm, they were still under the impression 10 nm would be fixed in a timely fashion. I don't think you quite appreciate the lead time for this stuff.

I appreciate lead times. I often bring them up when people think they're going to quickly see competing products every time some manufacturer announces an advance.

But 10nm has been a long-running train wreck, following issues at 14nm. There was external discussion of signs that 10nm was running into issues as early as 2015, and since this is all internal, Intel had far better visibility than anyone outside. These ongoing failures didn't happen in a vacuum, and this shouldn't have been a total blindside for Intel. They have had years since 2015 to do risk mitigation with a new 14nm architecture.
 
  • Like
Reactions: PingSpike

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Unlike most people, I am not viewing everything through AMD vs Intel lenses.

I am merely pointing out the stagnation which is a very real problem, it would still be a problem if there were no AMD at all.

There is a very real question about WTF the architecture team has been doing all this time. They essentially have been running identical architecture for years.

Obviously they have a new architecture in the works but it will be launched with 10nm. If there were no delays with 10nm we most likely would have got the new arch 2 years ago already.

I don't see how the ST / IPC stagnation is a real problem since almost all computationally intensive tasks are already very multi-threaded.

Can you name a task where a 4GHz+ 'Lake' chip bottlenecks the user experience because of limited ST performance? I can't think of any.
 

Vattila

Senior member
Oct 22, 2004
799
1,351
136
I don't see how the ST / IPC stagnation is a real problem since almost all computationally intensive tasks are already very multi-threaded. Can you name a task where a 4GHz+ 'Lake' chip bottlenecks the user experience because of limited ST performance?

Big models in Autodesk AutoCAD? I have heard that it is still mostly sequential, and that poor utilisation of processor cores has been an issue with much CAD software. However I may be wrong about this, and/or it may have changed by now.

Edit: Found this on Autodesk's support site, dated 2017-04:

"To fully benefit from multi-core processors, you need to use multi-threaded software; AutoCAD is predominantly a single-threaded application."

https://knowledge.autodesk.com/supp...t-for-multi-core-processors-with-AutoCAD.html

Edit 2: I also was disappointed to learn that the flight simulator DCS World does not use multi-threading and does not plan to do so (I somewhat doubt the claim about "gain" — I guess it is more about "pain"):

"CPU multi-threading is not being pursued as it will provide little if any gain."

https://forums.eagle.ru/showthread.php?t=135473

PS. Sadly, I suspect there is a lot of legacy software out there (including mine) that is not multi-threaded, and maybe never will be, due to the effort involved in rewriting the software.
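To illustrate why such legacy single-threaded code resists speed-ups from extra cores, here is a toy Python sketch (purely illustrative, nothing to do with AutoCAD's or DCS World's actual internals): a loop-carried dependency forces each step to wait for the previous one, so only faster single-thread execution helps.

```python
# A loop-carried dependency: step N needs the result of step N-1,
# so the loop is inherently sequential -- extra cores cannot help,
# only higher ST performance can. (Toy LCG update, illustrative only.)
def serial_update(state, steps):
    for _ in range(steps):
        state = (state * 1103515245 + 12345) % (2 ** 31)
    return state

# By contrast, an embarrassingly parallel task has no such chain:
# every element could be computed independently on its own core.
def square_all(values):
    return [v * v for v in values]
```

The serial function composes: running it twice for one step each equals one run of two steps, which is exactly the dependency chain that defeats parallelisation.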
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
But 10nm has been a long-running train wreck, following issues at 14nm. There was external discussion of signs that 10nm was running into issues as early as 2015, and since this is all internal, Intel had far better visibility than anyone outside. These ongoing failures didn't happen in a vacuum, and this shouldn't have been a total blindside for Intel. They have had years since 2015 to do risk mitigation with a new 14nm architecture.

Guessing Intel only realized the current 10 nm strategy wasn't going to work roughly when BK applied to sell his shares.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Obviously they have a new architecture in the works but it will be launched with 10nm. If there were no delays with 10nm we most likely would have got the new arch 2 years ago already.

I don't see how the ST / IPC stagnation is a real problem since almost all computationally intensive tasks are already very multi-threaded.

Can you name a task where a 4GHz+ 'Lake' chip bottlenecks the user experience because of limited ST performance? I can't think of any.

Resting on your laurels is seldom a good thing.

Stagnation is a signal that things are going wrong. You might not feel the performance effects today, but five years from now, when half the new Mac/Windows laptops are running ARM CPUs, you can look back for the seeds of what went wrong and see them in the years of Intel stagnation.

There is significant inertia in the industry, so you can stumble without faltering immediately, but the years of Intel stagnation may kill its PC x86 golden goose years into the future.
 
  • Like
Reactions: ozzy702

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Resting on your laurels is seldom a good thing.

Stagnation is a signal that things are going wrong. You might not feel the performance effects today, but five years from now, when half the new Mac/Windows laptops are running ARM CPUs, you can look back for the seeds of what went wrong and see them in the years of Intel stagnation.

There is significant inertia in the industry, so you can stumble without faltering immediately, but the years of Intel stagnation may kill its PC x86 golden goose years into the future.

Realistically, IPC improvements have come at a snail's pace for the past 7 years since Sandy Bridge, and a lot of those 'IPC' improvements are actually related to the increased memory bandwidth of DDR4. If you limit an 8700K to DDR3-class bandwidth, it actually won't be much faster than a 2011-era 3930K if both are clocked identically.

It's not about resting on your laurels, though, but doing what you can (feasibly, of course; everything comes down to the bottom line) with what you have. We still have no 10nm, so again, I ask: what could Intel have done in the past couple of years? Do a refresh of the 'Lake' processors and eke out an extra percent or three of IPC? What good would that have done when the competition is doubling your core count with Ryzen?

That's what I meant by Intel prioritising MT over ST performance in this CFL refresh being a good move: a few % higher IPC won't really affect anything. A 33% increase in core count, on the other hand... Intel suddenly goes from 20% behind in MT tests to 10% ahead.
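As a back-of-envelope check on those percentages (assuming idealized linear MT scaling, which real workloads won't quite reach):

```python
# Going from 6 to 8 cores, as in the CFL refresh, under idealized
# linear multi-threaded scaling. The 0.8 starting point (20% behind
# in MT tests) is the assumption from the post above.
old_cores, new_cores = 6, 8
core_gain = new_cores / old_cores - 1          # 1/3 -> the "33% more cores"
relative_mt = 0.8                              # assumed: 20% behind in MT
new_relative_mt = relative_mt * new_cores / old_cores
print(round(core_gain, 2), round(new_relative_mt, 2))  # 0.33 1.07
```

That lands around 7% ahead from core count alone; the remainder of the claimed ~10% would have to come from clock bumps or sublinear scaling on the competition's side.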
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Realistically, IPC improvements have come at a snail's pace for the past 7 years since Sandy Bridge, and a lot of those 'IPC' improvements are actually related to the increased memory bandwidth of DDR4. If you limit an 8700K to DDR3-class bandwidth, it actually won't be much faster than a 2011-era 3930K if both are clocked identically.

It's not about resting on your laurels, though, but doing what you can (feasibly, of course; everything comes down to the bottom line) with what you have. We still have no 10nm, so again, I ask: what could Intel have done in the past couple of years? Do a refresh of the 'Lake' processors and eke out an extra percent or three of IPC? What good would that have done when the competition is doubling your core count with Ryzen?

That's what I meant by Intel prioritising MT over ST performance in this CFL refresh being a good move: a few % higher IPC won't really affect anything. A 33% increase in core count, on the other hand... Intel suddenly goes from 20% behind in MT tests to 10% ahead.

So that's it? IPC has halted on the architecture side. There are no more improvements to be had from increasing the issue width of the processor. Nothing from pipeline improvements. Nor from increasing parallel back-end functional units. No improvements in branch prediction or scheduling.

Nothing more can be done? This is peak architecture?

I'll agree there have been marginal architecture gains since Sandy Bridge, that was my point. Intel is just tweaking the same architecture rather than doing something substantive that might increase performance like increasing issue width, or add more parallel back end units.

I have a hard time believing there are no more significant gains to be had with better architecture.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
So that's it? IPC has halted on the architecture side. There are no more improvements to be had from increasing the issue width of the processor. Nothing from pipeline improvements. Nor from increasing parallel back-end functional units. No improvements in branch prediction or scheduling.

Nothing more can be done? This is peak architecture?

I'll agree there have been marginal architecture gains since Sandy Bridge, that was my point. Intel is just tweaking the same architecture rather than doing something substantive that might increase performance like increasing issue width, or add more parallel back end units.

I have a hard time believing there are no more significant gains to be had with better architecture.

No, that's not what I'm saying. Of course there are improvements to be made - I'm saying those improvements should have come in 2016 with 10nm and a new uarch if Intel had continued with the 'tick-tock' approach. Instead, here we are in 2018 and still waiting on 10nm... that is the real reason for the stagnation. I'm not absolving Intel of any fault here; they clearly f*cked up on 10nm, and this has a knock-on effect on future products too, as you said. I'm just saying that increasing core count in an architecture that already has (by industry standards) high IPC / clockspeeds makes the most sense as Intel's 'Plan B' to stay competitive whilst on 14nm. They were lacking in cores, not ST performance.
 

jpiniero

Lifer
Oct 1, 2010
14,510
5,159
136
I'll agree there have been marginal architecture gains since Sandy Bridge, that was my point. Intel is just tweaking the same architecture rather than doing something substantive that might increase performance like increasing issue width, or add more parallel back end units.

That's what Sapphire Rapids is rumored to be, the substantial improvement you are looking for. Based upon the Lenovo leak it was originally scheduled to be released next year. Obviously it won't now.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
No, that's not what I'm saying. Of course there are improvements to be made - I'm saying those improvements should have come in 2016 with 10nm and a new uarch if Intel had continued with the 'tick-tock' approach. Instead, here we are in 2018 and still waiting on 10nm... that is the real reason for the stagnation. I'm not absolving Intel of any fault here; they clearly f*cked up on 10nm, and this has a knock-on effect on future products too, as you said.

There is nothing magical about 10nm. A new architecture that could be implemented at 10nm could be implemented at 14nm.

It's not like 10nm is 6 months late. It's years late. That's plenty of time to Plan B a new architecture onto 14nm.

Unless the architecture is also running late, and that gets hidden by the very late process side.

I'm just saying that increasing core count in an architecture that already has (by industry standards) high IPC / clockspeeds makes the most sense as Intel's 'Plan B' to stay competitive whilst on 14nm. They were lacking in cores, not ST performance.

It isn't about still having the fastest x86 in town. It's about convincing your most important partners (Apple and Microsoft) that you have forward momentum, so they don't need to seriously consider moving to ARM - and on that front, Intel may have already dropped the ball for too long. If Apple and Microsoft get serious about shifting to ARM or going dual-architecture, it's easy to see a future five years from now where most new Windows/Apple laptops are running ARM chips.

So 'good enough' IPC plus some more cores can be viewed as important in the short term, but it loses them the Apple/Microsoft laptop business in the longer term.

That hardly makes more cores the most important thing to do.
 
Last edited:
  • Like
Reactions: Lodix

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
For major architectural changes that make a noticeable impact on IPC, you really have to do something special. x86 has some inherent limitations in its implementation. While most modern x86-compatible CPUs use an instruction decoder that essentially converts x86 CISC instructions into a proprietary internal RISC format, and anything that demands very high performance has access to a slew of higher-level instructions with their own quirks, fundamentally there's only so much that a CPU can do per cycle with a CISC instruction. If you look at the last 10 years of x86 development, the focus has been on improving and removing instruction and data throughput bottlenecks. The very strategies they've been using for that have proven to be problematic from a security standpoint and will likely require performance-harming extra checks going forward on all major computing platforms.

Look around the industry outside of x86. There are no other architectures out there that do significantly more actual computing work per cycle than what Intel and AMD have with their top-of-the-line x86 CPUs. If you look at the actual cores and the RISC-like instructions they actually work with, they're all moving similar amounts of instructions and data per cycle per core when well fed with data and instructions.

So, given that perspective, where does the extra performance come from? Clock speeds are not predicted to go far north of 5 GHz on any current or predicted near-future process without exotic cooling. Future processes promise higher circuit density and the possibility of power savings, which can help with thermal management, enabling minor improvements in single-core turbo frequency. But it's not going to be a significant uplift in throughput. Faster DDR is soon to arrive, but even if it doubles the bandwidth of dual-channel DDR, given how modern CPU caches work, the total performance impact on 90% of code will be less than 10%; in the few cases where code is memory-throughput bound, performance will rise until some other portion of the memory subsystem bottlenecks. To truly use the maximum throughput of dual-channel DDR4 as it is, in all but a precious few special cases, the code has to be VERY parallel, with many cores all working as fast as possible already. A lot of what's done today isn't so much core-IPC bound as processor data-throughput bound.

Without a major rethinking of how a computer works, specifically related to its memory subsystem but also in general, we're not going to see major IPC bumps in the near future (and by major, I'm talking over 20% better than previous generations), especially with the ever-increasing focus on data security requiring more and more of the data moving around in the CPU to be protected and possibly encrypted at all times until it's actually in the core itself. All of that encryption, decryption, and bounds checking is not free: it costs CPU cycles and adds thermal load from the encryption engines.
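The memory-bound vs core-bound distinction above can be made concrete with a roofline-style back-of-envelope (the peak numbers here are made-up placeholders, not any real chip's specs):

```python
# Roofline sketch: a kernel is memory-bound when its arithmetic
# intensity (FLOPs per byte moved) falls below the machine balance
# (peak FLOP/s divided by peak memory bandwidth).
PEAK_GFLOPS = 200.0   # assumed peak FP throughput, GFLOP/s (placeholder)
PEAK_BW_GBS = 40.0    # assumed dual-channel DDR4 bandwidth, GB/s (placeholder)

def bound_by(flops_per_elem, bytes_per_elem):
    intensity = flops_per_elem / bytes_per_elem
    machine_balance = PEAK_GFLOPS / PEAK_BW_GBS   # 5 FLOPs/byte here
    return "compute-bound" if intensity > machine_balance else "memory-bound"

# AXPY (y = a*x + y, 4-byte floats): 2 FLOPs per 12 bytes moved.
print(bound_by(2, 12))    # memory-bound: faster DDR would help
print(bound_by(100, 4))   # compute-bound: only core/clock improvements help
```

Low-intensity streaming code like the AXPY case is the kind that benefits from faster DDR, while everything above the machine-balance line only moves with better cores or clocks.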
 
  • Like
Reactions: VirtualLarry
May 11, 2008
19,303
1,129
126
I think the architecture design groups at Intel have been very busy for at least the last six months, since they were made aware of Spectre and Meltdown.
The challenge is to keep at least the same performance numbers as the current Coffee Lake generation without the security flaws.
I think they are very busy rethinking how they are going to solve all those issues. Besides, since it takes years to design a new core architecture, the 10nm delays ironically give the architecture team more time to solve all the security issues for Ice Lake.