Discussion Intel current and future Lakes & Rapids thread

Page 162 - AnandTech Forums

ajc9988

Senior member
Apr 1, 2015
278
171
116
It is a compound. AMD Zen was 7% IPC behind Coffee Lake, putting it somewhere around Haswell and Broadwell. This shows AMD only got about 3% or so IPC with Zen+:
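The compounding can be sketched numerically; the figures below are just the percentages quoted in this thread, treated as multiplicative factors rather than fresh measurements:

```python
# Sketch of how generational IPC deltas compound multiplicatively.
# Percentages are the ones quoted in the thread, not new measurements.
zen_vs_coffee = 1.00 / 1.07   # Zen ~7% IPC behind Coffee Lake
zen_plus_gain = 1.03          # Zen+ ~3% IPC gain over Zen

zen_plus_vs_coffee = zen_vs_coffee * zen_plus_gain
print(f"Zen+ relative to Coffee Lake IPC: {zen_plus_vs_coffee:.3f}")
# ~0.963, i.e. roughly 3-4% behind, which lines up with the later posts
```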

And at some point, I will tell you to do your own darn research, especially since I even searched out the old Intel roadmaps another member was referencing.

If you are too lazy to read the articles by AnandTech, yet you speak in their forum and call into question someone citing their work, there is something wrong.
 
  • Like
Reactions: Thunder 57

ajc9988

Senior member
Apr 1, 2015
278
171
116
  • Like
Reactions: lightmanek

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
It is a compound. AMD Zen was 7% IPC behind Coffee Lake, putting it somewhere around Haswell and Broadwell. This shows AMD only got about 3% or so IPC with Zen+:

And at some point, I will tell you to do your own darn research, especially since I even searched out the old Intel roadmaps another member was referencing.

If you are too lazy to read the articles by AnandTech, yet you speak in their forum and call into question someone citing their work, there is something wrong.
Haha. Just eyeball it. Go back to the Zen 2 review and look at the delta between Zen+ and CFL-R in the Spec tests. You're welcome!
 

ajc9988

Senior member
Apr 1, 2015
278
171
116
Haha. Just eyeball it. Go back to the Zen 2 review and look at the delta between Zen+ and CFL-R in the Spec tests. You're welcome!
Read what the average is. If you notice, eyeballing doesn't fully work out against their numbers. They already consolidated the information from the SPEC testing into an overall average IPC. You're welcome!
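For anyone wondering why eyeballing the bars can disagree with the consolidated number: SPEC-style composites are geometric means of the per-subtest ratios, so a single tall bar moves the overall figure less than it appears to. A minimal sketch with made-up ratios (not AnandTech's data):

```python
from math import prod

def geomean(ratios):
    """Geometric mean, as used for SPEC-style composite scores."""
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical per-subtest IPC ratios; one outlier subtest at +30%.
subtest_ratios = [1.02, 1.05, 1.30, 1.01, 1.04]
print("outlier subtest: +30%")
print(f"composite (geomean): {geomean(subtest_ratios):.3f}")  # ~1.079, i.e. only ~+8%
```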
 
  • Like
Reactions: Markfw

mikk

Diamond Member
May 15, 2012
4,131
2,127
136

You can clearly see that Zen 2 was 15-17% IPC over Zen+, which was only 3-4% IPC behind Coffee Lake, meaning I was sandbagging, if you want me to be honest.

0%: https://www.overclock.net/forum/10-amd-cpus/1728758-strictly-technical-matisse-not-really.html

Also when you refer to mobile Zen2 you have to consider that Zen won't have a memory advantage unlike in some of the desktop tests, anandtech for example.
 

ajc9988

Senior member
Apr 1, 2015
278
171
116
0%: https://www.overclock.net/forum/10-amd-cpus/1728758-strictly-technical-matisse-not-really.html

Also when you refer to mobile Zen2 you have to consider that Zen won't have a memory advantage unlike in some of the desktop tests, anandtech for example.
My initial analysis used 6-7% IPC. I later showed the reality was closer to 11-13%, depending on task, considering different versions of SPEC showed 15% and 17%. So you think that isn't enough of a sandbag to account for that in my original analysis?
 

mikk

Diamond Member
May 15, 2012
4,131
2,127
136
My initial analysis used 6-7% IPC. I later showed the reality was closer to 11-13%, depending on task, considering different versions of SPEC showed 15% and 17%. So you think that isn't enough of a sandbag to account for that in my original analysis?


Based on SPEC only, or on mixed workloads using different apps and games?
 

jpiniero

Lifer
Oct 1, 2010
14,571
5,202
136
It's either Ice Lake server or some form of Willow Cove due to the cache configuration. Skylake-server-based designs are 1MB L2$ and 1.375MB L3$ per core. This is 1.25MB and 1.5MB per core.

Skylake Server is 1.25 MB L2/core I thought. Possible that the L3/core was increased I guess.
 

FriedMoose

Member
Dec 14, 2019
48
28
51
Skylake Server is 1.25 MB L2/core I thought. Possible that the L3/core was increased I guess.
Nope. Sky Lake, Cascade Lake, and Cooper Lake are all 1MB L2 and 1.375MB L3 per core:
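Multiplying those per-core figures out against the leaked part's shows how much the totals diverge; the 28-core count below is the top Skylake-SP XCC die, used purely for scale:

```python
# Per-core cache sizes from the posts above; totals scaled to a 28-core
# die (the top Skylake-SP XCC configuration) purely for illustration.
configs = {
    "Skylake/Cascade/Cooper Lake": {"l2_mb": 1.0,  "l3_mb": 1.375},
    "leaked part in this thread":  {"l2_mb": 1.25, "l3_mb": 1.5},
}
cores = 28
for name, c in configs.items():
    print(f"{name}: {cores * c['l2_mb']:.1f} MB L2, "
          f"{cores * c['l3_mb']:.2f} MB L3 total")
# Skylake-SP at 28 cores works out to 38.50 MB L3, matching shipping parts.
```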

 

ajc9988

Senior member
Apr 1, 2015
278
171
116
Based on SPEC only, or on mixed workloads using different apps and games?
SPEC does mixed workloads, which can show separate integer and floating point, etc. This is why I pointed out above that the graphs don't always look like what the final average is. I'm the one who started the argument a while back with Andrei over whether it was proper to hold the memory to similar bandwidth and latency, or to run the memory at spec and let the SPEC software show IPC with the memory as configured. Not rehashing that argument.

My point is that SPEC tests so that you have a consistent result on IPC, which is why it is the industry standard. It doesn't matter whether you agree or disagree with, or like or dislike, the tasks included in it.

This isn't setting a fixed frequency, trying to control for memory bandwidth and latency, then comparing across games and programs. If you want that, talk with Andrei and Ian about it. I don't have all the hardware to test.

Further, people here surprised by the IPC numbers should realize that performance is closer to instructions per second (IPS), which is IPC*Frequency. Many people assume IPC on Intel is higher, but that is due to its frequency advantage.
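The relationship described above can be written out directly; the IPC ratio and clocks below are placeholders chosen to illustrate the point, not measured values:

```python
# Throughput ~ instructions per second (IPS) = IPC * frequency.
# IPC and clock figures are illustrative placeholders, not measurements.
def ips(ipc, freq_ghz):
    # billions of instructions retired per second
    return ipc * freq_ghz

chip_a = ips(ipc=1.00, freq_ghz=4.9)  # lower IPC, higher clock
chip_b = ips(ipc=1.10, freq_ghz=4.3)  # ~10% higher IPC, lower clock
print(f"{chip_a:.2f} vs {chip_b:.2f}")  # the lower-IPC chip still wins on IPS
```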

It's like the posts above reading too much into Ice Lake's FP performance instead of the weighting.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
Read what the average is. If you notice, eyeballing doesn't fully work out against their numbers. They already consolidated the information from the SPEC testing into an overall average IPC. You're welcome!
I see. You can't see the forest for the trees. You happily quoted Andrei's findings and linked it too, but you can't be bothered to do any deductions from the graphs?
 

jpiniero

Lifer
Oct 1, 2010
14,571
5,202
136
Edit: so the leaked mobile roadmap does say Q2 2020, even though the SIPP roadmap literally shows Q2 2021. This suggests consumer availability somewhere in between, like Q4 2020, which was my stated estimate.

SIPP is different; it's more for enterprises that really, really don't want to deal with software updates. Intel's roadmap is so in flux that anything is possible, but as I said, if Tiger has yield benefits there is an incentive to release ASAP because of the shortage.
 
  • Like
Reactions: ajc9988

ajc9988

Senior member
Apr 1, 2015
278
171
116
I see. You can't see the forest for the trees. You happily quoted Andrei's findings and linked it too, but you can't be bothered to do any deductions from the graphs?
It's because there are a multitude of variables that go into average IPC. For example, if you look at a task that is heavy on floating point and utilizes AVX512, that will skew how you evaluate the IPC even if it is not representative of most consumer workloads. If you look at a pure integer test, that might come in lower than the average of mixed workloads and underrate a CPU's performance. Then there is weighting involved, etc., to end up with the final IPC average.

Are you trying to say Andrei's work is shoddy? Because I would argue he and Ian do great work. Yet you are directly saying his average is wrong. Maybe you should tag him and ask him to explain it to you.
 
  • Like
Reactions: Thunder 57

ajc9988

Senior member
Apr 1, 2015
278
171
116
SIPP is different; it's more for enterprises that really, really don't want to deal with software updates. Intel's roadmap is so in flux that anything is possible, but as I said, if Tiger has yield benefits there is an incentive to release ASAP because of the shortage.
Thank you for getting back with that response. Sorry for not recognizing that you were going off the roadmaps (that is all I needed to know, and once I realized it, I looked them up and they match your stated timeline). But even under that timeline, Intel was late with Ice Lake.

You are correct that there are incentives IF yields are better. But Intel only has 1-3 fabs currently capable of 10nm, depending on the reference (some have suggested 1-2 of those went back to 14nm to alleviate the shortage; I think SemiAccurate was one source for that). So retooling one for the new iteration could take down capacity that is needed for Ice Lake server, which was marked for Q2 and may be pushed to Q3. That is why I like the other suggestion of availability in Q3 better, even if I personally am thinking Q4. That is why I wasn't willing to argue on that timeline suggestion.

But with all the recent production woes, until Intel shows they can hit what they thought like they used to, I remain skeptical.

Also, that is why I didn't use the SIPP quarter either, instead splitting the baby, so to speak.
 

mikk

Diamond Member
May 15, 2012
4,131
2,127
136
SPEC does mixed workloads, which can show separate integer and floating point, etc. This is why I pointed out above that the graphs don't always look like what the final average is.

In this case the estimated scores and the different memory used are the cause of the differing results; this test is not ideal.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
It's because there are a multitude of variables that go into average IPC. For example, if you look at a task that is heavy on floating point and utilizes AVX512, that will skew how you evaluate the IPC even if it is not representative of most consumer workloads. If you look at a pure integer test, that might come in lower than the average of mixed workloads and underrate a CPU's performance. Then there is weighting involved, etc., to end up with the final IPC average.

Are you trying to say Andrei's work is shoddy? Because I would argue he and Ian do great work. Yet you are directly saying his average is wrong. Maybe you should tag him and ask him to explain it to you.
How so? I'm merely asking you to clue in on those blue and red horizontal lines, but apparently, it isn't that simple.
 

ajc9988

Senior member
Apr 1, 2015
278
171
116
In this case the estimated scores and the different memory used are the cause of the differing results; this test is not ideal.
I'm not rehashing the IPC argument and how to test here. Last time I did, a mod split off the discussion into its own thread. Your argument on sufficiency of IPC testing this way belongs there, where Andrei and others chimed in and went into greater detail.

Instead, what we have to work with is those IPC numbers, which I will mention are what AMD and Intel usually use when they make their claims about IPC, so both are roughly agreed on it being the metric. Check out how often both manufacturers cite SPEC benchmarks in their presentation materials.

Further, AnandTech has used this method of testing for a while. That makes the testing fairly consistent: you have the same tester, with roughly the same knowledge (we always learn more over time, so it won't remain static), using the same tests and the same methodology to compare IPC on different hardware over the years. In few places on the internet can you find such consistent data for comparing IPC (I've tried to find it and haven't; if you do find somewhere as consistent and reliable, let me know so I can evaluate their information).

Until then, this is what we are working with, and the work is good and high quality.
 

DrMrLordX

Lifer
Apr 27, 2000
21,608
10,802
136
As for tiger lake coming out in 1 year? Now that is the real wishful thinking.

Nah not really. I'd agree with the assessment that TigerLake should show up sometime next year, maybe a little earlier than IceLake-U/Y did. Or maybe not.
 
  • Like
Reactions: Gideon

ajc9988

Senior member
Apr 1, 2015
278
171
116
How so? I'm merely asking you to clue in on those blue and red horizontal lines, but apparently, it isn't that simple.
Then why can you not accept his average IPC increase data? Seriously, either you believe his numbers are correct, in which case you should accept his average IPC (those graphs are but a slice of that average), or you don't, in which case you are questioning his work, and that also means you cannot reference those graphs either.

Make your choice: you either believe the work, including his average IPC number, or you don't. It isn't that hard.
 
  • Like
Reactions: spursindonesia

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
DG1 is likely to be announced soon too, yes. But you are only getting the 128 EU die. Don't know if they will bother with a desktop card of it.
Like I said, I'm not holding my breath. Just the other day, an Intel exec said one more time that their products are on track. Judging from the past two years, that means the exact opposite.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Tiger Lake should be coming soon actually, first half of next year if not Q1. Intel has a pretty strong incentive not to hold it back if it has improvements that improve yield over Icelake.

Q1 seems too early, but I can see it coming early to mid Q2.

Y: May launch with systems by June-July
U: Computex June launch with systems by August-September(same thing as mikk said)

They have a big incentive to compete, and the bigger competition is WoA ARM. AMD matters, but probably a secondary consideration for them in this space. With -Y they can launch it early since ICL-Y doesn't exist.

It's 2x faster when comparing 25W TGL vs 15W ICL.

You need to account for the Gen 11 in the 2x figure Intel provided being clocked lower.

No it's not. They claimed 4x over WHL. Icelake does 2x, but oftentimes requires 25W to do so. But really it's not a huge deal, and you can see from the ICL SDP systems that the 25W mode was often not faster than the 15W one. Also, the shipping systems outperform the SDP ones.

All the talk about 7% or 10% or 15% IPC gains seems like not much when the A76 seems pretty close. What's happening is that the execution of both Intel and AMD sucks.

Even for the GPU the ARM SoCs are stellar. It's not ISA, but huge momentum behind the ARM ecosystem driving all the advances. Plus the x86 companies sucking. Tigerlake will need the 2x graphics. At least LPDDR5x means it'll have fully caught up for memory standards.
 
  • Like
Reactions: DooKey