Discussion Intel current and future Lakes & Rapids thread


uzzi38

Platinum Member
Oct 16, 2019
2,615
5,865
146
There's a full slide of it at Notebookcheck.

It's actually when it ships to Lenovo. Cometlake U and Renoir are shown as early May and mid-May respectively. Obviously Cometlake U was available much earlier, so this doesn't necessarily mean Tigerlake is coming at that time.

AFAIK Icelake isn't used at all in their ThinkPad lineup, so with the Tigerlake generation more 10nm products will be used by Lenovo. This may also have something to do with the fact that Icelake doesn't have a business-oriented version at all.

Wasn't aware of that, thanks. And yeah, Ice Lake has no vPro SKUs and never will AFAIK, and Comet Lake-U's vPro has yet to appear in any devices. There have been some benchmarks, but I don't think anything has hit shelves yet.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
@uzzi38 You can also see from Cometlake H and Renoir that Lenovo is a bit later than other manufacturers.

The graphics performance of Tigerlake is also below expectations, as it's performing about 30-40% better than Icelake and roughly on par with Renoir.

Either they have a critical feature they are hiding, or there's going to be a G9 graphics variant that goes a step beyond G7. If they are internally promoting not needing dGPUs, I expect they are not going to let that opportunity pass by. This means they'll have the G7 for competitive reasons and the G9 for premium; in other words, extra cash, either through a direct price increase or indirectly by putting it into halo devices.
 
  • Like
Reactions: uzzi38

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
@Exist50 It's precisely because of the iGPU's latency and bandwidth requirements that Arrandale/Clarkdale opted for the GMCH as a separate chiplet. If anything, modern iGPUs are even more sensitive, not less. Sure, you will lose some CPU performance compared to a monolithic setup, which is why chiplets are a compromise, not a magic bullet like some believe.

In a modern context, CPUs are more latency sensitive than GPUs. Moreover, there are marketing, political, and manufacturing problems with putting the memory controller on the GPU die.

In terms of marketing, Intel sells its products primarily based on CPU benchmarks. Compromising the CPU to boost the GPU is thus a poor marketing tradeoff.

In terms of politics, the Core team is much more closely linked to the SoC's development than the GPU team is (both the Alder Lake and the Core team are part of IDC), so their desires will factor in first.

Finally, one of the primary benefits of chiplets would be the ability to swap out dies instead of needing to tape out new products, and the GPU would likely see the greatest variation. It's easy to imagine a "GT1" variant for desktop and low end mobile, a mainstream "GT2", and a flagship/premium "GT3". The GT1 market would certainly not care about any latency benefit to the GPU if it meant taking away from the CPU.

I agree with this too. But the leak showed 8+8+1 125W, 8+8+1 80W, and 6+0+1 80W.

Let me propose a theory. What if, instead of describing the end-user configurations available, that slide described the dies they were going to tape out? An 8+8 for the high end/benchmark winner, and a cheaper 6+0 config they could throw into a shit-ton of i5 OEM and gaming systems.
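For illustration, the tape-out savings behind the mix-and-match idea are easy to see with a toy enumeration. The two CPU configs and three GPU tiers below are my assumptions for the sake of the sketch, not leaked products:

```python
# Hypothetical sketch of the tape-out math behind chiplet mix-and-match.
# The die names and configurations below are illustrative assumptions,
# not confirmed Intel products.

from itertools import product

cpu_configs = ["8+8", "6+0"]          # assumed CPU dies
gpu_configs = ["GT1", "GT2", "GT3"]   # assumed GPU dies

# Monolithic approach: every CPU/GPU combination is its own tape-out.
monolithic_tapeouts = len(cpu_configs) * len(gpu_configs)  # 6

# Chiplet approach: tape out each die once, then mix on package.
chiplet_tapeouts = len(cpu_configs) + len(gpu_configs)     # 5

print(f"monolithic tape-outs: {monolithic_tapeouts}")
print(f"chiplet tape-outs:    {chiplet_tapeouts}")
for cpu, gpu in product(cpu_configs, gpu_configs):
    print(f"  package: CPU die {cpu} + GPU die {gpu}")
```

The gap obviously widens as more variants are added: each new GPU tier costs one tape-out instead of one per CPU config.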
 

uzzi38

Platinum Member
Oct 16, 2019
2,615
5,865
146
@uzzi38 You can also see from Cometlake H and Renoir that Lenovo is a bit later than other manufacturers.

The graphics performance of Tigerlake is also below expectations, as it's performing about 30-40% better than Icelake and roughly on par with Renoir.

Either they have a critical feature they are hiding, or there's going to be a G9 graphics variant that goes a step beyond G7. If they are internally promoting not needing dGPUs, I expect they are not going to let that opportunity pass by. This means they'll have the G7 for competitive reasons and the G9 for premium; in other words, extra cash, either through a direct price increase or indirectly by putting it into halo devices.

Agreed on the first bit; going by the new roadmap I would still lean towards mid-to-late summer for the first TGL devices, and this roadmap isn't indicative of anything. On the second, it does seem to consistently have a 10-20% lead over Renoir, but I'm not worried, as I don't think this is final performance yet. I wouldn't be surprised to see final performance end up an extra 20% or so higher still.

I don't think the issue is them concealing a variant above what we have benchmarks of; the benchmarks we have are likely of the top-end SKUs already. What I think is more likely is that the dies are being tested at significantly lower clocks or with gimped drivers compared to final release (whether intentionally or unintentionally I can't say), but in any case that's just a theory of mine.

As for the last bit, I think that'll happen regardless of whether or not the top-end SKU has a better iGPU, given that the new 118xG7 naming scheme seems to indicate chips binned a tier above the rest. And besides, I'm sure Intel want to get to i9 naming on mobile at some point, and the product stack where, at least in 1T performance, they can finally and confidently beat even the most high-end desktops seems like the perfect time to do it.

(That's assuming Intel hit a 4.5GHz boost on the best-binned chips, which personally I think is likely, but also the highest we'll see. Not that it matters for the most part, but I'm sure Intel will want to push that marketing angle.)
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
The graphics performance of Tigerlake is also below expectations, as it's performing about 30-40% better than Icelake and roughly on par with Renoir.

I haven't seen any representative Tigerlake iGPU tests so far with a final SKU, known memory specs, and a near-final driver, and all the leaks came from synthetic benchmarks where Icelake does better than in the real world. You make it sound as if meaningful tests were available; this is not good for your reputation. I'm sure you are dead wrong with 30-40% better than Icelake. Also, there is no sign of a G9 variant at the moment; the two leaked higher-end i7 SKUs had G7 branding.
 

uzzi38

Platinum Member
Oct 16, 2019
2,615
5,865
146
Okay, this post is going to get weird.

So Sharkbay left some messages yesterday about ADL-S here: https://www.ptt.cc/bbs/PC_Shopping/M.1588608144.A.077.html

Here's Chia's translation:

When I first read it I was confused, as he wrote that diehard gamers would prefer Comet Lake over Alder Lake and Rocket Lake, but he later followed up and said it translated more closely to 'core enthusiasts' or something to that degree, which makes significantly more sense. I was starting to wonder if there was something really wacky about Rocket Lake as well.

But in any case, I still went and asked a friend about the translation, and he said that something else was a little off: the portion about 'blame Atom'. He translated it closer to "give Intel a reason to ditch Atom finally". Not big.LITTLE, but the Atom uArch itself. Furthermore, he added that the third of the four statements actually meant something along the lines of Intel "putting Atoms in just to use up those excessive Atoms".

To me, that implies something like: "the 6 core version will make you want to buy the 8 core version, and if you do, you will be forced to eat those 8c big.LITTLE". That only leaves two options:

1. The 6 core and below use a separate die with no Atom cores at all, while the 8c parts use one big monolithic die with 8 big cores and 8 little cores. This actually makes some sense if Gracemont and Golden Cove have the same size ratio as Tremont and Sunny Cove do, where 1 SNC is roughly the same size as 4 TNT (I prefer this abbreviation :3). However, given the likely inclusion of AVX in Gracemont, I'm not entirely certain this will be the case. It's a little up in the air.

2. They all use the same die, just with differing levels of segmentation.

Or the idea I really don't like the sound of, and which, to be completely honest, makes very, very little sense:

3. ADL-S is an MCM chip with an 8 Golden Cove core die and a separate die with 8 Gracemont cores on it. Only full 8 core variants would have the Gracemont die on package.

At this point I'm completely bamboozled and would rather have others' opinions on it.
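For what it's worth, here's a back-of-the-envelope area check on option 1, using the 1 SNC ≈ 4 TNT ratio above and assuming (big assumption) that it carries over to Golden Cove/Gracemont:

```python
# Rough area check for option 1, in 'big-core equivalents'.
# The 4:1 size ratio is the post's Tremont/Sunny Cove figure, assumed
# (speculatively) to hold for Gracemont/Golden Cove despite AVX.

big, little = 1.0, 0.25   # relative per-core areas

die_8p8 = 8 * big + 8 * little   # 10.0 big-core equivalents
die_6p0 = 6 * big                #  6.0 big-core equivalents

print(f"8+8 die core area: {die_8p8:.1f} big-core equivalents")
print(f"6+0 die core area: {die_6p0:.1f} big-core equivalents")
print(f"area saved by 6+0: {1 - die_6p0 / die_8p8:.0%}")  # ~40%
```

A separate 6+0 die saving ~40% of core area is at least plausible motivation for option 1, if the ratio holds.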
 
Last edited:

Gideon

Golden Member
Nov 27, 2007
1,619
3,645
136
3. ADL-S is an MCM chip with an 8 Golden Cove core die and a separate die with 8 Gracemont cores on it. Only full 8 core variants would have the Gracemont die on package.

At this point I'm completely bamboozled and would rather have others' opinions on it.

If this is the case, it will be super strange if there are no 2x ADL-S chiplet (16 cores total) parts at all. Such a wasted opportunity.
 

uzzi38

Platinum Member
Oct 16, 2019
2,615
5,865
146
@uzzi38

Seems hard to interpret that, honestly. Are they expecting Gracemont to be THAT bad?
When did they say they were expecting Gracemont to be bad? I honestly don't expect it to be bad at all, and I don't think they were implying it is bad either.

Sure, Gracemont certainly isn't a full desktop-centric core, but take Tremont for example: 70% of the performance of a SNC core for 50% of the power and 1/4 of the die space? Pretty good tradeoff if you ask me, if only it had AVX so you could use it with no problems.

Gracemont seems set to remove that limitation, and if the final product is, say, on par with a Skylake core (not the 5GHz ones, but early Skylake), has AVX support, and on top of that is both extremely power- and cost-effective, then I don't see what there isn't to like about it.
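To put rough numbers on that tradeoff, here's a quick sketch using only the figures quoted above (70% / 50% / 1/4 are the post's rough estimates, not measured data):

```python
# Back-of-the-envelope check of the Tremont-vs-Sunny-Cove tradeoff:
# 70% of the performance for 50% of the power and 1/4 of the area.

snc = {"perf": 1.00, "power": 1.00, "area": 1.00}  # Sunny Cove, normalized
tnt = {"perf": 0.70, "power": 0.50, "area": 0.25}  # Tremont, per the post

perf_per_watt = tnt["perf"] / tnt["power"]  # 1.4x Sunny Cove
perf_per_mm2  = tnt["perf"] / tnt["area"]   # 2.8x Sunny Cove

# Four Tremont cores in the area of one Sunny Cove core:
cluster_perf = 4 * tnt["perf"]              # 2.8x the throughput, same area

print(f"Tremont perf/W vs SNC:   {perf_per_watt:.1f}x")
print(f"Tremont perf/mm2 vs SNC: {perf_per_mm2:.1f}x")
print(f"4x Tremont vs 1 SNC:     {cluster_perf:.1f}x throughput")
```

That ~2.8x throughput-per-area figure is the whole appeal of little cores for multi-threaded work, if the ratio survives adding AVX.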

The bit at the beginning seems to be just him talking to people that don't like the idea of big.LITTLE, that's all.
 
  • Like
Reactions: geegee83

Gideon

Golden Member
Nov 27, 2007
1,619
3,645
136
The only thing I really don't like is the idea of MCM between a Gracemont and a Golden Cove core die. I can't think of a configuration that makes sense if they were to do that, unless they also separated the IMC and maybe the iGPU from those core dies.
Maybe it's a 2.5D chip with the memory controller on a die below, similar to Lakefield? If not, that would imply most of the limitations we're seeing with Ryzen multi-chip products.

EDIT: Regardless, these two CPU clusters need memory coherency, so you can forget the memory latencies that Skylake has, even if the chip is a single die, unless they have some kind of L4 cache.
 

uzzi38

Platinum Member
Oct 16, 2019
2,615
5,865
146
Maybe it's a 2.5D chip with the memory controller on a die below, similar to Lakefield? If not, that would imply most of the limitations we're seeing with Ryzen multi-chip products.
Hmm, personally I don't see Foveros coming to desktop just yet.

As for the limitations we're seeing with Ryzen-based products, what do you mean? Matisse only sees a 5-8ns memory latency penalty from going MCM, as evidenced by Renoir; compared to the benefits of going MCM, I don't think it's really a limitation at all. Sure, you get slightly worse memory latency, but not to a degree that actually matters.
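For perspective, here's what that penalty works out to against an assumed monolithic baseline (the ~67ns figure is my illustrative assumption; only the 5-8ns range comes from the post):

```python
# Rough sketch of how much the quoted 5-8ns MCM penalty matters relative
# to total memory latency. The ~67ns monolithic baseline is an assumed
# Renoir-class figure for illustration, not a number from the post.

monolithic_ns = 67.0          # assumed monolithic baseline
mcm_penalty_ns = (5.0, 8.0)   # penalty range quoted above

for penalty in mcm_penalty_ns:
    total = monolithic_ns + penalty
    print(f"+{penalty:.0f}ns -> {total:.0f}ns total "
          f"({penalty / total:.1%} of memory latency)")
# Roughly a 7-11% increase: measurable, but hardly a deal-breaker.
```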
 

Gideon

Golden Member
Nov 27, 2007
1,619
3,645
136
Hmm, personally I don't see Foveros coming to desktop just yet.

As for the limitations we're seeing with Ryzen-based products, what do you mean? Matisse only sees a 5-8ns memory latency penalty from going MCM, as evidenced by Renoir; compared to the benefits of going MCM, I don't think it's really a limitation at all. Sure, you get slightly worse memory latency, but not to a degree that actually matters.
As stated in the post above, it's not just MCM; the L3 caches of the CPU clusters also need to be kept coherent:
Regardless, these two CPU clusters need memory coherency
Unless they have some sort of inclusive L4, or they only ever run either the 8-core "Atom" cluster or the 8-core "Cove" cluster at a time (which seems like a very crude way to do big.LITTLE).
If neither of these is the case, there will be additional latency on top of the ~5ns from going MCM, as there is no longer a nice all-inclusive shared L3 cache.

I never said the latency will be terrible or unusable or whatever. All I said is that you won't see the 38-45ns memory latency that you currently see on the best Skylake chips.
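To make that concrete, here's a rough latency budget. Every figure below is an assumption picked to illustrate the shape of the argument; the only number sourced from the thread is the 5-8ns Matisse MCM penalty:

```python
# Illustrative latency budget for a hypothetical MCM big.LITTLE part.
# All numbers are assumptions for illustration, not leaks.

skylake_ns = (38.0, 45.0)  # best-case Skylake latency, per the post

budget = {
    "base DRAM access":              60.0,  # assumed monolithic baseline
    "MCM hop to IO/memory die":       6.0,  # mid-point of the 5-8ns figure
    "cross-cluster coherency snoop":  8.0,  # assumed, absent a shared L3/L4
}

total = sum(budget.values())
for stage, ns in budget.items():
    print(f"{stage:32s} {ns:5.1f} ns")
print(f"{'total':32s} {total:5.1f} ns "
      f"(vs {skylake_ns[0]:.0f}-{skylake_ns[1]:.0f} ns on Skylake)")
```

Whatever the exact numbers, the point stands: the penalties stack, so Skylake-class latency is out of reach once coherency traffic crosses clusters.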
 

jpiniero

Lifer
Oct 1, 2010
14,580
5,203
136
Only executing on one cluster at a time, no, but a single process would likely be on one cluster only.
 

Gideon

Golden Member
Nov 27, 2007
1,619
3,645
136
Only executing on one cluster at a time, no, but a single process would likely be on one cluster only.
I can see that working, but what about OS threads? I presume these will only run on the big cluster all the time? Otherwise it's still potentially shared memory used by both clusters.
 
Last edited:

dacostafilipe

Senior member
Oct 10, 2013
771
244
116
At this point I'm completely bamboozled and would rather have other's opinions on it.

Could be a political issue?

Like some high level executive forced the big.LITTLE idea and the engineers hope that it flops so they can go back to "real big cores".
 

jpiniero

Lifer
Oct 1, 2010
14,580
5,203
136
I can see that working, but what about OS threads? I presume these will only run on the big cluster all the time? Otherwise it's still potentially shared memory used by both clusters.

I would think it would be the small cluster for power savings.
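For illustration, this is the kind of thing a scheduler (or a user, crudely) can already do with plain affinity masks on Linux. The core numbering below is entirely hypothetical; it's a sketch of the mechanism, not of how Intel would actually implement hybrid scheduling:

```python
# Minimal sketch of cluster-aware pinning from userspace on Linux,
# assuming (hypothetically) cores 0-7 are the big cluster and 8-15
# the small one. Linux-only: uses os.sched_setaffinity.

import os

BIG_CLUSTER = set(range(0, 8))     # assumed Golden Cove cores
SMALL_CLUSTER = set(range(8, 16))  # assumed Gracemont cores

def pin_to_cluster(pid: int, cluster: set) -> None:
    """Restrict a process to one cluster, so its threads share that
    cluster's L3 and avoid cross-cluster coherency traffic."""
    # Intersect with what this machine actually has, so the sketch
    # doesn't fail on systems with fewer cores.
    available = os.sched_getaffinity(0)
    os.sched_setaffinity(pid, cluster & available or available)

pin_to_cluster(0, SMALL_CLUSTER)   # pid 0 == the calling process
print(os.sched_getaffinity(0))
```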
 
  • Like
Reactions: Gideon

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I haven't seen any representative Tigerlake iGPU tests so far with a final SKU, known memory specs, and a near-final driver, and all the leaks came from synthetic benchmarks where Icelake does better than in the real world. You make it sound as if meaningful tests were available; this is not good for your reputation. I'm sure you are dead wrong with 30-40% better than Icelake. Also, there is no sign of a G9 variant at the moment; the two leaked higher-end i7 SKUs had G7 branding.

What the hell is your obsession with reputation? If you think I'm very wrong, then it's just one person that's wrong, so for the most part it's irrelevant what you think.

I post here out of personal preference, so I don't care about reputation. I think you're upset because I somehow implied Tigerlake is worse than you expect it to be. Perhaps you should take a break from looking at Intel news.

You do not know more than I do. You can easily compare the scores with good Icelake implementations, and it's roughly in that range. It's also on par with Renoir, which matches AMD's own comparisons against Icelake.

Opinion: I do not believe drivers will improve it significantly. Intel is a company that rarely passes on an opportunity for financial gain. If they are calling Xe the biggest advancement in their graphics in a decade, along with various other marketing hype words, then expect them to take advantage of it, especially if they end up significantly faster than Renoir.

This is also simple business logic: you do enough to be on par with competitors, and whatever is above that is sold at a premium.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
@uzzi38

I've rarely seen scenarios where the final product is significantly faster in synthetic benchmarks. Even driver optimizations in games are hit and miss; I've seen plenty of cases where they don't help. 5-10% overall at max is what I'd expect. Anything greater than that means either a very critical feature is missing or it's a gain in a single game.

If it's really 2x, we should be looking at Time Spy GPU scores in the 1700-1800 range. Right now the best Tigerlake score is 5% above Renoir, at 1296 (Renoir is at 1227).* A 35% additional gain from a version running at higher clocks, call it Iris Pro G9,** is, I believe, a perfectly reasonable assumption and an upsell opportunity for Intel and manufacturers.

Besides, that means most SKUs are going to run at far easier-to-sustain clocks and/or have EUs disabled for yield purposes.

*Interestingly, it's AMD themselves that show the highest-performing 1065G7 Iris graphics scores. Iris Plus G7 Icelake gets 957 according to AMD, which is higher than the ~850 range most are assuming. 2x that would result in a 1900 Time Spy GPU score, a whopping 46% faster than the best leaked TGL score. That's not drivers; that's hardware.

**The G9 (or i9) label is often used by manufacturers to make a product sound better than it actually is. Same with Titan: nothing changed, just the naming.
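Running the numbers above as a quick sanity check (all scores are the leaked/claimed figures quoted in this post):

```python
# Reproducing the Time Spy GPU arithmetic from this post.

tgl_leak = 1296   # best leaked Tigerlake Time Spy GPU score
renoir   = 1227   # Renoir score quoted above
icl_amd  = 957    # Icelake G7 score per AMD's own slides

print(f"TGL leak vs Renoir:  +{tgl_leak / renoir - 1:.1%}")       # ~+5.6%
print(f"2x ICL target:       {2 * icl_amd}")                      # 1914
print(f"gap from TGL leak:   +{2 * icl_amd / tgl_leak - 1:.1%}")  # ~+48%
print(f"TGL leak +35% (G9?): {tgl_leak * 1.35:.0f}")  # ~1750, in 1700-1800
```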

The idea of heterogeneous cores such as big.LITTLE comes from attempts to overcome the limitations of modern processes, where costs are rising and TTM is getting longer, but the benefits are smaller.

I can see that working, but what about OS threads? I presume these will only run at the big cluster all the time? Otherwise it's still potentially shared memory used by both clusters.


[Attached image: 1588693732623.png]
 
Last edited:

uzzi38

Platinum Member
Oct 16, 2019
2,615
5,865
146
@uzzi38

I've rarely seen scenarios where the final product is significantly faster in synthetic benchmarks. Even driver optimizations in games are hit and miss; I've seen plenty of cases where they don't help. 5-10% overall at max is what I'd expect. Anything greater than that means either a very critical feature is missing or it's a gain in a single game.

If it's really 2x, we should be looking at Time Spy GPU scores in the 1700-1800 range. Right now the best Tigerlake score is 5% above Renoir, at 1296 (Renoir is at 1227).* A 35% additional gain from a version running at higher clocks, call it Iris Pro G9,** is, I believe, a perfectly reasonable assumption and an upsell opportunity for Intel and manufacturers.



Perhaps. Honestly, my own guesses are just potshots; I could very easily be wrong. There could just as easily be an 1185G9 alongside the 1185G7, for example; after all, the 1035 chips ranged from G1 to G7.

I definitely don't think Intel will be hitting a 2x uplift, I have to agree there, but Ice Lake 25W vs Whiskey Lake 15W is already nearly a doubling of performance I believe, so with TGL they only need a 50-60% uplift to meet their own targets.

*Interestingly, it's AMD themselves that show the highest-performing 1065G7 Iris graphics scores. Iris Plus G7 Icelake gets 957 according to AMD, which is higher than the ~850 range most are assuming. 2x that would result in a 1900 Time Spy GPU score, a whopping 46% faster than the best leaked TGL score. That's not drivers; that's hardware.

**The G9 (or i9) label is often used by manufacturers to make a product sound better than it actually is. Same with Titan: nothing changed, just the naming.

As for why AMD shows the highest-performing 1065G7 score, it's because they used the Dell XPS 7390 for all their testing. I can't say I know whether software reporting of PL1 is accurate on mobile, but in the NotebookCheck reviews it reads as a 46W PL1, which certainly helps the Ice Lake system out a fair bit.


Honestly, at this point I'm just hoping we get the Xe primer sooner or later. Supposedly it should be on the sooner side of things, but ugh, the wait is killing me.
 
  • Like
Reactions: lightmanek

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I definitely don't think Intel will be hitting a 2x uplift, I have to agree there, but Ice Lake 25W vs Whiskey Lake 15W is already nearly a doubling of performance I believe, so with TGL they only need a 50-60% uplift to meet their own targets.

Well, it was 4x from 15W Whiskeylake to 28W Tigerlake. Icelake 25W is 2x over 15W Whiskeylake. 15W Icelake falls short of 2x by 25-30%. They also repeated the 2x gain over Icelake several times.
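Normalizing those multipliers to Whiskeylake 15W = 1.0 makes the implied gaps explicit (these are Intel's claimed factors as quoted above, not benchmark results):

```python
# Normalizing the claimed GPU uplifts to Whiskeylake 15W = 1.0.

whl_15w = 1.0                    # Whiskeylake 15W baseline
icl_25w = 2.0 * whl_15w          # "Icelake 25W is 2x over 15W Whiskeylake"
tgl_28w = 4.0 * whl_15w          # "4x from 15W Whiskeylake to 28W Tigerlake"
icl_15w = icl_25w * (1 - 0.275)  # short of 2x by 25-30%; 27.5% mid-point

print(f"TGL 28W vs ICL 25W: {tgl_28w / icl_25w:.1f}x")  # the repeated 2x claim
print(f"TGL 28W vs ICL 15W: {tgl_28w / icl_15w:.1f}x")  # ~2.8x vs the 15W part
```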

The gains are impressive, but not astounding if you think about it. Nvidia's Ampere is rumored for a big gain, and PowerVR's latest architecture talks about 2x gains.

Also, don't forget Apple kicks both current Intel and AMD to the curb in graphics performance/mm2 and performance/watt. So room for improvement exists.

On a side note, I also believe this isn't a repeat of the Netburst era, where Intel sprang back up and AMD stagnated. This seems similar to the early-to-mid 90s, when there were multiple CPU and graphics vendors. I believe we're seeing, or will see, a resurgent Intel and a resurgent AMD.

As for why AMD shows the highest-performing 1065G7 score, it's because they used the Dell XPS 7390 for all their testing.

That same XPS doesn't do that well in games. The Surface Laptop 3 has the best Icelake implementation; it's very balanced. It runs at a 25W PL1, it performs well in games, and it doesn't throttle performance on battery, none of which is true of the XPS.

Also, PL1 isn't fixed on the XPS. Dell seems to dynamically adjust PL1 depending on how hot the system is, and the cooling isn't really meant for more than 25W. The 46W must be a temporary thing, likely on AC power; I've seen other Whiskeylake systems do it too. I assume the PL1 on the XPS drops to 15W when folded into tablet mode (this is actually quite common among convertibles).

@uzzi38 You can see from the picture I included in the post above that Lakefield can already weave the two core types together quite seamlessly.

If that's the case, maybe the Gracemont cores will work in some quasi-SMT-like fashion to boost multi-threaded performance better than SMT can? Apple already does this. Maybe it's more power efficient than SMT.

Ideas tend to converge.
 
Last edited:
  • Like
Reactions: uzzi38

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
@uzzi38 I realized AMD's presentation uses the overall Time Spy score. The average G7 ICL graphics score is 900 points and the top one gets 960; the Lenovo Yoga C940 gets 940. The graphics scores are roughly 10% lower, but AMD's presentation isn't using the graphics score. So that explains the "higher scores" in AMD's presentation; it's not because they used the XPS.

In that case we're talking 1400 points for the 1185G7 versus 950 points for the ICL G7.

Also, I forgot the obvious: the 1185G7 is probably running at 15W. According to Intel's naming scheme, a 28W part would have to be the 1188G7.

In comparison, an 18W Icelake gets 770 points. That compares to 1400 points for Tigerlake.
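The implied uplifts from those figures, for quick reference (all leaked or estimated scores, not final numbers):

```python
# Implied Tigerlake-vs-Icelake uplifts from the scores in this post.

tgl_1185g7 = 1400   # presumed 15W Tigerlake, overall-score basis
icl_g7     = 950    # typical ICL G7 on the same basis
icl_18w    = 770    # 18W Icelake

print(f"1185G7 vs ICL G7:  {tgl_1185g7 / icl_g7:.2f}x")   # ~1.47x
print(f"1185G7 vs ICL 18W: {tgl_1185g7 / icl_18w:.2f}x")  # ~1.82x
```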
 
  • Like
Reactions: uzzi38

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Okay, this post is going to get weird.

So Sharkbay left some messages yesterday about ADL-S here: https://www.ptt.cc/bbs/PC_Shopping/M.1588608144.A.077.html

Here's Chia's translation:

When I first read it I was confused, as he wrote that diehard gamers would prefer Comet Lake over Alder Lake and Rocket Lake, but he later followed up and said it translated more closely to 'core enthusiasts' or something to that degree, which makes significantly more sense. I was starting to wonder if there was something really wacky about Rocket Lake as well.

But in any case, I still went and asked a friend about the translation, and he said that something else was a little off: the portion about 'blame Atom'. He translated it closer to "give Intel a reason to ditch Atom finally". Not big.LITTLE, but the Atom uArch itself. Furthermore, he added that the third of the four statements actually meant something along the lines of Intel "putting Atoms in just to use up those excessive Atoms".


From what I hear, the situation is the exact opposite. The Atom team is very, very proud of Gracemont. My source has been hyping it up without much subtlety, though I've been unable to weasel out numbers. I've also gotten the impression that they feel they've had to fight tooth and nail to get the recognition they deserve, and Gracemont is their opportunity to flaunt their potential a little. Supposedly Keller himself is responsible for pushing the Atom team into the limelight, partly due to some frustration with the Core team.

More practically speaking, why would they be using Gracemont at all if it were so bad? Hybrid is certainly a non-trivial amount of work to support.

Could be a political issue?

Like some high level executive forced the big.LITTLE idea and the engineers hope that it flops so they can go back to "real big cores".

Seems more likely that it's the other way around. Keep in mind how Intel is structured. There are two client teams, C2DG (Skylake, Ice Lake, Rocket Lake?, Alder Lake) and DDG (Broadwell, Broxton, Tiger Lake, Meteor Lake?). The Core team is part of C2DG (Israel), while the Atom team is part of DDG (Oregon). I wouldn't be surprised if there was some resistance from C2DG towards Atom.
 
  • Like
Reactions: dacostafilipe

jpiniero

Lifer
Oct 1, 2010
14,580
5,203
136
More practically speaking, why would they be using Gracemont at all if it were so bad? Hybrid is certainly a non-trivial amount of work to support.

So marketing can say it has 16 cores without actually putting in 16 Golden Cove cores.