Discussion Intel current and future Lakes & Rapids thread

Page 310

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
According to JZWSVIC there is an 8.5% Firestrike difference between 1R and 2R DDR4-3200. This is in line with previous-generation AMD iGPUs or Iris Pro 580; Intel recommended dual-rank (DDR3) a few years ago and claimed double-digit gains in real games for Iris Pro 580. It's not only about the ranks, it's also about LPDDR4 vs DDR4.

Iris Pro 580 wasn't bound by bandwidth at all.

Yea, and the difference in bandwidth between LPDDR4x and DDR4 is only 33%.
Maybe in some 3DMark tests it affects it that much for some reason.
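
The "only 33%" figure comes straight from the data-rate ratio; a quick sketch (assuming the usual 128-bit total laptop memory bus, a detail not stated in the thread) shows the same gap in peak theoretical bandwidth:

```python
# Peak theoretical bandwidth for a given transfer rate and bus width.
# The 128-bit total bus is an assumption (2x64-bit DDR4 channels or
# 4x32-bit LPDDR4X channels, the common laptop configuration).
def peak_bw_gbs(mt_per_s, bus_bits=128):
    """Transfers/s * bytes per transfer, in GB/s."""
    return mt_per_s * 1e6 * bus_bits / 8 / 1e9

ddr4   = peak_bw_gbs(3200)   # DDR4-3200
lpddr4 = peak_bw_gbs(4266)   # LPDDR4X-4266

print(f"DDR4-3200:    {ddr4:.1f} GB/s")    # 51.2 GB/s
print(f"LPDDR4X-4266: {lpddr4:.1f} GB/s")  # ~68.3 GB/s
print(f"advantage:    {lpddr4 / ddr4 - 1:.1%}")  # ~33%
```

Since both parts run the same bus width, the bandwidth advantage reduces to 4266/3200, i.e. the 33% quoted above.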

Also, if it requires all that to perform like it did in Intel's presentations (and even that was below expectations), then even the supposedly most impressive part of Tigerlake is a bore. It takes a mismatch in a mere 1-2 configuration details to go from 30% faster to suddenly on par with Renoir, and only about 30-40% better than Icelake.

All that comes with higher cost, much worse driver support, more power used, a release 7 months behind, and to top it off a CPU with half the number of cores to boot! Their laptop division looks as hopeless as the server and desktop divisions.
 

mikk

Diamond Member
May 15, 2012
4,140
2,154
136

SAAA

Senior member
May 14, 2014
541
126
116
The local power plant, most probably.


Jokes aside, I don't see how a 5.5GHz single-core turbo, on a backported core, on 14nm is coolable either on air or water for 24/7 usage. Skylake already becomes quite toasty in that territory, and that core had six iterations of refinement. I know this is a 250W part, but still.

Do we know what cooling they're using?

The local wind turbine of course.

Might be the reference cooler on a development board for all we know xD
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
The difference between DDR4-3200 and LPDDR4-4266 seems to be in the range of 15-20% in Firestrike/Timespy with the same rank and x8.

What about in games? I don't care about 3DMark benchmarks when it differs from gaming results.

The choices we have in the Tigerlake land are very poor.

I sense déjà vu:
Prescott → Skylake and friends, up to the super hot, super-clocked Comet Lake
Yonah → Tiger Lake, also Rocket Lake
"Core" 2 cores, 65 nm → Alder Lake, 8 big cores (+8 little ones…) on 10 nm
Penryn 4 cores, 45 nm → Meteor Lake, 16 big cores, 10 nm or maybe just 7nm EMIB tiles?

I don't know why you see Skylake as Prescott. I think Rocketlake is going to turn out to be Prescott. Comes as a next generation part that's sometimes faster, but sometimes slower than the predecessor, and it runs hot, hot, hot. It even emulates the process issue.

Also Alderlake sounds much better on mobile. The problem on desktops is 8+8. The hybrid config is ok, but it needs more cores, Golden Cove or Gracemont, or both.

Zen 4 might go for 20-24 cores on 5nm node, just like its server counterpart increasing its core count.

There's no backup plan for Intel. No Banias. This time, if they do catch up it'll be through persistence and hard work, and winning little by little.
 

Hulk

Diamond Member
Oct 9, 1999
4,225
2,015
136
What the table overstates is the 4-cycle L1 latency for generations before Sunny Cove. Those 4 cycles applied only when the stars aligned; in most real-world cases it was a 5-cycle L1 on Skylake. Intel recognized that and made scheduling easier by removing those special cases and going with a uniform 5-cycle L1.

Good insight. I did read about that but didn't make a note. I will do that for the sake of completeness.
Thanks!
 

Hulk

Diamond Member
Oct 9, 1999
4,225
2,015
136
Overall, obviously, using high clocks (and the thermals that come with them) to make up for IPC isn't a great idea. Been there, done that with the P4.

But there are specific scenarios where it does make sense. For example, one bottleneck for my current 4770K system is when I'm mixing/recording audio with a lot of plug-ins. At a certain point the CPU is overloaded and the game is over. While the software I use, PreSonus Studio One, is multithreaded, it does rely heavily on single-core performance. So having a single core able to crank up to a ridiculous frequency (like 5.5 GHz) would be beneficial for pushing the envelope on those dense mixes. In addition, heat/power isn't a huge deal since the frequency ramp-up usually lasts only a brief time, and of course it's only one core.

Now I realize this is a VERY specific use case. But the thing is, honestly, CPUs are so powerful these days that power users have to look specifically at where they need the extra performance. Then they have to decide which part best fills that "gap" in performance they are experiencing with their current system.

I'm going to build a new Zen 3 or Rocket Lake system next year. And you can bet I'm going to do some serious (fun) reading before making my decision.

This is not like the old days in the late '80s and early '90s, where I would have to wait 30 seconds for a screen redraw in CorelDraw, or scale video down to postage-stamp size to edit, etc... Now most tasks are easily completed with relatively short wait times/high frame rates... What this means is the upgrade decision is more relaxed, and most people can wait for something that really fits their tasks to come along. I personally have been waiting for a significant architecture upgrade from Haswell, and that looks to be Rocket Lake or Zen 3.
 

mikk

Diamond Member
May 15, 2012
4,140
2,154
136
What about in games? I don't care about 3DMark benchmarks when it differs from gaming results.

The choices we have in the Tigerlake land are very poor.

The 3200 single/dual rank difference alone proves that Iris Xe would scale further with faster RAM. The bandwidth flaw won't go away in real-world gaming, and sure, some games will be more and some less bandwidth-affected. You might search for older AMD DDR3 APU RAM scaling tests like this (only a 2133->2400 scaling test) or this or this, and you will see that real-world gaming improves with better RAM when the graphics is bandwidth-starved; this is not limited to 3dmark.
 
Last edited:

Gideon

Golden Member
Nov 27, 2007
1,637
3,672
136
While this is some old, preliminary info (with no source, it seems), I found something interesting on the WikiChip site about Rocket Lake:

Mainly, they have listed iGPU versions with L4 eDRAM. I wonder if we'll still see them? It would actually make sense even on the desktop to compensate for the half-size L3 cache Rocket Lake has vs Zen 3, and would probably make it faster in games.

I still doubt it will happen, but it certainly would be nice as an option.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Mainly, they have listed iGPU versions with L4 eDRAM. I wonder if we'll still see them? It would actually make sense even on the desktop to compensate for the half-size L3 cache Rocket Lake has vs Zen 3, and would probably make it faster in games.

I'd be skeptical of that info. It still has Rocketlake-U listed. Their pages with future CPU data are often outdated or missing information.

Why would Rocketlake get the eDRAM when it has only 32 execution units, that's merely 50% faster than the 5-year old HD 630?

The 3200 single/dual rank difference alone proves that Iris Xe would scale further up with faster RAM.

I have already given you my reasons why testing in Firestrike might not translate into gaming. Just to recap: the Swift scores lower in Firestrike but beats the Asus in games.
 

RTX

Member
Nov 5, 2020
90
40
61
"*Yes, Intel has a 14++++ node. It’s even in their diagrams. The only product confirmed to be on 14++++ as far as we can tell is the Cooper Lake Xeon Scalable family. "
Base 14nm is Broadwell in Intel's 2020 pic, but the chart in the article says Skylake is also base 14nm? Comet Lake = 14nm+++, right?

Is Rocketlake built on the latest 14nm? If it's built on a later optimization, it should clock higher than the 10700K, right?
drivecurrent.jpg

They didn't update this to include 14nm+++ and 14nm++++, but where would the numbers be vs base 14nm?
 

mikk

Diamond Member
May 15, 2012
4,140
2,154
136
I have already given you my reasons why testing on Firestrike might not translate into gaming. Just to recap: the Swift gets lower in Firestrike but beats the Asus' in games.

Simply because the Swift runs with higher sustained load clock speeds. In 3dmark it is mainly about the GPU, and therefore there is more headroom for the GPU. Real gaming needs more CPU power, which can be a problem when it runs into a thermal or power limit, and the Asus ultrabook isn't doing well on this metric, as we know. In 3dmark the thermal/power bottleneck is somewhat masked. The gaming comparisons with Vega and the MX350 from these DDR4 devices are not looking that good, to be honest; it's clearly not just a Firestrike/Timespy thing. You can be sure it will translate into gaming; if anything, I would expect a bigger difference in real-world gaming. It makes me wonder if Intel's Xe LP memory compression is much worse compared to Vega/Turing.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Is Rocketlake built on the latest 14nm. If it's built on a later optimization, it should clock higher than the 10700K, right?

Clock speed is also heavily dependent on architecture. We don't really know how Sunny/Cypress Cove fares in that regard.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
You can be sure it will translate into gaming, tendentially I would expect a bigger difference in real world gaming. It makes me wonder if Intels Xe LP memory compression is much worse compared to Vega/Turing.

Listen. You are saying the Xe LP is so memory bandwidth bound that it scales nearly 100% with faster memory.

-Yet it manages to be 70% faster under the same conditions as the previous generation, which is about equal to the increase in resources.

-Also, the i7 is noticeably faster than the i5. A device that is nearly 100% bandwidth-bound cannot have a slightly smaller GPU be noticeably slower - it should perform pretty much identically.

-It's also contradictory to state it's much worse at handling memory bandwidth yet still end up being 30-40% faster than Vega.

Intel's Gen 8 GPU had crappy memory compression techniques. Going from GT2 to GT3 gave you 5-10% gains. Another 5-10% if you went for the 28W version. That means even the GT2 version could benefit nicely from more bandwidth. When Gen 9 came, GT2 was bound far less, despite being faster. If Xe LP was so badly bound by BW that it scales nearly linearly, then it should be at best 10% faster than Vega.

Real gaming needs more CPU power which can be a problem when it runs into a thermal or power limit, the Asus ultrabook isn't doing good in this metric as we know.

First, not everything is about thermals. I have repeated many times that there are a dozen knobs that manufacturers can adjust.

Second, Tigerlake further complicates this. The new boost can estimate the workload and set a different PL1 per application.

Third, the Swift has no thermal/power advantage.

The HWiNFO screenshot shows the Swift is pretty much at 17W for all 4 games, while the Zenbook easily goes over 20W.

About the Swift:
In fact, it’s fairly far from it, with the GPU averaging frequencies of only .9 to 1 GHz in our gaming tests, down from the 1.30 GHz peak performance that the platform is theoretically capable of.
Zenbook:
and the GPU running at 1.1 to 1.25 GHz.

The graphs also show it's over a thermally significant power period, so initial Turbo/thermal throttling doesn't apply.
 
Last edited:

Cardyak

Member
Sep 12, 2018
72
159
106
MLID has dropped some leaks he's received over the past few weeks and months.

Caution required as a lot of these products are far out and subject to change, but there are some interesting tidbits in here regardless.

Looks like Ocean Cove is dead and replaced with "Redwood Cove". Whether this is a large increase like Sunny/Golden Cove or a smaller incremental upgrade akin to Willow remains to be seen (if I had to guess, I'd say it's more like Willow; Intel seems to be on a cadence of large increase -> small increase -> large increase -> etc...)

IntelEarlyNov1.png

IntelEarlyNov2.png
 

exquisitechar

Senior member
Apr 18, 2017
657
871
136
MLID has dropped some leaks he's received over the past few weeks and months.

Caution required as a lot of these products are far out and subject to change, but there are some interesting tidbits in here regardless.

Looks like Ocean Cove is dead and replaced with "Redwood Cove". Whether this is a large increase like Sunny/Golden Cove or a smaller incremental upgrade akin to Willow remains to be seen (if I had to guess, I'd say it's more like Willow; Intel seems to be on a cadence of large increase -> small increase -> large increase -> etc...)

View attachment 33149

View attachment 33148
I don't usually take MLID seriously, but since he got Cypress Cove right and this is an Intel leak, I find it interesting that he claims that Ocean Cove was canceled. A user on this very forum claimed the same thing a while ago. I don't think the supposed replacement is going to be another Willow Cove, or at least, I hope not.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
I don't usually take MLID seriously, but since he got Cypress Cove right and this is an Intel leak, I find it interesting that he claims that Ocean Cove was canceled. A user on this very forum claimed the same thing a while ago. I don't think the supposed replacement is going to be another Willow Cove, or at least, I hope not.

If I may, @mikk as well, consider this supporting evidence for my statement ~a year ago.

Ocean Cove is dead. Make of that what you will.

A very reasonable position to take. All I can say is that time will vindicate me. Indeed, if there's a new architecture day this year, it may not even take very long.

The core idea behind Ocean Cove was deemed unnecessary, hence why the team wasn't needed any longer. Future designs may inherit parts of the work on Ocean Cove, but they will not be Ocean Cove. Of course, this was all under BK, so draw your own conclusions about the wisdom of such a move.

Ocean Cove's cancelation wasn't new at the time either. And Redwood Cove is absolutely nothing like it would have been. I will not comment on most of the MLID claims, save for this one:

RWC was designed from the ground up to be node agnostic.

That is unequivocally false.
 
Last edited:

mikk

Diamond Member
May 15, 2012
4,140
2,154
136
Listen. You are saying the Xe LP is so memory bandwidth bound that it scales nearly 100% with faster memory.


Once again, it doesn't scale nearly 100%. You have to compare the same rank configuration. 2x1R x8 DDR4 devices can score around 4400 in Firestrike graphics; a typical dual-rank boost of 7-8% would result in a 4700-4800 graphics score. 2R LPDDR4-4266 can do around 5600 points without throttling. At best there is a 20% increase from DDR4-3200 to LPDDR4-4266, more likely something between 15-20%. This is quite a big increase, but not nearly 100%. The single-rank and x16/mixed x8+x16 configurations on DDR4 devices hurt a lot; the effective bandwidth gap is bigger for them.
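
The back-of-envelope math above can be reproduced directly (the scores and the 7-8% dual-rank uplift are the post's own figures, not new measurements):

```python
# Firestrike graphics scores cited in the post above.
ddr4_1r = 4400                    # 2x1R x8 DDR4-3200
dual_rank_boost = (1.07, 1.08)    # typical 2R uplift range cited
lpddr4 = 5600                     # 2R LPDDR4-4266, no throttling

# Projected dual-rank DDR4-3200 scores: ~4708 to ~4752,
# matching the "4700-4800" estimate.
ddr4_2r = [ddr4_1r * b for b in dual_rank_boost]

# Remaining gain from moving to LPDDR4-4266 at the same rank count.
gains = [lpddr4 / s - 1 for s in ddr4_2r]
print([f"{g:.0%}" for g in gains])  # both land in the 15-20% range
```

So the LPDDR4 advantage over a like-for-like dual-rank DDR4 setup works out to roughly 18-19%, consistent with the "more likely 15-20%" claim rather than anything near 100%.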


First, not everything is about thermals. I repeated many times that there are dozen knobs that manufacturers can adjust.


The fastest real-world gaming Tigerlake device on ultrabookreview (SF314-59) is a sustained 28W load device (in Cinebench at least). This is a big advantage because it means it has a big CPU/GPU clock advantage:

Power allocation makes a big difference here and allows the iGPU to run at its peak frequencies of 1.3 GHz in most of the titles, with fairly solid CPU frequencies as well.

It only has 80 EUs, which is the reason for the relatively low 3dmark scores. The other Acer goes down to sub-20W.
 
Last edited:

RTX

Member
Nov 5, 2020
90
40
61
Clock speed is also heavily dependent on architecture. We don't really know how Sunny/Cypress Cove fairs in that regard.
Tigerlake seemingly clocks fine up to 4.8 GHz in a ULV product, and isn't that just Sunny Cove built on the 10SF node?
 

Ajay

Lifer
Jan 8, 2001
15,451
7,861
136
What is the RWC core being talked about with regard to Meteor Lake? Any info?
 

davideneco

Junior Member
Apr 7, 2020
4
1
41
MLID has dropped some leaks he's received over the past few weeks and months.

Caution required as a lot of these products are far out and subject to change, but there are some interesting tidbits in here regardless.

Looks like Ocean Cove is dead and replaced with "Redwood Cove". Whether this is a large increase like Sunny/Golden Cove or a smaller incremental upgrade akin to Willow remains to be seen (if I had to guess, I'd say it's more like Willow; Intel seems to be on a cadence of large increase -> small increase -> large increase -> etc...)

Ocean Cove was the NGG; it was meant to replace the core architecture introduced in 2006.
 