
Discussion: Intel current and future Lakes & Rapids thread


FriedMoose

Member
Dec 14, 2019
Willow Cove on 14nm should have high power draw, but the vast increase in cache should help regulate thermal density. Clocks will be lower than Skylake but not massively so. 14nm can tolerate really high currents so durability shouldn't be an issue. A 9900k is rated up to 193 amps in Intel's data sheets.

The real challenge is the die size considering the cores are larger, L2 is something like 5x larger, and L3 is 50% larger.
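As a rough sketch of how much that could grow the core footprint (every number below is my own illustrative assumption, not a measured Intel die figure):

```python
# Back-of-envelope estimate of the area growth for a 14nm Willow Cove
# backport. All inputs are illustrative assumptions, not measured figures.

skylake_core_mm2 = 8.7    # assumed Skylake core + private cache area on 14nm
l2_fraction = 0.08        # assumed share of that area spent on L2
l3_fraction = 0.25        # assumed share spent on the L3 slice
logic_fraction = 1.0 - l2_fraction - l3_fraction

# Scale each piece per the post: larger core logic, ~5x L2, ~1.5x L3.
backport_mm2 = skylake_core_mm2 * (
    logic_fraction * 1.2   # assumed 20% larger core logic
    + l2_fraction * 5.0    # L2 roughly 5x larger
    + l3_fraction * 1.5    # L3 roughly 50% larger
)

growth = backport_mm2 / skylake_core_mm2
print(f"estimated core+cache area: {backport_mm2:.1f} mm^2 ({growth:.2f}x Skylake)")
```

With those assumed cache fractions the core ends up roughly 1.6x the size, which is why the die-size question keeps coming up in this thread.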
 

DrMrLordX

Lifer
Apr 27, 2000
RocketLake-S will require some serious cooling at high clocks. That alone will probably prevent 5 GHz from being a reality on those parts.
 

uzzi38

Golden Member
Oct 16, 2019
RocketLake-S will require some serious cooling at high clocks. That alone will probably prevent 5 GHz from being a reality on those parts.
Yeah, if only I were talking about not hitting 5GHz specifically. Intel hitting 5GHz means nothing to me; I'm simply pointing out a fundamental flaw in the line of thinking amongst the weirdos here who think that somehow, 10nm is less - or only slightly more - power efficient than 14nm.

We're talking about a process node 2.7x more dense. I can't believe there are people here who believe that 10nm is actually less than 20% more power efficient than 14nm. That's ridiculous.
 

Richie Rich

Senior member
Jul 28, 2019
So I'm going to assume for the rest of this post that Willow Cove is as efficient as Sunny Cove, even though we know it's not. It's more efficient.

By this point, you're now assuming that Intel's 10nm is only 20% more efficient than their 14nm for the same architecture. You're talking about a 2.7x density increase going from 14nm to 10nm and claiming only a <20% power efficiency improvement.

Bollocks. Complete and utter bollocks.
Why do you keep repeating power efficiency over and over? Nobody cares about power efficiency on the desktop. Just answer the simple question: what will the max clocks of Rocket Lake be?

IMHO even 4.8-4.9 GHz will be enough to keep the ST performance crown for Intel over Zen3 (not speaking of useless zombie Skylake). This is the idea behind this Willow Cove backport. Not a bad move from Intel, though.
 

uzzi38

Golden Member
Oct 16, 2019
Why do you keep repeating power efficiency over and over? Nobody cares about power efficiency on the desktop. Just answer the simple question: what will the max clocks of Rocket Lake be?

IMHO even 4.8-4.9 GHz will be enough to keep the ST performance crown for Intel over Zen3 (not speaking of useless zombie Skylake). This is the idea behind this Willow Cove backport. Not a bad move from Intel, though.
I already did. It would not surpass Kaby Lake in terms of clocks, even with 200W of power.

Actually, I've decided to revise my original number to account for better-than-expected CML-U/S silicon. I'd instead say 4.4GHz SC, 4.1GHz all-core.
 

FriedMoose

Member
Dec 14, 2019
RocketLake-S will require some serious cooling at high clocks. That alone will probably prevent 5 GHz from being a reality on those parts.
I suspect it won't be as bad as many people in this thread are making it out to be. Thermal density is the real challenge with cooling modern CPUs. Rocket Lake should have a massive die size increase due to the much larger caches and additional instructions.
 

Exist50

Member
Aug 18, 2016
Wonder how many times I'll have to say it at this point.
Willow Cove in its entirety backported to 14nm would draw meme-levels of power to sustain even OG Skylake clocks. You can very much forget being able to even touch 5GHz.

Ice Lake-U is barely more efficient than Comet Lake-U once whatever issue it has at low power (<25W) is gone. And I mean barely. https://cdn.discordapp.com/attachments/476511857310564393/638725335306731541/unknown-7.png

Why is everyone suddenly assuming that Intel would be able to extract 20% more performance at the same power out of Willow Cove on 14nm compared to Comet Lake, and that it would, by extension, also be 20% more power efficient than Sunny Cove on 10nm?

Are you people actually insinuating that the same architecture on both 10nm and 14nm would be just as power efficient?

God, I feel like I'm going mad at the number of people who keep going on about this 5GHz 14nm Willow Cove claim. For just two seconds, will someone just sit down and think about what they're suggesting? Because you're suggesting that 14nm and 10nm are as power efficient as one another.
You keep claiming stuff like this over and over again, but with no source or fundamental logic behind your claim. Unless you somehow think that the difference in gate delay or Cdyn between 14nm and 10nm is an order of magnitude or two higher than Ice Lake implies, there's no sense to it. But from everything we've seen, 10nm is a minor improvement over 14nm's current state in everything but density.
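Since Cdyn came up: the standard first-order relation is P ≈ Cdyn · V² · f, so even modest capacitance and voltage differences between nodes compound quickly. A quick sketch with purely illustrative operating points (my assumptions, not Intel data):

```python
# First-order CMOS dynamic power: P ~ Cdyn * V^2 * f.
# Both operating points below are illustrative assumptions, not Intel data.

def dynamic_power(cdyn: float, volts: float, freq_ghz: float) -> float:
    """Relative dynamic power in arbitrary units."""
    return cdyn * volts**2 * freq_ghz

p_14nm = dynamic_power(cdyn=1.00, volts=1.20, freq_ghz=4.5)  # assumed 14nm point
p_10nm = dynamic_power(cdyn=0.85, volts=1.05, freq_ghz=4.5)  # assumed 10nm point

ratio = p_10nm / p_14nm
print(f"10nm/14nm power at iso-frequency: {ratio:.2f}")
```

With a 15% Cdyn reduction and 150mV lower Vdd, you're already at ~35% less power at the same clock; the point is that the V² term does most of the work, whichever way the real numbers fall.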
 

Richie Rich

Senior member
Jul 28, 2019
I already did. It would not surpass Kaby Lake in terms of clocks, even with 200W of power.

Actually, I've decided to revise my original number to account for better-than-expected CML-U/S silicon. I'd instead say 4.4GHz SC, 4.1GHz all-core.
Nobody says Rocket Lake will surpass Kaby Lake in terms of clocks. However, in terms of absolute performance, Rocket Lake at 4.9 GHz will be as powerful as Kaby Lake at 5.88 GHz (assuming +20% IPC). RCL will be much faster than KBL and also faster than Zen3. And that's the point.
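For what it's worth, the IPC x clock arithmetic checks out (taking the +20% IPC figure as an assumption):

```python
# Equivalent-clock check: at +20% IPC, what Kaby Lake clock matches a
# Rocket Lake part in single-thread throughput? (+20% IPC is an assumption.)
rkl_clock_ghz = 4.9
ipc_uplift = 1.20

equivalent_kbl_ghz = rkl_clock_ghz * ipc_uplift
print(f"Kaby Lake would need {equivalent_kbl_ghz:.2f} GHz to match")
```

4.9 x 1.2 = 5.88 GHz, so the whole argument hinges on the IPC uplift and on Rocket Lake actually reaching 4.9 GHz, not on the multiplication.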

The only question is the +50%(?) die size increase (ICL was a +38% transistor increase). Intel will produce even fewer CPUs from the same number of wafers. However, as a temporary solution until 7nm production (Golden Cove) ramps, it's not a bad move (Intel has enough resources to afford it; AMD does not).
 

uzzi38

Golden Member
Oct 16, 2019
You keep claiming stuff like this over and over again, but with no source or fundamental logic behind your claim. Unless you somehow think that the difference in gate delay or Cdyn between 14nm and 10nm is an order of magnitude or two higher than Ice Lake implies, there's no sense to it. But from everything we've seen, 10nm is a minor improvement over 14nm's current state in everything but density.
The Ice Lake numbers are a little worse than what 10nm is really capable of. ICL-U does indeed have capacitance issues of its own, specifically below 25W, but that's an architecture problem. Or at least, I hope it is; then again, the Tiger Lake benchmark results that are getting leaked aren't exactly very positive on that end.

Specifically on the last point though, you're completely ignoring the effect architecture has on power efficiency. 10nm does actually provide some pretty darn decent power efficiency gains; the problem is that Ice Lake neutered them. It shouldn't be a surprise: Sunny Cove cores are huge, at 6.91 mm^2 compared to Skylake's 8.73 mm^2, and that's comparing 10nm against 14nm.
 

uzzi38

Golden Member
Oct 16, 2019
Nobody says Rocket Lake will surpass Kaby Lake in terms of clocks. However, in terms of absolute performance, Rocket Lake at 4.9 GHz will be as powerful as Kaby Lake at 5.88 GHz (assuming +20% IPC). RCL will be much faster than KBL and also faster than Zen3. And that's the point.
And that's my point too. You won't be getting it clocked to 4.9GHz either. If it could clock to 4.9GHz, that would be fantastic. Unfortunately though, things aren't as simple as '14nm clocks good, 10nm clocks bad'. Maybe the desktop market would be more interesting in a year's time if they were, but for the time being, you can count on AMD's relentless execution on IPC being the only thing capable of keeping x86 together.
 

lobz

Golden Member
Feb 10, 2017
Yeah, if only I were talking about not hitting 5GHz specifically. Intel hitting 5GHz means nothing to me; I'm simply pointing out a fundamental flaw in the line of thinking amongst the weirdos here who think that somehow, 10nm is less - or only slightly more - power efficient than 14nm.

We're talking about a process node 2.7x more dense. I can't believe there are people here who believe that 10nm is actually less than 20% more power efficient than 14nm. That's ridiculous.
I don't think that anyone here would think that 10nm in itself would be less efficient. What I'm sure about is that Ice Lake on the current 10nm node is a power hog compared to what it should be, and that on 10nm achieving the highest frequencies at a reasonable power consumption and heat will be much more challenging than it is on 14nm.
 

scannall

Golden Member
Jan 1, 2012
We're talking about a process node 2.7x more dense. I can't believe there are people here who believe that 10nm is actually less than 20% more power efficient than 14nm. That's ridiculous.
2.7x density was the original goal for 10nm. They had to relax that to get any parts out the door at all. But they haven't been at all forthcoming about just how much.
 
  • Like
Reactions: lobz

lobz

Golden Member
Feb 10, 2017
Nobody says Rocket Lake will surpass Kaby Lake in terms of clocks. However, in terms of absolute performance, Rocket Lake at 4.9 GHz will be as powerful as Kaby Lake at 5.88 GHz (assuming +20% IPC). RCL will be much faster than KBL and also faster than Zen3. And that's the point.
Man, you draw these conclusions using donkey logic as easily as when you compare 2.5 GHz ARM CPUs with SKL and Zen 2 and say the A13 has 82% higher IPC than Skylake and Zen 2.

What is donkey logic? It's like covering your eyes and saying: if I can't see the lion, the lion can't see me either.
 

IntelUser2000

Elite Member
Oct 14, 2003
And that's my point too. You won't be getting it clocked to 4.9GHz either. If it could clock to 4.9GHz, that would be fantastic. Unfortunately though, things aren't as simple as '14nm clocks good, 10nm clocks bad'.
I can agree on this, because 14nm clocks are reached after 4 years of refinement on the same uarch and same process.

Look how long it took them to actually reach 5GHz on an overclock. The 4790K claimed it and failed. Same with the 6700K, and then the 7700K. Only with the 8700K did it start being viable, and even then it took one more iteration.

Rocket Lake changes the cores, which will have different characteristics and will no longer benefit from the years of tiny modifications they made to reach 5GHz.

2.7x density was the original goal for 10nm. They had to relax that to get any parts out the door at all. But they haven't been at all forthcoming about just how much.
~2x for CPU, ~2.5-2.6x for iGPU.
 

Thunder 57

Golden Member
Aug 19, 2007
Man, you draw these conclusions using donkey logic as easily as when you compare 2.5 GHz ARM CPUs with SKL and Zen 2 and say the A13 has 82% higher IPC than Skylake and Zen 2.

What is donkey logic? It's like covering your eyes and saying: if I can't see the lion, the lion can't see me either.
Thank you! If you believe what Richie Rich has to say, you may as well believe AMD and Intel are run by buffoons they pulled out of a random three ring circus. After all, they are just sitting on massive performance improvements!
 

Exist50

Member
Aug 18, 2016
Thank you! If you believe what Richie Rich has to say, you may as well believe AMD and Intel are run by buffoons they pulled out of a random three ring circus. After all, they are just sitting on massive performance improvements!
I mean, Apple inarguably has higher IPC, and much better power efficiency than either AMD or Intel. They reach desktop performance levels from mobile chips. You're doing them a disservice to write them off.
 

IntelUser2000

Elite Member
Oct 14, 2003
I mean, Apple inarguably has higher IPC, and much better power efficiency than either AMD or Intel. They reach desktop performance levels from mobile chips. You're doing them a disservice to write them off.
Since the regular ARM vendors are behind both on the GPU and the CPU side, I have to think maybe there's a big advantage to being vertically integrated as Apple is.

Not being a merchant chip vendor and having control of the OS means they aren't bound by certain limits.

For example, Haswell needed Windows 8 to take full advantage of its battery life gains. When Skylake introduced Speed Shift, they had to wait for a Windows 10 update to get support. This synchronization between vendors must add up over time.

Beyond Apple's execution on the hardware side being fantastic, maybe this tight cooperation between the two sides lets them simplify and improve the hardware in ways that are out of reach for merchant chip vendors like Qualcomm and Intel.

The 15mm2 GPU in the Apple A13 performs like the 41mm2 Gen 11 GPU in Ice Lake while fitting in a smartphone form factor and TDP!
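Taking that comparison at face value (the equal-performance claim is from the post above, not my measurement), the implied perf-per-area gap is:

```python
# Implied GPU perf-per-area gap, assuming equal performance as stated above.
a13_gpu_mm2 = 15.0     # A13 GPU area from the post
gen11_gpu_mm2 = 41.0   # Ice Lake Gen 11 GPU area from the post

area_ratio = gen11_gpu_mm2 / a13_gpu_mm2
print(f"A13 GPU: similar performance in ~{area_ratio:.1f}x less area")
```

Roughly 2.7x the performance per mm^2, and on top of that the A13 does it within a phone's power budget.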
 

DrMrLordX

Lifer
Apr 27, 2000
IMHO even 4.8-4.9 GHz will be enough to keep the ST performance crown for Intel over Zen3 (not speaking of useless zombie Skylake).
Cooling Rocket Lake-S @ 4.9 GHz would be more than most consumers would want to deal with. Custom water would be required, thanks to the power draw/heat output.

Thermal density is the real challenge with cooling modern CPUs. Rocket Lake should have a massive die size increase due to the much larger cache and additional instructions.
14nm has its own issues when trying to run huge dies at high clocks - look at Skylake-X and Cascade Lake-X. Have you seen how much power those things can draw above 4.5 GHz? It's massive. I imagine that an 8c Rocket Lake-S (there probably won't be a 10c part) would be pushing out a lot of heat.
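That blow-up near the top of the frequency range falls out of P ~ C·V²·f once voltage has to climb with frequency, which makes power grow roughly with f³. A toy model (the V/f slope here is an assumption for illustration, not measured Skylake-X data):

```python
# Toy model of why power explodes at high clocks: P ~ C * V^2 * f, and near
# the top of the V/f curve voltage rises with frequency, so P grows ~f^3.
# The voltage/frequency slope below is an assumption, not measured data.

def rel_power(freq_ghz: float, base_ghz: float = 4.0,
              base_v: float = 1.00, v_per_ghz: float = 0.15) -> float:
    volts = base_v + v_per_ghz * (freq_ghz - base_ghz)  # assumed V/f curve
    return (volts / base_v) ** 2 * (freq_ghz / base_ghz)

for f in (4.0, 4.5, 5.0):
    print(f"{f:.1f} GHz -> {rel_power(f):.2f}x power")
```

With this made-up slope, the last 500 MHz from 4.5 to 5.0 GHz costs disproportionately more power than the step before it, which matches the SKX/CLX overclocking experience.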
 

Thunder 57

Golden Member
Aug 19, 2007
I mean, Apple inarguably has higher IPC, and much better power efficiency than either AMD or Intel. They reach desktop performance levels from mobile chips. You're doing them a disservice to write them off.
And yet they have largely failed to break out of low-power devices. When I can do real work with one, then it might be interesting. I'm not writing them off. I'm just tired of hearing how ARM is going to take over the world year after year, yet it never happens.
 

A///

Senior member
Feb 24, 2017
I'm willing to partially hold my breath until 10th-gen parts are tested en masse. Velocity Boost is interesting, but I don't expect much from it, especially at the high end.
 

DrMrLordX

Lifer
Apr 27, 2000
And yet they have largely failed to break out of low-power devices. When I can do real work with one, then it might be interesting.
I think the SQ1 (and really, as far down as the lowly Snapdragon 855, if you can keep clocks up) is the first "general" ARM chip (read: not one of Apple's designs locked up in their software paradigm) that you could use as an everyday tool. @Thala was nice enough to run some Java software of mine on his SQ1 and managed a score about 47% higher in fp throughput versus my old heavily-overclocked A10-7700K. My Snapdragon 855+ got a score that was ~41% faster. That's not fantastic - Kaveri wasn't an amazing performer, and I don't think you'd want to do "real work" on one of those today. But I think that, with some honest porting of software to Windows-on-ARM, the SQ1 products could deliver.
 

Exist50

Member
Aug 18, 2016
And yet they have largely failed to break out of low-power devices. When I can do real work with one, then it might be interesting. I'm not writing them off. I'm just tired of hearing how ARM is going to take over the world year after year, yet it never happens.
They haven't bothered to try breaking out of low-power devices; it's not that they've failed to. And that aside, they clearly don't even need to go chasing tens of watts per core to be performance-competitive with those chips from Intel and AMD. The numbers speak for themselves, really.
 

beginner99

Diamond Member
Jun 2, 2009
Since the regular ARM vendors are behind both on the GPU and the CPU side, I have to think maybe there's a big advantage to being vertically integrated as Apple is.
Of course there is an advantage, but the most important part, which you did not mention, is that being vertically integrated means you can decide and control end-user (consumer) pricing. If you design a chip/SoC, you know the cheapest device it will sell in and can design it accordingly.
Why does that matter? Because if the cheapest device it goes into costs $800, you can afford to make the chip bigger and invest more in R&D.
Your GPU example contradicts this, but let's be honest: Intel's GPU efficiency has been known to be pretty bad (both die-area- and power-wise).

EDIT:

Also, the Apple SoC exists in one version. It doesn't need to scale from 1 GHz to 5 GHz like Intel Core CPUs do. That makes it much easier to create an efficient design, because you have exactly one target and can optimize for it with zero trade-offs.
 

IntelUser2000

Elite Member
Oct 14, 2003
@beginner99 AMD is also behind on the GPU side. Compared to the ARM vendors they are closer, but Apple's advantages are phenomenal.

Or is it that AMD's and Intel's execution has been lacking at best? That's also possible. Intel should have had Golden Cove cores out by now! The argument that the advantage is due to ISA starts to dissolve when you see that ARM chips have a lead in CPU, GPU, and IO.

I'm glad Tigerlake supports LPDDR5, but it merely catches up with them. At one time, they were the one to lead the industry with memory and IO standards.

on his SQ1 and managed a score about 47% higher in fp throughput versus my old heavily-overclocked A10-7700K. My Snapdragon 855+ got a score that was ~41% faster.
The ARM vendors never fell completely flat on their faces like AMD and Intel did. Sure, they had some less-than-optimal cores, but each was still faster than the previous one.

Imagine if Bulldozer was a genuine advancement for AMD. Or if Intel delivered 10nm in 2016. They both set themselves back YEARS.
 

Thunder 57

Golden Member
Aug 19, 2007
They haven't bothered to try breaking out from low powered devices, not failed to. And that aside, they clearly don't even need to go chasing 10s of watts per core to be performance competitive with those chips from Intel and AMD. The numbers speak for themselves, really.
So they just don't want the more lucrative market? Meh, not buying it. Again, come back to me when I can do real work on an ARM CPU.
 
