Discussion Intel current and future Lakes & Rapids thread


Panino Manino

Senior member
Jan 28, 2017
813
1,010
136
And discourage a dGPU sale? ;)

AMD appears to be going the other way; putting a basic IGP on the IO Die and calling it a day. Maybe at some point there will be an actual IGP chiplet.

Wouldn't it be a good idea to kill the bottom end of the GPU line to increase CPU sales? It's just swapping one product for another, and in turn it would lead to increased motherboard sales.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Last time both manufacturers put a large GPU on a CPU, it didn't work out for them. Look at how the 5775C fared, and how Intel abandoned the approach with Skylake. How about AMD in the Llano days?

Yeah, sure, they can put in a large GPU, but would we pay extra for it? We want all the extra performance for practically free, right? The same price as the current anemic iGPUs? But they won't do that.

The alternative is pairing the highest-end iGPU with a Core i3/Ryzen 3, so you save on the "useless" extra CPU cores and get a more balanced part. I just don't see this happening either. Do you want to be stuck with the iGPU's performance, or eventually upgrade to a much faster dGPU you've been waiting for?


Given how much Intel wanted for the 5775C, it wasn't exactly much of a value for the money. Llano wasn't exactly setting the CPU core performance world on fire either.

The environment is also quite different at the moment. Both are competitive with core performance. There's a bunch more memory bandwidth to play with. No one can even find most video cards in the first place.

I also don't expect to get the GPU at a discount, but, conversely, I also don't expect to pay the same premium for an iGPU that I would pay for a whole video card, when they aren't having to foot the bill for a PCB, integrated VRM, VRAM, bracket, connectors, etc. If the 5800X were selling for $349, I wouldn't pay more than $399 for one that performed like an RX 560-570.
 

mikk

Diamond Member
May 15, 2012
4,112
2,106
136
The driver team is most likely mobilized to help with dGPU drivers; there are probably no resources to spare for the sideshow that is the iGPU at the moment. The good thing is that it's the same architecture, so game fixes/optimizations will trickle back to the iGPU drivers.

I don't think Gen12 HPG is the reason, because in the end it's all Xe-based. There are surely small architectural differences in Gen12 HPG, but the main driver base should be the same. Also, Gen12 HPG won't be ready before Q4, so delaying TGL/RKL drivers doesn't make sense to me. We can only speculate, but I could believe they are not happy with the driver base and need more time for a bigger restructuring of some driver parts. The last official driver is dated February 18.


OK so this was my experience running Batman Arkham Knight - it would not detect the Intel drivers at all, and the game runs using Microsoft's Basic Display Driver. I contacted Intel support and this was their response:

You happy now?

How come there are Iris Xe gameplay videos of Batman Arkham Knight on YouTube? And how does that prove your 30% claim? You were claiming the UHD 750 runs only 30% faster than the UHD 630, even though almost all benchmarks from several reviewers show 50% or slightly more. Feel free to prove it with some more benchmarks; I may not have found all the UHD 750 reviews. Considering that the gap will most likely grow over time (the old 9316 driver already seems to improve certain scenarios over the even older press driver), 50% feels about right.

edit: another test with an average of 50%+ higher fps:

 
  • Like
Reactions: Tarkin77

Asterox

Golden Member
May 15, 2012
1,026
1,775
136
Interesting details, but it is expected for Intel 10nm, no doubt.

IPC is 20% higher (the average is around 12%), but single-core turbo and all-core boost clocks are 500 MHz lower. This is a direct comparison: 28 cores on 14nm vs. 28 Ice Lake cores on Intel 10nm.



2021-04-10_134724.jpg

What can we expect from the new Alder Lake on 10nm? The same comparison, i.e. 8 big Alder Lake cores on 10nm vs. the CPU clocks of Rocket Lake's 8 cores on 14nm.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
@Asterox

The Xeon 8380 is on 10nm+, and its clocks show. Alder Lake will be on 10SFE which should be able to reach clocks at least as high as 10SF/4c Tiger Lake-U. Whether or not the thermals will be pleasant at those clocks is another matter altogether.
 

SAAA

Senior member
May 14, 2014
541
126
116
What can we expect from the new Alder Lake on 10nm? The same comparison, i.e. 8 big Alder Lake cores on 10nm vs. the CPU clocks of Rocket Lake's 8 cores on 14nm.

I think we already got the clock regression with Rocket Lake coming from Comet Lake (all-core speeds at least), so Alder won't fall as much as presented here. This comparison is still first-iteration 10nm vs. the latest 14nm servers, not 10nm SF; the same process used in Tiger Lake got a large boost over Ice Lake.
I don't see Alder failing to reach 5GHz, for example; maybe all-core clocks will see a slight regression since there are 8 small cores to power too.
 
  • Like
Reactions: Tlh97 and mikk

Asterox

Golden Member
May 15, 2012
1,026
1,775
136
@Asterox

The Xeon 8380 is on 10nm+, and its clocks show. Alder Lake will be on 10SFE which should be able to reach clocks at least as high as 10SF/4c Tiger Lake-U. Whether or not the thermals will be pleasant at those clocks is another matter altogether.

That is the point: can you imagine a hot, not very power-efficient Alder Lake on 10nm? That would be another big flop on 10nm if Alder Lake ends up worse in the key metrics vs. Zen 3/Zen 3+ on TSMC 7nm.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
That is the point: can you imagine a hot, not very power-efficient Alder Lake on 10nm? That would be another big flop on 10nm if Alder Lake ends up worse in the key metrics vs. Zen 3 on TSMC 7nm.

Tiger Lake-U reaches clocks in the 4.7+ GHz range in 4c configs with a listed TDP of 28W or so, if I recall correctly, and an all-core turbo of 4.3 GHz. Not counting overhead for the ring (which usually isn't too bad), 8c Tiger Lake @ 4.3 GHz would be around 55-60W TDP. Let's say sub-65W (note I am listing TDP and not actual power consumption). 10SFE will presumably shift the voltage/clockspeed curve to help Intel squeeze out at least a few hundred more MHz in the same power envelope. That leaves the relatively unknown question of how power-hungry Golden Cove will be compared to Willow Cove.

Even if Intel completely bones Willow Cove -> Golden Cove such that the +20% IPC comes at a cost of +20% power consumption @ isoclocks AND if 10SFE offers no real improvement in voltage/clockspeed curve, 8c Alder Lake (no small cores) @ 4.3 GHz all-core would check in at a TDP of around 72W (which is a TDP listing Intel would never use; they'd label it 65W or something). At that point you have to guess at how much more power said hypothetical 8c Alder Lake would have to burn to "catch up" to the 5800x with its nominal all-core turbo of 4.7 GHz, but I think Intel could sneak that in under a 105W TDP (or an actual power draw of 142W, which is what the 5800x can consume fairly reliably).
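The back-of-the-envelope TDP arithmetic above can be sketched as follows; all inputs (the 28W/4c baseline, the 60W upper estimate, the +20% power factor) are the post's assumptions, not measured values:

```python
def scale_tdp(base_tdp_w, base_cores, target_cores, power_factor=1.0):
    """Scale a core-bound TDP linearly with core count, then apply an
    architectural power factor (e.g. 1.2 for +20% power at iso-clocks)."""
    return base_tdp_w * (target_cores / base_cores) * power_factor

# 4c Tiger Lake-U at ~4.3 GHz all-core, listed 28 W TDP -> 8c estimate
tdp_8c = scale_tdp(28, 4, 8)              # 56.0 W, the "55-60 W" ballpark

# Worst case: Golden Cove burns +20% power at iso-clocks, applied to the
# 60 W upper estimate for the hypothetical 8c part
tdp_8c_worst = scale_tdp(60, 8, 8, 1.2)   # 72.0 W, the "around 72W" figure
```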

So, I do not think the overall efficiency of Alder Lake would be too terribly awful unless Intel manages to screw something up with Golden Cove and/or 10SFE that I haven't taken into account. The issue with temps to which I was referring might be more related to hotspots, if such things become problematic. Which they might not (at least not to the extent they do on N7).
 
  • Like
Reactions: Tlh97

Shivansps

Diamond Member
Sep 11, 2013
3,835
1,514
136
Last time both manufacturers put a large GPU on a CPU, it didn't work out for them. Look at how the 5775C fared, and how Intel abandoned the approach with Skylake. How about AMD in the Llano days?

Yeah, sure, they can put in a large GPU, but would we pay extra for it? We want all the extra performance for practically free, right? The same price as the current anemic iGPUs? But they won't do that.

The alternative is pairing the highest-end iGPU with a Core i3/Ryzen 3, so you save on the "useless" extra CPU cores and get a more balanced part. I just don't see this happening either. Do you want to be stuck with the iGPU's performance, or eventually upgrade to a much faster dGPU you've been waiting for?

Prices are a very subjective thing, simply because the final price depends on a lot more than just costs; in the end they will sell at whatever Intel/AMD wants.
That said, the i7-5775C wasn't much more expensive than the i7-4790K, but those Broadwells were in really low availability. And the number of people, at that time, who wanted to pay for an i7 to game on an iGPU, when you could get a GTX 950/GTX 960/GTX 970/R7 370, etc. at MSRP, was nil.

If they came out today with an i7 that has the iGPU power of a GTX 1650/RX 570 without increasing prices too much, things would be different.

And I'm pretty sure it will happen next year with DDR5 on AMD; with Intel it depends on how big the iGPU is on Alder Lake. But I'm pretty sure Intel will beat AMD to DDR5, and as a result they will gain the iGPU lead, at least for a while.
 

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
That is the point: can you imagine a hot, not very power-efficient Alder Lake on 10nm? That would be another big flop on 10nm if Alder Lake ends up worse in the key metrics vs. Zen 3/Zen 3+ on TSMC 7nm.
Intel will still sell all they can make, so it won't be a flop. Pushing CPU clockspeeds past the point where the added benefits outweigh the increase in power consumption doesn't seem to be an issue for Intel on desktop CPUs. For OEMs making systems for business, they just clock them down, reduce core count and presto: a nice, efficient CPU with virtually noiseless cooling.
 
  • Like
Reactions: Tlh97 and IEC

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
Ice Lake is still Ice Lake. It topped out at 3.9GHz and it still does. Intel has fabs available now to produce server-based Ice Lake parts, and high clocks aren't as meaningful for servers as high core count and efficiency are. Would SF or ESF be better? Of course, but that process is needed for Tiger Lake mobile, where clocks are 1GHz better.

No surprises here. If AMD can't supply all the parts needed, and they can't, then Intel will sell them, and they will.

Eventually these high core count parts will move to SF as the desktop parts and mobile move to ESF. Yes, yes, it should have all happened faster but this is where we (they) are.

I don't get the shock and surprise. Like I wrote above, it's still Ice Lake, just with more cores.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
If they came out today with an i7 that has the iGPU power of a GTX 1650/RX 570 without increasing prices too much, things would be different.

But I'm saying that won't happen. The RX 570 performs at nearly 3x the Iris Xe. So they'd have to spend something like 120mm2 of die area, plus they'd need to add in the on-package memory. And it'll use power like one too.

And you guys saw what happened with Kabylake-G.

I don't see that kind of performance happening even next year. I can see RX 560 happening though.

The 5775C was actually quite cheap for what you got, since it had the massive iGPU that took up half the die plus the eDRAM.

If the 5800x was selling for $349, I wouldn't pay more than $399 for one that performed like an rx560-570.

What you need to realize is that it'll either balloon the die size, which increases production costs by more than the percentage increase in area, or it will be on a separate die, possibly needing a separate, larger package.

Even RX 560 clobbers the top end iGPU today. The 32EU iGPU in Rocketlake adds $25 MSRP. I don't see a 560 level performance for anything under $75 if it was available today.
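The "$75" guess reads like a linear price-per-EU extrapolation from the Rocket Lake figure; a quick sketch under that assumption (the ~96 EU sizing for RX 560-class performance is my hypothetical, roughly 3x the 32EU part, not something the post states):

```python
def igpu_price_adder(eus, dollars_per_32_eus=25):
    """Hypothetical linear price adder: ~$25 MSRP per 32 EUs, extrapolated
    from the Rocket Lake 32EU figure mentioned in the post."""
    return eus / 32 * dollars_per_32_eus

# Assumed ~96 EUs (about 3x the 32EU iGPU) for RX 560-class performance
price_96eu = igpu_price_adder(96)  # 75.0 -> the "$75" floor in the post
```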
 

bumble81

Junior Member
Feb 14, 2021
7
1
16

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
The top turbo bin is 4.4Ghz. It's likely that they sell faster parts off roadmap.


Yes, that's Cooper Lake and 14nm.

But in full disclosure the 1068NG7 was just released with a turbo of 4.1GHz.
But still, as I wrote Ice Lake is Ice Lake. Good for Intel if they tweaked a new stepping and got another 200MHz from the 1065G7.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
The RX 560 is built on 14LPP and has 14-16 Polaris CUs at 1100-1200 MHz. Cezanne is built on N7 and already has 8 CUs running at up to 2000 MHz. If everything else on the die were kept the same, and you just doubled the number of CUs, converted the DRAM controllers to DDR5, and doubled the L3 cache to 32MB, without even bothering to update the CUs to RDNA2, you would only expand the die by about 40% at most. With decently specced DDR5 memory in "dual" channel, it would easily wipe the floor with the RX 560 while staying in the same power envelope as the 5900/5950.

If the CUs were updated to RDNA2, and reduced to 12 CUs, the effective memory bandwidth would improve due to better compression and efficiency, and overall performance would push towards the RX 570. That's more than enough for decent 1080p gaming. It would also improve power efficiency. Yes, there would be a larger die. Yes, it would be more expensive to produce, but it wouldn't cost $50 more. If they need more dies per wafer, they can always make a half-size Cezanne to add lower-end volume.

As for Kaby Lake-G, it was crazy expensive for what it was, because it was a three-chip module that included an HBM stack back when HBM was still expensive, and it had thermal issues because of all the 14nm chips present. That was an exercise in futility and it bombed in the market as a result. The proposed chip is of manageable size and won't cost an arm and a leg to make.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
If the CUs were updated to RDNA2, and reduced to 12 CUs, the effective memory bandwidth would improve due to better compression and efficiency, and overall performance would push towards the RX 570. That's more than enough for decent 1080p gaming. It would also improve power efficiency. Yes, there would be a larger die. Yes, it would be more expensive to produce, but it wouldn't cost $50 more. If they need more dies per wafer, they can always make a half-size Cezanne to add lower-end volume.

RDNA2 is only 35% better than Vega per CU. The RX 570 is a 32 CU part with GDDR5 bandwidth of 224GB/s, equivalent to dual-channel DDR clocked at 14GHz!

Even the RX 560 is literally a doubled-up Vega 8. Its bandwidth, which isn't even shared, is equal to dual-channel DDR clocked at 7GHz, and initial DDR5 parts are going to be DDR5-4800.
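The DDR-equivalence figures here follow from bandwidth = bus width x transfer rate; a quick sketch assuming a 128-bit "dual channel" DDR bus:

```python
def ddr_rate_for_bandwidth(bandwidth_gbs, bus_bits=128):
    """Effective DDR transfer rate (in GT/s) needed to match a given
    bandwidth on a bus of the given width (dual-channel DDR = 128 bits)."""
    return bandwidth_gbs / (bus_bits / 8)

rx570_equiv = ddr_rate_for_bandwidth(224)   # 14.0 -> "DDR clocked at 14GHz"
rx560_equiv = ddr_rate_for_bandwidth(112)   #  7.0 -> "DDR clocked at 7GHz"
ddr5_4800_gbs = 4.8 * (128 / 8)             # 76.8 GB/s for dual-channel DDR5-4800
```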

As for Kaby Lake-G, it was crazy expensive for what it was, because it was a three-chip module that included an HBM stack back when HBM was still expensive, and it had thermal issues because of all the 14nm chips present.

Yes, but now imagine you make it two chips, with one being HBM. While you might save a bit on packaging, the die size would go from 120mm2 on Kaby-G to 200mm2. The number of chips that can be produced on a wafer doesn't shrink linearly, but with the square of the die-size ratio, so the cost to produce a 200mm2 die is actually about 2.8x that of the 120mm2 one.
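The 2.8x figure follows from the square-of-the-area-ratio rule of thumb the post uses; a minimal sketch (real foundry cost models also depend on wafer size, defect density, and yield curves):

```python
def relative_die_cost(new_area_mm2, old_area_mm2):
    """Relative production cost under the post's rule of thumb that cost
    scales with the square of the die-size ratio."""
    return (new_area_mm2 / old_area_mm2) ** 2

cost_ratio = relative_die_cost(200, 120)  # ~2.78, the "2.8x" in the post
```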
 
  • Like
Reactions: lightmanek

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
RDNA2 is only 35% better than Vega per CU. The RX 570 is a 32 CU part with GDDR5 bandwidth of 224GB/s, equivalent to dual-channel DDR clocked at 14GHz!

Even the RX 560 is literally a doubled-up Vega 8. Its bandwidth, which isn't even shared, is equal to dual-channel DDR clocked at 7GHz, and initial DDR5 parts are going to be DDR5-4800.



Yes, but now imagine you make it two chips, with one being HBM. While you might save a bit on packaging, the die size would go from 120mm2 on Kaby-G to 200mm2. The number of chips that can be produced on a wafer doesn't shrink linearly, but with the square of the die-size ratio, so the cost to produce a 200mm2 die is actually about 2.8x that of the 120mm2 one.

Vega 8, as it exists on Cezanne, is clocked MUCH higher than the Polaris-based chips in the 560/570 were, to the tune of 50+% higher. 12 RDNA2 CUs clocked 50+% higher will have similar performance to 24 Vega CUs at the lower clocks the 560 and 570 ran at. At introduction, "dual" channel DDR5 will provide at least 76 GB/s of throughput, and it is expected to scale to 102 GB/s. Modern APUs barely use 10% of their available bandwidth for CPU data while gaming in most games, and it seems like DDR5 will further shrink that amount, not to mention larger caches. The RX 560 barely had more than 110 GB/s of bandwidth, and RDNA2 is much more bandwidth-efficient. The proposed larger-cache APU with 12 RDNA2 CUs should easily best the RX 560 by a large margin.
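The CU-equivalence arithmetic implied here can be cross-checked against the previous post's 35%-per-CU figure; a quick sketch (both factors are the posters' estimates, not measurements):

```python
# Cross-check of the CU-equivalence claim using both posters' numbers:
# ~35% more performance per CU for RDNA2 vs. Vega (previous post), and
# ~50% higher clocks for Cezanne-era CUs vs. Polaris-era parts (this post).

rdna2_cus = 12
clock_factor = 1.5       # 50+% higher clocks
per_cu_factor = 1.35     # RDNA2 vs. Vega per-CU performance

vega_cu_equivalent = rdna2_cus * clock_factor * per_cu_factor  # ~24.3
# -> roughly the "similar performance to 24 Vega CUs" stated above
```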

The RX 570 won't be drastically faster, though it should have more raw compute throughput. Having a 32MB shared L3 should help with the bandwidth difference.

I don't see a point in using HBM for an APU. The DDR5 memory setup for the processor should be enough to keep it useful. Plus, it makes for a drastically higher pin-out. At worst, it would be cheaper to just include a third DDR5 "channel", maybe with soldered modules on the laptop main board.
 
  • Like
Reactions: Tlh97

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
this thread has the ability to turn into AMD iGPU discussion for the Nth time....
Yes, you are absolutely right.
The RDNA2 iGPU discussion is interesting, I don't disagree, but this is simply not the right place for it. Rembrandt has its own thread; discuss it there!
 

tamz_msc

Diamond Member
Jan 5, 2017
3,710
3,554
136
How come there are Iris Xe gameplay videos of Batman Arkham Knight on YouTube?
I'll trust my own experience over some random YT video with <100 views, thank you.
You were claiming UHD 750 runs only 30% faster than UHD 630 despite almost all benchmarks from several reviewer shows 50% or slightly more?
Check the Gamers Nexus video on UHD 750 performance. In games like Rocket League and CS:GO, the gains are much less than 50%. I suspect that in a majority of popular/casual titles that are easy to run and light on the GPU, the games people would actually be interested in playing on an iGPU, the gains will be much lower than 50%.