Discussion Intel current and future Lakes & Rapids thread


jpiniero

Lifer
Oct 1, 2010
14,618
5,227
136
Yeah, Icelake is going to be very limited availability, the models with the GPU fully enabled doubly so.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Did you guys notice Intel is calling Icelake 10nm now? In their investor presentation they called Tigerlake 10nm+, and in their 7nm presentation, 2019 is the year for 10nm.

Other thoughts:
-eDRAM should go away for Intel, and be replaced by HBM2. PCWatch says they wanted HBM, but didn't have supporting platforms.

-If it takes Icelake and 10nm to be on par with Ryzen's GPU, then Intel is seriously behind AMD in graphics power efficiency. They are claiming 2x improvement in Tigerlake with Gen 12, meaning they are expecting great improvements in that generation.

-I'm expecting Tigerlake U's iGPU to increase clocks, not decrease. If I were to guess, 1.3-1.4GHz for the LP parts. dGPUs with Xe/Gen 12 that are named "HP" must be aiming somewhere north of that. 2x performance might be achieved using new Iris devices using single HBM-class memory instead of eDRAM. Expect premiums like current U chips with Iris graphics.

-Glacial Falls to be the "workstation" with Optane DIMMs? They called the Optane DC PMM supporting workstation "client workstation". I could be reading too much into this, but I see a great opportunity assuming proper support is in place (starting with the OS).

-Intel seems to be having trouble making Optane work for their platforms. Interestingly, they call the problem "system challenges". I think the single socket Xeon W released earlier also alludes to this. 2TB for the "M" SKU is a perfect fit for 6x256GB Optane and 6x64GB DRAM.
 
  • Like
Reactions: lightmanek

birdie

Member
Jan 12, 2019
98
71
51
A jaw-dropping (in a bad way) interview with Intel's Senior Principal Engineer, Ophir Edlis, who has led and been involved with many of Intel's recent desktop processor designs:

https://www.forbes.com/sites/antonyleather/2019/07/26/intel-engineer-talks-processors-design-testing-and-why-10nm-delay-shouldnt-matter/

From the interview:

Ophir: "I actually have a question for you [the journalist] – why do you think we need to have desktop on 10nm?"

You could think that maybe the board of directors has drunk too much Kool-Aid and still believes 14nm+++++ suffices (despite the fact that they seriously lack the capacity to fulfill even their current demand for 14nm CPUs, which is why they've reverted to 22nm for some of their products), but hearing that from a senior principal engineer? I'm appalled.

Some other things that he's saying are either completely false or pretentious as well. For instance, Intel themselves admitted that Ice Lake was designed for the 10nm node, so without a working 10nm node there will be no new faster/better/more efficient uArchs and Intel has basically given up on progress. WTF??

Meanwhile, despite the fact that they reported mobile Ice Lake parts entered mass production almost a quarter ago, we still have zero released laptops based on this uArch, and there is just one vendor that has formally announced ICL laptops (Dell). HP results have leaked, but no announcement has been made.

It's sad really. Oh maybe it's good 'cause finally we have some decent competition in the CPU market.
 

mikk

Diamond Member
May 15, 2012
4,141
2,154
136
-If it takes Icelake and 10nm to be on par with Ryzen's GPU, then Intel is seriously behind AMD in graphics power efficiency. They are claiming 2x improvement in Tigerlake with Gen 12, meaning they are expecting great improvements in that generation.


This is a bit too simple and overdramatic when you say seriously behind. You have to keep in mind that the 64 EU GPU in Icelake-U is only 40mm², which means Intel's area investment in the GPU is still really low compared to AMD's for the Icelake generation; therefore the raw speed on paper is higher for Ryzen Vega. Also, the efficiency gains from 14nm to 7nm didn't look great for them: the Vega shrink from 14 to 7nm was poor. A shrink alone won't bring a huge improvement nowadays. AMD needed an architecture overhaul plus a process shrink to yield a 50% improvement with Navi over GCN. Nevertheless I'm expecting nice gains from Gen12; the Gen11 design itself must be old by now because of all the delays.


-I'm expecting Tigerlake U's iGPU to increase clocks, not decrease. If I were to guess, 1.3-1.4GHz for the LP parts.

And 96 EUs beside some other overhauls like 16 EUs per subslice.


Meanwhile, despite the fact that they reported mobile Ice Lake parts entered mass production almost a quarter ago, we still have zero released laptops based on this uArch, and there is just one vendor that has formally announced ICL laptops (Dell). HP results have leaked, but no announcement has been made.


This is expected and nothing special. They claimed mass production started in June, and it usually takes 3-4 months from that date until products are on shelves. This is a mobile release, not a simple CPU release with much smaller timeframes.
 

ondma

Platinum Member
Mar 18, 2018
2,721
1,281
136
It's not so much useless as it is extremely low-yield. Most of us won't be able to buy 10nm Intel products for a while due to limited availability.
Well, I am certainly interested in whether 10nm really improves the performance in laptops. After what, 4 years, one would hope so, but TBH, I am not expecting much advance in CPU performance. GPU maybe, but who cares, really? Maybe with an improvement in IPC, and if it uses less power, they can maintain performance longer on sustained loads.
 

birdie

Member
Jan 12, 2019
98
71
51
Well, I am certainly interested in whether 10nm really improves the performance in laptops. After what, 4 years, one would hope so, but TBH, I am not expecting much advance in CPU performance. GPU maybe, but who cares, really? Maybe with an improvement in IPC, and if it uses less power, they can maintain performance longer on sustained loads.

This doesn't seem likely considering that base frequencies are so low.

Alternatively Intel might have changed the notion of "Base frequency" and we don't know yet.
 

jpiniero

Lifer
Oct 1, 2010
14,618
5,227
136
The very low base frequency is probably a combination of some AVX-512 instructions taking a ton of juice and the need to bin very loosely. The ACT (all-core turbo) of the i7 Icelake model is 3.5; the i5's are all 3.3.

Note that Comet Lake's max ACT is 4.3 for the quad i7, 3.8 for the 6 core; and 3.9 for the i5.
 

Dayman1225

Golden Member
Aug 14, 2017
1,152
974
146
The only Rocket Lake with 14nm CPU + 10nm graphics is placed in the H/G segment. The U-series comes with 14nm graphics, and the entire desktop lineup isn't even a chiplet; it's plain 14nm. This is for sure not extremely small volume. 32 EUs for GT1 must be something Gen11 based; obviously Gen11 can be ported to 14nm.

Any thoughts for Gen12 HP DG2? 512= 512 EUs???

; DG2 HW
iDG2HP512 = "Intel(R) UHD Graphics, Gen12 HP DG2"
iDG2HP256 = "Intel(R) UHD Graphics, Gen12 HP DG2"
iDG2HP128 = "Intel(R) UHD Graphics, Gen12 HP DG2"

Don't know if this really means anything but Raja Koduri liked this tweet...


Perhaps hinting at something bigger and more powerful?
 
  • Like
Reactions: lightmanek

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
This is a bit too simple and overdramatic when you say seriously behind. You have to keep in mind that the 64 EU GPU in Icelake-U is only 40mm² big which means Intels area investment into the GPU is still really low compared to AMD for the Icelake generation, therefore the RAW speed on paper is higher for Ryzen Vega.

Yes I'm very aware of this. Area isn't a problem with new processes though, power is.

You can tell from gaming battery life benchmarks that AMD iGPUs have a much better perf/watt.

This lack of efficiency is what made Nvidia's dGPUs beat Iris Pro iGPUs in price and perf/watt. It was extremely disappointing to find out a dGPU with a separate PCB and VRAM could beat the Iris Pro in battery life. Of course Nvidia is a step above AMD in efficiency.

So either Xe is a titanic improvement in efficiency, or it's not as good as they claim. There's simply no guarantee it'll be that good. Because improvements in computing come so regularly, we all take them for granted.

And 96 EUs beside some other overhauls like 16 EUs per subslice.

That would either need a huge boost in the subslice section or make EUs perform less.

Intel moved from 12 EUs per subslice to 8 EUs in Ivy Bridge, which boosted available resources by 50%.
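That 50% figure is just the ratio of EUs sharing each subslice's fixed hardware. A quick sketch of the arithmetic (the one-shared-unit-per-subslice count is a simplification for illustration, not the real mix of samplers and data ports):

```python
# With a fixed total EU count, each subslice carries its own shared
# hardware (sampler, data port, etc.), so shared units per EU scale
# inversely with the number of EUs packed into a subslice.
def shared_units_per_eu(eus_per_subslice: int, units_per_subslice: int = 1) -> float:
    """Shared subslice hardware available per EU (simplified to 1 unit)."""
    return units_per_subslice / eus_per_subslice

before = shared_units_per_eu(12)  # 12 EUs sharing one unit
after = shared_units_per_eu(8)    # 8 EUs sharing one unit

print(after / before)  # 1.5 -> a 50% boost in shared resources per EU
```

So cutting a subslice from 12 EUs to 8 gives each EU 12/8 = 1.5x the shared resources, matching the 50% above.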
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
If 512 EUs is 260mm², then 1024 EUs at 520mm²?

And how many EUs could the GPU be if the cache, display/media and memory controllers were removed from the die and positioned underneath via Foveros?

They don't have to go for 1024 EUs. They could go for 768 EUs.

I don't expect 7nm dGPUs to offer a great increase in performance. 768 EUs running at 1.5GHz is good enough to be competitive, as long as Intel gets the efficiency on par with Nvidia.
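A rough sketch of that scaling. All figures here are speculative: they assume the 512 EU ≈ 260mm² estimate above, roughly linear area scaling with EU count, and the 16 FLOPs per EU per clock layout of current Gen GPUs (2 ports x 4-wide FMA x 2 ops):

```python
# Speculative back-of-the-envelope scaling for a Gen12 HP dGPU.
AREA_512EU_MM2 = 260.0        # assumed baseline from the post above
FLOPS_PER_EU_PER_CLOCK = 16   # 2 ports x 4-wide FP32 FMA x 2 ops (mul+add)

def scaled_area(eus: int) -> float:
    """Die area estimate, assuming EU-dominated area scales linearly."""
    return AREA_512EU_MM2 * eus / 512

def peak_tflops(eus: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 TFLOPS for a given EU count and clock."""
    return eus * FLOPS_PER_EU_PER_CLOCK * clock_ghz / 1000

print(scaled_area(1024))      # 520.0 mm^2, matching the doubling above
print(scaled_area(768))       # 390.0 mm^2
print(peak_tflops(768, 1.5))  # ~18.4 TFLOPS at the suggested 1.5 GHz
```

None of this accounts for uncore (cache, display/media, memory controllers), which is exactly the part Foveros would move off the compute die.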

Even on the super low power Lakefield, the memory controller is on the compute portion of the die. I'm doubtful whether a 200W+ GPU will have any form of 3D integration, because they'll have serious thermal issues.
 
  • Like
Reactions: IEC and cbn

DrMrLordX

Lifer
Apr 27, 2000
21,637
10,855
136
I'm doubtful whether a 200W+ GPU will have any form of 3D integration, because they'll have serious thermal issues.

3D integration = Foveros?

They would need some kind of active heat pump to remove heat from various layers of the stack and move it to the cooling solution.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
3D integration = Foveros?

They would need some kind of active heat pump to remove heat from various layers of the stack and move it to the cooling solution.

Yes, I think the research paper outlined in the above link is 15-20 years away.

The closest might be direct-die fluid cooling, or having a heatpipe be part of the die. Even then, the reliability requirements for a device that'll sell to hundreds of millions of users at an affordable price are tremendous. Those two requirements, affordability and reliable mass production, are likely what stop most laboratory experiments from becoming production devices.

That's why I think it's crazy that desktop chips are all aiming for 5GHz.

Say in the distant future we get microfluidic cooling or whatever exotic things are needed to get them higher.

Then what?

All exponential growth has to end. But it won't end abruptly. You'll see it slow down, or the markets just shift to something else entirely (like smartphones). However, after 50 years of such growth, there will inevitably be some (even in the industry!) that'll refuse to accept it's over. They'll kick, scream, cry and beg for it.

It's ok, because the industry will still be ridiculously large. The market thinks zero growth in PC equals death, when 250 million units selling annually with 2 billion or so users means anything but.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
That's why I think it's crazy that desktop chips are all aiming for 5GHz.

Say in the distant future we get microfluidic cooling or whatever exotic things are needed to get them higher.

Then what?

Maybe instead of increasing clockspeed further, Intel will be able to bring the data closer to the CPU, boosting IPC (as well as reducing the need for speculative execution... which improves security, among other things).

And then to go along with that there will also be very efficient transistors capable of high current with low leakage (e.g. Nano-sheet GAAFET).

Together I am hoping those two things could actually result in compute layers running cooler than we are expecting.

This to fulfill what Intel mentions below for both CPU and GPU:

https://newsroom.intel.com/news/new...ologies-target-expanded-market-opportunities/

Advanced packaging solutions will enable Intel to continue exponential scaling in computing density by extending transistor density to the third dimension.

P.S. With compute scaling both lithography and layers we are going to need a memory technology that can keep up with that. We need a fast memory tech that can also scale layers and lithography (Optane).
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
@cbn

(I am not a fan of the emoji rating system, so I will just say good post)

In their Q&A, Intel said GPGPUs are interesting for new process nodes. The redundancy offered by a GPGPU allows it to work even early in a process node's cycle, and they contrasted that with a CPU. That's why the GPGPU is the lead product for 7nm, and why the server CPU will come later.

So even if they run into issues like they did on 10nm, per mm² their dGPU might be easier to produce than their server CPU.
 

mikk

Diamond Member
May 15, 2012
4,141
2,154
136
That would either need a huge boost in the subslice section or make EUs perform less.

Intel moved from 12 EUs per subslice to 8 EUs in Ivy Bridge, which boosted available resources by 50%.


It could be a dual subslice setup.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
It could be a dual subslice setup.

What do you mean? It already has more than 2 subslices.

The Gen 9 GT2 is set up as follows:
-24EUs
-3 subslices
-8 EUs per subslice
-Two "ports" per EU
-4x 32-bit FMA units per "port"

Gen 11 would then be:
-64EUs
-8 subslices
-8 EUs per subslice
-Two "ports"
-4x 32-bit FMA units per "port"

(Note: The similarity to the original Pentium is interesting. The Pentium had asymmetric U and V pipes. The ports aren't identical in their GPUs either.)
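The layouts above give the theoretical peak FP32 rate directly: 2 ports x 4-wide FMA x 2 ops (multiply + add) = 16 FLOPs per EU per clock. A quick sketch (the clocks used here are illustrative round numbers, not official figures):

```python
# Theoretical peak FP32 throughput for the Gen 9 / Gen 11 GT2 layouts above.
# Each EU: 2 ports x 4-wide FP32 FMA, and an FMA counts as 2 FLOPs.
FLOPS_PER_EU_PER_CLOCK = 2 * 4 * 2  # ports * FMA lanes * (mul+add)

def peak_gflops(eus: int, clock_ghz: float) -> float:
    """Peak single-precision GFLOPS for a given EU count and clock."""
    return eus * FLOPS_PER_EU_PER_CLOCK * clock_ghz

print(peak_gflops(24, 1.15))  # Gen 9 GT2, 24 EUs: ~442 GFLOPS
print(peak_gflops(64, 1.10))  # Gen 11 GT2, 64 EUs: ~1126 GFLOPS
```

So even at a slightly lower clock, the 64 EU Gen 11 part is roughly 2.5x the Gen 9 GT2 on paper.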

RKLD
OpenGL_Gen12.Copy_DS

Rocket Lake could be Gen 12 based; the driver asks for Gen12 instead of Gen11.

That's very interesting. Skipping Gen 11 for Gen 12 on Rocketlake?

Maybe. But remember Cannonlake? Intel disabled the iGPU on every die.

I found this part very strange, and it isn't the norm. Yes, it's true that GPUs have more replicated structures.
 
Last edited:

Thunder 57

Platinum Member
Aug 19, 2007
2,675
3,801
136
...

Ophir: "I actually have a question for you [the journalist] – why do you think we need to have desktop on 10nm?"

You could think that maybe the board of directors has drunk too much Kool-Aid and still believes 14nm+++++ suffices (despite the fact that they seriously lack the capacity to fulfill even their current demand for 14nm CPUs, which is why they've reverted to 22nm for some of their products), but hearing that from a senior principal engineer? I'm appalled.

Some other things that he's saying are either completely false or pretentious as well. For instance, Intel themselves admitted that Ice Lake was designed for the 10nm node, so without a working 10nm node there will be no new faster/better/more efficient uArchs and Intel has basically given up on progress. WTF??

Couldn't agree with you more. And besides, wasn't it Cannon Lake that was designed for 10nm? Well, if you're Intel, you'd rather have everyone forget about that part by now.
 

Ajay

Lifer
Jan 8, 2001
15,468
7,870
136
If 512 EU is 260mm2 then 1024 EU at 520 mm2?

And how many EUs could the GPU be if the cache, display/media and memory controllers were removed from the die and positioned underneath via Foveros?
Moving the cache off die would be disastrous for GPU performance.

Uh, geez, edit box just lost its mind