Discussion Intel current and future Lakes & Rapids thread

Ajay

Lifer
Jan 8, 2001
15,429
7,847
136
Given how bad yields must be at 10 nm, I am not sure this is true. Either way, the purpose of using EMIB with mainstream Ice Lake would be for Xeon-D (four 8-core tiles, say) and of course being able to segment the GPU harder than they do now. I also think they would want to get some practice in before they deal with the SP servers.

I assume 10nm+ yields will be vastly superior to 10nm (otherwise, we'd have CNL-S CPUs). EMIB has already been implemented with Stratix 10. I would think Xeon-D would be under the auspices of Intel's DCG. Moving the iGPU off die would give Intel much more flexibility. I'm a bit wary of Intel's current push for more segmentation - Intel becoming a marketing company more than an engineering and manufacturing company will expose them to more risk (though it can create more profit), IMHO. That kind of thinking already caused Intel to lose a chance at the smartphone biz.
 

TheF34RChannel

Senior member
May 18, 2017
786
309
136
I'm not even sure what base clocks are for, if the CPU is never running at them.

I don't mind them, and I still have SpeedStep on despite the OC, because there can be weeks where I'm only using MS Office and don't need a 24/7 OC.

I wonder how long it will take for CPUs to mirror GPUs in OC capability, where they auto-OC provided certain requirements are met.
 

scannall

Golden Member
Jan 1, 2012
1,946
1,638
136
You mean... like on Ryzen? (XFR)
At this point, XFR seems like a work in progress. It's a good idea and an OK implementation. I'm looking forward to seeing what they do with it, though, particularly in the laptop parts. Run at 250 MHz, for example, when you're just looking at a static page, then crank up to whatever boost is needed when the workload goes up, up to the ceiling *if needed*. Work on the machine learning some, so it knows when to crank it up, how far, etc.
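Something like this, as a toy sketch (the numbers and thresholds are made up, and it's obviously not AMD's actual XFR algorithm):

```python
# Toy load-based boost governor - invented numbers, not AMD's XFR algorithm.
FLOOR_MHZ = 250      # idle clock for static content
CEILING_MHZ = 4100   # thermal/electrical ceiling

def pick_clock(cpu_load, thermal_headroom_mhz):
    """Map sampled CPU load (0.0-1.0) to a target clock, capped by headroom."""
    if cpu_load < 0.10:
        return FLOOR_MHZ  # just looking at a static page: stay at the floor
    target = FLOOR_MHZ + cpu_load * (CEILING_MHZ - FLOOR_MHZ)
    return min(target, FLOOR_MHZ + thermal_headroom_mhz, CEILING_MHZ)
```

The machine-learning part would basically be replacing those hard-coded thresholds with something learned from usage patterns.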
 

Lodix

Senior member
Jun 24, 2016
340
116
116
At this point, XFR seems like a work in progress. It's a good idea and an OK implementation. I'm looking forward to seeing what they do with it, though, particularly in the laptop parts. Run at 250 MHz, for example, when you're just looking at a static page, then crank up to whatever boost is needed when the workload goes up, up to the ceiling *if needed*. Work on the machine learning some, so it knows when to crank it up, how far, etc.
This is how mobile SoCs work, no?
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,763
783
126
Slightly underwhelming?! You get a guaranteed 20-30% total throughput increase over Kaby Lake; IMHO it's excellent news. ST and lightly threaded performance will also get a boost.

Maybe you're right. I guess it doesn't matter anyway as long as it is a good overclocker.
 
  • Like
Reactions: TheF34RChannel

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I'm not even sure what base clocks are for, if the CPU is never running at them.

Actually, in thermally constrained environments, like with T and S CPUs, mobile chips, and ones using iGPUs, it does fall back to base.

Intel has shown a graph that illustrates the point of base clocks nicely. The y-axis of the graph is an application that uses the CPU entirely; the x-axis is an application that uses the GPU entirely.
If you combine the maximum of the y-axis and the maximum of the x-axis, that represents the Turbo clocks of the CPU and the GPU. However, that combination also exceeds the TDP (and power usage).

Lots of applications are actually closer to the middle, where the GPU and the CPU are equally utilized. In that case, neither the CPU nor the GPU can run at Turbo. iGPU benchmarks that measure frequency show that the CPUs run close to base. There are laptop systems that often fail to reach base clocks in such scenarios.

On desktops, if you run AVX-intensive code like Linpack while also running a 3D game, I bet you won't see top performance on either of them.
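You can picture that graph as a shared power budget; a rough toy model (all the wattages are invented, and real power curves aren't linear):

```python
# Toy model of a shared TDP budget between the CPU and the iGPU.
# All wattages are invented for illustration; real power curves are nonlinear.
TDP_W = 15.0            # e.g. a U-series mobile part
CPU_TURBO_W = 12.0      # CPU alone at full turbo
GPU_TURBO_W = 10.0      # iGPU alone at full turbo

def scale_back(cpu_demand, gpu_demand):
    """cpu_demand/gpu_demand in 0..1 (fraction of full turbo requested).
    Returns the fraction each side actually gets once the combined
    draw is clipped to the TDP."""
    total = cpu_demand * CPU_TURBO_W + gpu_demand * GPU_TURBO_W
    if total <= TDP_W:
        return cpu_demand, gpu_demand      # both fit: full requested clocks
    scale = TDP_W / total                  # over budget: pull both back
    return cpu_demand * scale, gpu_demand * scale

# A mixed CPU+GPU load lands "in the middle": neither side reaches turbo.
print(scale_back(1.0, 1.0))   # -> roughly (0.68, 0.68)
```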
 

ZGR

Platinum Member
Oct 26, 2012
2,052
656
136
Actually, in thermally constrained environments, like with T and S CPUs, mobile chips, and ones using iGPUs, it does fall back to base.

Intel has shown a graph that illustrates the point of base clocks nicely. The y-axis of the graph is an application that uses the CPU entirely; the x-axis is an application that uses the GPU entirely.
If you combine the maximum of the y-axis and the maximum of the x-axis, that represents the Turbo clocks of the CPU and the GPU. However, that combination also exceeds the TDP (and power usage).

Lots of applications are actually closer to the middle, where the GPU and the CPU are equally utilized. In that case, neither the CPU nor the GPU can run at Turbo. iGPU benchmarks that measure frequency show that the CPUs run close to base. There are laptop systems that often fail to reach base clocks in such scenarios.

On desktops, if you run AVX-intensive code like Linpack while also running a 3D game, I bet you won't see top performance on either of them.

This is where Intel's Turbo Boost needs refining. Right now it is far too aggressive in notebooks during heavy 3D loads, causing premature throttling. Disabling Turbo entirely gives the iGPU tons of thermal headroom and will most likely prevent any iGPU throttling that would cause a drop in frames.

It is frustrating to see Intel's Turbo competing with its own iGPU or a dGPU for the tiny amount of thermal headroom in a laptop while playing a game. If only it were smarter, the average user experience would be a lot better out of the box. It takes a savvy user to know that they must disable Intel Turbo Boost to prevent their GPU from throttling, and that is unacceptable.
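For what it's worth, on Linux with the intel_pstate driver the savvy-user workaround is a single sysfs knob; a quick sketch (needs root):

```python
# Check/flip Intel Turbo Boost via the intel_pstate sysfs knob (Linux, needs root).
# Writing "1" to no_turbo disables turbo; "0" re-enables it.
NO_TURBO = "/sys/devices/system/cpu/intel_pstate/no_turbo"

def turbo_enabled() -> bool:
    with open(NO_TURBO) as f:
        return f.read().strip() == "0"

def set_turbo(enabled: bool) -> None:
    with open(NO_TURBO, "w") as f:
        f.write("0" if enabled else "1")

if __name__ == "__main__":
    print("Turbo currently enabled:", turbo_enabled())
    # set_turbo(False)  # before a long gaming session, to free iGPU headroom
```

On Windows the rough equivalent is capping "Maximum processor state" at 99% in the power plan, which also keeps the CPU off its turbo bins.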
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
This is where Intel's Turbo Boost needs refining. Right now it is far too aggressive in notebooks during heavy 3D loads, causing premature throttling. Disabling Turbo entirely gives the iGPU tons of thermal headroom and will most likely prevent any iGPU throttling that would cause a drop in frames.


Intel did change the Turbo behaviour with new Kaby Lake drivers in late 2016, resulting in a much more consistent GPU Turbo. Prior to this change, the CPU had priority over the GPU.
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
[Attached images]

Wccftech looks really poor now... they posted ES SKUs and thought they were final, what a fail lol
 

Ajay

Lifer
Jan 8, 2001
15,429
7,847
136
Cool, must be getting close to release :)
Not that I can afford it this year :(
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
If only it were smarter, the average user experience would be a lot better out of the box. It takes a savvy user to know that they must disable Intel Turbo Boost to prevent their GPU from throttling, and that is unacceptable.

Automated systems are naturally dumb. :)

As mikk points out, it improves over new driver and hardware generations. They could probably optimize per game on day 1 of a launch, but they just don't do it, and that would be the only way to ensure it works everywhere. They do a LOT more than they used to, though. I think the idea that IMG or whatever company will come along and just knock Intel off 3rd position is ridiculous, because half of graphics is driver and vendor support, and that's a huge job.

In Intel's case, they know as well as anyone that, if you're talking about gaming, the people who take their graphics seriously in large numbers are mostly in the e-sports space, where the 3D graphics demand is relatively low and the frame rates are high enough that the demand shifts to the CPU. So it's better, as a general rule, for the driver to tell the CPU to take Turbo priority.
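A toy sketch of what that general rule might look like (purely my guess at the kind of heuristic, not Intel's actual driver logic):

```python
# Toy priority heuristic - my guess at the kind of rule a driver might apply,
# not Intel's actual driver logic.
def turbo_priority(fps, gpu_utilization):
    """Decide which side gets first claim on the shared power/thermal budget."""
    if fps > 120 and gpu_utilization < 0.7:
        return "cpu"   # e-sports style: graphics demand low, frames pushed by the CPU
    return "gpu"       # heavy 3D: keep the (i)GPU fed first
```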

Again, automated systems won't satisfy everyone.
 

jpiniero

Lifer
Oct 1, 2010
14,583
5,204
136
Hmm, apparently according to the Linux libdrm driver, the Coffee Lake U models are GT3(e?) only and not GT2. Seems like it would be tight to fit a quad-core die, the PCH, and the eDRAM in the package.

It also lists 6 different IGP versions of Cannon Lake Y's GT2, probably just different clock speeds.
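If anyone wants to check what their own iGPU reports against those lists, the PCI device ID is easy to read from sysfs; a small sketch (the IDs in the table are placeholders, not the real Coffee Lake/Cannon Lake ones):

```python
# Read the iGPU's PCI device ID from sysfs and look it up in a local table.
# The IDs below are placeholders; the authoritative lists live in the
# kernel/libdrm ID headers.
DEVICE_ID_PATH = "/sys/class/drm/card0/device/device"

KNOWN_IDS = {          # hypothetical entries, for illustration only
    0x0000: "some GT2 part",
    0x0001: "some GT3e part",
}

with open(DEVICE_ID_PATH) as f:
    dev_id = int(f.read().strip(), 16)

print(f"PCI device id {dev_id:#06x}: {KNOWN_IDS.get(dev_id, 'not in the local table')}")
```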
 

dullard

Elite Member
May 21, 2001
25,054
3,408
126
Wouldn't it be hilarious if they do turn up in August? :p They won't, but just saying :D @Sweepr
I think the 8400 is a typo and should be 8600?
wccftech (take their stuff with a grain of salt) is saying August: http://wccftech.com/intel-coffee-lake-core-i7-8700k-core-i5-8600k-6-core-cpu-leak/
The Intel Coffee Lake 8th generation Core processors are expected to launch around Gamescom in the month of August, which is next month.
I'm beginning to suspect there will be no 8600 chip, just an 8600K.
 

Bouowmx

Golden Member
Nov 13, 2016
1,138
550
146
I think the 3 main hardware authors at Wccftech sometimes just pass off educated guesses, some based on currently known info, as affirmative statements. I, too, can make an educated guess that directly comparable tier chips (e.g. Intel Core i5-6400, 7400, and 8400) will be priced almost exactly like each other. The August date comes from the partner presentation; it's what we've got, but there's no confirmation yet.
 
  • Like
Reactions: TheF34RChannel