Discussion Intel current and future Lakes & Rapids thread

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
Gen 9 in general performs better because Intel was once in mobile and it became optimized for that. I even saw Cherry Trail doing better than some Iris parts.

Also, if you look at Anand's Galaxy S10 and iPad Pro 2018 reviews, you'll see that the GPUs throttle to about 2/3rds of peak performance or less.
https://www.anandtech.com/show/14072/the-samsung-galaxy-s10plus-review/10
https://www.anandtech.com/show/13661/the-2018-apple-ipad-pro-11-inch-review/6

GPUs that outperform the HD 620 at peak underperform it when the bench is run sustained. Even the iPad drops to 60% of its peak performance when running sustained.


Yes, this is such nonsense; nevertheless some people like to compare Android phones with bigger Windows devices. Also, we can be sure phone vendors have GFXBench optimized like hell. Intel could probably do the same with Gen11 in a future driver, because they did quite well with Gen9 at almost 1/3 the raw power. It's even worse for AMD with mobile Vega.


And then ALU 2 is purely synthetic and most likely the worst benchmark when comparing architectures.


It gives you a better picture of the GPU's capability than the synthetic 3D benches.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Yes, this is such nonsense; nevertheless some people like to compare Android phones with bigger Windows devices.

Therefore I was picking peak performance before throttling as the reference point for comparison, in order not to put GPUs operating in a phone's power and thermal envelope at a disadvantage.

Also, we can be sure phone vendors have GFXBench optimized like hell. Intel could probably do the same with Gen11 in a future driver, because they did quite well with Gen9 at almost 1/3 the raw power.

Could be, could not be - speculation at best. At least you do acknowledge that the GFXBench numbers for Gen11 Iris Pro should be much higher compared to the likes of the Adreno 640.
 

jpiniero

Lifer
Oct 1, 2010
14,584
5,206
136
I'd say the big clock speed deficit compared to Comet Lake and possibly even Renoir is a bigger problem than any GPU performance issues.
 

Nothingness

Platinum Member
Jul 3, 2013
2,400
733
136
Also, we can be sure phone vendors have GFXBench optimized like hell. Intel could probably do the same with Gen11 in a future driver, because they did quite well with Gen9 at almost 1/3 the raw power.
You mean Intel doesn't optimize benchmarks for their GPUs, while they add tricks to their compilers to get better SPEC results? Their driver team should be fired then.
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
You mean Intel doesn't optimize benchmarks for their GPUs, while they add tricks to their compilers to get better SPEC results? Their driver team should be fired then.


The driver team has nothing to do with SPEC results; this is trolling at its best. If the driver team puts more effort into real-world gaming instead of some synthetic crap benchmark, then that's a welcome preference for the customer. For a Windows device GFXBench is pretty much irrelevant, while for a phone vendor GFXBench is one of the main 3D benchmarks; difficult to understand for you, I know.
 

Nothingness

Platinum Member
Jul 3, 2013
2,400
733
136
The driver team has nothing to do with SPEC results; this is trolling at its best. If the driver team puts more effort into real-world gaming instead of some synthetic crap benchmark, then that's a welcome preference for the customer. For a Windows device GFXBench is pretty much irrelevant, while for a phone vendor GFXBench is one of the main 3D benchmarks; difficult to understand for you, I know.
You didn't get it, or acted as if you didn't get it.

I obviously meant that Intel, like most CPU/GPU companies, has been caught cheating at CPU benchmarks. Why wouldn't they do that for GPUs?

EDIT: Ah well, sorry. I read your answer too quickly because the personal attacks you made (which are not allowed by forum rules, but you're wise enough to make them indirect) made me miss your point :)
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
You mean Intel doesn't optimize benchmarks for their GPUs, while they add tricks to their compilers to get better SPEC results? Their driver team should be fired then.

I don't think it's a matter of cheating, as the GFXBench results from the mobile GPU camp were abnormally high when compared to PC parts running it.

There are probably low-level optimizations on the mobile side that we don't know exist, but which don't apply to PC GPUs. Half-precision FP, for one.

Just because of the screen size difference, the quality of 3D applications on PCs is scrutinized much more closely, and that requires beefing up parts of the GPU uarch that may not be needed on a small screen, where details can be reduced without noticeable visual quality degradation. A benchmark meant for such a platform would try to mirror this as well.
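
To make the half-precision point a bit more concrete, here is a rough numpy sketch (purely illustrative, not anything a driver actually does): FP16 keeps only about 10 bits of mantissa, so math that mobile shaders run at half precision gives up accuracy that is hard to notice on a phone screen, while GPUs with double-rate FP16 ALUs get roughly twice the throughput for it.

Code:
import numpy as np

# Illustrative only: how much precision FP16 gives up versus FP32.
# The throughput benefit is a hardware property (double-rate FP16 ALUs)
# and is not measured here.
x = np.linspace(0.0, 1.0, 7, dtype=np.float32)

fp32 = np.sqrt(x)                      # single-precision result
fp16 = np.sqrt(x.astype(np.float16))   # same math in half precision

for a, b in zip(fp32, fp16.astype(np.float32)):
    print(f"fp32={a:.6f}  fp16={b:.6f}  abs_err={abs(a - b):.6f}")
# Errors show up around the 3rd to 4th decimal place: visible under
# PC-level scrutiny, usually invisible on a phone-sized display.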
 

Nothingness

Platinum Member
Jul 3, 2013
2,400
733
136
I don't think it's a matter of cheating, as the GFXBench results from the mobile GPU camp were abnormally high when compared to PC parts running it.

There are probably low-level optimizations on the mobile side that we don't know exist, but which don't apply to PC GPUs. Half-precision FP, for one.

Just because of the screen size difference, the quality of 3D applications on PCs is scrutinized much more closely, and that requires beefing up parts of the GPU uarch that may not be needed on a small screen, where details can be reduced without noticeable visual quality degradation. A benchmark meant for such a platform would try to mirror this as well.
Thanks for clarifying, that was helpful.

I guess that still doesn't mean there are no benchmark tweaks on Intel's side. They would be stupid not to do it when others are, and when they used to do it back in the Gen9 days as @mikk wrote; of course it might be too early for these tricks to be implemented for Gen11, but how can we know they're not already in place?

And no, I don't condemn such behavior, but we all know what marketing is :)

FWIW there's a result for the Adreno 680 on Night Raid here. It seems comparable to a 630 IGP. So I guess there's little doubt Gen11 is significantly faster than mobile GPUs.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
They would be stupid not to do it when others are, and when they used to do it back in the Gen9 days as @mikk wrote; of course it might be too early for these tricks to be implemented for Gen11, but how can we know they're not already in place?

I think by Gen 11, any differences due to tweaking will be minimal. It's been quite a long time since Intel's mobile efforts started, and they haven't abandoned them completely.

I would just like to reiterate what I said back in Post #2622. Due to fundamental platform differences, you can't compare the two exactly. But if you really want to try, my opinion is that while the top-performing mobile GPUs, like the one in the 2018 iPad Pro, can perform like Gen 11 at peak, they can't sustain this, so they can't be said to be equal. Theoretically, if the iPad Pro GPU were moved to the PC, then sustained performance would be all we care about.

Of course, even at 2/3rds of Gen 11 performance, the top mobile GPUs are phenomenal. Conversely, can it be said that AMD's and Intel's execution sucks?
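
As a back-of-the-envelope way to put that peak-vs-sustained argument into numbers (the scores below are made-up placeholders; only the roughly 2/3 throttle factor comes from the reviews linked earlier in the thread):

Code:
# Toy comparison: normalize a phone GPU's peak score by its sustained/peak
# ratio before comparing it with an iGPU that can hold its clocks.
# Both scores are invented placeholders, not benchmark results.
peak_score_mobile = 100.0     # hypothetical peak GFXBench-style score
sustain_ratio = 2 / 3         # ~2/3 of peak under sustained load (per the reviews)
score_laptop_igpu = 75.0      # hypothetical non-throttling iGPU score

sustained_mobile = peak_score_mobile * sustain_ratio
print(f"mobile sustained ~= {sustained_mobile:.0f} vs laptop {score_laptop_igpu:.0f}")
# Peak says the mobile part wins (100 vs 75); sustained flips it (67 vs 75),
# which is the whole disagreement in this thread.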
 
Last edited:
  • Like
Reactions: Nothingness

Nothingness

Platinum Member
Jul 3, 2013
2,400
733
136
Do Intel IGPs throttle under heavy load in a laptop?

Anyway, my feeling is that these graphics benchmarks are easily amenable to tricks, even more so than compiler tricks for SPEC or AnTuTu, and are to be taken with a huge grain of salt. As you wrote, what matters in the end is (sustained) performance in apps and games, which makes comparisons between Intel IGPs and mobile GPUs hard to make. The only possibility would be a comparison against Windows on ARM laptops (well, if the apps are ported, that is :D).
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
But if you really want to try, my opinion is that while the top-performing mobile GPUs, like the one in the 2018 iPad Pro, can perform like Gen 11 at peak, they can't sustain this, so they can't be said to be equal. Theoretically, if the iPad Pro GPU were moved to the PC, then sustained performance would be all we care about.

It can't be equal because the mobile GPUs cannot sustain performance? Give them a few hundred mW more thermal headroom and they could very well sustain their clocks. For example, the Snapdragon 835 in the HP Envy x2 (slim tablet form factor, passively cooled) never throttles, even when all 8 cores are at 100% and the GPU is doing something meaningful. Of course, when you put the same SoC into a phone, things look different.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Do Intel IGPs throttle under heavy load in a laptop?

The current 15W GT2 parts run at about 1GHz under load when gaming. Yeah, you can see Notebookcheck stress results bringing it down to 500MHz or lower, but that's under extreme stress with Prime95 and Furmark running together.

I have not seen gaming results perform like their stress results, and those stress tests are unrealistic. When you run two applications stressing completely different parts, of course you can make it throttle seriously. But in games, low fps means the CPU runs at low clocks, so the GPU won't have to throttle that much.
 

Nothingness

Platinum Member
Jul 3, 2013
2,400
733
136
The current 15W GT2 parts run at about 1GHz under load when gaming. Yeah, you can see Notebookcheck stress results bringing it down to 500MHz or lower, but that's under extreme stress with Prime95 and Furmark running together.

I have not seen gaming results perform like their stress results, and those stress tests are unrealistic. When you run two applications stressing completely different parts, of course you can make it throttle seriously. But in games, low fps means the CPU runs at low clocks, so the GPU won't have to throttle that much.
Yeah, Furmark seems very unrealistic, and running it alongside Prime95 is even worse, so it's useless except to test that the system does not crash under heavy load.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Do Intel IGPs throttle under heavy load in a laptop?

Depends on the TDP. With an i5-8250U (15W TDP), the GPU (1.1GHz max) throttles down to 800MHz under Furmark, and down to 650MHz under Furmark + Prime95.

Notebookcheck
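
Expressed as fractions of the maximum clock (just the arithmetic on the figures above):

Code:
# Quick arithmetic on the i5-8250U numbers above: how far the iGPU falls
# from its 1.1GHz maximum under each stress load.
max_clock_mhz = 1100
loads = {"Furmark": 800, "Furmark + Prime95": 650}

for name, clock_mhz in loads.items():
    print(f"{name}: {clock_mhz} MHz = {clock_mhz / max_clock_mhz:.0%} of max")
# Furmark alone: ~73% of max; Furmark + Prime95: ~59% of max.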

Anyway, my feeling is that these graphics benchmarks are easily amenable to tricks, even more so than compiler tricks for SPEC or AnTuTu, and are to be taken with a huge grain of salt. As you wrote, what matters in the end is (sustained) performance in apps and games, which makes comparisons between Intel IGPs and mobile GPUs hard to make. The only possibility would be a comparison against Windows on ARM laptops (well, if the apps are ported, that is :D).

Well, 3DMark and Geekbench are compiled natively for ARM64 under Windows; Prime95 and Furmark are not yet.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
It can't be equal because the mobile GPUs cannot sustain performance?

I actually addressed this in a subtle way, I guess? I'm trying to give people the benefit of the doubt.

WoA laptops are in the same scenario as Intel Atoms were in the smartphone space. Customers will ask "Why?", and the advantages needed to even think of switching are tremendous.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
WoA laptops are in the same scenario as Intel Atoms were in the smartphone space. Customers will ask "Why?", and the advantages needed to even think of switching are tremendous.

Certainly, but I was not even remotely trying to discuss this point. I was merely pointing out that it is not valid to declare a GPU worse when it's operating on a fraction of the available power. I figured that you would rather compare different device classes instead of different GPUs under similar constraints.
 
  • Like
Reactions: Olikan

Thala

Golden Member
Nov 12, 2014
1,355
653
136
I have not seen gaming results perform like their stress results, and those stress tests are unrealistic. When you run two applications stressing completely different parts, of course you can make it throttle seriously. But in games, low fps means the CPU runs at low clocks, so the GPU won't have to throttle that much.

I do agree that running both Furmark and Prime95 does not reflect a very realistic scenario.
However, I have shown that Furmark alone makes the GPU throttle, which is precisely the case you describe above: low fps and low CPU usage. In fact, Furmark runs at about 11fps, so the GPU can take a larger portion of the available 15W while the CPU is sitting at a low 0.77V. I would assume that under more realistic gaming conditions the user would reduce graphics settings to achieve a playable framerate, which would drive CPU usage up, leaving less power for the GPU.
Wouldn't you think so?
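
A crude way to picture that shared 15W budget (the wattages below are assumptions for illustration, not measurements):

Code:
# Toy model of a shared 15W package budget: whatever the CPU draws is power
# the iGPU cannot use. All wattages here are illustrative assumptions.
package_tdp_w = 15.0

def gpu_budget_w(cpu_power_w: float) -> float:
    """Power left for the iGPU after the CPU takes its share."""
    return max(package_tdp_w - cpu_power_w, 0.0)

# Furmark-only case: CPU near idle at low voltage, GPU gets most of the budget.
print(f"CPU ~2W -> GPU budget ~{gpu_budget_w(2.0):.0f}W")
# More game-like case: CPU busier feeding the GPU, less power left over.
print(f"CPU ~7W -> GPU budget ~{gpu_budget_w(7.0):.0f}W")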
 

DrMrLordX

Lifer
Apr 27, 2000
21,617
10,826
136
Yeah, Furmark seems very unrealistic, and running it alongside Prime95 is even worse, so it's useless except to test that the system does not crash under heavy load.

Last time I tried that was on my Kaveri. I think AMD drivers at least make the GPU throttle automatically under Furmark now. Kinda funny. Not sure if that's true, but I haven't tried it in a while... and I wouldn't be surprised if Intel had gone the same route. Furmark is infamous.
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
  • Like
Reactions: Gideon

jpiniero

Lifer
Oct 1, 2010
14,584
5,206
136
Single-core Turbo is finally working, so as expected the previous Dell entry didn't have a working SC Turbo. The BIOS is newer; I guess it was just a BIOS issue.

Much better. If the average Comet Lake i7-U hits 4.6 GHz in 1-2 thread loads, the 1065G7 will still be slower in ST (but only barely), but Ice Lake should be faster in MT.
 

Gideon

Golden Member
Nov 27, 2007
1,620
3,645
136
At this point I believe Intel should scrap their 10nm process altogether: Intel 10nm Ice Lake Desktop CPUs Delayed, Server Parts Will Have Low Clock Speeds

Yields are horrible, maximum frequencies are not there, and it's been dragging for far too long to be viable.

I still can't believe the Hyper-Threading disabling rumours mentioned in the article. IMO it doesn't make sense to disable it just on Comet Lake but keep all the mobile SKUs unaffected (which are the vast majority of CPUs actually sold).

The current exploits IMO don't yet justify disabling it on all consumer processors. It might be that there is something more serious coming, but then why are Whiskey Lake, Amber Lake, etc. unaffected? I'm aware that mobile processors need HT much more than desktop processors, but if the security issues are serious enough then that wouldn't matter. No one implements security measures selectively just because some SKUs would be hit harder. Even if Intel tried, Microsoft would probably just disable HT anyway on all SKUs if it were serious enough. If not, then why would Intel voluntarily cripple their top-of-the-line chips?

The only realistic reason I can come up with is that 20 threads would just draw too much power on 14nm (just look at the difference between the 8700K and 9700K when all cores are at 100%), and they disable it to keep the clocks up and power manageable in all cases.

Anyhow, this all requires more proof before I'll believe it.
 
  • Like
Reactions: mikk

DrMrLordX

Lifer
Apr 27, 2000
21,617
10,826
136
@ApTeM

No surprises. Those leaked (and apparently mostly true) Intel roadmaps showing stuff like Rocket Lake etc. had no 10nm products on desktop at all through the end of 2020. You just confirmed it.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
IMO it doesn't make sense to disable it just on Comet Lake but keep all the mobile SKUs unaffected (which are the vast majority of CPUs actually sold).

It might be that there is something more serious coming, but then why are Whiskey Lake, Amber Lake, etc. unaffected?

That has little to do with security. Whiskey Lake, Amber Lake, and Coffee Lake CPUs are not affected by the Hyper-Threading security bug. The older chips are.