Discussion Intel current and future Lakes & Rapids thread


mikk

Diamond Member
May 15, 2012
4,111
2,105
136
I'm so tired of hearing this over and over again. Geekbench has not been updated to properly support Ice Lake (such support might also be required on the Windows side), so it's not known whether the turbo frequency it reports for Ice Lake based systems is correct.


Not sure why you quoted that part of my Skylake/Whiskey Lake clarification; I mean Hans was wrong about the scores there. I haven't seen Geekbench updates for new CPUs, and even the Ryzen 2 frequency measurements seem correct. It is more likely it didn't boost over the 4C turbo; it is quite common that devices don't use the 1C turbo.


Seems that the Dell is going to be the only Icelake model on the market for a period of time.

Why do you think so?
 

Shivansps

Diamond Member
Sep 11, 2013
3,835
1,514
136

https://img1.mydrivers.com/img/20190530/1e0066b500224a96a1cf3d0d7da32ec1.jpg


15W to 25W power switch in windows on the fly?

So the "power switch" app was done in Windows Forms, and they didn't even bother to change the default icon... mmmmm

Anyway, I'm far more interested in iGPU performance.
 

Bouowmx

Golden Member
Nov 13, 2016
1,138
550
146
Is Ice Lake-H still a thing? I know S is not, but H is based on the same die (PCIE on die, replacing IPU).
 

mikk

Diamond Member
May 15, 2012
4,111
2,105
136
We have another Geekbench result from a lower end SKU

Core i5-1034G1: https://browser.geekbench.com/v4/cpu/13349360

maximum 3565
median 3526

It's pretty close to the i7 result.

https://browser.geekbench.com/v4/cpu/compare/13349360?baseline=13303489

Key differences:
- L3 cache: 6144 KB instead of 8192 KB
- Memory Bandwidth is quite a bit slower (but this is related to the LPDDR speed the OEM is using)

I would say it backs up the frequency report for the i7, because if it really ran at 3.9 GHz the i7 would be ahead in every subtest, not just here and there, especially with faster LPDDR and a 2 MB larger L3 cache. The i7 is actually slower in 11 subtests; there is no way it was running at 3.9 GHz.
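As a sanity check, the clock gap alone predicts a consistent lead. A quick back-of-envelope in Python, taking the frequencies from the Geekbench reports above and assuming clock-bound subtests scale roughly linearly with frequency on the same core:

```python
# Expected single-core lead of the i7 over the i5 if both hit their
# claimed max turbo (clock-bound workloads scale ~linearly with clock).
i7_claimed_turbo = 3.9  # GHz, as reported by Geekbench for the i7
i5_turbo = 3.6          # GHz, from the i5-1034G1 result

expected_lead = i7_claimed_turbo / i5_turbo - 1
print(f"Expected i7 lead if 3.9 GHz were real: {expected_lead:.1%}")
# An ~8% clock advantage (plus faster LPDDR and +2 MB L3) should show up
# in essentially every subtest; trailing in 11 of them suggests the
# reported 3.9 GHz turbo is not what the chip actually ran at.
```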
 

jpiniero

Lifer
Oct 1, 2010
14,509
5,159
136
The Dell spec list says all the options are for 3733 MHz speed. The spec list could be wrong though: the i5 option it mentions is the 1035G1, which per the leak has a max boost of 3.6, not 3.7. I'm inclined to believe the leak more because the 1034G1 also has a max boost of 3.6, which matches the Geekbench result.

I guess the question really is whether it's a bug or the Dell model simply can't hit anywhere near 3.9.
 

Bouowmx

Golden Member
Nov 13, 2016
1,138
550
146
Can Ice Lake-U/Y on-die Thunderbolt 3 be reconfigured to PCIe for attaching an internal GPU? The four TB3 ports together are effectively PCIe 3.0 x16 worth of bandwidth.
 

jpiniero

Lifer
Oct 1, 2010
14,509
5,159
136
Even though Intel doesn't appear to have officially announced it, Cascade Lake-W is now on Ark. Apple must be using a custom 300 W TDP version of the Xeon W-3275 (28 cores, 2.5 base, 4.4 turbo, 205 W TDP) for the top upgrade in the new Mac Pro.

By the way, don't ask how much it would be to upgrade to that.
 

Dayman1225

Golden Member
Aug 14, 2017
1,152
973
146
Even though Intel doesn't appear to have officially announced it, Cascade Lake-W is now on Ark. Apple must be using a custom 300 W TDP version of the Xeon W-3275 (28 cores, 2.5 base, 4.4 turbo, 205 W TDP) for the top upgrade in the new Mac Pro.

By the way, don't ask how much it would be to upgrade to that.
Talking about Cascade-W, looks like it has 64 PCIe lanes

 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Is Cascade-W what we should eventually expect in Intel's HEDT space? Having more PCIe lanes would make their HEDT buyers happy.
 

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
Rocket Lake, if it's using a Cove core, is going to be really big, even if they do cut the L2 in half. Certainly doable, especially since it looks like it would be using chiplets.
If only Intel could find somewhere to put that PCH.
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
Can Ice Lake-U/Y on-die Thunderbolt 3 be reconfigured to PCIe for attaching an internal GPU? The four TB3 ports together are effectively PCIe 3.0 x16 worth of bandwidth.


No, because Intel reserves bandwidth for DisplayPort. eGPU guys have been suffering with this nonsense for years. They could just offer the user the option, but they don't. They were supposed to open-source the standard, yet I don't know if someone will write firmware to bypass that.

The maximum bandwidth you can get per port is like 2.5 lanes of PCIe 3.0.
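For anyone who wants to check the arithmetic, here is a rough sketch. It assumes the commonly cited ~22 Gb/s usable PCIe tunnel per TB3 port (the exact DisplayPort reservation varies by implementation) and PCIe 3.0's 8 GT/s link rate with 128b/130b encoding:

```python
# Rough numbers: why one TB3 port is only worth roughly 2.5-2.8 PCIe 3.0
# lanes for an eGPU, despite the 40 Gb/s headline link rate.
tb3_link_gbps = 40.0   # raw TB3 link rate per port
tb3_pcie_gbps = 22.0   # approx. usable PCIe tunnel per port after the
                       # DisplayPort reservation and protocol overhead
pcie3_lane_gbps = 8.0 * 128 / 130  # 8 GT/s with 128b/130b encoding

print(f"PCIe 3.0 lane, usable:       {pcie3_lane_gbps:.2f} Gb/s")
print(f"One TB3 port as PCIe lanes:  {tb3_pcie_gbps / pcie3_lane_gbps:.1f}")
print(f"4 ports, raw link rate:      {4 * tb3_link_gbps / pcie3_lane_gbps:.1f} lanes")
print(f"4 ports, PCIe tunnel only:   {4 * tb3_pcie_gbps / pcie3_lane_gbps:.1f} lanes")
```

So the "four ports = x16" claim only holds for the raw link rate; counting only the PCIe tunnel, four ports come to roughly 11 lanes' worth.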
 

mikk

Diamond Member
May 15, 2012
4,111
2,105
136
Hmm, that would be slightly faster than an Adreno 640... in a phone. Not particularly impressive for a notebook-class SoC?


Not really meaningful if you don't know the render quality of both. GFXBench in general isn't that meaningful either; for example, Gen9 is abnormally fast compared to mobile Vega, while in the real world Vega trashes Gen9. The ALU 2 score of the Adreno 640 is much worse, which is a better indicator.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
GFXBench in general isn't that meaningful; for example, Gen9 is abnormally fast compared to mobile Vega, while in the real world Vega trashes Gen9. The ALU 2 score of the Adreno 640 is much worse, which is a better indicator.

Gen 9 generally performs better because Intel was once in mobile and the benchmark became optimized for it. I even saw Cherry Trail doing better than some Iris parts.

Also, if you look at Anand's Galaxy S10 and iPad Pro 2018 reviews, you'll see that the GPUs throttle to about 2/3rds of their performance or less.
https://www.anandtech.com/show/14072/the-samsung-galaxy-s10plus-review/10
https://www.anandtech.com/show/13661/the-2018-apple-ipad-pro-11-inch-review/6

GPUs that outperform HD 620 at peak underperform it when the benchmark is run sustained. Even the iPad drops to 60% of its performance when running GFXBench 3.1 Manhattan for a long time.

The reality is that while the PC vendors (Intel and AMD) execute quite horribly, the differences in performance exist mostly because of the greater thermal headroom available on the PC parts. I think even this example shows mobile GPUs in too optimistic a light, because a benchmark, no matter how good it is, can't replace a usable application.

It's simply impossible to fairly compare the mobile GPUs to the PC ones because of this.

On PCs, we harshly judge vendors that only show peak performance, because we want to see how a part performs after the thermal headroom is used up.

On mobile, it's nearly the opposite: almost no one cares about sustained performance, partly because the devices don't need to perform at peak for a long time.
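To put made-up numbers on the point (only the ~60% throttle factor comes from the reviews linked above; the peak index values are hypothetical):

```python
# A mobile GPU that beats HD 620 at peak can still lose once it
# throttles to ~60% sustained, while the laptop part holds close to
# its peak thanks to greater thermal headroom.
hd620_peak, hd620_sustain_factor = 100, 0.95   # hypothetical index values
phone_peak, phone_sustain_factor = 120, 0.60   # ~60% sustained, per the
                                               # S10/iPad reviews above

hd620_sustained = hd620_peak * hd620_sustain_factor
phone_sustained = phone_peak * phone_sustain_factor
print(f"Peak:      phone {phone_peak} vs HD 620 {hd620_peak}")
print(f"Sustained: phone {phone_sustained:.0f} vs HD 620 {hd620_sustained:.0f}")
# The phone part wins at peak (120 vs 100) but loses sustained (72 vs 95).
```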
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I would say it backs up the frequency report for the i7, because if it really ran at 3.9 GHz the i7 would be ahead in every subtest, not just here and there, especially with faster LPDDR and a 2 MB larger L3 cache. The i7 is actually slower in 11 subtests; there is no way it was running at 3.9 GHz.

This is why using user-submitted results for anything other than acknowledging that a part exists is futile. This is especially true for comparing CPUs, where we want to see 2-3% differences in perf/clock (which can easily be masked by testing errors).

Even with the same configuration, you can find Geekbench/3DMark/GFXBench scores that are 30-50% lower.

The only way to really know is to look at a review when it comes out. There the testing setup will be standardized, the systems will be in the same room, the tested software will all be on the same versions, and some reviewers even run everything multiple times to verify accuracy.

It also adds to the argument for why PC and smartphone/tablet hardware can never be equalized for testing purposes.
 
  • Like
Reactions: Nothingness

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Not really meaningful if you don't know the render quality of both. GFXBench in general isn't that meaningful either; for example, Gen9 is abnormally fast compared to mobile Vega, while in the real world Vega trashes Gen9. The ALU 2 score of the Adreno 640 is much worse, which is a better indicator.

We can only comment on the benchmarks that are available; reasoning that real-world applications behave differently is moot at this point. In addition, GPUs in a phone are much more thermally limited, so even at peak we are comparing an Adreno 640 at ~600 MHz against Gen 11 at ~1 GHz or so.
And then, ALU 2 is purely synthetic and most likely the worst benchmark for comparing architectures.