Intel Cannonlake, Ice Lake, Tiger Lake & Sapphire Rapids Thread


mikk

Platinum Member
May 15, 2012
2,415
39
126
I'm so tired of hearing this over and over again. Geekbench has not been updated to properly support Ice Lake (such support may also be required from Windows), so it's not known whether the turbo frequency it reports for Ice Lake based systems is correct.

Not sure why you quoted that part of my Skylake/Whiskey Lake clarification; I mean Hans was wrong about the scores there. I haven't seen Geekbench need updates for new CPUs; even the Ryzen 2 frequency measurements seem correct. It is more likely it didn't boost over the 4C turbo; it is quite common that devices don't use the 1C turbo.


Seems that the Dell is going to be the only Ice Lake model on the market for a period of time.
Why do you think so?
 

Shivansps

Platinum Member
Sep 11, 2013
2,530
255
126
Last edited:

jpiniero

Diamond Member
Oct 1, 2010
6,411
260
126
Why do you think so?
All the attention Intel gave it. The actual "exclusivity" may not last all that long; it might be the kind of thing where the other models launch alongside Comet U.
 

mikk

Platinum Member
May 15, 2012
2,415
39
126

Bouowmx

Senior member
Nov 13, 2016
853
20
116
Is Ice Lake-H still a thing? I know S is not, but H is based on the same die (PCIe on die, replacing the IPU).
 

jpiniero

Diamond Member
Oct 1, 2010
6,411
260
126
Is Ice Lake-H still a thing? I know S is not, but H is based on the same die (PCIe on die, replacing the IPU).
No. The roadmap leak has Comet next year and then Rocket the year after.
 

mikk

Platinum Member
May 15, 2012
2,415
39
126
We have another Geekbench result from a lower end SKU

Core i5-1034G1: https://browser.geekbench.com/v4/cpu/13349360

maximum 3565
median 3526

It's pretty close to the i7 result.

https://browser.geekbench.com/v4/cpu/compare/13349360?baseline=13303489

Key differences:
- L3 cache: 6144 KB instead of 8192 KB
- Memory bandwidth is quite a bit lower (but this is related to the LPDDR speed the OEM is using)

I would say it backs up the frequency report for the i7, because if it really ran at 3.9 GHz the i7 would be ahead in every subtest and not just here and there, especially with faster LPDDR and a 2 MB bigger L3 cache. The i7 is actually slower in 11 subtests; there is no way it was running at 3.9 GHz.
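
To make that check concrete, here is a minimal sketch in Python. The subtest scores below are placeholders for illustration (the real numbers are in the Geekbench links above); the point is that a sustained 3.9 GHz versus ~3.6 GHz should show up as a fairly uniform ~8% lead across core-bound subtests:

```python
# Sketch: sanity-check a claimed sustained clock from per-subtest ratios.
claimed_i7_ghz = 3.9
i5_ghz = 3.6
expected_ratio = claimed_i7_ghz / i5_ghz  # ~1.083 if both runs are core-bound

# (subtest, i7_score, i5_score) -- hypothetical values, not the real data
subtests = [
    ("AES",   4100, 3900),
    ("LZMA",  3480, 3550),   # the i7 outright losing a subtest
    ("SGEMM", 4200, 3800),
]

losses = sum(1 for _, i7, i5 in subtests if i7 < i5)
avg_ratio = sum(i7 / i5 for _, i7, i5 in subtests) / len(subtests)

print(f"expected ratio at claimed clocks: {expected_ratio:.3f}")
print(f"observed average ratio: {avg_ratio:.3f}; subtests lost by the i7: {losses}")
# If the observed ratio hovers near 1.0 and the i7 loses many subtests,
# the claimed 3.9 GHz sustained clock is implausible.
```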
 

jpiniero

Diamond Member
Oct 1, 2010
6,411
260
126
The Dell spec list says all the options are for 3733 MHz memory. The spec list could be wrong, though; the i5 option mentioned in it is the 1035G1, which per the leak has a max boost of 3.6 and not 3.7. I'm inclined to believe the leak more, because the 1034G1 also has a max boost of 3.6, which matches the Geekbench result.

I guess the question really is whether it's a bug or the Dell model simply can't hit anywhere near 3.9.
 

Bouowmx

Senior member
Nov 13, 2016
853
20
116
Can Ice Lake-U/Y on-die Thunderbolt 3 be reconfigured to PCIe for attaching an internal GPU? The 4 ports of TB3 together are effectively PCIe 3.0 x16 worth of bandwidth.
 

jpiniero

Diamond Member
Oct 1, 2010
6,411
260
126
Even though it looks like Intel hasn't officially announced it, Cascade Lake-W is now on Ark. Apple must be using a custom 300 W TDP version of the Xeon W-3275 (28 cores, 2.5 base, 4.4 turbo, 205 W TDP) for the top upgrade in the new Mac Pro.

By the way, don't ask how much it would be to upgrade to that.
 

Dayman1225

Senior member
Aug 14, 2017
878
77
96
Even though it looks like Intel hasn't officially announced it, Cascade Lake-W is now on Ark. Apple must be using a custom 300 W TDP version of the Xeon W-3275 (28 cores, 2.5 base, 4.4 turbo, 205 W TDP) for the top upgrade in the new Mac Pro.

By the way, don't ask how much it would be to upgrade to that.
Speaking of Cascade Lake-W, it looks like it has 64 PCIe lanes.

 
Apr 27, 2000
11,857
1,048
126
Is Cascade-W what we should eventually expect in Intel's HEDT space? Having more PCIe lanes would make their HEDT buyers happy.
 

Thala

Senior member
Nov 12, 2014
721
40
116

Dayman1225

Senior member
Aug 14, 2017
878
77
96
Is Cascade-W what we should eventually expect in Intel's HEDT space? Having more PCIe lanes would make their HEDT buyers happy.
Perhaps. But Cascade Lake-W uses LGA 3647. Not sure if LGA 2066 is able to support the additional 16 lanes.
 

jpiniero

Diamond Member
Oct 1, 2010
6,411
260
126
Per core size: 6.91 mm^2 (~3.5 mm x 1.97 mm)
Rocket Lake, if it's using a Cove core, is going to be really big, even if they do cut the L2 in half. Certainly doable, especially since it looks like it would be using chiplets.
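
As a rough back-of-the-envelope check of what "really big" means (a sketch; the 8-core count and the uncore remark are assumptions, not a confirmed Rocket Lake configuration):

```python
# Rough area estimate if a Cove core really is ~6.91 mm^2 on this node.
core_area_mm2 = 6.91
cores = 8                       # assumed desktop core count
core_total = cores * core_area_mm2
print(f"cores alone: {core_total:.1f} mm^2")   # ~55 mm^2
# L3, ring, memory controller and the iGPU come on top of that,
# which is why halving the L2 alone wouldn't make the die small.
```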
 

Ajay

Diamond Member
Jan 8, 2001
5,163
195
136
Rocket Lake, if it's using a Cove core, is going to be really big, even if they do cut the L2 in half. Certainly doable, especially since it looks like it would be using chiplets.
If only Intel could find somewhere to put that PCH.
 

sxr7171

Diamond Member
Jun 21, 2002
5,066
5
91
Can Ice Lake-U/Y on-die Thunderbolt 3 be reconfigured to PCIe for attaching an internal GPU? The 4 ports of TB3 together are effectively PCIe 3.0 x16 worth of bandwidth.

No, because Intel reserves bandwidth for DisplayPort. eGPU guys have been suffering with this nonsense for years. They could just offer the user the option, but they don't. They were supposed to open-source the standard, yet I don't know if someone will write firmware to bypass that.

The maximum bandwidth you can get per port is like 2.5 lanes of PCIE 3.0.
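
As a rough sanity check on those numbers (a sketch; the ~22 Gbps figure is the commonly cited usable PCIe-tunnel cap per TB3 port, an assumption here rather than something from this thread):

```python
# Back-of-the-envelope: TB3 link rate vs. what PCIe tunneling actually yields.
tb3_link_gbps = 40.0        # per-port link rate
pcie_tunnel_gbps = 22.0     # commonly cited usable PCIe data cap per port
pcie3_lane_gbs = 8e9 * (128 / 130) / 8 / 1e9   # ~0.985 GB/s per PCIe 3.0 lane

per_port_gbs = pcie_tunnel_gbps / 8            # ~2.75 GB/s
lanes_equiv = per_port_gbs / pcie3_lane_gbs    # ~2.8 lanes -- "like 2.5 lanes"
four_ports_gbs = 4 * per_port_gbs              # ~11 GB/s

print(f"per port: {per_port_gbs:.2f} GB/s = {lanes_equiv:.1f} PCIe 3.0 lanes")
print(f"4 ports:  {four_ports_gbs:.1f} GB/s vs x16 = {16 * pcie3_lane_gbs:.1f} GB/s")
```

On raw link rate alone, 4 x 40 Gbps = 160 Gbps (~20 GB/s) does look like more than x16 (~15.8 GB/s), which is presumably where the "effectively x16" figure comes from; the tunneling cap is what kills it in practice.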
 

mikk

Platinum Member
May 15, 2012
2,415
39
126
Hmm, that would be slightly faster than Adreno 640... in a phone. Not particularly impressive for a notebook-class SoC?

Not really meaningful if you don't know the render quality of both. Also, GFXBench in general isn't that meaningful; for example, Gen9 is abnormally fast compared to mobile Vega, yet in the real world Vega trashes Gen9. The ALU 2 score of Adreno 640 is much worse, which is a better indicator.
 

IntelUser2000

Elite Member
Oct 14, 2003
6,084
218
126
Also, GFXBench in general isn't that meaningful; for example, Gen9 is abnormally fast compared to mobile Vega, yet in the real world Vega trashes Gen9. The ALU 2 score of Adreno 640 is much worse, which is a better indicator.
Gen 9 generally performs better because Intel was once in mobile, so the benchmark became optimized for it. I even saw Cherry Trail doing better than some Iris parts.

Also, if you look at Anand's Galaxy S10 and iPad Pro 2018 reviews, you'll see that the GPUs throttle to about 2/3rds of their performance or less.
https://www.anandtech.com/show/14072/the-samsung-galaxy-s10plus-review/10
https://www.anandtech.com/show/13661/the-2018-apple-ipad-pro-11-inch-review/6

GPUs that outperform HD 620 at peak underperform it when the benchmark is run sustained. Even the iPad drops to 60% of its performance when running GFXBench 3.1 Manhattan for a long time.

The reality is, while the PC vendors (Intel and AMD) execute quite horribly, the differences in performance exist mostly because of the greater thermal headroom available to the PC parts. I think even this example shows mobile GPUs in too optimistic a light, because the benchmark, no matter how good it is, can't replace a usable application.

It's simply impossible to fairly compare the mobile GPUs to the PC ones because of this.

On PCs, we harshly judge the vendors that only show peak performance, because we want to see how a part performs after the thermal headroom is used up.

On mobile, it's nearly the opposite: almost no one cares about sustained performance, partly because the devices don't need to perform at peak for a long time.
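
To illustrate the peak-versus-sustained methodology (a minimal sketch; the tiny CPU-bound loop is a hypothetical stand-in for a real GPU scene such as GFXBench Manhattan):

```python
import time

def run_benchmark_once(n: int = 200_000) -> float:
    """Tiny stand-in workload; the score is iterations per second.
    A real test would render a GPU scene here instead."""
    t0 = time.perf_counter()
    acc = 0
    for i in range(n):
        acc += i * i
    return n / (time.perf_counter() - t0)

def sustained_vs_peak(passes: int = 20) -> None:
    """Loop the workload back-to-back, as the linked mobile reviews do:
    early passes show peak, later passes show the post-throttle floor."""
    scores = [run_benchmark_once() for _ in range(passes)]
    peak, floor = max(scores), min(scores)
    print(f"peak {peak:,.0f}/s, floor {floor:,.0f}/s "
          f"({100 * floor / peak:.0f}% of peak)")

sustained_vs_peak()
```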
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
6,084
218
126
I would say it backs up the frequency report for the i7, because if it really ran at 3.9 GHz the i7 would be ahead in every subtest and not just here and there, especially with faster LPDDR and a 2 MB bigger L3 cache. The i7 is actually slower in 11 subtests; there is no way it was running at 3.9 GHz.
This is why using user-submitted results for anything other than acknowledging that the part exists is futile. This is especially true when comparing CPUs where we want to see 2-3% differences in perf/clock (which can easily be masked by testing errors).

For the same configuration, you can find Geekbench/3DMark/GFXBench scores 30-50% lower.

The only way to really know is to look at a review when it comes out. Then the testing setup will be standardized, the systems will be in the same room, the tested software will all be on the same versions, and some reviewers even run the tests multiple times to verify accuracy.

It also adds to the argument for why PC and smartphone/tablet hardware can never be equalized for testing purposes.
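
As a concrete illustration of why that spread matters (a sketch with made-up scores, not real database entries):

```python
from statistics import mean, stdev

# Hypothetical user-submitted scores for one identical configuration;
# a 30-50% spread like this is common in public result databases.
scores = [5200, 5100, 4800, 3900, 3500, 5150, 4200]

m, s = mean(scores), stdev(scores)
print(f"mean {m:.0f}, stdev {s:.0f} ({100 * s / m:.0f}% of mean)")
print(f"range {min(scores)}..{max(scores)} "
      f"({100 * (1 - min(scores) / max(scores)):.0f}% below peak)")
# A 2-3% perf/clock delta sits far inside this noise band, which is
# why only a controlled review setup can resolve it.
```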
 

Thala

Senior member
Nov 12, 2014
721
40
116
Not really meaningful if you don't know the render quality of both. Also, GFXBench in general isn't that meaningful; for example, Gen9 is abnormally fast compared to mobile Vega, yet in the real world Vega trashes Gen9. The ALU 2 score of Adreno 640 is much worse, which is a better indicator.
We can only comment on the benchmarks that are available; reasoning about how real-world applications might behave differently is moot at this point. In addition, GPUs in a phone are much more thermally limited, so even at peak we are comparing an Adreno 640 at ~600 MHz against Gen 11 at ~1 GHz or so.
And ALU 2 is purely synthetic, and most likely the worst benchmark for comparing architectures.
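
To put rough numbers behind the clock argument (a sketch: the Gen11 math follows Intel's public 64-EU GT2 configuration, while the Adreno 640 ALU count is an unofficial, commonly quoted estimate, not a confirmed spec):

```python
# Peak FP32 throughput, back-of-the-envelope.
# Gen11 GT2: 64 EUs x 8 FP32 ALUs x 2 FLOPs (FMA) per clock.
gen11_gflops = 64 * 8 * 2 * 1.0          # ~1024 GFLOPS at ~1 GHz

# Adreno 640: the 768-ALU figure is an assumption, not a confirmed spec.
adreno_gflops = 768 * 2 * 0.6            # ~922 GFLOPS at ~600 MHz

print(f"Gen11 ~{gen11_gflops:.0f} GFLOPS vs Adreno 640 ~{adreno_gflops:.0f} GFLOPS")
print(f"clock advantage alone: {1.0 / 0.6:.2f}x")
```

Even on these rough numbers the peak throughput is comparable; the clock gap mostly reflects the thermal budget each chip is designed around.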
 

