Discussion Intel current and future Lakes & Rapids thread

Page 403

tamz_msc

Diamond Member
Jan 5, 2017
3,770
3,590
136
Hulk, you need to differentiate between SpecInt and SpecFP. The latter is useless for client as it represents HPC workloads, which are extremely memory-bandwidth bound.

With the microcode-updated 11700K, it's getting 6.98 in SpecCPU2017_Int versus 6.04 for the 10700K. That's 5.0 GHz ST turbo for the 11700K vs 5.1 GHz turbo for the 10700K. Assuming 85% scaling, and not erroneously assuming 100% scaling like Ian does in his reviews,* that puts the 11700K at 7.09.

That means Rocket Lake is 17.5% faster per clock compared to Comet Lake. Then, normalizing to 5.1 GHz clocks again, the 5950X gets 7.715, or 8.7% faster than Rocket Lake.

*Again, if you don't do what AT does in its reviews, the 3% "deficit" that Tiger Lake has compared to Ice Lake disappears entirely. Higher-clocked parts deliver less performance per clock in real-world scenarios.
Eh, you seem to imply that HPC means supercomputers and compute clusters, while the fact is that many HPC workloads, like modeling and simulation, are also done on high-end workstations, which until only a few years ago were commonly 8-core or 16-core systems; now 'client' systems are able to match those in core count. So SPECfp is not irrelevant.

Also, why do you assume 85% scaling? I know frequency scaling isn't 100%, but assigning a specific number less than 100% also seems arbitrary.
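For concreteness, the quoted normalization works out like this. Everything in the sketch comes from the post itself (scores of 6.98 and 6.04, turbo clocks of 5.0 and 5.1 GHz, the 5950X's 7.715); the 85% scaling factor is the post's assumption, not a measured constant:

```python
def scale_score(score, f_meas, f_target, efficiency=0.85):
    """Scale a SPEC score from f_meas (GHz) to f_target, assuming each 1%
    of extra frequency yields only `efficiency` percent more performance."""
    return score * (1.0 + efficiency * (f_target / f_meas - 1.0))

# 11700K measured at 5.0 GHz ST turbo, normalized to the 10700K's 5.1 GHz
rkl_at_51 = scale_score(6.98, 5.0, 5.1)   # ~7.10
ipc_gain = rkl_at_51 / 6.04 - 1.0         # per-clock gain vs 10700K, ~17.5%
zen3_gap = 7.715 / rkl_at_51 - 1.0        # 5950X advantage, ~8.7%

print(f"11700K @ 5.1 GHz: {rkl_at_51:.2f}")
print(f"IPC gain vs 10700K: {ipc_gain:.1%}")
print(f"5950X advantage: {zen3_gap:.1%}")
```

These reproduce the quoted figures to within rounding (7.10 vs the post's 7.09); the whole calculation hinges on that assumed efficiency factor, which is exactly what's being questioned above.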
 
Last edited:
  • Like
Reactions: lightmanek

tamz_msc

Diamond Member
Jan 5, 2017
3,770
3,590
136
Found three more UHD 750 game tests: https://www.conseil-config.com/2021...k-et-core-i5-11600k/4/#nouvel-igp-intel-xe-lp

A 53-55% gain over UHD 630. So in almost all game tests the advantage is ~50% over UHD 630, with a few outliers between 30-40%.
Hexus found a little over 30% in their testing. I'm more inclined to believe that the improvement is likely to be around 30% instead of 50%, because of the simple fact that there are many more games which are unsupported on Xe than Intel has optimizations for, according to gameplay.intel.com.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
They obviously had to make compromises: 8 cores tops, higher power usage. In my mind, this was always going to be the case with 14nm fabrication. I personally think that for typical desktop usage the reviewers have been overly critical; the average user isn't benchmarking 24/7, and the processor idles back in most common usage scenarios.

The issue is that it doesn't offer many benefits over Comet Lake, whose top SKU had 10 cores, so it's even a step back in MT.

What are the benefits, really? There is exactly one: if you heavily use AVX-512-enabled software.
 
  • Like
Reactions: lobz and lightmanek

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
Hexus found a little over 30% in their testing. I'm more inclined to believe that the improvement is likely to be around 30% instead of 50%, because of the simple fact that there are many more games which are unsupported on Xe than Intel has optimizations for, according to gameplay.intel.com.


Hexus isn't new, and their Time Spy scores look strange; you can read more here: https://forums.anandtech.com/thread...-rapids-thread.2509080/page-401#post-40474669

Almost all of the tested games from other pages show gains of 50% or more, which makes sub-50% an outlier. And by the way, this is a typical Intel fail:




Imagine Nvidia releases a new GPU without driver support.
 
Last edited:

rainy

Senior member
Jul 17, 2013
505
424
136
Almost all of the tested games from other pages show gains of 50% or more, which makes sub-50% an outlier.

Are you trying to suggest that Intel is lying on its official slide claiming "up to 50 percent performance improvement over previous generation"?
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
Are you trying to suggest that Intel is lying on its official slide claiming "up to 50 percent performance improvement over previous generation"?


No, I'm not saying they are lying; Intel usually uses 3DMark for its GPU claims. As we know, 3DMark doesn't necessarily reflect real-world performance, so we are looking at real-world gaming tests. There aren't that many tests available, but in most of the few gaming tests we have, the advantage is 50% or slightly more, so "up to" is not the right wording. "Up to" usually refers to a rare peak and outliers.

Interestingly, the press UHD 750 driver was based on build 9220 (it's quite old; edit: dated 29th January).

 
Last edited:
  • Like
Reactions: lightmanek

tamz_msc

Diamond Member
Jan 5, 2017
3,770
3,590
136
Hexus isn't new, and their Time Spy scores look strange; you can read more here: https://forums.anandtech.com/thread...-rapids-thread.2509080/page-401#post-40474669

Almost all of the tested games from other pages show gains of 50% or more, which makes sub-50% an outlier. And by the way, this is a typical Intel fail:




Imagine Nvidia releases a new GPU without driver support.
Hexus tested at 1080p, while the Italian review you've linked tested at 720p. That could easily account for the difference.

The problem is that games not listed on gameplay.intel.com far outnumber the games that Intel optimizes for, and this fact alone is enough to cast doubt on whether the +50% improvement is an average figure or a best-case figure when you test across a wide spectrum of games.

Besides that, some games not listed on Intel's website may not even run properly with Xe graphics. I've experienced this myself on my i5-1135G7.
 
  • Like
Reactions: Tlh97

Asterox

Golden Member
May 15, 2012
1,026
1,775
136
Well, it is a Rocket Lake backport CPU. This is expected; as we see in a few non-gaming examples, it is slower than the R5 3600.

R5 3600: average 4.0-4.1 GHz all-core boost, 4.2 GHz single-core boost

i5-11400: per Intel, 4.2 GHz all-core boost, 4.4 GHz single-core boost


 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
Hexus tested at 1080p, while the Italian review you've linked tested at 720p. That could easily account for the difference.

conseil-config tested Full HD, madboxpc tested Full HD, hardwareluxx tested Full HD, and almost all of them show 50% or more. Resolution shouldn't matter if the fps are low enough; actually, the 1080p advantage in Strange Brigade at madboxpc is higher than at 720p, which I would expect. The 720p advantage from hwupgrade should increase at 1080p, not the other way around, so your argument doesn't make any real sense. You should rather ask yourself why the Time Spy scores from Hexus don't match other reviews. Either the UHD 630 is too high or the UHD 750 too low; this is more of a concern than 720p vs 1080p.


The problem is that games not listed on gameplay.intel.com far outnumber the games that Intel optimizes for, and this fact alone is enough to cast doubt on whether the +50% improvement is an average figure or a best-case figure when you test across a wide spectrum of games.


This is not a problem because it doesn't matter; the games run on a standardized 3D API. They don't have to test every single game, and they can't optimize every single game; this is more of a marketing thing, which they need for their control panel library/settings. However, your comment proves why such a "supported" games list matters to the basic user; it was a good decision for Intel to add this gameplay.intel project, and it's important for their future dedicated GPUs, because obviously basic users do believe games not listed there can't run well, or at all.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,770
3,590
136
This is not a problem because it doesn't matter; the games run on a standardized 3D API. They don't have to test every single game, and they can't optimize every single game; this is more of a marketing thing, which they need for their control panel library/settings. However, your comment proves why such a "supported" games list matters to the basic user; it was a good decision for Intel to add this gameplay.intel project, and it's important for their future dedicated GPUs, because obviously basic users do believe games not listed there can't run well, or at all.
This is a big problem because, despite games using a standardized 3D API, the Intel driver doesn't work with some of them. It's not about basic people believing that games not listed on Intel's website won't run - it is a FACT that games not listed there have a 50/50 chance of not running properly. If you contact Intel support by filing a bug report, they tell you to check with the game developer, which isn't helpful at all, because I doubt the developer of a five- or six-year-old game is going to buy a Tiger Lake laptop to bother checking whether the game runs on it or not.

That's why I don't believe Intel's claims when it comes to graphics performance.
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
Like most impressive-sounding figures, it's one of those "up to X%" claims, which obviously should never be taken as an average. If they have one title where it's true, then they were hardly lying.
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
This is a big problem because, despite games using a standardized 3D API, the Intel driver doesn't work with some of them. It's not about basic people believing that games not listed on Intel's website won't run - it is a FACT that games not listed there have a 50/50 chance of not running properly. If you contact Intel support by filing a bug report, they tell you to check with the game developer, which isn't helpful at all, because I doubt the developer of a five- or six-year-old game is going to buy a Tiger Lake laptop to bother checking whether the game runs on it or not.

That's why I don't believe Intel's claims when it comes to graphics performance.


This is nonsense, sorry. We don't need a list at all. Intel didn't even have such a supported-games list for many years; it's actually quite new.

Like most impressive-sounding figures, it's one of those "up to X%" claims, which obviously should never be taken as an average. If they have one title where it's true, then they were hardly lying.


It's not a real "up to" figure; this is the point. IntelUser2000 claimed it's a real "up to" figure based on only one test - he made a mistake. The 3DMark scores from Hexus are incorrect; something is wrong there. Even the Night Raid scores are wrong - compare them with hardwareluxx.
 

Gideon

Golden Member
Nov 27, 2007
1,622
3,645
136
This is nonsense, sorry. We don't need a list at all. Intel didn't even have such a supported-games list for many years; it's actually quite new.
The list itself might be of little value, but it's an absolute fact that current graphics APIs are faaaaar from "standard". There is a reason why driver packages for AMD and Nvidia are so huge: they have per-game hacks for most A to AAA titles. Just try running games without their corresponding "release driver". It will often result in game-breaking visual corruption and crashes, not only performance deficiencies.

These games and drivers are built on hacks on top of hacks on top of hacks, as the API abstractions are actually quite far from how things really run in hardware, and games need per-game fixes just to run well. (This is absolutely true for DX11 in particular, but also for DX12 and Vulkan to a somewhat lesser degree.)
 
  • Like
Reactions: Tlh97

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
You seem to imply that HPC means supercomputers and compute clusters, while the fact is that many HPC workloads, like modeling and simulation, are also done on high-end workstations, which until only a few years ago were commonly 8-core or 16-core systems; now 'client' systems are able to match those in core count. So SPECfp is not irrelevant.

Sure, fair point. But if you really want "IPC", then it has to be about the robustness of the architecture when it comes to performance, since we're dealing with a general-purpose CPU. Performance in vector, cryptography, graphics, or memory-bandwidth-bound workloads can be boosted way beyond what the architectural changes alone deliver.

That's why I disregard SpecFP. You didn't judge the Pentium 4's performance entirely by its SSE2 performance, because while that was pretty fantastic, it had limited support, and the performance of the x87 FPU was very relevant in those days. The overall architecture sucked.

You don't judge Rocket Lake's performance using AVX-512, for the same reason.

Also, why do you assume 85% scaling? I know frequency scaling isn't 100%, but assigning a specific number less than 100% also seems arbitrary.

80-85% is a common range you'll get when you isolate variables such as compilers and turbo modes for SpecInt suites. The FP portion depends highly on whether the system has enough memory bandwidth or not.

It's not a real "up to" figure; this is the point. IntelUser2000 claimed it's a real "up to" figure based on only one test - he made a mistake.

I believe you there, but does it really matter? I cared about Tiger Lake since the iGPU there was at the top of its game. It matters whether it's equal to the competition or 20-30% above.

In this case it's 50% above the severely underpowered UHD 630 graphics. Aside from academic reasons it doesn't matter.

Just like Rocket Lake is only relevant in that they were able to port a 10nm core to 14nm. They were able to do it. So what? Its performance and power efficiency still suck horribly.
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
The list itself might be of little value, but it's an absolute fact that current graphics APIs are faaaaar from "standard". There is a reason why driver packages for AMD and Nvidia are so huge: they have per-game hacks for most A to AAA titles. Just try running games without their corresponding "release driver". It will often result in game-breaking visual corruption and crashes, not only performance deficiencies.

These games and drivers are built on hacks on top of hacks on top of hacks, as the API abstractions are actually quite far from how things really run in hardware, and games need per-game fixes just to run well. (This is absolutely true for DX11 in particular, but also for DX12 and Vulkan to a somewhat lesser degree.)


Intel's driver package is also very big; the 3D-related driver files themselves are not that big. tamz_msc is expecting a 20% boost for games once they are added to the list of supported games; it won't happen.


I believe you there, but does it really matter? I cared about Tiger Lake since the iGPU there was at the top of its game. It matters whether it's equal to the competition or 20-30% above.

In this case it's 50% above the severely underpowered UHD 630 graphics. Aside from academic reasons it doesn't matter.

Just like Rocket Lake is only relevant in that they were able to port a 10nm core to 14nm. They were able to do it. So what? Its performance and power efficiency still suck horribly.


A week ago I already said that iGPU performance for desktop models doesn't really matter; the feature list is more important in this case. AV1 decoding, higher-quality Quick Sync transcoding, HDMI 2.0: this is the real upgrade over the UHD 630.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Besides that, some games not listed on Intel's website may not even run properly with Xe graphics. I've experienced this myself on my i5-1135G7.

I'm disappointed with the driver side of Xe. Sure, it's usable, but I thought it would be in better shape by now. Have they ever addressed Horizon Zero Dawn not even starting on their graphics? Even if it runs slowly, it should run.

They stopped talking about the drivers and graphics. Did they give up or something? Does the Graphics Command Center have the touted zero-day game optimizations yet?

If we're really expecting substantial improvements with next gen iGPUs, and we expect their dGPUs to launch late this year, it's going to be a double whammy if the iGPUs perform like the 192EU dGPUs and the supply situation improves.

Update: Wait, they forgot to release drivers for the Rocket Lake Xe? Really? Boy, it might be interesting if they released information on what's going on at Intel to screw up that badly.

And here we were worrying about optimizations for games in general.
 
Last edited:

tamz_msc

Diamond Member
Jan 5, 2017
3,770
3,590
136
This is nonsense, sorry. We don't need a list at all. Intel didn't even have such a supported-games list for many years; it's actually quite new.
What is nonsense? That some unlisted games don't run on Xe or that Intel support asks you to contact the game developer if a game doesn't run? I have first-hand experience with the former and can provide email transcripts of the latter if necessary.
tamz_msc is expecting a 20% boost for games once they are added to the list of supported games; it won't happen.
This isn't what I said. What I said was that since there are many more games that are unsupported on Xe than games for which Intel has official support, it's anybody's guess how a wide spectrum of games would perform (if some of them run at all). It is obvious that some of them will not do as well as the games on Intel's list, which is why the +50% figure isn't a realistic indicator of the UHD 750's performance over the UHD 630.
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,026
136
ExtremeTech claims that Intel is thinking about rebranding its fabs to be a closer match to TSMC. We will see.
 

french toast

Senior member
Feb 22, 2017
988
825
136
Correct me if I am wrong, but I'm assuming Rocket Lake uses a mesh topology rather than a ring?

That being so, I thought this would happen. AMD had poor latency and associated gaming performance with Zen 1, partly because they were forward-looking with their Infinity Fabric and paid an early latency penalty between cores and when swapping between CCXs. I always figured Intel still had this penalty to pay, and at that point the gap would close.

This was also based off Skylake-X at the time, which made me wonder.
Due to pricing, Rocket Lake is still a good buy at certain price points, as pointed out in reviews, but the top-end part is a joke; honestly, they should never have released it, as it has tarnished the i9 branding in my opinion.

Alder Lake looks like it will bring the gaming performance and efficiency crown back to Intel this year. I wonder if AMD will bring out a refresh mid-year before Alder Lake?
 

ondma

Platinum Member
Mar 18, 2018
2,720
1,280
136
Tom's has an interesting face off between the 11900K and 5900X.

If you're old enough to remember, it's like when the old champ fought the new one... Tyson vs Spinks back in '88.

That review shows a much better result for Rocket Lake vs Comet Lake than Ian's review. And somehow they managed to score a state-of-the-art GPU.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
That review shows a much better result for Rocket Lake vs Comet Lake than Ian's review. And somehow they managed to score a state-of-the-art GPU.

Yeah, and Zen 3 still has worse 99th-percentile FPS results. There seem to be some games that just love 64MB of L3, but there are also games where crossing CCX domains hurts performance.

Oh, and it is easy to get better results than Ian's: don't run JEDEC memory timings.
 
  • Like
Reactions: Tlh97 and Zucker2k

Racan

Golden Member
Sep 22, 2012
1,108
1,984
136
Tom's has an interesting face off between the 11900K and 5900X.

If you're old enough to remember, it's like when the old champ fought the new one... Tyson vs Spinks back in '88.


Some of the results are strange; I've never seen Zen 3 perform this poorly in POV-Ray in other tests.

ZDavYcnCN639sbSGwXB26g-1536-80.png