
Discussion: Intel current and future Lakes & Rapids thread

Page 403

tamz_msc

Platinum Member
Jan 5, 2017
2,837
2,585
136
Found three more UHD 750 game tests: https://www.conseil-config.com/2021/test-intel-core-i9-11900k-et-core-i5-11600k/4/#nouvel-igp-intel-xe-lp

A 53-55% gain over UHD 630. So in almost all game tests the advantage is ~50% over UHD 630, with a few outliers between 30-40%.
Hexus found a little over 30% in their testing. I'm more inclined to believe that the improvement is around 30% rather than 50%, for the simple reason that the games unsupported on Xe far outnumber the ones Intel has optimizations for, according to gameplay.intel.com.
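For reference, the percentage figures being compared in this thread are just relative fps ratios; a minimal sketch (the fps values below are hypothetical placeholders, not numbers from the linked reviews):

```python
# How a "% gain over UHD 630" figure is computed: a relative fps ratio.
# The fps numbers below are hypothetical, not taken from any review.
def gain_pct(new_fps: float, old_fps: float) -> float:
    """Percent advantage of new_fps over old_fps."""
    return (new_fps / old_fps - 1) * 100

# e.g. 46.5 fps on UHD 750 vs 30 fps on UHD 630 -> +55%
print(round(gain_pct(46.5, 30.0), 1))  # -> 55.0
```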
 

beginner99

Diamond Member
Jun 2, 2009
4,741
1,145
136
They obviously had to make compromises; 8 cores tops; higher power usage. In my mind, this was always going to be the case with 14nm fabrication. I personally think that for typical desktop usage the reviewers have been overly critical; the average user isn't benchmarking 24/7, and the processor idles back in most common usage scenarios.
The issue is that it doesn't offer much benefit over Comet Lake, which had a 10-core top SKU, so it's even a step back in MT.

What are the benefits really? There is exactly one: if you heavily use AVX-512-enabled software.
 
  • Like
Reactions: lobz and lightmanek

mikk

Diamond Member
May 15, 2012
3,086
913
136
Hexus found a little over 30% in their testing. I'm more inclined to believe that the improvement is around 30% rather than 50%, for the simple reason that the games unsupported on Xe far outnumber the ones Intel has optimizations for, according to gameplay.intel.com.

The Hexus result isn't new, and their Time Spy scores look strange; you can read more here: https://forums.anandtech.com/threads/intel-current-and-future-lakes-rapids-thread.2509080/page-401#post-40474669

Almost all of the tested games from other sites show 50% or more, which makes sub-50% the outlier. And by the way, this is a typical Intel fail:




Imagine Nvidia releasing a new GPU without driver support.
 
Last edited:

rainy

Senior member
Jul 17, 2013
453
288
136
Almost all of the tested games from other sites show 50% or more, which makes sub-50% the outlier.
Are you trying to suggest that Intel is lying on its official slide with "up to 50 percent performance improvement over previous generation"?
 

mikk

Diamond Member
May 15, 2012
3,086
913
136
Are you trying to suggest that Intel is lying on its official slide with "up to 50 percent performance improvement over previous generation"?

No, I'm not saying they are lying; Intel usually uses 3DMark for its GPU claims. As we know, 3DMark doesn't necessarily reflect real-world performance, so we are looking at real-world gaming tests. There are not that many tests available, but in most of the few gaming tests we have the advantage is 50% or slightly more, so "up to" is not the right wording. "Up to" usually refers to a rare peak or outlier.
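The distinction between a typical gain and an "up to" peak can be made concrete with a toy calculation (the per-game percentages below are invented for illustration, not measured results):

```python
# "Up to X%" normally describes the best case; a typical figure is better
# summarized by the median. The per-game gains below are made up.
gains = [53, 55, 50, 52, 34, 31, 58]  # hypothetical % uplift per game

median_gain = sorted(gains)[len(gains) // 2]
peak_gain = max(gains)

print(f"median: {median_gain}%, peak: {peak_gain}%")
# If the median is already ~50%, "up to 50%" understates the typical case.
```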

Interestingly, the press UHD 750 driver was based on build 9220 (it's quite old; edit: dated 29th January).

 
Last edited:
  • Like
Reactions: lightmanek

tamz_msc

Platinum Member
Jan 5, 2017
2,837
2,585
136
The Hexus result isn't new, and their Time Spy scores look strange; you can read more here: https://forums.anandtech.com/threads/intel-current-and-future-lakes-rapids-thread.2509080/page-401#post-40474669

Almost all of the tested games from other sites show 50% or more, which makes sub-50% the outlier. And by the way, this is a typical Intel fail:




Imagine Nvidia releases a new GPU without driver support.
Hexus tested at 1080p, while the Italian review you've linked tested at 720p. That could easily account for the difference.

The problem is that the games not listed on gameplay.intel.com far outnumber those that Intel optimizes for, and this fact alone is enough to cast doubt on whether the +50% improvement is an average figure or a best-case figure if you test a wide spectrum of games.

Besides that, some games not listed on Intel's website may not even run properly with Xe graphics. I've experienced this myself on my i5-1135G7.
 
  • Like
Reactions: Tlh97

Asterox

Senior member
May 15, 2012
482
684
136
Well, it is a Rocket Lake backport CPU. It is to be expected; as we see in a few non-gaming examples, it is slower than the R5 3600.

R5 3600: average 4-4.1 GHz all-core boost, 4.2 GHz single-core boost

i5-11400: per Intel, 4.2 GHz all-core boost, 4.4 GHz single-core boost


 

mikk

Diamond Member
May 15, 2012
3,086
913
136
Hexus tested at 1080p, while the Italian review you've linked tested at 720p. That could easily account for the difference.
conseil-config tested Full HD, madboxpc tested Full HD, hardwareluxx tested Full HD, and almost all of them show 50% or more. Resolution shouldn't matter if the fps are low enough; in fact, the 1080p advantage in Strange Brigade at madboxpc is higher than at 720p, which I would expect. The 720p advantage from hwupgrade should increase at 1080p, not the other way around, so your argument doesn't make real sense. You should rather ask yourself why the Time Spy scores from Hexus don't match other reviews. Either the UHD 630 is too high or the UHD 750 too low; this is more of a concern than 720p vs. 1080p.
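The underlying argument here is that once both iGPUs are fully GPU-bound, the ratio between them should be roughly resolution-independent. A sketch with hypothetical fps numbers (not benchmark results):

```python
# If both iGPUs are GPU-bound, the UHD 750 / UHD 630 fps ratio should be
# roughly the same at 720p and 1080p. All numbers are hypothetical.
fps = {
    "720p":  {"UHD 630": 30.0, "UHD 750": 45.0},
    "1080p": {"UHD 630": 18.0, "UHD 750": 27.0},
}

for res, scores in fps.items():
    ratio = scores["UHD 750"] / scores["UHD 630"]
    print(f"{res}: UHD 750 = {ratio:.2f}x UHD 630")
# A large gap between the two ratios would point to a CPU limit or a
# measurement problem, not a resolution effect.
```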


The problem is that the games not listed on gameplay.intel.com far outnumber those that Intel optimizes for, and this fact alone is enough to cast doubt on whether the +50% improvement is an average figure or a best-case figure if you test a wide spectrum of games.

This is not a problem because it doesn't matter; the games run on a standardized 3D API. They don't have to test every single game, and they can't optimize every single game; this is more a marketing thing, and they need it for their control-panel library/settings. However, your comment proves why such a "supported" game list is important to the basic user; it was a good decision from Intel to add the gameplay.intel project, which is important for their future dedicated GPUs, because basic users evidently do believe that games not listed there run poorly or not at all.
 

tamz_msc

Platinum Member
Jan 5, 2017
2,837
2,585
136
This is not a problem because it doesn't matter; the games run on a standardized 3D API. They don't have to test every single game, and they can't optimize every single game; this is more a marketing thing, and they need it for their control-panel library/settings. However, your comment proves why such a "supported" game list is important to the basic user; it was a good decision from Intel to add the gameplay.intel project, which is important for their future dedicated GPUs, because basic users evidently do believe that games not listed there run poorly or not at all.
This is a big problem because, despite games using a standardized 3D API, the Intel driver doesn't work with some of them. It's not about basic people believing that games not listed on Intel's website won't run - it is a FACT that games not listed there have a 50/50 chance of not running properly. If you contact Intel support by filing a bug report, they tell you to check with the game developer, which isn't helpful at all, because I doubt the developer of a five- or six-year-old game is going to buy a Tiger Lake laptop to check whether the game runs on it.

That's why I don't believe Intel's claims when it comes to graphics performance.
 

Mopetar

Diamond Member
Jan 31, 2011
5,651
2,412
136
Like most impressive-sounding figures, it's one of those "up to X%" claims, which obviously should never be taken as an average. If they have one title where it's true, then they were hardly lying.
 

mikk

Diamond Member
May 15, 2012
3,086
913
136
This is a big problem because, despite games using a standardized 3D API, the Intel driver doesn't work with some of them. It's not about basic people believing that games not listed on Intel's website won't run - it is a FACT that games not listed there have a 50/50 chance of not running properly. If you contact Intel support by filing a bug report, they tell you to check with the game developer, which isn't helpful at all, because I doubt the developer of a five- or six-year-old game is going to buy a Tiger Lake laptop to check whether the game runs on it.

That's why I don't believe Intel's claims when it comes to graphics performance.

This is nonsense, sorry. We don't need a list at all. Intel didn't even have such a supported-games list for many years; it's actually quite new.

Like most impressive-sounding figures, it's one of those "up to X%" claims, which obviously should never be taken as an average. If they have one title where it's true, then they were hardly lying.

It's not a real "up to" figure; this is the point. IntelUser2000 claimed it's a real "up to" figure based on only one test - he made a mistake. The 3DMark scores from Hexus are incorrect; something is wrong there. Even the Night Raid scores are wrong - compare them with hardwareluxx.
 

Gideon

Golden Member
Nov 27, 2007
1,214
2,270
136
This is nonsense, sorry. We don't need a list at all. Intel didn't even have such a supported-games list for many years; it's actually quite new.
The list itself might be of little value, but it's an absolute fact that current graphics APIs are faaaaar from "standard". There is a reason why driver packages for AMD and Nvidia are so huge: they have per-game hacks for most A to AAA titles. Just try running games without their corresponding "release driver"; it will often result in game-breaking visual corruption and crashes, not only performance deficiencies.

These games and drivers are built on hacks on top of hacks on top of hacks, as the API abstractions are actually quite far from how things run in hardware and need per-game fixes just to run well (it's absolutely true for DX11 in particular, but also for DX12 and Vulkan to a somewhat lesser degree).
 
  • Like
Reactions: Tlh97

IntelUser2000

Elite Member
Oct 14, 2003
7,337
1,994
136
You seem to imply that HPC means supercomputers and compute clusters, while the fact is that many HPC workloads like modeling and simulation are also done on high-end workstations, which until only a few years ago were commonly 8-core or 16-core systems, and now 'client' systems are able to match those in core count. So SPECfp is not irrelevant.
Sure, fair point. But if you really want "IPC" then it has to be about the robustness of the architecture when it comes to performance, since we're dealing with a general-purpose CPU. Performance in vector, cryptography, and graphics workloads, or memory-bandwidth-bound ones, can be boosted way beyond the architectural changes.

That's why I disregard SPECfp. You didn't judge the Pentium 4's performance entirely by its SSE2 performance, because while that was pretty fantastic, it had limited support, and the performance of the x87 FPU was very relevant in those days. The overall architecture sucked.

You don't judge Rocket Lake's performance using AVX-512, for the same reason.

Also why do you assume 85% scaling? I know frequency scaling isn't 100% but to assign a number less than 100% also seems arbitrary.
80-85% is a common range you'll get for SPECint suites when you isolate variables such as compilers and Turbo modes. The FP portion depends highly on whether the system has enough memory bandwidth.
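The sub-linear scaling being described can be sketched as a simple model (the 85% factor is the rule of thumb from the post, not a measured constant):

```python
# Rough model of sub-linear frequency scaling: only a fraction k of each
# percent of extra clock shows up as extra performance. k = 0.85 is the
# rule of thumb from the discussion, not a measured value.
def scaled_score(base_score: float, f_old: float, f_new: float,
                 k: float = 0.85) -> float:
    """Estimate a benchmark score at a new clock under k-fractional scaling."""
    return base_score * (1 + k * (f_new / f_old - 1))

# A +10% clock bump with 85% scaling yields only a +8.5% score.
print(round(scaled_score(100.0, 4.0, 4.4), 1))  # -> 108.5
```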

It's not a real "up to" figure; this is the point. IntelUser2000 claimed it's a real "up to" figure based on only one test - he made a mistake.
I believe you there, but does it really matter? I cared about Tiger Lake since the iGPU there was at the top of its game. There it matters whether it's equal to the competition or 20-30% above.

In this case it's 50% above the severely underpowered UHD 630 graphics. Aside from academic reasons it doesn't matter.

Just as Rocket Lake is only relevant in that they were able to port a 10nm core to 14nm. They were able to do it. So what? Its performance and power efficiency still suck horribly.
 

mikk

Diamond Member
May 15, 2012
3,086
913
136
The list itself might be of little value, but it's an absolute fact that current graphics APIs are faaaaar from "standard". There is a reason why driver packages for AMD and Nvidia are so huge: they have per-game hacks for most A to AAA titles. Just try running games without their corresponding "release driver"; it will often result in game-breaking visual corruption and crashes, not only performance deficiencies.

These games and drivers are built on hacks on top of hacks on top of hacks, as the API abstractions are actually quite far from how things run in hardware and need per-game fixes just to run well (it's absolutely true for DX11 in particular, but also for DX12 and Vulkan to a somewhat lesser degree).

Intel's driver package is also very big; the 3D-related driver files themselves are not that big. tamz_msc is expecting a 20% boost in games once they are added to the list of supported games; it won't happen.


I believe you there, but does it really matter? I cared about Tiger Lake since the iGPU there was at the top of its game. There it matters whether it's equal to the competition or 20-30% above.

In this case it's 50% above the severely underpowered UHD 630 graphics. Aside from academic reasons it doesn't matter.

Just as Rocket Lake is only relevant in that they were able to port a 10nm core to 14nm. They were able to do it. So what? Its performance and power efficiency still suck horribly.

A week ago I already said that iGPU performance doesn't really matter for desktop models; the feature list is more important in this case. AV1 decoding, higher-quality Quick Sync transcoding, HDMI 2.0 - this is the real upgrade over UHD 630.
 

IntelUser2000

Elite Member
Oct 14, 2003
7,337
1,994
136
Besides that, some games not listed on Intel's website may not even run properly with Xe graphics. I've experienced this myself on my i5-1135G7.
I'm disappointed with the driver side of Xe. Sure, it's usable, but I thought it would be in better shape by now. Have they ever addressed Horizon Zero Dawn not even starting on their graphics? Even if it runs slowly, it should run.

They stopped talking about the drivers and graphics. Did they give up or something? Does the graphics command center have the touted zero-day game optimizations yet?

If we're really expecting substantial improvements with next gen iGPUs, and we expect their dGPUs to launch late this year, it's going to be a double whammy if the iGPUs perform like the 192EU dGPUs and the supply situation improves.

Update: Wait, they forgot to release drivers for the Rocket Lake Xe? Really? Boy, it would be interesting if they released information on what's going on at Intel to screw up that badly.

And here we were worrying about optimizations for games in general.
 
Last edited:

tamz_msc

Platinum Member
Jan 5, 2017
2,837
2,585
136
This is nonsense, sorry. We don't need a list at all. Intel didn't even have such a supported-games list for many years; it's actually quite new.
What is nonsense? That some unlisted games don't run on Xe or that Intel support asks you to contact the game developer if a game doesn't run? I have first-hand experience with the former and can provide email transcripts of the latter if necessary.
tamz_msc is expecting a 20% boost in games once they are added to the list of supported games; it won't happen.
This isn't what I said. What I said was that since there are many more games that are unsupported on Xe than ones for which Intel has official support, it's anybody's guess how a wide spectrum of games would perform (if some of them run at all). It is obvious that some of them will not do as well as the games that are on Intel's list, which is why the +50% figure isn't a realistic indicator of the UHD 750's performance over the UHD 630.
 
  • Like
Reactions: Tlh97 and NTMBK

eek2121

Senior member
Aug 2, 2005
811
903
136
ExtremeTech claims that Intel is thinking about rebranding its fabs to be a closer match to TSMC. We will see.
 

french toast

Senior member
Feb 22, 2017
985
813
136
Correct me if I am wrong, but I'm assuming Rocket Lake is a mesh topology rather than ring?

That being so, I thought this would happen. AMD had poor latency and associated gaming performance with Zen 1, partly because they were forward-looking with their Infinity Fabric and paid an early latency penalty between cores and when swapping between CCXs. I always figured Intel still had this penalty to pay, and at that point the gap would close.

This was also based on Skylake-X at the time, which made me wonder.
Due to pricing, Rocket Lake is still a good buy at certain price points, as pointed out in reviews, but the top-end part is a joke; honestly, they should never have released it, as it has tarnished the i9 branding in my opinion.

Alder Lake looks like it will bring the gaming performance and efficiency crown back to Intel this year. I wonder if AMD will bring out a refresh mid-year before Alder Lake?
 

ondma

Platinum Member
Mar 18, 2018
2,015
726
106
Tom's has an interesting face off between the 11900K and 5900X.

If you're old enough to remember, it's like when the old champ fought the new one... Tyson vs Spinks back in '88.

That review shows a much better result for Rocket Lake vs Comet Lake than Ian's review. And somehow they managed to score a state-of-the-art GPU.
 

JoeRambo

Golden Member
Jun 13, 2013
1,049
837
136
That review shows a much better result for Rocket Lake vs Comet Lake than Ian's review. And somehow they managed to score a state-of-the-art GPU.
Yeah, and Zen 3 still has worse 99th-percentile FPS results. There seem to be some games that just love 64MB of L3, but there are also games where crossing CCX domains hurts performance.

Oh, and it is easy to get better results than Ian's -> don't run JEDEC memory timings.
 
  • Like
Reactions: Tlh97 and Zucker2k


TheGiant

Senior member
Jun 12, 2017
748
351
106
Looking at those reviews, the prominent sites did the worst work.

You have to watch YouTube videos to get more info.

I was interested in the new memory controller - that is partially answered with synthetic measurements. I wonder if this new memory controller is a production test for Alder Lake and later chips with DDR5, where we expect effective speeds up to 8 GHz, which the IMC probably can't run at.

It would be nice to see some exotic memory speed like DDR4-5000+ in 1/2 (Gear 2) mode, not only in gaming but with Adobe software or calculations.

I can't find any ring-speed tuning.

My experience with a 9900K (two weeks while a friend was on vacation) is that memory tuning and uncore clock have an order of magnitude more impact than core clocks; everyone is hunting 5.1 to 5.3 GHz, but the gaming min-fps limit is not there, it is latency.

Overall, the gaming testing on the portals looks like average gameplay instead of CPU tests - at least Tom's did some more demanding tests, which you can see by the lower minimums within the same settings (like 1080p high).

A gaming CPU test should be composed 90% of CPU-intensive scenes, not running across a grass field or rotating the camera inside a city.

So memory latency can be 40 ns, which is about 10% behind the best Skylake systems; the cache is bigger, the clock is the same, IPC is higher. Even with those flawed gaming test scenes, which test throughput more than randomness, I expected more.

So about gaming, take it with a big grain of salt.

App/productivity: one should test a machine for its intended purpose. An 8C CPU clocked way above its efficiency point for productivity is like testing a BMW M5 with a fully loaded trunk and four people, pulling 1.5 t up a hill.
I mean, wtf? That is side info; for enthusiasts, a test should be done with decreased clocks and proper voltages.
For example, I run my 3900X limited to 65 W when using HandBrake; the total performance loss is totally unimportant.

Power-wise: I think the Ice Lake power draw wasn't only a 10nm problem; this Sunny Cove design looks leaky like a broken water dam.
I didn't see idle power and light-load power - they are important, yet the sites don't test per-day workload packages with different distributions of idle, light, and full load.

iGPU: again, wtf? This GPU isn't there to win 3DMark runs, but to support current monitors, run everything on every system (Windows, Linux, ...), play even the most recently encoded movies with low CPU usage, and enable fast decoding, even at lower quality when quality is not that important.
I can't find anything about this.

Very shallow work from the reviewers; in short, run Cinebench and be done with it.
 

lobz

Golden Member
Feb 10, 2017
1,620
2,084
136
That review shows a much better result for Rocket Lake vs Comet Lake than Ian's review. And somehow they managed to score a state-of-the-art GPU.
Well, it's Tom's after all. If there was a chance to show Intel in a better light, it was at Tom's.

That being said, it's kind of hilarious to see all the star reviewers being so mad at Intel, when all they did was release a couple of products at the top end with practically no progress for a bit more money, and some really good products in the mid-range at an attractive price.
 
