Question Raptor Lake - Official Thread


Hulk

Diamond Member
Oct 9, 1999
4,269
2,089
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% PPW (performance-per-watt)
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current z690 motherboards? If yes then that could be a major selling point for people to move to ADL rather than wait.
 
  • Like
Reactions: vstar

Hitman928

Diamond Member
Apr 15, 2012
5,392
8,280
136
Computerbase.de did some testing on gaming power consumption and found that the 13600K used 88 W on average across 12 titles, while the 7600X used 60 W. Just a 28 W difference.

That's on average but the example you are using from HWUB is their largest difference. From computerbase's test suite, a 13600k's highest average power over a 7700x is 41W. The 7600x on average uses 20% less than a 7700x. Even if we give some margin for this test and say it was 10 - 15% less, you're still talking a difference of 50W+ between a 13600k and 7600x at the high end in gaming.
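The estimate above can be made explicit with a quick sketch. The numbers below are illustrative (the 7700X's absolute draw is an assumption, not a figure from the review); only the 41 W gap and the 10-15% range come from the post.

```python
# Hypothetical worked example of the 50W+ estimate. Assume the 7700X
# averages ~80 W in the heaviest title (illustrative figure), the 13600K
# draws 41 W more than it, and the 7600X uses 10-15% less than the 7700X.
def gap_13600k_vs_7600x(p7700x_w: float, gap_vs_7700x_w: float,
                        pct_less: float) -> float:
    """Implied 13600K-vs-7600X power gap in watts."""
    p7600x = p7700x_w * (1.0 - pct_less)
    return (p7700x_w + gap_vs_7700x_w) - p7600x

print(f"{gap_13600k_vs_7600x(80.0, 41.0, 0.10):.1f} W")  # ~49 W
print(f"{gap_13600k_vs_7600x(80.0, 41.0, 0.15):.1f} W")  # ~53 W
```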

I see that, but compared to the bulk of reviews, it's definitely not common I wager.

Bulk of reviews using decent speeds and timing for both CPUs across a decent amount of games? Doubtful.

Also, Eurogamer used an RTX 3090 at 1080p, which means a GPU bottleneck. Tweaktown used a 3090 Ti but at GPU-limited settings, which is why their benchmarks looked bunched up compared to other outlets.

I actually went back and recalculated, and I had messed up one of the numbers in my previous calculation: the 13600K is actually only 1.6% faster than the 7600X in Eurogamer's tests. Your argument that the 13600K was being held back by the 3090 doesn't hold much water when you see that the 7950X in the same test was 7.3% faster than the 7600X, which is as much as or more than at other sites where both were tested, including Computerbase.
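For clarity, the "X% faster" figures here are just relative average-FPS ratios. A minimal sketch of that arithmetic, using made-up FPS averages chosen only to reproduce the quoted percentages (these are not Eurogamer's actual numbers):

```python
# Hypothetical average-FPS figures, indexed to 7600X = 100, purely to
# illustrate how the percentages are computed.
fps = {"7600X": 100.0, "13600K": 101.6, "7950X": 107.3}

def pct_faster(a: float, b: float) -> float:
    """Percent by which result a is faster than result b."""
    return (a / b - 1.0) * 100.0

print(f"13600K vs 7600X: {pct_faster(fps['13600K'], fps['7600X']):+.1f}%")
print(f"7950X vs 7600X:  {pct_faster(fps['7950X'], fps['7600X']):+.1f}%")
```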

This is why I said I wasn't going to get into a back and forth again earlier in the thread, because no matter how much evidence is provided from multiple review sites, you'll just continue to point to the same sites using slow memory and/or limited game tests and ignore anything to the contrary. You have an awesome system, I hope you enjoy it. I'm not going back around on this ride. The data is out there so everyone can make up their own minds.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Did you notice a difference, or just in benchmarks?

I actually did notice a difference, but in just one game: Witcher 3 Next Gen. That game uses a DX12 wrapper, so the CPU optimization is terrible and makes it very CPU dependent... even at 4K with maxed settings including RT. My FPS went up by 2 FPS in one of the heaviest sections of Novigrad. I always test this particular area, so I know that I gained performance.

Faster memory only increases performance in games when the situation is CPU bound. The vast majority of the games I play are either GPU bound due to playing at 4K with maxed settings, or capped at 120 FPS with G-Sync on. However, it's possible that frame time latency improves with faster memory at higher resolutions as well, though I never tested it. I'm pretty sure it does, because I have seen tests showing improved frame times from faster memory.
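The "frame time latency" being discussed is usually quantified as 1% lows: the average FPS over the slowest slice of frames in a capture. A minimal sketch, with made-up frame times just for illustration:

```python
# Frame times in milliseconds from a hypothetical capture; one 16.9 ms
# stutter among ~8 ms frames. Data is invented for illustration.
frametimes_ms = [8.3, 8.4, 8.2, 9.1, 8.3, 16.9, 8.4, 8.3, 8.5, 8.2]

def one_percent_low_fps(frametimes: list[float]) -> float:
    """Average FPS over the slowest 1% of frames (at least one frame)."""
    worst = sorted(frametimes, reverse=True)
    n = max(1, len(worst) // 100)
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

print(f"1% low: {one_percent_low_fps(frametimes_ms):.1f} FPS")
```

Average FPS can look identical between two memory configs while the 1% lows differ noticeably, which is why tuned memory can "feel" smoother even at GPU-bound resolutions.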
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
That's on average but the example you are using from HWUB is their largest difference. From computerbase's test suite, a 13600k's highest average power over a 7700x is 41W. The 7600x on average uses 20% less than a 7700x. Even if we give some margin for this test and say it was 10 - 15% less, you're still talking a difference of 50W+ between a 13600k and 7600x at the high end in gaming.

Yes, but the main problem with the HWUB result is that the power draw is significantly higher while the performance was less than the 7600X's. That's why I had such a big issue with it. The Computerbase.de review shows the 13600K using more power, but also delivering more performance.

It's OK if a CPU consumes more power, but also delivers more performance. It's not OK if it consumes more power and delivers less performance.
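The trade-off being described is just efficiency: FPS per watt. A small sketch, with illustrative numbers that are not from any specific review:

```python
# Efficiency framing of the argument: more power is acceptable if
# performance scales with it. Figures below are invented for illustration.
def fps_per_watt(fps: float, watts: float) -> float:
    return fps / watts

# Hypothetical "13600K-like" result: more FPS, more power.
a = fps_per_watt(105.0, 88.0)
# Hypothetical "7600X-like" result: slightly less FPS, much less power.
b = fps_per_watt(100.0, 60.0)

# The complaint about the HWUB result: higher power AND lower FPS means
# losing on both axes, not making a trade-off.
print(f"{a:.2f} vs {b:.2f} FPS/W")
```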

Bulk of reviews using decent speeds and timing for both CPUs across a decent amount of games? Doubtful.

3Dcenter.org did a launch review analysis and found that the 13600K was about 5.8% faster than the 7600x, and 1.9% faster than the 7700x. It was also faster than the 7900x and 7950x by similar negligible margins.

Launch Analysis Intel Raptor Lake (page 3) | 3DCenter.org

Of course there are plenty of factors behind which CPU comes out on top as discussed, but overall, 13600K took the win. In applications, it destroyed the 7600x.

This is why I said I wasn't going to get into a back and forth again earlier in the thread, because no matter how much evidence is provided from multiple review sites, you'll just continue to point to the same sites using slow memory and/or limited game tests and ignore anything to the contrary. You have an awesome system, I hope you enjoy it. I'm not going back around on this ride. The data is out there so everyone can make up their own minds.

I'm not necessarily trying to convince people, I just like to argue and talk about hardware performance......and so do you. :D

I was hoping you'd take the bait I left about Zen 4 having an inherently poor performing memory controller to explain why it does so poorly against RPL with the same DDR5 5200. It's an interesting observation to be sure.
 

In2Photos

Golden Member
Mar 21, 2007
1,645
1,658
136
I actually did notice a difference, but in just one game. Witcher 3 Next gen. That game uses a DX12 wrapper, so the CPU optimization is terrible and makes it very CPU dependent.....even at 4K with maxed settings including RT. My FPS went up by 2 FPS in one of the heaviest sections in Novigrad. This particular area I always test so I know that I did gain performance.

Faster memory only increases performance in games when the situation is CPU bound. The vast majority of the games I play are definitely either GPU bound due to playing at 4K maxed settings, or capped at 120 FPS with G-sync on. However, it's possible that frame time latency improves from the faster memory at higher resolutions as well, but I never tested it. I'm pretty sure it does though, because I have seen tests with improved frame latency from faster memory.

He asked if you noticed the difference, to which you said yes, but then you pointed to a difference of 2 FPS in a benchmark. Are you saying that you could see a difference in the gameplay and that the benchmark validated your experience? I don't think anyone can tell a difference of 2 FPS, let's be honest.
I was hoping you'd take the bait I left about Zen 4 having an inherently poor performing memory controller to explain why it does so poorly against RPL with the same DDR5 5200. It's an interesting observation to be sure.
So you're just trolling then?
 
  • Like
Reactions: DAPUNISHER

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
He asked if you noticed the difference, to which you said yes, but then point to a difference of 2 fps in a benchmark. Are you saying that you could see a difference in the game play and that the benchmark validated your experience? I don't think anyone can tell a difference in 2 fps, let's be honest.

By benchmark I'm pretty sure he meant memory benchmarks. And it wasn't a benchmark; I just had performance-monitoring software running to see what the FPS was.

That said, the frame pacing in that game is so bad that unless it's a huge FPS increase, it's practically impossible to notice a 2 FPS gain with the naked eye.

So you're just trolling then?

Why do you consider it trolling? It's a valid observation. It seems AMD fans have issues with some reviewers not using overclocked memory to make Zen 4 look good. But it's not incumbent on a reviewer to go out of their way to make a product look good.
 
  • Like
Reactions: Sulaco

Just Benching

Banned
Sep 3, 2022
307
156
76
It's bad enough to pick one game and choose that as an outlier. CapFrameX seems to specialize in finding one spot in one map in one game and benching while staring at a particular spot. On (apparently) buggy hardware, no less.
Funny, you are basically saying he is biased, but CapFrameX was the prime hype-maker (is that a word?) for the original 5800X3D. Following his Twitter back then, you would think he was getting paid by AMD.
 

Just Benching

Banned
Sep 3, 2022
307
156
76
He asked if you noticed the difference, to which you said yes, but then point to a difference of 2 fps in a benchmark. Are you saying that you could see a difference in the game play and that the benchmark validated your experience? I don't think anyone can tell a difference in 2 fps, let's be honest.
Obviously you can only tell a difference when you aren't GPU bound, which isn't that hard with a 4090. With other cards, not so much. XMP leaves a lot of performance on the table. Tuning memory to get 98.5% of the possible performance takes 10-15 minutes, assuming you have some faint idea of what you are doing. Just the primaries plus tRFC, tREFI, tFAW, and tCWL are your main offenders and an easy boost without much stability testing required. Especially with Hynix A-die: from my experience 1.5 V causes errors, so you are kinda stuck with 1.45 V max, meaning you can't keep adding voltage indefinitely to stabilize.
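tREFI and tRFC are the classic easy win mentioned above because the DRAM bank is unavailable for tRFC every tREFI, so refresh overhead is roughly tRFC / tREFI. A sketch with typical-ish DDR5 figures (illustrative values, not a tuning recommendation):

```python
# Refresh overhead estimate: the bus is blocked for ~tRFC ns out of
# every tREFI ns, so the lost-time fraction is about tRFC / tREFI.
def refresh_overhead(trfc_ns: float, trefi_ns: float) -> float:
    return trfc_ns / trefi_ns

# Roughly stock-like DDR5 values (illustrative):
stock = refresh_overhead(trfc_ns=295.0, trefi_ns=3900.0)
# Tightened tRFC plus a greatly extended tREFI (illustrative):
tuned = refresh_overhead(trfc_ns=260.0, trefi_ns=65000.0)

print(f"stock: {stock:.1%}, tuned: {tuned:.1%}")
```

This is also why extending tREFI is "free" performance but thermally sensitive: hot DRAM leaks charge faster, and refreshing less often eats into that margin, hence the stability testing caveat.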
 

Just Benching

Banned
Sep 3, 2022
307
156
76
The dude has already admitted to having a bias against AMD:

View attachment 76692

View attachment 76691
Well, go back a year and look at his tweets about the 5800X3D. He was going kinda nuts about how it was faster than Alder Lake.

But he isn't wrong; I'm in a similar position. People are way more critical of Intel and Nvidia, even when they have the better products.
 
  • Like
Reactions: controlflow

Hitman928

Diamond Member
Apr 15, 2012
5,392
8,280
136
Well, go back a year and look at his tweets about the 5800X3D. He was going kinda nuts about how it was faster than Alder Lake.

But he isn't wrong; I'm in a similar position. People are way more critical of Intel and Nvidia, even when they have the better products.

Last year, like when he tried to convince everyone that overclocked Rocket Lake was faster than Zen 3?


Nice name change, btw, almost didn't realize it was you for a minute, herald.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,640
14,630
136
Last year, like when he tried to convince everyone that overclocked Rocket Lake was faster than Zen 3?

View attachment 76696

Nice name change, btw, almost didn't realize it was you for a minute, herald.
That is simply his other username, which was actually his first, and two accounts are not allowed. So he is back to his original username, and Herald's posts now show the old username.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,664
21,172
146
Overclocking, memory sub timings, undervolting, etc. all that stuff is esoteric. Big reviewers don't spend any real time on it for a reason. What the vast majority of DIYers and gaming PC buyers want to know, they provide.
Quoting myself to call myself out, and tell myself to shut up. :p

HUB worked with buildzoid and put together memory scaling for Zen 4. Raptor Lake memory scaling is coming in the near future.


 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
HUB worked with buildzoid and put together memory scaling for Zen 4. Raptor Lake memory scaling is coming in the near future.

Wonderful stuff; mainstream reviewers are waking up to the importance of secondary/tertiary timings. That 5200 tuned vs. 6000 tuned comparison should open a lot of eyes.
From my own experience, Zen 4 as a platform for memory tuning has some problems:

1) The experience is horrible, as if AMD does not want people to help themselves. Safe boot is not functioning; where a battery reset is a last resort on Intel, on AMD it is effectively a feature, exposed as an easy-to-access header to short.
2) Same with tooling: want to quickly change and test things from Windows? Sucks to be you. Combined with (1), it makes for a hilarious experience.
3) Memory speed scaling is not that great. FCLK is limited to 2.1 GHz; they have not moved anywhere since Zen 2 times, and 1900 vs. 2133 is not progress. The IMC does not work above 6200-ish without titanic effort and luck (combine with (1) for an extra 10 battery resets). Same server chiplet rejects and low effort in every corner.
4) Any step off the beaten path of memory vendors and sizes, to say a 2x32 dual-rank config, is the wild west; combined with (1), bad things will happen. But real disaster struck people with 4x32 GB setups; reading some of their forum woes makes for great horror stories.

Compared to this steaming pile, Raptor Lake was a vacation: safe boot never failed, I did not manage to ruin my Windows install, and it just damn worked at lower voltages than expected.
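For readers not steeped in the acronyms above, a rough sketch of the Zen 4 clock domains being discussed. MCLK is half the DDR transfer rate, the memory controller (UCLK) runs 1:1 with MCLK until the IMC gives out, and FCLK is decoupled and capped around 2.0-2.1 GHz. The 6200 cutoff is taken from the post, not from a spec, and the 1:2 fallback above it is a simplification:

```python
# Simplified Zen 4 clock-domain model (MHz). Assumes the ~6200 MT/s
# 1:1 IMC limit from the post and a decoupled FCLK near its 2.1 GHz cap.
def zen4_clocks(ddr_rate: int, fclk_mhz: int = 2000) -> dict:
    mclk = ddr_rate // 2                          # DDR: two transfers/clock
    uclk = mclk if ddr_rate <= 6200 else mclk // 2  # 1:1 until IMC limit
    return {"MCLK": mclk, "UCLK": uclk, "FCLK": fclk_mhz}

print(zen4_clocks(6000))  # comfortable 1:1 territory
print(zen4_clocks(6600))  # IMC falls back to 1:2, usually a latency loss
```

This is why DDR5-6000 tuned is the usual sweet spot recommendation for Zen 4: going faster on paper can drop UCLK to half speed and lose more latency than the bandwidth gains back.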
 
  • Like
Reactions: DAPUNISHER

Timur Born

Senior member
Feb 14, 2016
277
139
116
I recently did some tests looking at core VIDs vs. uncore (ring) VID vs. Vcore. To my surprise, it turned out that Vcore only follows the core VIDs, never the uncore VID, even when the latter is higher. I thought Vcore would always use the highest of all VIDs, including uncore, but this does not seem to be the case?!
 
Last edited:

Hulk

Diamond Member
Oct 9, 1999
4,269
2,089
136
Is it possible to obtain the VIDs for the individual cores? I was reading the overclocking guide below, and he somehow identifies the weakest cores, but reading the (long) article I can't see how he determines strong from weak cores.

 
  • Like
Reactions: lightmanek

Hulk

Diamond Member
Oct 9, 1999
4,269
2,089
136
The supposed Raptor Lake Refresh has me intrigued. Not because I'm expecting anything extraordinary; on the contrary, I'm expecting very little. My point is that when Intel performs a refresh, it's generally early in the process node's development.

Intel 7 seems fully matured as we are seeing 6GHz parts. So where do they go?

I have a feeling the top-of-the-line parts will stay about where they are frequency-wise, but those lower down the stack will get a couple-hundred-MHz bump.

What do you think?
 

Saylick

Diamond Member
Sep 10, 2012
3,217
6,585
136
Well, go back a year and look at his tweets about the 5800X3D. He was going kinda nuts about how it was faster than Alder Lake.

But he isn't wrong; I'm in a similar position. People are way more critical of Intel and Nvidia, even when they have the better products.
Are people actually more critical of Intel and Nvidia products, though? I have seen more anti-AMD people on Twitter use that exact same argument to try to justify their actions: that they are anti-AMD only because they think too many people are pro-AMD and thus they need to provide "balance" to the discussion.

In my opinion, these anti-AMD people (or anyone who is anti-Intel or anti-Nvidia) need to get off their high horse. No one should feel an obligation to bash one company or another simply because they perceive that others aren't critical enough. The only people who feel this way are those with an emotional investment; otherwise, why should anyone care about the balance of discourse? It's not like anyone works for any of these companies, right? So who cares what others think about them? If someone does work for these companies and isn't public about it, then they are likely a shill.

Said another way, just because someone has a certain perception of the world does not make it true. This kind of mentality, that my own POV is reality, is the same kind of mentality that gets us in trouble regarding a bunch of societal issues: racism, prejudice, etc.
 
  • Like
Reactions: lightmanek

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
I recently did some tests looking at core VIDs vs. Uncore (Ring) VID vs. Vcore. To my surprise it turned out that Vcore only follows Core VIDs, but never Uncore VID even when it is higher? I thought Vcore would always use the highest of all VIDs, including Uncore, but this does not seem to be the case?!
Interesting. Perhaps the ring is on a different power rail now?
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
I have a feeling it's going to be the top of the line parts stay about where they are frequency-wise but those lower down the stack get a couple hundred MHz bump.

What do you think?
I think that makes perfect sense. And as you say, they don't really have an alternative.
 

LightningZ71

Golden Member
Mar 10, 2017
1,631
1,901
136
If we look at Coffee Lake Refresh, which was the most recent actual full refresh product from Intel, we see them adding 2 cores and the associated L3 cache. I don't think that Intel will make Raptor Lake's die any longer than it already is, but since refresh projects seem to be just about tacking on extra cores, that may be the way they go. I don't see going above 8 P-cores as being productive, so maybe another 8 E-cores and the associated extra 6 MB of L3? 42 MB of L3 is nothing to sneeze at and would be more than any single-CCD AMD product without V-Cache has.

As for overhauling the iGPU? I doubt it. They have no other products that could use that iGPU on that node, as everything going forward will be on a different node, so it would be a functional waste, especially since what they already have is better than the competition.
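The L3 arithmetic above follows from Raptor Lake's layout: each P-core and each four-wide E-core cluster brings a 3 MB L3 slice. A quick sketch of that math:

```python
# Raptor Lake L3 sizing: one 3 MB slice per P-core and per 4-wide
# E-core cluster (this matches the 13900K's 8P + 16E = 36 MB).
SLICE_MB = 3

def l3_total_mb(p_cores: int, e_cores: int) -> int:
    e_clusters = e_cores // 4
    return (p_cores + e_clusters) * SLICE_MB

print(l3_total_mb(8, 16))  # current top Raptor Lake die: 36 MB
print(l3_total_mb(8, 24))  # hypothetical refresh with 8 extra E-cores: 42 MB
```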