Question Raptor Lake - Official Thread

Hulk

Diamond Member
Oct 9, 1999
4,255
2,050
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% PPW (performance-per-watt) improvement
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If yes, then that could be a major selling point for people to move to ADL now rather than wait.
 
  • Like
Reactions: vstar

Schmide

Diamond Member
Mar 7, 2002
5,587
719
126
Meanwhile, HWUB does their testing at GPU limited settings or with RT disabled, which paints a false picture of the gaming capabilities of these newer CPUs. But if it makes AMD look good, I suppose it's alright.

The above is absolutely false and borders on libel. You shouldn't review and comment on things you haven't watched.

I just watched the Hardware Unboxed GPU review of Hogwarts Legacy. Almost everything you said above is contradicted by the video. They test at all settings and resolutions, RT on and off, the Hogwarts grounds and Hogsmeade.

Please don't complain about the lack of CPU variance. They mention it, but state that it would need to be covered in a future video. Complaining about it anyway would be disingenuous.

Edit: You don't even have to watch it. Just read the chapter index:

Code:
00:00 - Welcome back to Hardware Unboxed
00:54 - Test System Specs
01:02 - 1080p Medium
02:52 - 1440p Medium
04:20 - 4K Medium
05:19 - 1080p Ultra
07:00 - 1440p Ultra
07:54 - 4K Ultra
09:12 - 1080p Ultra, Ray Tracing Ultra
10:11 - 1440p Ultra, Ray Tracing Ultra
11:05 - 4K Ultra, Ray Tracing Ultra
11:34 - Hogsmeade
12:17 - Radeon 1080p Medium
12:36 - Radeon 1440p Medium
13:02 - Radeon 4K Medium
13:37 - Radeon 1080p Ultra
14:01 - Radeon 1440p Ultra
14:14 - Radeon 4K Ultra
14:24 - Radeon 1080p Ultra RT
14:43 - Radeon 1440p Ultra RT
14:55 - Radeon 4K Ultra RT
15:07 - GeForce 1080p Medium
15:21 - GeForce 1440p Medium
15:33 - GeForce 4K Medium
15:55 - GeForce 1080p Ultra
16:16 - GeForce 1440p Ultra
16:44 - GeForce 4K Ultra
16:59 - GeForce 1080p Ultra RT
17:40 - GeForce 1440p Ultra RT
17:59 - GeForce 4K Ultra RT
18:14 - Hogsmeade 1080p Medium
18:27 - Hogsmeade 1440p Medium
18:46 - Hogsmeade 4K Medium
18:57 - Hogsmeade 1080p Ultra
19:12 - Hogsmeade 1440p Ultra
19:25 - Hogsmeade 4K Ultra
19:34 - Hogsmeade 1080p Ultra RT
20:38 - Hogsmeade 1440p Ultra RT
21:05 - Hogsmeade 4K Ultra RT
21:32 - Final Thoughts
 

Hitman928

Diamond Member
Apr 15, 2012
5,366
8,175
136
Here's one example linked to what @JoeRambo is talking about: HUB just published a new video on memory scaling for both RPL and Zen4. The conclusion is that Zen4 is very sensitive to memory timings. You would think, since primary timings matter more, that the second DDR5 6000 kit used by HUB would outperform the one given to them by AMD. It turns out it brings a 10% loss in min FPS instead. Also, notice the brutal drop in performance with the cheap DDR5 6000 kit: it shows that bandwidth matters less than latency in this game.

View attachment 76536

You know, someone who was interested might see something really strange with HWUB's results from this video, like really strange. Just look at their average results when both the 13900K and 7700X are using slow memory: they show the 13900K as only 19% faster. Clearly they have no idea what they are doing and can't be trusted.

1676465965707.png

Obviously we need to refer to the reviewers who know how to benchmark CPUs and do it right. Let's check. . .

1676465921121.png


Oh wait. What an unexpected result. When both outlets are using slow memory for both CPUs their results line up exactly. It's almost as if Zen4 gains significantly more from faster memory than RPL does and reviewers who use faster memory for both would show Zen4 being much more competitive with RPL than those who restrict their memory speeds. If anything, Computerbase is overselling Zen here since they give Zen4 even slower memory than RPL but the 13900k is still only 19% faster. If only we had reviewers we could look at who use fast memory on both platforms and test a large number of games to see how the CPUs perform across a large variety of gaming workloads. I guess we can only hope :rolleyes:
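The percent-faster comparisons in this post boil down to a one-liner. The FPS figures below are illustrative placeholders chosen to reproduce the 19% gap quoted above, not numbers from either review:

```python
# Hedged sketch: how "CPU A is X% faster than CPU B" figures are derived.
# The FPS inputs are made-up placeholders, not HWUB or Computerbase data.
def percent_faster(a_fps, b_fps):
    """How much faster CPU A is than CPU B, in percent."""
    return (a_fps / b_fps - 1) * 100

# E.g. if the 13900K averaged 119 fps and the 7700X 100 fps with slow
# memory, the 13900K would be 19% faster, matching the figure above.
print(round(percent_faster(119, 100)))  # 19
```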
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,675
10,936
136
Sure, let's cherry-pick numbers and reviews to match our conclusion.

Hi! Welcome, new reg, who is (apparently) maybe biased in the same fashion as a number of other new regs we've had over the last few months and who totally doesn't seem to be part of a guerrilla marketing campaign. In general it's not a good idea for anyone to cherry-pick numbers, including you. But people keep trying to do so anyway. So, um, take your own advice and have a nice day!

Let's be fair here, we can't openly criticize Intel for throwing efficiency out the window while also complaining the E cores aren't scaling high enough. The E cores are there in a support role; they'd better run at moderate clocks.

I was actually expecting e-core speeds to go down by necessity, to bring them back to their efficiency range. There are twice as many. Anything else throws too large a power share at the e-cores, possibly at the expense of the Raptor Cove power budget.

Geekbench is a better all around benchmark for ST and MT

Ugh no thank you. Cinebench has its limitations, but . . . bleh.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,825
3,654
136
Windows 11 22H2 shouldn't be used for benchmarking at all in its current state. There is a glaring flaw in it: the CPU utilization reported by any kind of overlay, be it MSI Afterburner, the AMD or NVIDIA overlays, or even the Windows Game Bar (Win + G), doesn't match the CPU utilization metric in Task Manager. The overlays barely show any utilization at all, while Task Manager reports the correct usage.

Until issues such as this are fixed, reviewers should stay away from Windows 11 22H2.
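If you want a ground-truth number to compare an overlay against, you can read the OS's own counters directly, which is essentially what Task Manager does. A minimal Linux-flavoured sketch of that idea (on Windows you would query the equivalent performance counters instead); this is an illustration, not a fix for the 22H2 bug:

```python
# Hedged sketch: computing overall CPU utilization straight from OS
# counters (Linux /proc/stat), as a sanity check against overlay readings.
import time

def read_cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq ..."
    with open("/proc/stat") as f:
        vals = [int(x) for x in f.readline().split()[1:]]
    idle = vals[3] + vals[4]  # idle + iowait
    return idle, sum(vals)

def cpu_utilization(interval=0.5):
    """Overall CPU utilization (%) averaged over `interval` seconds."""
    idle0, total0 = read_cpu_times()
    time.sleep(interval)
    idle1, total1 = read_cpu_times()
    d_total = total1 - total0
    return 100.0 * (1 - (idle1 - idle0) / d_total) if d_total else 0.0

if __name__ == "__main__":
    print(f"CPU utilization: {cpu_utilization():.1f}%")
```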
 

poke01

Senior member
Mar 8, 2022
767
765
106
People buy and do all sorts of stupid and dangerous things.

Companies should be more responsible and not sell the product at a ridiculous and dangerous setting just to be able to top the charts.

View attachment 69942

I quickly cut the irrelevant low power part of the table off. Even at 250 watts it performs as high as 7950X. I will run my CPU capped at 180W, that is what my air cooler can handle. I am now playing with 13600K and 13700K, not sure which one I will keep.
IMO, keep the 13600K, because this is a dead platform and it's always good to have some extra money in your bank account. Also, I think the big changes are coming soon with Arrow Lake, so the i5 13600K will get replaced in 2-3 years anyway, since I believe you're an enthusiast.
 
  • Like
Reactions: Tlh97

Hitman928

Diamond Member
Apr 15, 2012
5,366
8,175
136
That OCUK thread is quickly becoming a nothingburger. Somebody already proved the 7950X can fully saturate a 7900 XTX in Cyberpunk 2077 Ultra RT even at 720p. Now they're comparing Spider-Man results, and it turns out performance is fine there as well.

Yep, pretty much the same story that happened here. Big claims made about Intel's massive superiority over Zen4 based on very little/shaky data. When more data is added showing evidence that it's not true, that data is just ignored and the big claims are repeated ad nauseam.
 

Saylick

Diamond Member
Sep 10, 2012
3,208
6,542
136
So not showing your favorite team (not saying you btw) winning is biased? That seems to be the prevailing sentiment in this thread when it comes to gaming performance.

Anything remotely seen as negative towards AMD is seen as bias, regardless of how many times it's confirmed. It's ironic that HWUB was among the first reviewers to "unwittingly" demonstrate a performance discrepancy between AMD and Intel in RT workloads back in the Alder Lake vs. Zen 3 era, before Raptor Lake and Zen 4 launched, but now they seem to be intentionally avoiding it in their CPU reviews.
It's one thing to be a hardware enthusiast and post your own benchmarking results on Twitter, but it feels weird to see the benchmarking tool vendor, who I'd argue should be a neutral party, putting their fingers on the scales and choosing sides. I'd be more cool with it if he was posting all this stuff on his own personal Twitter account from the POV of a hardware enthusiast, with a caption like "all Tweets are my own", rather than on the official CapFrameX account, especially since a lot of his content is just that: his own personal testing.

His tweets come off more like, "Hey guys, look at which CPU is yet again fastest here. Intel's better at gaming. Told ya." rather than "Hey guys, look at what CapFrameX fps monitoring software can do for you. Here is a sample of it working on *insert latest game here*". You know, typical self-promotion type language.

When it's more of the former, I begin to wonder if the whole point of writing the software was to push an agenda rather than to make a useful tool for others.
 
Last edited:

Schmide

Diamond Member
Mar 7, 2002
5,587
719
126
Let's be clear here: when you focus on an outlier and make it representative of a myriad of factors, that is when you run into trouble. I watch Hardware Unboxed and I've seen them run into outliers, and more often than not they either remove them from the final average and/or remind the viewer of such caveats in the final thoughts.

In this case they did their due diligence and came up with a viable explanation for the performance disparity.

I'd much rather have good analysis than gotcha clickbait.
 

Hitman928

Diamond Member
Apr 15, 2012
5,366
8,175
136
Computerbase.de did some testing on gaming power consumption and found that the 13600K used 88W on average across 12 titles, while the 7600X used 60W. Just a 28W difference.

That's on average but the example you are using from HWUB is their largest difference. From computerbase's test suite, a 13600k's highest average power over a 7700x is 41W. The 7600x on average uses 20% less than a 7700x. Even if we give some margin for this test and say it was 10 - 15% less, you're still talking a difference of 50W+ between a 13600k and 7600x at the high end in gaming.

I see that, but compared to the bulk of reviews, it's definitely not common I wager.

Bulk of reviews using decent speeds and timing for both CPUs across a decent amount of games? Doubtful.

Also, Eurogamer used a RTX 3090 at 1080p, which means GPU bottleneck. Tweaktown used a 3090 Ti but at GPU limited settings, which is why their benchmarks looked bunched up compared to other outlets.

I actually went back and recalculated, and it turns out I messed up one of the numbers in my previous calculation: the 13600K is actually only 1.6% faster than the 7600X in Eurogamer's tests. Your argument that the 13600K was being held back by the 3090 doesn't hold much water when you see that the 7950X in the same test was 7.3% faster than the 7600X, which is as much or more than at other sites where both were tested, including Computerbase.
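The power arithmetic above can be sketched out. The 41W gap and the 20% 7600X-vs-7700X saving are the figures quoted in this exchange; the 7700X's worst-case game draw is a hypothetical placeholder, not a measured number:

```python
# Hedged sketch of the worst-case power-gap estimate discussed above.
# Quoted figures from the discussion:
gap_13600k_over_7700x = 41       # W, Computerbase's largest 13600K-vs-7700X gap
saving_7600x_vs_7700x = 0.20     # 7600X draws ~20% less than 7700X on average

# Hypothetical placeholder: assume the 7700X drew 120 W in that worst-case
# title. Then the 7600X would draw roughly 20% less, and the 13600K's gap
# over the 7600X widens accordingly.
p_7700x = 120
p_7600x = p_7700x * (1 - saving_7600x_vs_7700x)      # 96 W
gap = (p_7700x + gap_13600k_over_7700x) - p_7600x
print(f"13600K vs 7600X worst-case gap: ~{gap:.0f} W")  # ~65 W
```

With a more conservative 10-15% saving, as the post allows, the gap still lands above 50W, which is the claim being made.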

This is why I said I wasn't going to get into a back and forth again earlier in the thread, because no matter how much evidence is provided from multiple review sites, you'll just continue to point to the same sites using slow memory and/or limited game tests and ignore anything to the contrary. You have an awesome system, I hope you enjoy it. I'm not going back around on this ride. The data is out there so everyone can make up their own minds.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
With 8 extra E cores they can dial down frequencies in MT and have higher throughput at lower TDPs. You have noticed that they acknowledge that the competition has the perf/watt crown and that they hope to get it back in 2024.

Ok, but they'll need the full TDP again to compete with Zen 4. I don't really believe they'll cut the top TDP figure, because it can be used for more performance. The trend is increasing TDPs.

That doesn't mean chips are less efficient. They are just increasing the dynamic range. The low power chips are awesome nowadays.
 
  • Like
Reactions: dark zero

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
But your all-E-core idea is a good one for certain use cases. That is why Intel is pursuing it with Sierra Forest, rumored to have 128 E cores.

It doesn't have to be 1.25x the area, because if they are designing a new SoC with all E cores, things can be moved around to optimize for it. Right now I bet you there's a fair bit of empty space, because you're trying to fit differently sized rectangles into one large rectangle.

Also, Sierra Forest on Intel 3 in 2024 is definitely going above 128 E cores; I'm expecting 256 or even more. If we're expecting Granite Rapids to be at least 120 cores, then 128 E cores on the same node is kinda underwhelming. By pure ratio of core sizes, we should expect something like 384 cores.
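A sketch of that back-of-the-envelope estimate. The 3.2x area ratio is inferred from the post's own 120-to-384 extrapolation, not a confirmed die-area figure:

```python
# Hedged sketch: E-core count implied by pure core-area ratio.
# 120 P cores is the post's Granite Rapids assumption; 3.2 E cores per
# P core of area is the ratio implied by its "384 cores" estimate.
p_cores = 120
e_cores_per_p_core_area = 3.2

print(int(p_cores * e_cores_per_p_core_area))  # 384
```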

Crestmont is probably going to outperform Sunny Cove in perf/clock, and that's something they'll need against the 2024 competition.

So PMICs are being used because they are smaller than capacitors? Or relatively cheaper?

Beancounters. That's it.

All this talk about being more reliable hasn't resulted in anything more reliable, because they take more reliable components and use them to save on costs instead. So you end up with the same reliability as before, or even worse.

Case in point: I bought a broken e-bike to fix and noticed the MOSFETs for the power controller had short-circuited. The datasheet showed a 56V absolute maximum, while the battery is rated 44V, or 12 LiPo cells. Well, when fully charged, the 12 LiPo cells would reach 51.2V, dangerously close to the MOSFET's absolute maximum rating.

I bet that over time the MOSFET degraded to the point where its tolerance fell below 51V. We're talking about an e-bike that would have cost $3,000+ US when new.
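The headroom arithmetic here is quick to check. The usual LiPo full-charge figure is 4.2 V/cell (which gives 50.4 V for 12 cells); the post's 51.2 V figure implies closer to 4.27 V/cell, so both values are shown:

```python
# Hedged sketch: pack voltage vs MOSFET absolute-maximum headroom.
# 56 V max and 12 cells are from the post; 4.2 V/cell is the common
# LiPo full-charge voltage, 51.2/12 V/cell matches the post's figure.
cells = 12
vmax_fet = 56.0  # V, MOSFET absolute maximum rating

for v_cell in (4.2, 51.2 / 12):
    pack = cells * v_cell
    headroom = (vmax_fet - pack) / vmax_fet * 100
    print(f"{pack:.1f} V pack -> {headroom:.0f}% headroom")
```

Either way the margin is under 10-11%, which is why degradation over time can push the part past its limit.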

20 years ago, they didn't design things that way. Sure, the technology improved, but the mindset went the opposite way. It's most prevalent among Chinese vendors, but the rest of the world adopts it to compete.

Same with SSDs: "no moving parts, more reliable", but vendors turn that around to save on cost.

The faux-green movement doesn't help either. Lead-free solder is brittle, unlike leaded solder, and it can grow whiskers and short out components over time.

If they were really being "green" they would make things more repairable and make devices like laptops more modular. Companies like Apple have penalized repair shops for years.
 
Last edited:
  • Like
Reactions: igor_kavinski

pakotlar

Senior member
Aug 22, 2003
731
187
116
I wouldn't count on 5.3GHz on all 16 cores; to be within the >35% MT uplift, 5GHz is enough, so it should land around there.

In any case, the performance uplift this upcoming generation isn't impressive outside of server parts, going by rumors. It's much better than what we had in Intel's dominant days, but I've gotten spoiled by AMD in recent years and want more high-IPC cores in the consumer space. There's only so much they can do while keeping core counts low, or using efficiency cores with circa-Skylake IPC.
 
Last edited:

pakotlar

Senior member
Aug 22, 2003
731
187
116
Golden Cove shows strong performance in Cinebench because of its unified scheduler and larger L2. A larger L2 alone should give Zen 4 a sizeable boost in Cinebench too, though it doesn't matter much for the end user in the end.


That's about R15, now two Cinebench releases out of date. Do we have any indication of how R23 performs (the Cinebench version Golden Cove is typically compared to Zen 3 on)?

Edit: Interesting article, btw. One thing they mention is that the large L3 improves IPC by avoiding the large latency spike of going to memory. This will also improve overall performance: Zen 3 incurs a huge power spike (large relative to an L3 access) when going to main memory, so when accessing data from L3, Zen 3 can sustain higher clocks (all else held equal).

Alder Lake's behavior is different. Its performance in L3 is abysmal, both in terms of latency and in terms of the power it takes to fetch a byte, compared to Zen 3. So for Alder Lake, and Raptor Lake, it's very important to stay in L1 and L2. That's why Raptor Lake's doubled L2 matters so much. We're going to see better performance and lower power for latency-sensitive applications that currently spill out of L2 on ADL, and that benefit will be disproportionate relative to what Zen 4 would gain.

On this note, we should expect Raptor Lake and Zen 4 to converge in power efficiency due to microarchitecture (not entirely, of course), rather than diverge, because Zen 3 was already so darned efficient and Alder Lake so poor (at the clocks needed to match Zen 3's performance). Regression to the mean. Zen 4's move to TSMC 5nm will weaken that microarchitectural convergence, unfortunately for Intel.
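The "stay in L1/L2" argument is essentially average-memory-access-time arithmetic. A sketch with illustrative cycle counts and miss rates, not measured ADL/RPL numbers:

```python
# Hedged sketch: classic AMAT arithmetic showing why a bigger L2 helps
# when L3 is slow. All latencies and miss rates below are illustrative
# placeholders, not measured Alder Lake / Raptor Lake figures.
def amat(l2_hit_cycles, l2_miss_rate, l3_latency_cycles):
    """Average access time (cycles) for loads that already missed L1."""
    return l2_hit_cycles + l2_miss_rate * l3_latency_cycles

# Doubling L2 mainly lowers the L2 miss rate. Suppose it drops from 20%
# to 12%, with an assumed 15-cycle L2 hit and a slow 50-cycle L3:
before = amat(15, 0.20, 50)  # 25.0 cycles
after = amat(15, 0.12, 50)   # 21.0 cycles
print(before, after)
```

The worse the L3 latency, the larger the payoff from every point of L2 miss rate removed, which is the asymmetry being claimed between RPL and Zen 4.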
 
Last edited:

FangBLade

Member
Apr 13, 2022
199
395
106
That's what happens when your L2 size goes up by 60% but your associativity remains the same (10-way for client, 16-way for servers).
Yeah, the reason Zen 3 benefits so much from extra cache is that the latency increase is minimal: an excellent implementation. Zen 4's L2 has already been tested and it is also excellent, with minimal latency increase despite being twice the size. Can't wait to see how Zen 4 scales in gaming with 3D V-Cache plus the doubled L2.
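Why a doubled L2 at constant associativity can keep latency nearly flat: only the number of sets grows, not the number of ways searched per lookup. A sketch using the commonly reported Zen 3 / Zen 4 L2 organisation (8-way, 64-byte lines):

```python
# Hedged sketch: set count for a set-associative cache. Cache organisation
# figures (8-way, 64 B lines, 512 KB -> 1 MB) are the commonly reported
# Zen 3 / Zen 4 L2 parameters, not vendor-confirmed here.
LINE_BYTES = 64

def num_sets(size_bytes, ways):
    """Sets = capacity / (associativity * line size)."""
    return size_bytes // (ways * LINE_BYTES)

print(num_sets(512 * 1024, 8))    # Zen 3 L2: 1024 sets
print(num_sets(1024 * 1024, 8))   # Zen 4 L2: 2048 sets
```

Doubling capacity doubles the sets (one extra index bit) while each lookup still compares the same 8 tags, which is why the latency penalty can stay small.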
 
  • Like
Reactions: Makaveli and Kaluan

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I talked about it here: https://forums.anandtech.com/thread...ure-lakes-rapids-thread.2509080/post-40772218

DLVR is not the focus. I hate how the press doesn't even read the content and just posts whatever they feel like.

(If there is a part of society that needs to go off the face of the earth, it's the press. I am not talking just about mainstream media. I mean same with tech channels and youtube channels, all of them. GONE)

The secondary VR is the point. As I said in my conclusion, the second regulator has to be off at the most efficient point, because the secondary regulator reduces efficiency. With higher loads it has to be engaged more, and that's why the drop happens (among other reasons).

The secondary regulator is not much more than a guarantee that the CPU can work at lower voltages without causing stability issues. If, say, 50% is the cutoff point where the second regulator has to be active, then you'd get less and less gain as you go above that.

And typically overclocking and adjusting BIOS messes with power management features. I can't imagine it working well in that case at all.

Also, yes, in heavy MT loads you lose the gains. That's why it's a mobile feature and likely benefits low-load and burst scenarios.
 
Last edited:

Det0x

Golden Member
Sep 11, 2014
1,031
2,964
136
OK now if 2300 ST can be had without too much trouble, Raptor Lake starts looking attractive.
Seems like 2300 ST is already reachable today with an Alder Lake @ 5.5GHz
1659806329207.png
LIMITATIONS
  • Use Geekbench 5.4.5 and HWinfo v7.26
  • Maximum Frequency/cache limitation 5500MHz
  • Disabling CPU cores/HT/SMT NOT allowed.
  • A VALID Geekbench 5 link is required.
  • A CPUZ 2.01 or newer Validation link is required, registered on your HWBOT username.
  • A verification screenshot is required, using the official wallpaper, GB 5 score, CPUZ tabs for CPU & Memory and HWinfo.
  • Only members of the rookie, novice, enthusiast league may participate.
  • No Extreme cooling allowed (chiller, Single Stage, Cascade, Dry ice, LN2)

 
Last edited:
  • Like
Reactions: lightmanek