Question Raptor Lake - Official Thread


Hulk

Diamond Member
Oct 9, 1999
4,525
2,518
136
Since we already have the first Raptor Lake leak, I'm thinking it should have its own thread.
What do we know so far?
From Anandtech's Intel Process Roadmap articles from July:

Built on Intel 7 with upgraded FinFET
10-15% PPW (performance-per-watt) improvement
Last non-tiled consumer CPU as Meteor Lake will be tiled

I'm guessing this will be a minor update to ADL with just a few microarchitecture changes to the cores. The larger change will be the new process refinement allowing 8+16 at the top of the stack.

Will it work with current Z690 motherboards? If yes, then that could be a major selling point for people to move to ADL now rather than wait.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
View attachment 70265

The latency advantage with DDR4-4000 CL15 is significant. It should translate to better game performance in latency-sensitive game engines.

DDR5 does have latency-mitigation features, though, which allow for superior memory parallelism compared to DDR4.
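To put rough numbers on the latency side: first-word CAS latency in nanoseconds is just the CL count divided by the memory clock, which runs at half the transfer rate. A minimal sketch, where the DDR5-7200 CL value is an assumed illustrative kit, not a spec quoted in this thread:

```python
# First-word CAS latency: CL cycles at the memory clock (half the DDR rate).
# The DDR5-7200 CL34 entry is an assumed example, not any specific kit.

def cas_latency_ns(data_rate_mts: float, cl: int) -> float:
    memory_clock_mhz = data_rate_mts / 2
    return cl / memory_clock_mhz * 1000  # CL / MHz gives microseconds; x1000 -> ns

for name, (rate, cl) in {
    "DDR4-4000 CL15": (4000, 15),
    "DDR5-7200 CL34 (assumed)": (7200, 34),
}.items():
    print(f"{name}: {cas_latency_ns(rate, cl):.2f} ns")
# DDR4-4000 CL15           -> 7.50 ns
# DDR5-7200 CL34 (assumed) -> 9.44 ns
```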

Speaking of DDR5, I picked up my DDR5 7200 today! My motherboard is supposed to arrive on Saturday, but I probably won't be able to install everything until Tuesday next week.

Can't say I'm looking forward to it. It's going to take hours :eek:
 

ondma

Diamond Member
Mar 18, 2018
3,005
1,528
136
I honestly don't think there's much of a difference. The 13900K is known to be bottlenecked by an RTX 4090 at 1080p high settings in many games, so you can imagine how much of a limitation the RTX 3090 is.

To mitigate the GPU bottleneck, he should have tested at 720p low settings, but then I'm sure he would have gotten complaints from less knowledgeable folk who don't understand how bottlenecking works.
I understand how bottlenecking works, but I am not sure a CPU-bound scenario at 720p is a reliable predictor of how a given CPU will perform at higher resolution with a more powerful card.
 
Jul 27, 2020
20,040
13,738
146
Can't say I'm looking forward to it. It's going to take hours :eek:
Because it's been years since you built a PC? I was afraid that I would mess things up too when checking out the i5-12400 with an ASUS mobo. Thankfully, it booted on the first try. But the 40+ seconds it took to get anything on screen was unnerving. My previous build was an i7-5775C maybe six months before that, and before that an i3-2100 in 2013 (upgraded to a used i7-3770 in 2019, and I had the seller install it on the same mobo).
 

Hulk

Diamond Member
Oct 9, 1999
4,525
2,518
136
I agree, especially with the quoted part. I might have posted this before, but I too keep Excel calculations with various scenarios going. See the image below. This particular one is just a simple take on Chips And Cheese's efficiency analysis (https://i0.wp.com/chipsandcheese.com/wp-content/uploads/2022/01/image-151.png?ssl=1). I made a major assumption that isn't quite correct: that 4 E cores and 1 P core take the same area. But it is pretty close to accurate.
View attachment 70252

In all power cases, given roughly equal area, you want as many E cores as possible. I hear a ton of people wanting an all-P-core CPU, and you can see that it would absolutely suck on Intel's node for anything multithreaded. This is pretty much the same as your conclusion: if you look at your 164W data, the configurations with the most E cores win in every single grouping.

But there is a big tradeoff that you mention: an all-E-core CPU sucks at single-threaded performance. And despite the lack of attention it gets in reviews, ST is what most people use in their day-to-day tasks. So, given Intel's limitations, a hybrid approach is the best move they have. Once we get to 32+ E cores it will all be very clear. Much clearer than the 8+4 ugly stepchild that we have in some Intel chips now.
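As a toy version of the area argument, here is a minimal sketch using the same 4-E-cores-per-P-core-area assumption from above, plus an assumed E-core MT throughput of 0.55x a P core (my own illustrative number, not a measurement):

```python
# Toy throughput-per-area comparison. Assumptions (not measurements):
#   - 4 E cores occupy roughly the same die area as 1 P core (from the post above)
#   - one E core delivers ~0.55x the MT throughput of one P core
P_AREA, E_AREA = 1.0, 0.25
E_MT_PERF = 0.55

def mt_throughput_per_area(p_cores: int, e_cores: int):
    area = p_cores * P_AREA + e_cores * E_AREA
    perf = p_cores * 1.0 + e_cores * E_MT_PERF
    return area, perf

for label, (p, e) in {"8P + 16E": (8, 16), "12P + 0E (same area)": (12, 0)}.items():
    area, perf = mt_throughput_per_area(p, e)
    print(f"{label}: area {area:.0f}, MT perf {perf:.1f}, perf/area {perf/area:.2f}")
# Under these assumptions, 8P+16E comes out roughly 40% ahead of an all-P part of equal area.
```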

I'm trying to make a valid P vs. E power consumption analysis, but the E's are slippery. If I set the P's to 5GHz and the E's to 4GHz, then test with 16 and with 4 E's active, it should be a simple matter of subtracting the two power levels and dividing by 12. All well and good, except that if you go with 0 and 4 E's you get a different result. And yet another different result with 4 E's and 8 E's. It's confounding.
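Here is that subtraction method as a tiny script; the wattages are placeholders to swap for real package-power readings, not measurements:

```python
# Marginal per-E-core power: compare two package-power readings that differ
# only in how many E cores are active (P cores fixed at 5 GHz, E at 4 GHz).
# The 190 W / 106 W figures below are placeholders, not real measurements.

def watts_per_e_core(power_high_w, cores_high, power_low_w, cores_low):
    return (power_high_w - power_low_w) / (cores_high - cores_low)

print(watts_per_e_core(190.0, 16, 106.0, 4))  # -> 7.0 W per E core
```

Part of why different core-count pairs disagree is that the subtraction assumes the shared uncore/ring power and the operating voltage stay constant between the two runs, which they don't quite do.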

I do know that an E core at 4GHz uses somewhere between 6 and 8 Watts. But those E cores are like electrons and seem to follow the laws of quantum mechanics: they just can't be accurately measured in isolation! They seem to behave differently in different situations. They are their own little double-slit experiment.
 

dullard

Elite Member
May 21, 2001
25,510
4,004
126
Just a wild idea, but if MTL has a 6P 8E tile, how about putting 2 of those together for 12P/16E?
You have two big problems with 12P/16E. One problem is cost. It is doable, but it would be costly.

The main problem though is power. In order for the P cores to perform well, they need a lot of power. A theoretical 12P/16E chip either needs to completely blow through power budgets in a way that hasn't been done before in desktop chips (think 500 W at turbo). Or it needs to drastically lower the frequency. Lower the P core frequency so much that the P cores are no longer worth it.

Yes, a theoretical 12P/16E chip without worrying about practical limitations would be fantastic. But unfortunately, those practical limitations make large numbers of P cores pretty unappealing. Maybe that would change drastically with a better process, but the P cores are just a power-hungry design. They would also need a lot more changes than just a different node.

As for the games you mention, there is still a fundamental problem: many games are reaching the limits of where more cores help. A programmer needs actual uses for the additional cores. Just throwing more cores at them doesn't necessarily make a game faster (having 12 ovens won't cook your Thanksgiving turkey any faster than one oven). And we already have GPUs for the highly parallel gaming tasks. You can only do so much with more cores as a game developer.
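The turkey analogy is basically Amdahl's law. A minimal sketch with an assumed 70% parallel fraction (illustrative only, not a measurement of any real game engine):

```python
# Amdahl's law: if only a fraction of the per-frame work parallelizes, adding
# cores quickly stops helping. The 0.70 parallel fraction is an assumption.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for n in (4, 8, 16, 32):
    print(f"{n:2d} cores: {amdahl_speedup(0.70, n):.2f}x")
# 4 -> 2.11x, 8 -> 2.58x, 16 -> 2.91x, 32 -> 3.11x (diminishing returns)
```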
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Because it's been years since you built a PC

Not only that, but also because it's going to take probably an additional two weeks or so of testing before the final settings are dialed in.

I plan on limiting the P cores to 5.2GHz and the E cores to probably 4.2GHz, at hopefully 200W or less with undervolting. I'm going with air cooling, so I don't want the kind of wattage this CPU can produce at stock settings, as that would definitely overwhelm my cooler. That said, the setup I'm building is supposed to be about the best you can do for an air-cooled rig.

Another thing that's going to suck the joy out of building this thing is that it won't be complete, because I haven't bought a GPU for it yet. If I pay scalper prices and buy an RTX 4090 off of eBay, this rig will have cost me almost $6000, by far the most money I've ever spent on a computer.

I'm hoping that AMD's RDNA3 delivers in a big way, so that Nvidia will be forced to cut prices and ramp up production of their RTX 4000 cards. Luckily I'll know by tomorrow whether it's worth waiting for RDNA3 or biting the bullet and paying 2 grand for an RTX 4090.

Until then, my trusty old Titan Xp is going to look out of place among all this fancy new hardware and the 4K monitor, which it can't even drive :(
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I understand how bottlenecking works, but I am not sure a CPU-bound scenario at 720p is a reliable predictor of how a given CPU will perform at higher resolution with a more powerful card.

If a CPU distinguishes itself and performs well at 720p with an RTX 4090, you can be assured that high-resolution gaming will be a cakewalk for it, because at that point the game will be 100% GPU limited.
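Put another way, the delivered frame rate is roughly the lower of a CPU-side cap (what a 720p CPU-bound test measures) and a resolution-dependent GPU-side cap. A toy sketch with made-up numbers:

```python
# Toy framerate model: delivered fps ~= min(CPU cap, GPU cap at that resolution).
# All figures below are made up for illustration.

cpu_cap = 220.0                                          # fps from a 720p CPU-bound test
gpu_caps = {"1080p": 250.0, "1440p": 170.0, "4K": 95.0}  # assumed GPU limits per resolution

for res, gpu_cap in gpu_caps.items():
    fps = min(cpu_cap, gpu_cap)
    print(f"{res}: {fps:.0f} fps ({'CPU' if gpu_cap > cpu_cap else 'GPU'} limited)")
```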
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
You have two big problems with 12P/16E. One problem is cost. It is doable, but it would be costly.

The main problem though is power. In order for the P cores to perform well, they need a lot of power. A theoretical 12P/16E chip either needs to completely blow through power budgets in a way that hasn't been done before in desktop chips (think 500 W at turbo). Or it needs to drastically lower the frequency. Lower the P core frequency so much that the P cores are no longer worth it.

Yes, a theoretical 12P/16E chip without worrying about practical limitations would be fantastic. But unfortunately, those practical limitations make large numbers of P cores pretty unappealing. Maybe that would change drastically with a better process, but the P cores are just a power-hungry design. They would also need a lot more changes than just a different node.

As for the games you mention, there is still a fundamental problem: many games are reaching the limits of where more cores help. A programmer needs actual uses for the additional cores. Just throwing more cores at them doesn't necessarily make a game faster (having 12 ovens won't cook your Thanksgiving turkey any faster than one oven). And we already have GPUs for the highly parallel gaming tasks. You can only do so much with more cores as a game developer.
Neither of those two reasons matters, because they only have to be favorable against the next Ryzen...
Ryzen is already up to 240W just to get enough clocks to barely hold up to Raptor, so AMD has the exact same two problems: they will either have to increase power by a ton and also add cores (because realistically, how high are you gonna go on clocks?), or they will have to add a ton of cores and still add power, because more cores need more power.

Even if TSMC makes a huge jump in reducing power draw, if they stay at 16 cores they either keep the same performance as the 7950X at much lower power draw, which wouldn't be great if the next Intel chip has higher performance, or they use the power budget to add another CCX, which would mean increased cost and still high power draw.
 

dullard

Elite Member
May 21, 2001
25,510
4,004
126
Neither of those two reasons matters, because they only have to be favorable against the next Ryzen...
Intel's biggest competition is the chips already out in people's hands. Those existing chips are far bigger competition than AMD (same goes for AMD, their biggest competition is those who don't feel the need to upgrade). If there is no reason to upgrade, then Intel suffers.
 

DrMrLordX

Lifer
Apr 27, 2000
22,065
11,693
136
Yes, a theoretical 12P/16E chip without worry about practical limitations would be fantastic. But unfortunately, those practical limitations make large numbers of the P cores pretty unappealing. Maybe that would change drastically with a better process, but the P cores are just are a power-hungry design. They would also need a lot more changes than just a different node.

Yeah extra power rails etc. Might be doable with an IVR. However the P cores are only "extra power hungry" at higher clocks. They can be downclocked like Zen3 or Zen4 cores to stay within a specific power envelope in MT scenarios. Intel has pushed Golden Cove/Raptor Cove well beyond its efficiency range at all clockspeeds, hence the high power draw.
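For a rough feel of why downclocking helps so much: dynamic power scales roughly with frequency times voltage squared, and the required voltage climbs with frequency. A sketch with assumed V/f points (my own illustrative values, not a real Raptor Cove V/f curve):

```python
# Dynamic power ~ f * V^2, normalized to 4.0 GHz at 1.00 V.
# The voltage/frequency pairs are assumptions for illustration only.

vf_points = {4.0: 1.00, 5.0: 1.20, 5.8: 1.40}  # GHz -> assumed core voltage

def relative_power(freq_ghz, volts, ref=(4.0, 1.00)):
    return (freq_ghz * volts ** 2) / (ref[0] * ref[1] ** 2)

for f, v in vf_points.items():
    print(f"{f} GHz @ {v:.2f} V: {f / 4.0:.2f}x clock for {relative_power(f, v):.2f}x power")
# ~1.25x clock costs ~1.8x power; ~1.45x clock costs ~2.8x power in this toy model.
```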
 

A///

Diamond Member
Feb 24, 2017
4,351
3,158
136
Yeah extra power rails etc. Might be doable with an IVR. However the P cores are only "extra power hungry" at higher clocks. They can be downclocked like Zen3 or Zen4 cores to stay within a specific power envelope in MT scenarios. Intel has pushed Golden Cove/Raptor Cove well beyond its efficiency range at all clockspeeds, hence the high power draw.
Small caveat there, m'lord. In gaming, the watt draw is within the wash range. In production, 13th gen will edge out the 7950X at a higher power draw, but this comes down to how the test was done. In real-life scenarios, minus video export, you would use your GPU to render and not CPU cycles, unless you use Arnold for work. The GPU would be faster, but with modern GPUs the power draw would be greater than that of the CPU; there is less time spent doing that work, though. Cinebench is a good example. If you use Maxon's rendering software Cinema 4D, you're using a third-party render engine that relies on GPU export and not the CPU. Your work is done faster but with a higher power draw.

The only time CPU export is best would be video. H.264, H.265, and AV1 export through hardware are not the best visually. There is H.266 on the horizon that aims to fix this via its compression algorithms, and it also aims to be very limited on licensing fees; it fixes what went wrong with H.265. With the threat of AV1 here with all three graphics card makers (which is still weird to say now), the H.26x group needs to offer a compelling reason for the industry to hold onto their product lineage.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
Intel has pushed Golden Cove/Raptor Cove well beyond its efficiency range at all clockspeeds, hence the high power draw.

I think the way they review it (whether that's the fault of Intel encouraging such a practice or reviewer stupidity) is the issue.

Because we're not in the '90s anymore. You can adjust the TDP settings very easily. If they simply told the manufacturers and reviewers to limit it to 258W, then no one would complain.

Let's say Raptor Lake was instead reviewed at 160W, for example; then most would claim the performance is not enough, even though the efficiency complaints would mostly disappear.

So we're having problems with what its "stock" settings are, when CPUs are almost as flexible as software buttons in terms of changing settings. This is why the T-series CPUs sell, despite the difference being mostly due to the stock settings being different from the K.

Perhaps a KT chip with 200W unlimited PL2 is the solution?
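On the "software buttons" point: on Linux, for example, the package power limits are exposed as plain sysfs files through the intel_rapl powercap driver, so a review-style 160W or 258W cap is a one-line change. A minimal sketch, assuming the usual /sys/class/powercap/intel-rapl:0 path (paths differ between systems, and writing needs root):

```python
# Read (and optionally set) the package PL1/PL2 limits via the Linux powercap
# sysfs interface. Values are in microwatts. Path and permissions vary by
# system; treat this as a sketch, not a tuning guide.

from pathlib import Path

PKG = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain

def read_limit_w(constraint: int) -> float:
    return int((PKG / f"constraint_{constraint}_power_limit_uw").read_text()) / 1e6

def write_limit_w(constraint: int, watts: float) -> None:
    (PKG / f"constraint_{constraint}_power_limit_uw").write_text(str(int(watts * 1e6)))

print("PL1 (long term): ", read_limit_w(0), "W")
print("PL2 (short term):", read_limit_w(1), "W")
# write_limit_w(0, 160)   # e.g. cap sustained package power at 160 W
```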
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The 13900K allegedly bottlenecks a 4090 at all three major resolutions. It is the fastest gaming and production CPU, and also the hottest. There is either some bad driver overhead on Nvidia's side or the 4090's true power can't be unleashed.

If you looked at the reviews you would know that this is false. The RTX 4090 can itself be a bottleneck depending on the game and the settings used.

Some games are inherently CPU bottlenecked due to being predominantly single-threaded, for instance, and other games just hammer the CPU due to all the entities and calculations taking place on screen, even if they are multithreaded.

And then you have the opposite: large open-world games like Cyberpunk 2077, which put a heavy burden on the GPU because of how much they have to render.
 

Hulk

Diamond Member
Oct 9, 1999
4,525
2,518
136
Dumb question from a non-gamer (well, not since Quake III anyway): with a 13900K or 7950X and a 4090, what are typical frame rates? Are they not high enough to be playable, or are we talking about the 0.0001% for competitive gamers?
 

jpiniero

Lifer
Oct 1, 2010
15,223
5,768
136
Dumb question from a non-gamer (well, not since Quake III anyway): with a 13900K or 7950X and a 4090, what are typical frame rates? Are they not high enough to be playable, or are we talking about the 0.0001% for competitive gamers?

Native 4K at 144 Hz. Even the 4090 can't deliver that in a lot of games... and certainly not with RT.
 

Grimnir

Member
Jun 8, 2020
27
10
51
Dumb question from a non-gamer (well, not since Quake III anyway): with a 13900K or 7950X and a 4090, what are typical frame rates? Are they not high enough to be playable, or are we talking about the 0.0001% for competitive gamers?
I mostly run VR, which offers the great feature of being both CPU and GPU limited. Draw calls can murder the CPU, and I typically run resolutions of 4864x2448 (5408x2736 would be considered "native"), aiming for 120fps. Suffice it to say, my 3080 gets to work.

---

Can't remember where I saw it, maybe it was der8auer, but I've seen some bracket thingy for LGA1700. It supposedly helps get a more even contact between cooler and IHS.

Is it worth getting one of those things? I intend to get the 13600K and overclock it. For cooling I have the Corsair H150i (older version) with Noctua fans.

Not sure what to expect from the 13600K in terms of OC and temperatures. My (delidded) 8086K at 5GHz 1.35V can get pretty hot during stress testing, but it's rather comfortable in normal use.
 

Exist50

Platinum Member
Aug 18, 2016
2,452
3,102
136
All of that is to say that a 6P/24E processor with other modest improvements, more L3, and higher clocks would likely have even better game performance than the 13900K does, which is why I don't doubt that Meteor Lake will perform quite well.
That assumes that other significant improvements exist. Therein lies the problem with MTL.
 

Doug S

Platinum Member
Feb 8, 2020
2,784
4,746
136
Intel's biggest competition is the chips already out in people's hands. Those existing chips are far bigger competition than AMD (same goes for AMD, their biggest competition is those who don't feel the need to upgrade). If there is no reason to upgrade, then Intel suffers.

Someone who gets it.

For years the improvements have been slowing and replacement cycles have been lengthening. We've seen the same thing on a smaller scale in the smartphone market: 10 years ago, yearly upgrades were common, at least for those who had some disposable income and were even slightly tech inclined. Now even the biggest Apple or Samsung booster would admit there's little point in upgrading an iPhone 13 to an iPhone 14, or an S21 to an S22. Those people are now upgrading every 2 or 3 years, and the people who were calling the yearly upgraders stupid back in 2012 went from 2-3 to 5+ year upgrade cycles.

People get excited about a "double digit" performance boost, but when that double digit is something like 15%, it isn't getting very many people on Zen 3 to move to Zen 4, or from Alder Lake to Raptor Lake. Most Zen 4 buyers will be people coming from Zen 1, or maybe Zen 2 for power users. Similar for Intel: I'm probably buying a Raptor Lake later this year or early next, to replace a Skylake. And the main reason I'm upgrading has nothing to do with feeling it is too slow; I'm upgrading because I want to switch to a dual M.2 motherboard for mirrored SSDs. The CPU upgrade is mostly incidental.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
Yeah extra power rails etc. Might be doable with an IVR. However the P cores are only "extra power hungry" at higher clocks. They can be downclocked like Zen3 or Zen4 cores to stay within a specific power envelope in MT scenarios. Intel has pushed Golden Cove/Raptor Cove well beyond its efficiency range at all clockspeeds, hence the high power draw.
The 13900K allegedly bottlenecks a 4090 at all three major resolutions. It is the fastest gaming and production CPU, and also the hottest. There is either some bad driver overhead on Nvidia's side or the 4090's true power can't be unleashed.

@TheELF You know as well as everyone else how much you love Intel, so your notion that Intel may produce a powerful next-generation processor is silly. We all know they will. I'll firmly state AMD was likely caught off guard by 13th gen's performance. I'd been following the Intel leaks like a heat-seeking missile and I was awed at the performance.
If Intel just reduces power on the new core by 10%, to increase IPC/clocks combined by 10% at the same power, and adds 4 more E-cores, they will have a strong enough CPU to make enough sales.
Even without the E-cores it would be the normal 10% improvement they made all of their money on in the past.
And that is only to compete against their own older CPUs.
I'll firmly state AMD was likely caught off guard by 13th gen's performance.
So tell us: if AMD weren't caught off guard, what would they have released?
I already gave some guesses on future CPUs they could do; what do you think AMD could have done if they had known exactly what Intel would bring out?