Intel processors crashing Unreal engine games (and others)


jpiniero

Lifer
Oct 1, 2010
14,569
5,200
136
Unfortunately the power consumption numbers from the Techpowerup review tell the same story we've seen in this thread, the one that probably contributes to system instability issues in the OP.

Still unlikely to be a problem. What could be (and probably is) the problem is MCE.
 

coercitiv

Diamond Member
Jan 24, 2014
6,173
11,800
136
Still unlikely to be a problem. What could be (and probably is) the problem is MCE.
MCE is a collection of settings that can vary from one mobo manufacturer to another, in both scope and values. The more appropriate way of addressing this issue is to consider and discuss the particular settings or limits that may lead to instability. I happen to agree with others in this thread who think ICC_max is a very likely candidate (or at the very least the first setting to bring back to stock before troubleshooting by trial and error).

The very high average power numbers we're seeing with 14900KS are not direct evidence that current limits are above Intel's spec, but they do increase the likelihood significantly.
 

jpiniero

Lifer
Oct 1, 2010
14,569
5,200
136
MCE is a collection of settings that can vary from one mobo manufacturer to another, in both scope and values.

The issue is that MCE sets the all core turbo to whatever the single core turbo is... and if you get a dud chip, the chip may not be able to do that.
 

H433x0n

Senior member
Mar 15, 2023
873
937
96
49W vs 188W in gaming, 3.8x power consumption of AMD. I guess those Cinebench E-cores really help with "gaming efficiency", amirite? It's no wonder Intel's new "gaming accelerator" utility disables them. o_O
It’s 160W, what’s the point of cherry picking when the results are already quite lopsided? Intel’s gaming utility doesn’t disable “Cinebench” E-cores. It disables 1 or 2 E-cores per cluster so that more L2$ is available per E-core.

508W in multi-threaded workloads with no power limit. Remember, that's what virtually every Intel enthusiast mobo does when you enable XMP.

Even at stock, "253W TDP" is nowhere to be found. Again, I wonder why Intel still isn't mandating reviewers lock PL1 / PL2 and disable XMP, despite Intel being aware their CPUs fail certain workloads. Hmmm, what could the reason be?

Comedy gold.
This post hit all of the cliche AMD-homer talking points and was dripping with bad faith. I don’t get the point of this type of stuff. I guess it probably felt cathartic for you while typing it out though.
 

Rigg

Senior member
May 6, 2020
467
958
106
The issue is that MCE sets the all core turbo to whatever the single core turbo is... and if you get a dud chip, the chip may not be able to do that.
No it doesn't. There is a max all-core turbo by default for the P-cores that is typically 200 MHz below the single-core turbo. There is also TVB, which depending on temperature gives an even larger delta between all-core and single-core. In the case of the 14900KS the all-core is 5.7, the single-core is 5.9 (Turbo Boost 3.0), and the TVB is 6.2.
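For anyone following along, the bin behavior described above can be sketched roughly like this. This is an illustrative toy model, not Intel's actual algorithm; the multiplier table and TVB temperature threshold are assumptions, with only the 5.7/5.9/6.2 GHz figures taken from the post:

```python
# Toy model of per-core-count turbo bins, TVB, and an MCE-style
# override, using the 14900KS clocks from the post above.
# The TVB temperature threshold (70C) is an assumed value.

TURBO_BINS_GHZ = {1: 5.9, 2: 5.9, 8: 5.7}  # active P-cores -> max clock (GHz)
TVB_BONUS_GHZ = 0.3                        # extra TVB headroom (5.9 -> 6.2)
TVB_TEMP_LIMIT_C = 70                      # assumed TVB temperature cutoff

def max_clock(active_p_cores: int, temp_c: float, mce: bool = False) -> float:
    """Return the highest clock (GHz) the CPU would request in this sketch."""
    if mce:
        # MCE-style override: run the single-core bin on all cores.
        base = TURBO_BINS_GHZ[1]
    else:
        # Pick the bin for the smallest listed core count >= active cores.
        eligible = [n for n in TURBO_BINS_GHZ if n >= active_p_cores]
        base = TURBO_BINS_GHZ[min(eligible)]
    # TVB only applies lightly threaded and while the die is cool.
    if active_p_cores <= 2 and temp_c < TVB_TEMP_LIMIT_C:
        base += TVB_BONUS_GHZ
    return round(base, 1)

print(max_clock(8, 80))            # stock all-core bin: 5.7
print(max_clock(1, 60))            # cool, lightly threaded, TVB: 6.2
print(max_clock(8, 80, mce=True))  # MCE pushes all cores to the 5.9 bin
```

The point of the sketch is the delta jpiniero and Rigg are arguing about: MCE collapses the 5.7/5.9 distinction, which is exactly the 200 MHz in question.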
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
14,569
5,200
136
No it doesn't. There is a max all-core turbo by default for the P-cores that is typically 200 MHz below the single-core turbo. There is also TVB, which depending on temperature gives an even larger delta between all-core and single-core. In the case of the 14900KS the all-core is 5.7, the single-core is 5.9 (Turbo Boost 3.0), and the TVB is 6.2.

200 MHz might not sound like much, but it could be all the difference.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,414
20,374
146
This post hit all of the cliche AMD-homer talking points and was dripping with bad faith. I don’t get the point of this type of stuff. I guess it probably felt cathartic for you while typing it out though.
I am not going to ding/report you for it, but how many times do we have to go over this? Attack the post, not the poster. The first part of your post was cogent and informative. By going after the person it becomes flame bait, inviting return fire. Which only serves to obfuscate the topic and is unconstructive to the dialogue. Engage them with facts, data, and personal experience with the hardware, not personal attacks.

Thank you for your understanding and cooperation.

- Moderator DAPUNISHER
 

Rigg

Senior member
May 6, 2020
467
958
106
200 MHz might not sound like much, but it could be all the difference.
There are predefined clock multipliers for the P-cores depending on how many cores are utilized. Without MCE, the CPU will run the P-cores at the all-core turbo multiplier during Tau. The CPU should be stable at stock clocks even with MCE, because it's designed and binned to run at that frequency during Tau; if it's not, it should be replaced under warranty.

Your speculation about instability while running at the all-core max is baseless. There are also temperature limits that are likely to prevent the CPU from maintaining the all-core frequency indefinitely without an extreme cooling solution. Since you're clearly unfamiliar with how turbo, Tau, power limits, and temp limits work on a modern Intel platform, and appear not to have spent any time in an LGA1700 UEFI, you should probably stop commenting on it.
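For readers unfamiliar with the PL1/PL2/Tau interplay mentioned above: a very rough sketch of the behavior is that the package can draw up to PL2 while a running average of power stays under PL1, and once the average catches up (on the order of Tau seconds) it gets clamped to PL1. This is a simplified toy model of the real EWMA-based algorithm, and the 125 W / 253 W / 56 s profile below is an assumed example, not a 14900KS spec:

```python
# Simplified model of turbo power budgeting: boost to PL2 while an
# exponentially weighted moving average of power is below PL1, then
# clamp to PL1. Numbers are an assumed 125/253 W, Tau = 56 s profile.

def simulate(load_w, pl1=125.0, pl2=253.0, tau=56.0, dt=1.0):
    """Return the power actually allowed each step under a toy EWMA budget."""
    avg = 0.0
    alpha = dt / tau                      # EWMA weight per time step
    allowed_trace = []
    for demand in load_w:
        cap = pl2 if avg < pl1 else pl1   # boost only while the average is under PL1
        allowed = min(demand, cap)
        avg = (1 - alpha) * avg + alpha * allowed
        allowed_trace.append(allowed)
    return allowed_trace

# 300 seconds of an all-core load demanding 253 W the whole time:
trace = simulate([253.0] * 300)
print(trace[0])    # 253.0 -- full PL2 boost at the start
print(trace[-1])   # 125.0 -- clamped to PL1 once the average catches up
```

This is why "stable at the all-core multiplier during Tau" matters: on a stock profile the peak draw is time-limited, whereas the PL1=PL2 boards discussed later in the thread never clamp down at all.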
 
Last edited:
  • Like
Reactions: DAPUNISHER

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
A parity bit will only protect against a single bit error. If the higher power is making errors more likely than it's also making double bit errors more likely and those will go undetected because the parity will still check out. The higher voltage needed to push the clocks would also make tunneling more likely so we would expect an increase in the chances of bits flipping somewhere along the line.
While that's all true, I'd think once we're at that point we're talking about much more corruption than what the OP is about.

It's usually the data paths outside the chips that are the weak part when OC'ing, and ECC would ensure that the ones between the IMC and RAM are at least being checked.
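The parity limitation quoted above is easy to demonstrate in a few lines. A quick sketch with a made-up 8-bit word:

```python
# Demonstration of the parity limitation: even parity catches any
# single-bit flip, but a double flip leaves the parity unchanged
# and sails through undetected.

def parity(bits):
    """Even parity of a list of 0/1 bits."""
    return sum(bits) % 2

word = [1, 0, 1, 1, 0, 1, 0, 0]
p = parity(word)  # stored check bit

single = word.copy()
single[3] ^= 1                 # one bit flipped in transit

double = word.copy()
double[3] ^= 1
double[6] ^= 1                 # two bits flipped in transit

print(parity(single) != p)     # True  -> single-bit error detected
print(parity(double) != p)     # False -> double-bit error missed
```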
 

coercitiv

Diamond Member
Jan 24, 2014
6,173
11,800
136
Not necessarily.
Absolutely necessarily. All cores must be able to boost to advertised clocks, and each core has its own VID table to make sure it hits those clocks safely. The final VID request is obtained by picking the max VID requirement from all cores on the voltage rail, including a variable offset introduced by the mobo's load-line settings.

If you have a documented reason for why Intel CPUs would be unstable at higher MT multipliers than stock, but still within the CPU VID table, then please share it with us with more than just two words.
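The max-VID selection described two paragraphs up amounts to something like the following. The per-core VID values here are invented for illustration; only the "max across cores plus load-line offset" logic comes from the post:

```python
# Sketch of the VID selection logic described above: the final rail
# voltage request is the max of the per-core VID requirements for the
# target clock, plus a board-dependent load-line offset.
# All voltage numbers below are made up for illustration.

CORE_VID_AT_57X = [1.32, 1.29, 1.35, 1.31, 1.30, 1.33, 1.28, 1.34]  # volts

def final_vid(core_vids, loadline_offset_v=0.02):
    """Max VID across all cores sharing the rail, plus the mobo's load-line offset."""
    return max(core_vids) + loadline_offset_v

print(final_vid(CORE_VID_AT_57X))  # ~1.37 V: the hungriest core sets the rail
```

Note the implication: one core with a high VID requirement drags the whole rail up, which is why every core must be validated at its advertised clock.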
 
  • Like
Reactions: Tlh97 and Rigg

coercitiv

Diamond Member
Jan 24, 2014
6,173
11,800
136
Screenshot from Intel's review guide posted by der8auer in his 14900KS video:

(attached screenshot: 1710764719500.png)

der8auer did not have access to this review guide before benchmarking, so he used Auto values on his motherboard. This resulted in his test runs being limited by his AIO cooling, as the CPU reached 390W+ and was limited by high temps.

The 307A limit for ICC_max is old news now, the new limit is 400A. New PL1=PL2=320W, but who enforces that anyway...
 

Rigg

Senior member
May 6, 2020
467
958
106
I happen to agree with others in this thread who think that ICC_max is a very likely candidate. (or at the very least the first setting to bring back to stock before figuring out the problem by tiral and error)
My hypothesis is that Vdroop is causing the actual instability, and I suspect reducing ICC_max is just a roundabout way to solve it. I'd be interested to find out what would happen if the people in the reddit thread bumped LLC up a level or two instead of limiting ICC_max. I think this probably would also have prevented stability issues during shader compilation. As would a sane temperature limit, although that is pretty much just an alternative way to limit current.
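The Vdroop argument above reduces to simple Ohm's-law arithmetic: the voltage the cores actually see is the VID minus I times the load-line resistance, where LLC controls that resistance. A back-of-envelope sketch, with every number an illustrative assumption rather than an Intel or board-vendor spec:

```python
# Back-of-envelope Vdroop: V_core = VID - I * R_loadline.
# A stiffer LLC shrinks R_loadline; a lower ICC_max caps I.
# Either way the voltage under load goes up, which is the
# hypothesized fix. All numbers are illustrative assumptions.

def core_voltage(vid_v, current_a, loadline_mohm):
    """Effective voltage at the cores after load-line droop."""
    return vid_v - current_a * loadline_mohm / 1000.0

vid = 1.40  # assumed VID under load, volts
print(core_voltage(vid, 400, 1.1))  # loose LLC at 400 A: heavy droop
print(core_voltage(vid, 400, 0.5))  # stiffer LLC: less droop
print(core_voltage(vid, 307, 1.1))  # or cap the current (ICC_max) instead
```

Both knobs land in the same place: less droop under load. That is why limiting ICC_max can mask a problem that is really about load-line behavior.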

The 307A limit for ICC_max is old news now, the new limit is 400A. New PL1=PL2=320W, but who enforces that anyway...
Which begs the question: why doesn't Intel put a leash on their board partners' default settings if they're going to officially support these kinds of power limits with these extreme power delivery profiles?
 

Trovaricon

Member
Feb 28, 2015
28
43
91
Today's x86 CPUs are SoCs: most if not all of the ICs that could affect performance at given settings are already integrated into the CPU package. Yet if you check motherboard reviews, you still see, same as with RAM reviews, that the idea that the light from a $100 bulb might be "faster" than the light from one bought years ago at a dollar store is given main focus to this day. Tell me how it is possible that 5 boards running the same CPU and RAM settings exhibit different "I win internet points in benchmark xyz" scores.
The only possible explanation is that they are not actually running the same spec... or that some of the boards are actually faulty (e.g. a signal integrity problem).

We are way too many years past "let's compare boards with single-channel DIMMs, dual channel, and northbridges from NVIDIA, ATi, VIA, Intel and other vendors", which actually had different memory controllers - or even different on-board L2/L3 cache (e.g. [Super] Socket 7), FSB speeds, etc.
 

BFG10K

Lifer
Aug 14, 2000
22,694
2,923
126
Water cooled, top-end thermal paste, undervolted, and a third-party contact frame: still throttles. You have to delid on top of all that to run properly.


The product isn't fit for purpose, plain and simple. Intel needs to be sued for false advertising and be forced to put a disclaimer on the box "warning, delidding required for correct operation".

I mean they're basically admitting themselves there's a problem:


Ship a defective product then charge $200 extra to "fix" it. Who could possibly defend this anti-consumer behavior? o_O

And still not a peep after they were "looking into" why their CPUs fail certain workloads. Much easier to just keep quiet rather than tell reviewers/board vendors to enforce PL1/PL2 and risk lower benchmark charts.
 
Last edited:

BFG10K

Lifer
Aug 14, 2000
22,694
2,923
126

Does anyone really think Intel will acknowledge the issue and tell board partners to enforce settings that'll lower benchmark charts? This is the same corporation that hid an industrial water chiller under the table during that Computex "demo". :rolleyes:
 

NTMBK

Lifer
Nov 14, 2011
10,225
4,998
136
I remember the days when AMD were ruthlessly mocked for their 220W FX-9590, the last hurrah of Piledriver. "Space heater", people called it. And now this is just a normal and acceptable power limit for Intel's main CPU? What the hell happened?
 
Jul 27, 2020
16,099
10,157
106
What the hell happened?
Zen 3. Then V-cache. Then Zen 4. Then V-cache of that too. By the time Zen 5 is done, reviewers may be recommending everyone to move to the South Pole to truly enjoy a top end Intel CPU without thermal throttling :D

But psychologically speaking, people just love big numbers. That's why quite a few folks will throw their money on really expensive (both initial cost and long run cost) 14900KS rigs, just to feel content that they have the "fastest" CPU on the planet.
 
  • Haha
Reactions: lightmanek

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,538
14,494
136
Zen 3. Then V-cache. Then Zen 4. Then V-cache of that too. By the time Zen 5 is done, reviewers may be recommending everyone to move to the South Pole to truly enjoy a top end Intel CPU without thermal throttling :D

But psychologically speaking, people just love big numbers. That's why quite a few folks will throw their money on really expensive (both initial cost and long run cost) 14900KS rigs, just to feel content that they have the "fastest" CPU on the planet.
Also, if you have anything that uses more than 8 heavily tasked cores, or AVX-512, or runs for more than 10 minutes, the 14900K/KS can't touch a 7950X (both at stock).

Zen 5 should just take away any advantage Intel ever had under any conditions. That's the one thing I WOULD bet on, regardless of the 40% thing. Genoa already does this for servers by a VERY wide margin; Zen 5 will continue that trend.
 

BFG10K

Lifer
Aug 14, 2000
22,694
2,923
126
And now this is just a normal and acceptable power limit for Intel's main CPU?
Bulldozer/Piledriver was a bad product and so are these K(S) SKUs from Intel.

Some people have an emotional attachment to products they purchase and the corporations that make them. Problem is, defending bad products doesn't help them or other customers.

apu.jpg

What the hell happened?
The Intel monopoly sat on their rectum for 7 years and allowed AMD to overtake them.
 

poke01

Senior member
Mar 8, 2022
713
680
106
What the hell happened?
Intel was dead in terms of innovation for the last decade. Their IFS can't even ramp up Arrow/Lunar now because of a lack of ASML machines, so they need TSMC. That will change in a few years if they secure enough machines for their own nodes.

As for client, the quality of Core Ultra 9 200/Arrow Lake (what a horrible name) will determine Intel's next 2 years of output.
 
Jul 28, 2023
98
345
86
Part of the reason it's accepted is that the 14900K is still a very quick CPU when it doesn't crash, unlike the FX.

But yeah, its power draw is embarrassing, and I wish non-HEDT CPUs never went past 100W at stock. Same goes for GPUs, tbh; 300W+ monstrosities are unacceptable. 225W is the maximum I find reasonable.
 
Last edited:

Ranulf

Platinum Member
Jul 18, 2001
2,345
1,164
136
Bulldozer/Piledriver was a bad product and so are these K(S) SKUs from Intel.

At least they (AMD) ended up dropping the price. Here Intel is charging more for no real increase in performance, just to fix the stability problems. In 1.5 years the real question will be what discounts Intel will offer, like they did with 12th gen. Though I doubt it will be worth the hassle vs. getting equivalent price/perf AMD products.
 
  • Like
Reactions: lightmanek

Hans Gruber

Platinum Member
Dec 23, 2006
2,130
1,088
136
I can confirm Intel has an issue with Unreal Engine games. I played PUBG on a 7600K and the game would run perfectly, then introduce lag, latency, and what seemed like video driver crashes, but it would recover. I think it's a CPU issue. I never thought the day would come when an i5 processor would be cut down by a game as old as PUBG; it's been around for 7 years. This reinforces the problem Intel CPUs have with Unreal Engine games. I hope they fix the problems.