Discussion Intel current and future Lakes & Rapids thread


Hitman928

Diamond Member
Apr 15, 2012
5,244
7,793
136
It is pure garbage. They claim to use "machine learning" to feed "heuristics models" to detect threats. How exactly? With what training? How do they distinguish between real threats and just regular legitimate software running at the hardware layer? The entire notion is laughable.

Their machine learning promotional stuff, I believe, is for their more generic advanced threat protection, or whatever they're calling it. I agree it's a lot of hand-waving and smoke and mirrors, but Intel is hardly alone on this one; MS and many others are putting out the same hollow stuff.
 

coercitiv

Diamond Member
Jan 24, 2014
6,187
11,858
136
Then why didn't Intel just go ahead and compare it to the 5950X?
You have 3 equally valid and mutually exclusive comparison criteria:
  • same price
  • same flagship status
  • same core count
Choose one, and please make the correct choice. Anybody else is free to complain you did not choose one of the others.

Your choice is incorrect.
 

Hulk

Diamond Member
Oct 9, 1999
4,214
2,006
136
You have 3 equally valid and mutually exclusive comparison criteria:
  • same price
  • same flagship status
  • same core count
Choose one, and please make the correct choice. Anybody else is free to complain you did not choose one of the others.

Your choice is incorrect.

Hmm. My apologies, I have not communicated my point correctly.

I believe most people shop CPUs based on performance and price. When two CPUs are at the same price point, they will generally choose the one that performs better. I believe 11900k vs 5800x at the same price point will be very competitive and may come down to the specifics of the user. For example, for me the 11900k might be a better choice since I need low latency for multitrack recording and mixing (Studio One) and since I don't game (don't need a discrete GPU) a strong iGPU is useful for my video editor of choice, Vegas Pro. This is NOT to say that Zen 3 doesn't perform well in NLEs as far as latency is concerned, just that there's no solid data yet.

But... if Intel is pricing the 11900k with the 5900x then Intel is out of luck for me. I'll either spend less and get the 5800x or spend more and get something much stronger than the 11900k in the 5900x.
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
6,187
11,858
136
I believe most people shop CPUs based on performance and price. When two CPUs are at the same price point, they will generally choose the one that performs better.
And some smart people in marketing have figured out they shouldn't compete at the same price point, hence we now have competing products at different price points.

5800X MSRP is ~$450
10900K MSRP is ~$490
5900X MSRP is ~$550

That's why I joked about every choice being wrong, since we can always think of another criterion that favors one SKU over the other.

But... if Intel is pricing the 11900k with the 5900x then Intel is out of luck for me. I'll either spend less and get the 5800x or spend more and get something much stronger than the 11900k in the 5900x.
Or you'll spend less and get the 10700K, or the 10700 for that matter. There will be plenty of options from both Intel and AMD, and I don't see why people are fixating on this 10900K vs 5900X comparison with a very narrow scope (gaming).

For example, for me the 11900k might be a better choice since I need low latency for multitrack recording and mixing (Studio One)
When using a single CCD, Zen 3 inter-core latency stays within 15-19ns according to Anandtech.
For workloads which are synchronisation heavy and are multi-threaded up to 8 primary threads, this is a great win for the new Zen3 CCD and L3 design. AMD’s new L3 complex in fact now offers better inter-core latencies and a flatter topology than Intel’s ring-based consumer designs, with SKUs such as the 10900K varying between 16.5-23ns inter-core latency.
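If anyone is curious how these core-to-core numbers get measured, the usual approach is a ping-pong microbenchmark: pin two threads to two different cores and bounce a cache line between them. Here's a minimal C sketch of the idea (my own illustration, not necessarily AnandTech's exact methodology; core IDs 0 and 1 are just an example):

```c
/* Minimal core-to-core "ping-pong" latency sketch (Linux, gcc -O2 -pthread).
 * Two threads pinned to different cores bounce a value through one shared
 * cache line; each round trip is two cross-core cache-line transfers. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000

static _Atomic int flag = 0;   /* the shared cache line being bounced */

static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ponger(void *arg)
{
    pin_to_cpu((int)(long)arg);
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
            ;                               /* spin until pinger writes 1 */
        atomic_store_explicit(&flag, 0, memory_order_release);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    struct timespec a, b;

    pin_to_cpu(0);                                 /* pinger on core 0 */
    pthread_create(&t, NULL, ponger, (void *)1L);  /* ponger on core 1 */

    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < ITERS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);
        while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
            ;                               /* spin until ponger writes 0 */
    }
    clock_gettime(CLOCK_MONOTONIC, &b);
    pthread_join(t, NULL);

    double ns = (b.tv_sec - a.tv_sec) * 1e9 + (double)(b.tv_nsec - a.tv_nsec);
    /* a round trip is two one-way hops, so divide by 2 */
    printf("core 0 <-> core 1: ~%.1f ns one-way\n", ns / ITERS / 2.0);
    return 0;
}
```

Run that over every core pair and you get the latency matrices the reviews publish.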

Do you know how 5800X performs in Studio One?

since I don't game (don't need a discrete GPU) a strong iGPU is useful for my video editor of choice, Vegas Pro.
Are you sure choosing the most expensive mainstream CPU and relying on the iGPU for video editing is the best idea? Here are two reviews with video processing acceleration in Vegas Pro 17 and Vegas Pro 18. In the second review they didn't even bother with the iGPU anymore (we could assume they dropped it for some benign reason, though results in Vegas 17 indicate otherwise).

One newer codec or technology is sometimes all it takes to make an iGPU irrelevant from a compute PoV. My 6600K was great as a relatively low-power Plex server CPU, but once HEVC became more prevalent it immediately required dGPU assistance or direct replacement.
 
Last edited:

scineram

Senior member
Nov 1, 2020
361
283
106
For example, for me the 11900k might be a better choice since I need low latency for multitrack recording and mixing (Studio One) and since I don't game (don't need a discrete GPU) a strong iGPU is useful for my video editor of choice, Vegas Pro.
The Xe in it is one third of the Tiger Lake GPU, I think. We'll see how strong.
 

Hulk

Diamond Member
Oct 9, 1999
4,214
2,006
136
Do you know how 5800X performs in Studio One?

No, I don't, which is why I specifically stated that it might perform really well based on the latency tests in the review. But we won't know until there is some solid data for my specific application.

Vegas Pro and GPU. The GPU accelerates timeline playback, what I call assembly of the timeline, and final rendering to codecs that I personally consider inferior to frameserving to Handbrake. So for me the only thing the GPU is good for is accelerating playback while editing. I have found that even the lowly iGPU in my 4770k does a good job with playback compared to when I disable it. The Xe graphics in RL has a LOT more compute than what I'm using now, so that will be a huge upgrade for me.

Why not a discrete GPU? I have experimented with various discrete GPUs and found that system stability is much better with the iGPU.


Again, my main point about pricing is that when you come to the conclusion that the 11900k is comparable performance-wise to the 5800X, they will have to be priced comparably. And that fact necessitates everything in the stack below it being priced below it. And Intel also has to deal with the fact that the 5700X is also a standout performer. All I'm saying is it's been a long time since Intel has faced such stiff competition on the desktop.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
Gen 11 is 1.5-2x faster than the 630 in that review. Xe is even faster.

The iGPU in Intel systems, if the software supports it, can work in concert with the dGPU. The post below from an Adobe engineer explains this, and the video link below gives a demonstration of it, along with a pretty scathing rebuke of most sites trying to do benchmarks with the iGPU disabled. This is with Adobe Premiere Pro; no idea if Vegas supports this.

[screenshot: Adobe engineer's post explaining combined iGPU + dGPU use in Premiere Pro]


This is what it looks like when Premiere is properly configured to use iGPU + dGPU:

[screenshot: both iGPU and dGPU under load during a Premiere encode]

Both iGPU and dGPU being used during encode. Per the Adobe engineer, the dGPU would be doing effects while the iGPU would be doing the encode:

[screenshot: the Adobe engineer's note on splitting effects (dGPU) and encode (iGPU)]
 
  • Like
Reactions: Hulk

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
I have found that even the lowly iGPU in my 4770k does a good job with playback compared to when I disable it. The Xe graphics in RL has a LOT more compute than what I'm using now, so that will be a huge upgrade for me.

Why not a discrete GPU? I have experimented with various discrete GPUs and found that system stability is much better with the iGPU.


Again, my main point about pricing is that when you come to the conclusion that the 11900k is comparable performance-wise to the 5800X, they will have to be priced comparably. And that fact necessitates everything in the stack below it being priced below it.
The Xe iGPU is not free though, especially seeing how well you believe it'll help your work. I think with Xe, people are going to need to start appreciating and giving Intel some overdue credit for sticking with iGPUs even while battling a resurgent AMD riding high on its testosterone-fueled Zen 3 arch + TSMC silicon.
 

Hulk

Diamond Member
Oct 9, 1999
4,214
2,006
136
The Xe iGPU is not free though, especially seeing how well you believe it'll help your work. I think with Xe, people are going to need to start appreciating and giving Intel some overdue credit for sticking with iGPUs even while battling a resurgent AMD riding high on its testosterone-fueled Zen 3 arch + TSMC silicon.

Yes, for someone like me who doesn't need a discrete GPU but requires the compute for video editing, RL with Xe is a big asset. No doubt about it.

@shady28,

Thanks for posting that info. I'm pretty sure Premiere Pro is better coded to use GPUs than Vegas Pro. As I wrote above, I have absolutely no use for the GPU-accelerated codecs in Vegas Pro. I have found their quality vs compression to not even be in the same league as frameserving to Handbrake. Yes, Handbrake is slower, but the final result at the same file size is so much better. And since Vegas Pro uses the GPU for playback, it actually uses my iGPU to "assemble" the timeline, which is then sent to Handbrake for encoding. It's a pretty good use of both CPU and iGPU, I've found. RL in this scenario will be well suited to my workflow.

As for Intel Quick Sync encoding, and this is just my assessment, it's completely useless. The encoding quality vs file size compared to Handbrake is terrible. And when you back off the Handbrake motion search (i.e. move to "fast"), with a strong CPU it's not even like Handbrake is much slower, and the quality is still better vs file size. Again, this is just my experience. There might have been a reason for Quick Sync encoding like 10 or so years ago, but not today with the CPU compute we have available. Perhaps if Intel had hardware encode as efficient as Handbrake's encoding engine, but as it stands now I'll pass.
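For reference, this is roughly the HandBrakeCLI equivalent of what I mean by backing the motion search off to "fast" (file names are placeholders, and I'm quoting the flags from memory, so double-check them against your HandBrake version):

```sh
# Software x264 encode at constant quality (RF 20) with the faster
# x264 preset; the input stands in for whatever the frameserver exposes.
HandBrakeCLI -i timeline_frameserve.avi -o final_1080p.mp4 \
    -e x264 -q 20 --encoder-preset fast
```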

Quick story. I've been involved with NLEs for quite some time, having co-written a few books on the subject (https://www.amazon.com/Hdv-What-You...v+what+you+need+to+know&qid=1610725677&sr=8-2), been a speaker at NAB in the '00s, and beta tested for Sony and Ulead (remember them?), and I remember in the '90s speaking with some software engineers about when we were going to be able to edit MPEG-2 on home computers without hardware assist. They basically said, it's gonna be a loooong time. This is when one of the big selling points of the PII 300 was that it could play back 480i MPEG-2 streams! Actually it took more like a PII 400, but that's another story. Anyway, I think it was like 2 years later when we had MPEG-2 editing capability. Even the engineers can't predict how fast hardware is moving. Hence Intel Quick Sync was kind of redundant before it left the factory door...
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Hulk, are you at all interested in the prospect of an Intel Xe PCIe adapter fitted with a 96 EU Xe module, like the one present on the Tiger Lake G7 processors? I know that you didn't see a use for traditional dGPUs, but Intel is pushing a dGPU version of the LP Xe that's present in Tiger Lake.
 

Bouowmx

Golden Member
Nov 13, 2016
1,138
550
146
As for Intel Quick Sync encoding, and this is just my assessment, it's completely useless
The i7-4770K with Intel QSV from 2013 (Gen7.5 graphics): of course that's gonna be useless since it's so far removed from Gen12 Xe.
 

misuspita

Senior member
Jul 15, 2006
400
438
136
How would the AMD APU fare in this? A 4750G, for example, would have a relatively powerful GPU alongside, no? Or am I missing something?
 

Hulk

Diamond Member
Oct 9, 1999
4,214
2,006
136
Hulk, are you at all interested in the prospect of an Intel Xe PCIe adapter fitted with a 96 EU Xe module, like the one present on the Tiger Lake G7 processors? I know that you didn't see a use for traditional dGPUs, but Intel is pushing a dGPU version of the LP Xe that's present in Tiger Lake.

Yes, I think so (explanation below). I just did a quick render test with and without iGPU support using my 4770k. This is for a short, like 2-minute project that is moderately complex with a few compute-intensive video filters. Encoded to x264 via frameserve to Handbrake with the 1080p General setting.

13.8 fps with iGPU support, GPU load around 80%, CPU load 100%

3.7 fps without iGPU support, GPU load 2-3%, CPU load 100%

So as you can see, even with my lowly 4770k-based HD 4600 iGPU, offloading the "assembly" of the timeline to the GPU speeds up my preferred rendering workflow by a factor of almost four (13.8 / 3.7 ≈ 3.7x). Also, the iGPU isn't even at 100%, so the CPU is probably the bottleneck here. Now if I move to 8 cores there will be a larger "demand" on the CPU to keep up, but still I'm thinking even the 32 EU Xe graphics would probably be fine.

That's why I wrote "I think so" above. Currently, except for really fx-heavy parts, my 1080p timelines preview in nearly real time. Of course this will change as I start to edit 4K, so that's why I'm thinking probably yes. But then again, the Xe iGPU in RL might be enough for me. If I do go with RL I'll let you know.
 

Hulk

Diamond Member
Oct 9, 1999
4,214
2,006
136
The i7-4770K with Intel QSV from 2013 (Gen7.5 graphics): of course that's gonna be useless since it's so far removed from Gen12 Xe.

I'm pretty sure the quality vs encoding size is the same for all Intel hardware; the newer hardware is just faster at creating large files with low quality (compared to Handbrake).
 

mikk

Diamond Member
May 15, 2012
4,133
2,136
136
As for Intel Quick Sync encoding, and this is just my assessment, it's completely useless. The encoding quality vs file size compared to Handbrake is terrible. And when you back off the Handbrake motion search (i.e. move to "fast"), with a strong CPU it's not even like Handbrake is much slower, and the quality is still better vs file size. Again, this is just my experience. There might have been a reason for Quick Sync encoding like 10 or so years ago, but not today with the CPU compute we have available. Perhaps if Intel had hardware encode as efficient as Handbrake's encoding engine, but as it stands now I'll pass.


Handbrake is using bad out-of-the-box settings for Intel Quick Sync. Furthermore, Xe Quick Sync has greatly improved over Gen9, and HEVC encoding on Xe is a fully hardware-based solution. On Gen9, HEVC encoding is a hybrid of GPU and fixed function, so speed and power consumption are much worse as a result. Xe CQP bitrate mode is actually really good in quality and speed; if you have an Xe device you just have to use 16 b-frames and an offset of 2_6_8.
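If you want to experiment outside Handbrake's defaults, something like this ffmpeg invocation should get you close (option names are from ffmpeg's hevc_qsv encoder as far as I know them, so verify against your build; the 2_6_8 QP offsets are set through the driver/vendor tooling and I don't believe plain ffmpeg exposes them):

```sh
# Hardware HEVC encode on the Xe media engine via Quick Sync;
# -q:v requests constant-QP rate control, -bf allows up to 16 B-frames.
ffmpeg -i input.mp4 -c:v hevc_qsv -q:v 24 -bf 16 -c:a copy output_qsv.mkv
```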

I'm pretty sure the quality vs encoding size is the same for all Intel hardware; the newer hardware is just faster at creating large files with low quality (compared to Handbrake).


This is wrong, Gen9 looks much worse. I have both.
 
  • Like
Reactions: Tlh97 and Hulk

cortexa99

Senior member
Jul 2, 2018
319
505
136
About the Xe graphics, I hope Intel makes some progress on the compatibility issues that were often seen in their old IGP architectures. Performance-wise it's another story, and I think no one here cares about an IGP's gaming performance.
 

Det0x

Golden Member
Sep 11, 2014
1,028
2,953
136

Yet another "review" of Rocket Lake.

For those who don't want to watch the video, screenshots can be found here:

Intel Core i9-11900K 8 Core Rocket Lake Flagship CPU Benchmarked at 5.2 GHz, Faster Than Core i9-10900K In Single-Core But Slower in Gaming & Multi-Threaded Apps @ https://wccftech.com/intel-core-i9-...00k-5-2-ghz-overclock-benchmarks-gaming-leak/

Coming straight to the performance numbers, both CPUs were tested at an overclock frequency of 5.2 GHz across all cores. Do note that the Intel Core i9-11900K features a brand new architecture but has 2 fewer cores than the Core i9-10900K which relies on an enhanced Skylake architecture. Both CPUs were tested on a Z490 motherboard and the memory featured was 16 GB DDR4-3600 due to a lock on Rocket Lake. The Core i9-11900K voltage was set to auto and hit 1.48V. A chiller was required to maintain the 5.2 GHz overclock.

In CPU-z, the Intel Core i9-11900K is 11% faster than the Core i9-10900K in single-core tests but ends up 12% slower in multi-core tests. In Cinebench R15, the Intel Core i9-11900K once again takes a 12% lead over the Core i9-10900K but also ends up 12% slower in multi-core tests. The Cinebench R20 & Cinebench R23 results show a 16% gain for the Core i9-11900K over its Core i9 predecessor but again, when it comes to multi-threaded performance, the CPU is no match to its predecessor.

In x264 1080p, the Intel Core i9-11900K loses to the Core i9-10900K by 5 FPS. The same is true for V-Ray where the Core i9-10900K outpaces the Core i9-11900K by 14 more MPaths. Moving over to 3DMark results, our friend over at Twitter, Harukaze5719, has compiled a chart that compares the performance of the Core i9-11900K and Core i9-10900K in all said tests. The Core i9-11900K loses to the 10th Gen flagship in all tests.

Lastly, we have gaming performance tests and the results here are underwhelming for the Rocket Lake flagship. The Intel Core i9-11900K seems to be just slightly better or on par with the Core i9-10900K but there are also titles that show performance reduction versus the Core i9-10900K. The previous benchmarks also showed similar performance numbers and further mentioned how hot and power-hungry the flagship Rocket Lake CPU is going to be.
 

Hulk

Diamond Member
Oct 9, 1999
4,214
2,006
136
Handbrake is using bad out-of-the-box settings for Intel Quick Sync. Furthermore, Xe Quick Sync has greatly improved over Gen9, and HEVC encoding on Xe is a fully hardware-based solution. On Gen9, HEVC encoding is a hybrid of GPU and fixed function, so speed and power consumption are much worse as a result. Xe CQP bitrate mode is actually really good in quality and speed; if you have an Xe device you just have to use 16 b-frames and an offset of 2_6_8.




This is wrong, Gen9 looks much worse. I have both.

Thanks for bringing me up to speed on this.
 

Kedas

Senior member
Dec 6, 2018
355
339
136
If that review is about right for the 11th gen, then I don't think anyone is going to rush out to buy it (supply problem solved). Wasn't gaming performance supposed to be better compared with the previous gen, more like AMD?
 
  • Like
Reactions: lightmanek

Asterox

Golden Member
May 15, 2012
1,026
1,775
136
If those benches are accurate, more like Rowboat Lake.

As far as I remember, the 10-core/20-thread i9-10900K does not need a chiller for a 5.2 GHz overclock. Maybe they mixed up "chiller" with regular CPU water cooling.


"Coming straight to the performance numbers, both CPUs were tested at an overclock frequency of 5.2 GHz across all cores. Do note that the Intel Core i9-11900K features a brand new architecture but has 2 fewer cores than the Core i9-10900K which relies on an enhanced Skylake architecture. Both CPUs were tested on a Z490 motherboard and the memory featured was 16 GB DDR4-3600 due to a lock on Rocket Lake. The Core i9-11900K voltage was set to auto and hit 1.48V. A chiller was required to maintain the 5.2 GHz overclock."

Well, it is OK, as long as no one mentions the 8/16 Rocket Lake's power consumption. :grinning:
 
Last edited:
  • Like
Reactions: lightmanek

Kuiva maa

Member
May 1, 2014
181
232
116
If that review is about right for the 11th gen, then I don't think anyone is going to rush out to buy it (supply problem solved). Wasn't gaming performance supposed to be better compared with the previous gen, more like AMD?

I bet it is better than the previous gen 8-core in gaming. Good product, but in the same ballpark as what we have out there already, it seems.
 

SAAA

Senior member
May 14, 2014
541
126
116
I wouldn't trust that "review" at all, especially given their "superb" ability at overclocking by setting things to auto and pumping 1.48V into the CPU.
That it's barely 10% faster in single-core workloads, when several other leaks are closer to 15-20%, hints heavily that their setup was at fault.

Hey, they got 693 and 6723 in the super-repeatable CPU-Z test, when just a week ago we saw this, also at a fixed 5.2 GHz:

[screenshot: Intel Core i9-11900K CPU-Z run at 5.2 GHz]
 

uzzi38

Platinum Member
Oct 16, 2019
2,622
5,880
146
I wouldn't trust that "review" at all, especially given their "superb" ability at overclocking by setting things to auto and pumping 1.48V into the CPU.
That it's barely 10% faster in single-core workloads, when several other leaks are closer to 15-20%, hints heavily that their setup was at fault.

Hey, they got 693 and 6723 in the super-repeatable CPU-Z test, when just a week ago we saw this, also at a fixed 5.2 GHz:

[screenshot: Intel Core i9-11900K CPU-Z run at 5.2 GHz]

They said in the review that manually setting the voltage wouldn't work.
 
  • Like
Reactions: lightmanek