Discussion Intel current and future Lakes & Rapids thread


Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
Hm, the competition already has lower prices before Rocket Lake launch.:mask:


Meh, I still think the 5800X should be going for about $450 US. But, what the market will bear determines the price.

Edit: Duh, Microcenter has them @ $450 right now - a really good price IMHO.
 
Last edited:

CP5670

Diamond Member
Jun 24, 2004
5,508
586
126
Not sure how the 5800X compares, but my 10700K at 5ghz shows maybe 60-110W usage in games. It uses 200W only in Prime95. Games don't really use more than 3 or 4 cores at full load consistently. In contrast, the 3090 always goes right up to its 350W power limit at all times (and some of the AIB versions use 400-500W). It would be nice to have lower power usage in games, but the CPU is not the main contributor to it.
 
  • Like
Reactions: Zucker2k

dullard

Elite Member
May 21, 2001
24,998
3,326
126
Hm, the competition already has lower prices before Rocket Lake launch.:mask:


Mindfactory, the company promoted on AMD's website and known for better prices on AMD products than on Intel products, supposedly has an unreleased Intel 11700K in stock at a high price (far higher than the other Intel _700K chips), before the allowable pre-sale date! I'm not sure that tells us much.

Interesting that over five 11700K chips have sold. o_O


 
Last edited:

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
Not sure how the 5800X compares, but my 10700K at 5ghz shows maybe 60-110W usage in games. It uses 200W only in Prime95. Games don't really use more than 3 or 4 cores at full load consistently. In contrast, the 3090 always goes right up to its 350W power limit at all times (and some of the AIB versions use 400-500W). It would be nice to have lower power usage in games, but the CPU is not the main contributor to it.

Correct. Unless all of those internal structures are being utilized, generally by artificial benchmarks specifically designed to do just that, then the power draw during actual use is much lower.
 

cortexa99

Senior member
Jul 2, 2018
318
505
136
For power consumption, I think gaming, which loads fewer threads, still doesn't show the full picture, and the AVX-512/Prime95 stress tests sit at the other extreme.

Anand's review


During the AVX2 y-cruncher run we can see a few things. First, efficiency dropped quite a bit going from Coffee Lake to Comet Lake, and the most efficient parts on Intel's side were the Coffee Lake chips below the 9900K.
Comparing AMD and Intel, once you take core/thread count, clock differences, and die configuration into account, AMD has had a slight efficiency advantage since Zen 2.
But it leaves me a question: is Intel's higher consumption due not only to the process node but also to more complex FPUs, regardless of which generation of AVX (1/2/512) is in use?

Other sides that has similar comparison(cinebench)

gamersnexus
link
power-consumption-blender.jpg

legitreviews
link
power-consumption-10900k.png
 
  • Like
Reactions: KompuKare and Tlh97

naukkis

Senior member
Jun 5, 2002
702
571
136
But it leaves me a question: is Intel's higher consumption due not only to the process node but also to more complex FPUs, regardless of which generation of AVX (1/2/512) is in use?

Coffee Lake / Comet Lake have roughly the same FPU width as Zen 2, and Zen 3's FPU is actually much more complex. So no, the difference comes from the process node, not from a more complex FPU. Those CPUs don't support AVX-512 anyway.
 

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
Coffee Lake / Comet Lake have roughly the same FPU width as Zen 2, and Zen 3's FPU is actually much more complex. So no, the difference comes from the process node, not from a more complex FPU. Those CPUs don't support AVX-512 anyway.

I think there is some nuance to why Intel's parts use more power beyond the 14nm vs 7nm node difference. I think the rationale goes something like this.

1. Perhaps the larger structures in the 14nm process can handle more voltage/heat and are able to clock higher, albeit at the expense of power and heat of course.

2. It is more advantageous for Intel to advertise high clock speeds and the resulting performance they can obtain with those high clocks (namely gaming benches) than it is for them to advertise lower power consumption. Zen 3 forced Intel's hand so they had to crank up the clocks to compete.

3. Those last couple of hundred MHz come at the expense of greatly increased power consumption, as detailed in previous posts. It's bad enough when one or two cores are cranked up to 5GHz+, but the numbers go through the roof (i.e. 250W+) when 8 or 10 cores are boosted into the 5GHz range. AMD either can't or doesn't feel the need to push clocks that high, especially all-core.

Intel is willing to push their 14nm process into the zone of "diminishing returns" in order to stay competitive after falling behind in architecture (IPC). Rocket Lake will of course even things up a bit.

I think it's always better to have a little "left in the tank" when it comes to frequency. That way, if your competition releases higher-clocked parts or a new architecture, you can quickly respond with higher clocks of your own. Intel has been stalled on Skylake for 5 years, and nearly stalled on process as well. Their only response has been to continually refine the process to push the clocks.
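The clock/power tradeoff in point 3 can be sketched with the classic dynamic-power relation P ≈ C·V²·f. Since voltage has to rise with frequency near the top of the V/f curve, power grows much faster than the clock does. A rough illustration (every constant below is a made-up placeholder, not a measured Intel figure):

```python
def dynamic_power(freq_ghz, base_freq=4.3, base_volt=1.10,
                  volt_per_ghz=0.2, k=30.0):
    """Toy estimate of package power (W): P = k * V^2 * f, with
    voltage rising linearly with frequency above base_freq.
    All constants are illustrative placeholders."""
    volt = base_volt + volt_per_ghz * max(0.0, freq_ghz - base_freq)
    return k * volt ** 2 * freq_ghz

p47 = dynamic_power(4.7)  # hypothetical 4.7GHz all-core
p50 = dynamic_power(5.0)  # hypothetical 5.0GHz all-core
print(f"4.7 GHz: {p47:.0f} W, 5.0 GHz: {p50:.0f} W")
print(f"+{(5.0 / 4.7 - 1) * 100:.0f}% clock costs "
      f"+{(p50 / p47 - 1) * 100:.0f}% power")
```

Even with these invented constants, the shape matches the observation above: a ~6% clock bump costs well over twice that in power.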
 

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
The sweet spot in the pricing for me looks to be the 11700. Since the only difference down the stack from 11900K to 11900 to 11700K to 11700 is binning, I'd go with the 11700 and provide good cooling. If history repeats (Comet Lake history, that is), it'll only be a few hundred MHz off the 11900K.
 

Justinus

Diamond Member
Oct 10, 2005
3,167
1,509
136
The 11600KF for $279 seems like a killer price if true; not sure if any of the 8C chips are really a better value than their AMD counterparts though. Either way I can't wait for reviews to drop :D

My guess is any of the SKUs priced similarly to the Zen 3 chips will be on par or slightly faster. I'd think only the 11900 will clock high enough to have any meaningful speed increase over Zen 3, and they're clearly going to charge a lot for it no matter how small the increase is. The 11900K costs 33% more than a 5800X, and how much faster than a 5800X do you think it's going to be?
 

VirtualLarry

No Lifer
Aug 25, 2001
56,226
9,990
126
It'd be sort of annoying since it would be a BIOS setting. Are there any games which actually use AVX-512? The small cores might have some use.
What are the odds that OEM rigs with ADL WON'T have a toggle in the BIOS, and will effectively have AVX-512 opcodes disabled "permanently"? (After all, who needs AVX-512 in a Dell consumer box, right?)
 

dr1337

Senior member
May 25, 2020
310
510
106
how much faster than a 5800X do you think it's going to be?
Unless the leaked Geekbench numbers show Intel sandbagging (very unlikely at this point IMO), I think the best they could hope for is 20% with the 11900K. Maybe the 11700 will match the 5800X, but with the way it looks right now I'm not confident, given how much Intel is still leaning on clocks for performance. Or maybe the IPC improvements will really shine at lower clocks and the $359 11700F might be enough to match the $449 5800X in both single- and multi-core. As long as Intel actually does have solid IPC gains, they should be beating AMD in single-core perf across the stack. Just my speculation.
 
Last edited:

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
What are the odds that OEM rigs with ADL WON'T have a toggle in the BIOS, and will effectively have AVX-512 opcodes disabled "permanently"? (After all, who needs AVX-512 in a Dell consumer box, right?)

AVX-512, the small cores, and HT all carry a performance penalty when they are not used.
Small cores and HT are the obvious cases: if your "ST" task is scheduled wrongly (on a small core when it needs a big one, or on an HT sibling when it needs a full core), performance suffers directly, and then suffers again if it gets rescheduled to the proper core, which now has to warm up by raising clocks and taking cache misses.
AVX-512 is less obvious: even if your software has no support for the instruction set, you pay a tax on every context switch. If your core was working on task A and has dirty FP state, bad luck for you: instead of saving 16x256-bit AVX2 registers, you now need to save 32x512-bit registers plus the mask registers to the thread state on each context switch. That is roughly 4x more data to save, and it eats into caches.
And context switches do happen, on rescheduling, user/kernel transitions, and some other cases. They don't happen much when you have 1-2 threads per core busy rendering or encoding, so things look great in benchmarks that support AVX-512 and are relevant to barely anyone.

So not having AVX-512 is perfectly fine if your software is not optimized for it (it's easier to list the things that do support it, like rendering and encoding, than everything that doesn't, like games and desktop computing).
Smart Alder Lake owners will just disable the small-core clusters and AVX-512 and stay with 8 proper cores, while leaving their rendering and e-p competitions to next-gen Threadrippers. I certainly won't cry a single tear about missing AVX-512 on desktop; I've got proper systems to take care of me.
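The "4x more data" claim above checks out with simple register-state arithmetic (a sketch only; real XSAVE areas add header and alignment overhead that this ignores):

```python
# Architectural vector state that must be saved per context switch.
avx2_bytes = 16 * 256 // 8     # 16 YMM registers x 256 bits = 512 B
zmm_bytes = 32 * 512 // 8      # 32 ZMM registers x 512 bits = 2048 B
mask_bytes = 8 * 64 // 8       # 8 k-mask registers x 64 bits = 64 B

avx512_bytes = zmm_bytes + mask_bytes
print(f"AVX2 state: {avx2_bytes} B")
print(f"AVX-512 state: {avx512_bytes} B, "
      f"ratio {avx512_bytes / avx2_bytes:.2f}x")
```

So a thread with dirty AVX-512 state drags roughly four times the vector data through the caches on every switch, which is the tax described above.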
 

RTX

Member
Nov 5, 2020
90
40
61
Yes it does; a larger cache means fewer trips to memory. As always, the performance increase varies with the workload. If the program is particularly memory-sensitive the IPC may increase by over 10%, whereas code that contains fewer memory operations will see a much smaller increase (if any at all).

On average, from looking at various benchmarks and calculating comparisons, I'd estimate that doubling the size of a cache (regardless of whether it's L1, L2, or L3) normally returns a 4-5% increase in IPC.
How would the 8893 v4 perform vs the 5775C, with 60MB of L3 vs 128MB of L4? Both are monolithic quad-cores.
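The 4-5%-per-doubling estimate above compounds across multiple doublings. A quick sketch, taking ~4.5% per doubling as the rule of thumb (the poster's estimate, not a measured constant):

```python
import math

def ipc_gain_from_cache(old_size, new_size, gain_per_doubling=0.045):
    """Compound a fixed per-doubling IPC gain across
    log2(new/old) cache doublings. Illustrative only."""
    doublings = math.log2(new_size / old_size)
    return (1 + gain_per_doubling) ** doublings - 1

# Quadrupling a cache (two doublings) at 4.5% per doubling:
gain = ipc_gain_from_cache(16, 64)
print(f"{gain * 100:.1f}% estimated IPC gain")  # about 9%
```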
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,686
136
How would the 8893 v4 perform vs the 5775C, with 60MB of L3 vs 128MB of L4? Both are monolithic quad-cores.
The L3 is likely twice as fast as the L4, so in this (purely academic) case the 8893 v4 would be faster.
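The latency intuition above can be framed as average memory access time (AMAT = hit time + miss rate x miss penalty). All numbers here are hypothetical placeholders, not measured 8893 v4 or 5775C figures:

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time: every access pays the hit time,
    and the missing fraction also pays the penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Hypothetical: a large on-die L3 at ~10 ns vs an eDRAM L4 at ~20 ns,
# both in front of ~80 ns DRAM; assume the bigger L4 misses a bit less.
l3_case = amat(10, 0.10, 80)   # 60 MB L3
l4_case = amat(20, 0.08, 80)   # 128 MB L4
print(f"L3 case: {l3_case:.1f} ns, L4 case: {l4_case:.1f} ns")
```

With these made-up numbers the halved latency outweighs the slightly better hit rate, matching the post's conclusion that the big L3 wins.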
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
I wonder if Intel is pushing Rocket Lake out because Alder Lake is turning out better than expected (in perf and yields). Fun year ahead of us; hopefully overall availability will improve for both CPUs and GPUs.
 

Hulk

Diamond Member
Oct 9, 1999
4,191
1,975
136
Assuming these prices are correct this is one of the best examples of binning currently available.

11900K - $600 - 8/16 core, 16MB L3 - 10900K 5.3/2c, 4.9/10c
11900 - $510 - 8/16 core, 16MB L3 - 10900 5.2/2c, 4.6/10c
11700K - $485 - 8/16 core, 16MB L3 - 10700K 5.1/2c, 4.7/8c
11700 - $390 - 8/16 core, 16MB L3 - 10700 4.8/2c, 4.6/8c

Outside of yields and turbo modes, the sand is organized exactly the same in all of these. Well, I guess the sand is just a little better organized as you go up the stack ;) Let's assume the turbo modes are going to be the same for the 11th generation, except for #1 below.

My questions/observations:

1. Since for the 11th generation both the 11900 and the 11700 will have 8 cores, there is nothing to distinguish them except clocks. It looks like Intel will have to increase the all-core clock of the 11900 to 4.8GHz or it will probably perform worse than the 11700K.

2. Moving up the stack you are paying big percentage increases in price for tiny increases in performance, like 2-3% more performance for 15% more money. Dollars per all-core GHz goes something like this down the stack: $122.44, $106.25, $103.19, $84.78 (assuming #1 is correct). Pretty easy to see the price/performance champ here.

3. With Intel squeezing every last MHz out of their parts "automatically," given adequate cooling and power delivery, what is the actual value of the K parts? How much headroom is there from the 11900 to the 11900K? Tests seem to show 100MHz and better power consumption numbers. Basically, "overclocking" guidelines are built into the parts these days and only need adequate cooling to be attained.

4. It will be interesting to see how close 11700Ks can clock to 11900Ks, and I wonder, as time progresses and yields improve, whether the gap will close or whether both will simply perform better than early samples.
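The dollars-per-all-core-GHz figures in point 2 can be reproduced from the table above (prices as listed, Comet Lake all-core clocks as stand-ins, with the 11900 assumed bumped to 4.8GHz per point 1):

```python
# (price USD, assumed all-core GHz) per SKU, from the post above.
skus = {
    "11900K": (600, 4.9),
    "11900":  (510, 4.8),
    "11700K": (485, 4.7),
    "11700":  (390, 4.6),
}
for name, (price, ghz) in skus.items():
    print(f"{name}: ${price / ghz:.2f} per all-core GHz")
```

Minor rounding aside, this matches the $122.44 / $106.25 / $103.19 / $84.78 ladder, with the 11700 the clear price/performance winner.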
 
Last edited: