Question Alder Lake - Official Thread


Hitman928

Diamond Member
Apr 15, 2012
5,320
7,995
136
The quote below from that link is what makes me leery of Alder Lake (not to mention the temps):
"AMD Zen 3 is more efficient
At the upper end of the power range, it becomes clear that a Ryzen 9 5950X with 16 "large" Zen 3 cores is faster at 142 watts (maximum sustained power consumption at stock) than a Core i9-12900K at up to 241 watts, i.e. much more efficient. And even in the 65-watt region, things are getting tight for Alder Lake. Although Ryzen 5000 with Zen 3 cores consumes comparatively little energy even under full load, this architecture can also work much more efficiently if the clock is lowered.
In concrete terms, this means that a Ryzen 9 5950X in Eco mode with a maximum of 88 watts beats a Core i9-12900K at 88 watts by 8 percent in the editors' test course, and beats the 65-watt configuration highlighted by Intel by as much as 33 percent (with 35 percent higher consumption). A test of the Ryzen 9 5950X at 65 watts is still pending, but the two platforms shouldn't differ much at this level – AMD's classic approach with a single core type is at least the equal of Intel's hybrid approach here, and at the upper end of performance it is still clearly superior.
"

Where is this quote from?
 

nicalandia

Diamond Member
Jan 10, 2019
3,330
5,281
136
Last edited:

positivedoppler

Golden Member
Apr 30, 2012
1,103
171
106
"In concrete terms, this means that a Ryzen 9 5950X in Eco mode with a maximum of 88 watts beats a Core i9-12900K with 88 watts by 8 percent in the editors' course"
So 8% less efficient (if that's how you calculate that) for $200 less, I guess in some countries electricity is expensive enough for this to make your money back pretty fast but otherwise...

You have to factor in more expensive cooling, motherboard, and power supply. And if you can't supply ample power and cooling to the 12900K, then hardware longevity at a continuous 90C would bug me for the duration of its use. If you have the money to afford the optimal power, cooling, platform upgrade, and slight increase in the electric bill, then yeah, the 12900K is the CPU to get right now.
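
For a rough sense of the electric-bill side, here's a back-of-envelope sketch. The power delta comes from the figures quoted above; the usage hours and tariff are pure assumptions, so plug in your own:

```python
# Back-of-envelope: how long the 12900K's extra power draw takes to eat
# the ~$200 it saves up front. Every input here is an assumption.
price_gap_usd = 200            # 12900K street-price advantage (varies)
power_delta_w = 241 - 142      # stock full-load gap from the quoted figures
hours_per_day = 4.0            # assumed all-core load per day
kwh_price_usd = 0.35           # assumed tariff (high, e.g. parts of Europe)

extra_kwh_per_day = power_delta_w / 1000 * hours_per_day
extra_cost_per_day = extra_kwh_per_day * kwh_price_usd
days = price_gap_usd / extra_cost_per_day
print(f"Break-even after ~{days:.0f} days (~{days / 365:.1f} years)")
# -> ~1443 days (~4.0 years); lighter or idle-heavy use stretches it much further.
```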
 

Racan

Golden Member
Sep 22, 2012
1,109
1,986
136
Thanks. It looks like that's based only on the results of 3 render benchmarks though (CB20, CB15, and POV-RAY). So, while nice to see, not exactly a well rounded test suite.
Yeah, that's all I could find for now; I hope we see some comprehensive IPC tests soon.
 

Abwx

Lifer
Apr 2, 2011
10,956
3,474
136
"In concrete terms, this means that a Ryzen 9 5950X in Eco mode with a maximum of 88 watts beats a Core i9-12900K with 88 watts by 8 percent in the editors' course"
So 8% less efficient (if that's how you calculate that) for $200 less, I guess in some countries electricity is expensive enough for this to make your money back pretty fast but otherwise...

To bridge this perf gap, the 12900K has to be set at roughly 125 W; that says it better.
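
For what it's worth, the perf/W ratios implied purely by the numbers quoted above work out like this (nothing measured by me, just the quote's own figures):

```python
# Efficiency ratios implied by the quoted ComputerBase figures.

# Same 88 W budget, 5950X is 8% faster -> perf/W ratio equals perf ratio.
print(f"88 W vs 88 W: {1.08:.2f}x in favour of the 5950X")

# 5950X@88W vs 12900K@65W: 33% faster at 35% higher draw -> near parity.
print(f"88 W vs 65 W: {1.33 / (88 / 65):.2f}x")   # ~0.98x

# Iso-performance: if matching the 88 W 5950X takes the 12900K ~125 W,
# the efficiency gap at that operating point is simply the power ratio.
print(f"iso-perf:    {125 / 88:.2f}x")            # ~1.42x for the 5950X
```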

 

Abwx

Lifer
Apr 2, 2011
10,956
3,474
136
A single benchmark isn't really telling of IPC though. I also don't know which instructions CB15 supports. Does it use AVX(2) or just prior SSE sets?

Up to SSE 4.2, and also double-precision FP instead of SP.

Agree that ComputerBase's ST perf is exclusively FP; methinks for INT there's AT's review with SPECint, which could be more or less reliable.
 

Dayman1225

Golden Member
Aug 14, 2017
1,152
974
146
And still using a 2080 Ti for gaming tests. Absurd. I used to wait eagerly for the AT review of new CPUs, but now I pretty much ignore it.
Old dGPU, using Windows 10 for ADL (edit: this was wrong, they tested both), and 2 hours late for the embargo lift… oh, and they haven't done proper dGPU reviews in years(?). AnandTech's quality is certainly falling off, apart from their smartphone reviews.
 
Last edited:

insertcarehere

Senior member
Jan 17, 2013
639
607
136
Meh. What does "particularly valuable" mean?

To me it's worth ~$15 to $20, max; even now you can get a used Quadro with DVI and DisplayPort for that price on eBay, with its own memory to boot. I mean, I won't build with any F processors because you basically never save more than that, but... the number/percentage of folks buying an i9 and using the integrated graphics is what?

To me, on the high end it's Ryzens, and even if the i9 didn't have an iGPU it wouldn't change my mind about them at all.

All these comparisons to the 5600X and 5800X also seem to ignore that the 5600G and 5700G exist and are fine - most people would never know the difference CPU-wise, and you get a usable gaming GPU (by many standards) included instead of an integrated stand-in. When we're evaluating the relative value of the 12400 and the like, I feel that will be more pertinent.

¯\_(ツ)_/¯

Maybe it's worth more in your estimation, and we can both have our own opinions :)

Different strokes for different folks, I guess. I don't live in the US, and the local marketplace for anything usable below US$50 atm makes me wanna throw up. From my perspective, being able to use a PC build for some things while waiting out the insane GPU market, without spending money on an interim solution, does have some value to me.

5600G and 5700G are fine CPUs in their own right but since they are Cezanne, they have only 16MB L3 Cache available to the 8-core complex vs 32MB for the 5800X/5600X, which significantly affects IPC and therefore would noticeably hamstring performance, including gaming with dGPUs.
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
ADL has over double the L2 cache per core (1.25MB vs Zen 3's 0.5MB) and 30MB vs 32MB of LLC. I don't think cache capacity is really an advantage for Zen 3. Yes, DDR5 will help ADL, but that will be mostly for heavy MT scenarios or a couple of benchmarks that are more memory-bound than anything; that doesn't really tell us much about the IPC or architecture performance, though.

There is more to caches than size, though. Of those, the only real advantage Intel has is in L2: 1.25MB helps big time to reduce misses versus 512KB.
Everything else about Intel's L3 is mediocre at best, and so is their memory subsystem: ~50% higher latency and a fraction of AMD's L3 bandwidth. AMD's cache simply works better at hiding memory latency any way you slice it, even before considering things like prefetchers. And there is an elephant in the room in the form of the 2xCCD chips that are the real competition for the 12900K (since the lower Intel chips have less cache). So in things like MT SPEC there is extra help.

Faster DDR5 is already helping ADL big time. It doesn't take a rocket scientist to look at those DDR5-6000 C36 results and see the massive impact compared to other tests like AnandTech's. Different platforms aside, it's the same chip in the same tests, so the delta between Zen 3 and ADL gives away the impact of memory on "IPC".
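
If anyone wants to eyeball those cache cliffs themselves, here's a crude pointer-chase sketch. It's Python, so treat the absolute ns numbers as garbage (interpreter overhead dominates) and only look at the jump between working-set sizes; a C version would be the proper tool, and the cache sizes in the comments are just the ones discussed above:

```python
import random
import time

def single_cycle(n):
    """Random permutation that is one big cycle (Sattolo's algorithm),
    so the chase visits every slot before repeating and defeats prefetch."""
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)        # j < i forces a single cycle
        p[i], p[j] = p[j], p[i]
    return p

def ns_per_hop(n, hops=1_000_000):
    p = single_cycle(n)
    idx = 0
    t0 = time.perf_counter()
    for _ in range(hops):
        idx = p[idx]                   # each load depends on the previous one
    return (time.perf_counter() - t0) / hops * 1e9

# List slots are 8 bytes each (the int objects add more on top).
# Sweep straddles the L2 (~0.5-1.25 MB) and L3 (~30-32 MB) sizes above.
for n in (32_768, 262_144, 2_097_152, 8_388_608):
    print(f"{n * 8 / 2**20:6.2f} MiB of slots: {ns_per_hop(n):6.1f} ns/hop")
```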
 

Ajay

Lifer
Jan 8, 2001
15,468
7,870
136
Please excuse me if this has been posted before:


Seems DDR4 is the best way to go for now.
Haven't watched the video yet (I'm subscribed), but I love it when the media says company X must cut prices. Companies don't have to cut prices when their industry is supply-constrained; if they choose to, that's their business. Intel is producing more 11th gen CPUs right now than 12th gen (and they are cutting 11th gen prices).
 

ondma

Platinum Member
Mar 18, 2018
2,721
1,281
136
Overall, I am pleased with Alder Lake. Power use is high (not so bad in gaming), but not as horrible as the naysayers were predicting. For a totally new architecture, it seems to work well in most cases, including the fact that the better IPC actually translates to improved gaming (in contrast to RL).

Still not sure that 10 or 12 big cores would not be better for desktop if Intel had smaller, more power efficient cores, but within the limits of their architecture and process status, the big.little architecture seems to be a good compromise with surprisingly few glitches.
 

Ajay

Lifer
Jan 8, 2001
15,468
7,870
136
Old dGPU, using Windows 10 for ADL, and 2 hours late for the embargo lift… oh, and they haven't done proper dGPU reviews in years(?). AnandTech's quality is certainly falling off, apart from their smartphone reviews.
I don't read smartphone reviews, but Ian still does a great job within the structure of AT's test procedures. The Win10/11 point is pretty muddled: W11 advantages Intel at the expense of AMD, and vice versa. It seems as though AT doesn't have anyone to do full-on dGPU reviews anymore. Ryan (and Ian, IIRC) does architectural breakdowns - so for benchmarking I go elsewhere. I do wish they'd get back to it; maybe the $$s just aren't there for them in this very competitive area.