Question Alder Lake - Official Thread

Arkaign

Lifer
Oct 27, 2006
20,736
1,376
126
Lol. "Arbitrary Standard" should be the official trademark of Apple.

And forget 2020 Apple performance, given that the newest M1 derivatives still lag behind these releases in peak ST performance.

I think the leapfrog wargames between Intel and AMD will make the M family a bit of an afterthought at best: good enough for Apple loyalists, very good for Apple profit margins, but continually irrelevant for people needing either good value or high-performance options. Alder Refresh, Zen4, etc. seem primed to ignite significant carnage.

Edit: Oh no, I've angered an Apple fan 😂😂 The downvote, so devastating 🤣🤣

Hilarious, and revealing at the same time.
 
Last edited:

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
23,087
11,812
136
This part of your post is totally wrong: "Throughout the advent of the Zen architecture and subsequent iterations those chips always consume more power in gaming, even when churning out inferior fps."

In the first place, Zen 3 wins in almost all cases in games. And as for the rest, I don't remember them being power hogs, so I don't know where you get that. Looking at the 3900x, all the systems are within a few watts of each other. Not that I believe that, but it's your own slide!

And why are we talking about Zen power consumption in an Alder Lake thread? No reviews to compare to yet....
Oh, and not to derail this thread more than you already have, but those graphs do NOT take into account that some systems could be getting more FPS, and the extra power draw could be the video card working harder for the better CPU. So ANYTHING about CPU power draw while gaming can NOT be concluded from those graphs.

Now let's get back to talking about Alder Lake until the reviews come out tomorrow; then we can discuss anything in the reviews, including power draw of both camps' CPUs.
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
23,087
11,812
136
One could argue that the problem isn't Intel's technology. The actual problem is their insistence on being on top and their refusal to accept defeat. This is what forces them to create power-hogging products. If they simply accepted that they have a good product that is good at certain things, they could market it as such and keep power levels in check, because then they wouldn't have to prove anything to anyone. People who see value in their product will still buy it.

Case in point: their ARC GPU. They know it can't compete on performance with the established players so they are being creative in their marketing and showing users what they would gain from their product. This same approach on the CPU side would benefit them too, if they can somehow overcome their massive ego.

You don't see ARM running ads showing how their CPU designs are better than Intel or AMD CPUs. They created their own niche and became successful as a result.
I reiterate what you wrote: "never again will they be in the windshield".

This kind of stupidity in a corporate environment is what makes companies go bankrupt. Not saying it will happen anytime soon, but they need to face reality, be honest with their clients, and try harder. A little modesty might actually work. Well, that and a little better hardware.
 
  • Like
Reactions: Mopetar

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
23,087
11,812
136



Your problem is that you think desktop is about WCG, PrimeGrid, Folding@home, etc. A 12600K is more suited to the average desktop user because of its superior few-thread performance over the 5950X in the majority of desktop usage scenarios. The only scenario where a 5950X makes sense is where you need that many cores. How many desktop PC users do rendering and encoding all day? Yet people game all day, and here even the 12600K is the better gaming processor. You claim the 12900K is "useless" because the 5800X3D is about to dethrone it. So you've drawn your conclusion even though no one can buy a 5800X3D yet. Meanwhile, other users have been enjoying the 12900K and other excellent ADL chips in gaming and productivity for months now. Yet you think you're not biased?
For as far back as 6+ months now, you've been recommending potential buyers wait for the 5xxxX3D release because it was supposedly coming soon, at the end of the year, and yet what happened? We're in the 2nd quarter of '22 and even the watered-down version of Zen3D, the 5800X3D, is nowhere in sight. You need to tone down your bias a bit and adopt a wait-and-see approach like you do with Intel releases. Oh, and be a bit objective too. You seem to be on a crusade to bash everything Intel on sight. You're the reason this thread is being derailed, because you can't keep your bias to yourself. Keeping it to yourself is the only decent thing to do at this point.

PS: What's the position of the 5950x on that list? What do you think accounts for that?
First, I mentioned nothing about DC. Second, I never said anything about the 12600k, but at the moment, it is good for gaming and average desktop use. Quit twisting my words.

Next, in your 3 reviews, the 5900x is there in all 3 and the 5950x is there once or twice. And one of them mentions that you will need a beefy CPU cooler for the 12900k. I have said all of this.

And last, I am not the one derailing; I was talking about the 12900KS being hot, power hungry, expensive, and ridiculous. I did mention (as others did) that it was brought out to try and deal with the April 20th release of the 5800X3D. You can't buy either one right now, so it's all speculation.

Edit: the "Best value productivity" CPU was the 5950x in the techspot review.
 
  • Like
Reactions: Drazick

Carfax83

Diamond Member
Nov 1, 2010
6,189
978
126
I disagree; the 12900K is only 10% faster than the 5900X in productivity, and in gaming the 12600K is hot on its heels for half the price. If you're mainly gaming, it makes no sense to go for the 12900K; if you're into productivity, the 5900X is better value.
I don't disagree that the 5900X is a better value than the 12900K for productivity. I was speaking purely from a performance perspective. The 12900K isn't far behind the 5950X despite having significantly fewer threads and performance cores.

Those efficiency cores undoubtedly help, but they are still apparently fairly weak for heavy workloads, and it's obvious that the performance cores have to be clocked very high to catch up with the 5950X, causing power consumption to skyrocket.

Future efficiency cores with higher IPC, greater counts, and wider vectors should remedy that weakness and lead to lower power consumption overall.
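To put rough numbers on why "clocked very high" means "power skyrockets": dynamic power scales roughly as C·V²·f, and voltage has to rise with frequency near the top of the V/f curve, so the last few hundred MHz are disproportionately expensive. A minimal sketch of that relationship; every constant here is an illustrative assumption, not a measured Alder Lake value:

```python
# Rough model of dynamic CPU power: P ~ C * V^2 * f, where voltage V
# must rise with frequency f near the top of the V/f curve. All the
# constants below are illustrative assumptions, not measured values.

def dynamic_power(freq_ghz, base_freq=3.0, base_volt=0.9, volt_per_ghz=0.1, c=10.0):
    """Relative dynamic power at a given core clock (arbitrary units)."""
    volts = base_volt + volt_per_ghz * (freq_ghz - base_freq)
    return c * volts ** 2 * freq_ghz

baseline = dynamic_power(3.0)
for f in (3.0, 4.0, 4.7, 5.2):
    print(f"{f:.1f} GHz -> {dynamic_power(f) / baseline:.2f}x the power of 3.0 GHz")
```

Even with these gentle assumptions, going from 3.0 to 5.2 GHz (about 73% more clock) costs roughly 2.7x the power, which is the shape of the problem the E-cores are meant to sidestep.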
 
Last edited:

Zucker2k

Golden Member
Feb 15, 2006
1,732
1,047
136
Some interesting slides from TPU:

[Attached: TPU benchmark slides]

Not everything ran perfectly, though. In several of our tests, the workload got scheduled onto the wrong cores. We did use Windows 11 for all our testing, which has proper support for the big.LITTLE architecture of Alder Lake and includes the AMD L3 cache fix, too. Intel allocated extra silicon estate for "Thread Director," an AI-powered network in the CPU that's optimized to tell the OS where to place threads. However, several of our tests still showed very low performance. While wPrime as an old synthetic benchmark might not be a big deal, I'm puzzled by the highly popular MySQL database server not getting placed into the P cores. Maybe the logic is "but it's a server background process"? In that case, that logic is flawed. If a process is bottlenecked by around half (!) and it's the only process on the machine using a vast majority of processor resources, doesn't it deserve to go onto the high-performance cores instead? I would say so. Higher performance would not only achieve higher throughput, and faster answers to user requests, but it would also reduce power consumption because queries would be completed much faster. Other reviewers I've talked to have seen similar (few) placement issues with other software, so it seems Intel and Microsoft still have work to do. On the other hand, for gaming, Thread Director works pretty much perfectly. We didn't have time to test Alder Lake on Windows 10 yet—that article is coming next week.
Link

Edit: So scheduling still needs some ironing out, but even with its high power consumption and temps, the 12900K is competitive against the 5950X in highly parallel code and cruises away in everything else.
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
23,087
11,812
136
Finding large primes isn't exactly something that 99.99% of people would consider meaningful. Source: I'm a math student.
Well, you don't know me very well.... That's the exception. I don't care about primes. It spends 24/7/365 at 100% load trying to cure cancer. That was an example, since people know prime stresses a CPU. From the multiple replies, it's obvious nobody believes me that it's a hot CPU that would throttle down quickly and stay there under sustained load at 241 watts, where its optimal performance is supposed to be.
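For anyone who would rather measure the throttling question than argue it, here is a minimal monitoring sketch. It assumes Linux, the third-party psutil package, and a "coretemp" sensor label (an Intel-specific assumption; names vary by platform). Run your sustained load, a DC client or prime stress test, in another window and watch whether clocks sag as the package heats up:

```python
# Log average CPU clock and package temperature once a second while a
# sustained load runs separately. Assumes Linux + psutil; the
# "coretemp" sensor name is an Intel-specific assumption.
import time
import psutil

for _ in range(60):  # sample for one minute
    freq = psutil.cpu_freq()  # MHz, averaged across cores
    temps = psutil.sensors_temperatures().get("coretemp", [])
    pkg = next((t.current for t in temps if "Package" in (t.label or "")), None)
    print(f"{freq.current:7.0f} MHz   package: {pkg} °C")
    time.sleep(1)
```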

So you all believe what you want, I am done here with morons.
 

guidryp

Platinum Member
Apr 3, 2006
2,208
2,483
136
Does this have to devolve into "No my favorite CPU beats yours in this benchmark..." (usually by some irrelevant amount)?

This is the Alder Lake thread, and it's a large jump that puts Intel back in the game. Zen 3 was also a great jump for AMD, and they have had a great run with it. Apple's M1 SoCs have really shaken things up, both for the very good CPU and for all the integrated co-processing.

All really great products that can be appreciated without having bun-fights over minutiae.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,376
126
I think early sales figures could be fairly deceptive. Looking at Micro Center on Thursday (I built a 10400 system for a client who just needed a workstation for an auto dealership; at $179 it was by far the best price/perf, combined with a $119 Asus Prime B560), things are not looking healthy for CPU sales in general at the moment.

They had a fair amount of 12th gen in stock, and mountains of Zen3 accumulating. Motherboards on AM4 were almost comically overstacked. I asked about what was selling and the manager said there's been a fair amount of excitement with Alder but that overall things have been pretty slow outside of the campers for periodic GPU batches. It feels like the pandemic stimulus checks contributed to a solid buying wave in 2020 to early 2021, but it's been declining steadily over time now.

The answer isn't hard to find. The only GPU for less than $1000 in stock was a $199 2GB Quadro P620, basically a card just to give simple video output. With no GPUs in sight, what you're left with are the extreme edge cases of buyers with a couple grand+ for a total new build with scalper level GPUs, and the handful of prosumer guys needing newer faster stuff for their work. The average consumer is boxed out of the market entirely.

It ends up looking pretty dire going forward. I've seen all the reviews, looked at all the great stuff. The W11 nonsense, overpriced boards, first-wave DDR5 teething pains: all this shall pass. A $200 12400 will be a return to outstanding value in that segment. But in the end, for what? For whom? It's mostly, tier for tier, a bit better than Zen 3, but that's been out for ages, and anyone with money who needed a current-gen platform probably already bought during the stimulus waves. A slightly better product to pair with no GPU at all (at any reasonable price, at least) makes no sense to most people.

I don't really see much light in this tunnel, my friends, and if all this product starts backing up in distribution and retail, it doesn't make Raptor, Zen3D, and Zen4 look all that hopeful in turn. Increasingly great products for a vanishingly small, shrinking market segment.
 

Makaveli

Diamond Member
Feb 8, 2002
4,515
786
126
I don't have a link handy, but I saw one review site do exactly that: they tested Cinebench R20 (I think) at a fixed frequency to compare relative IPC. With Alder Lake P cores as the 100% reference, Zen 3 cores were at 99%, so in Cinebench the IPC is effectively identical. The performance gains seem to come largely from the higher frequencies.
This is it here.
[Attached: fixed-frequency IPC comparison chart]
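For anyone who wants to reproduce that kind of comparison from raw numbers: relative IPC is score per GHz, normalized to a reference, and at iso-frequency the clock term cancels, so the score ratio is the IPC ratio. A tiny sketch with placeholder scores (not the review's actual data):

```python
# Relative IPC = (score / clock), normalized to a reference part.
# At iso-frequency the clock term cancels, so the score ratio IS the
# IPC ratio. The scores below are placeholders, not real results.

def relative_ipc(score, freq_ghz, ref_score, ref_freq_ghz):
    return (score / freq_ghz) / (ref_score / ref_freq_ghz)

REF_SCORE, REF_FREQ = 1000, 3.6  # Alder Lake P-core taken as the 100% reference
zen3 = relative_ipc(990, 3.6, REF_SCORE, REF_FREQ)
print(f"Zen 3 core: {zen3:.0%} of reference IPC")  # -> 99%
```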

 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
23,087
11,812
136
There are 8 charts there, and the 5950x only came out ahead in two instances, by a small margin. Yet somehow you couldn't figure out how to declare a clear winner even in this clearly open-and-shut instance. This is why you get accused of bias. If the numbers fit your bias they're good; if they don't, they're bad.
Then there's also this:
AGAIN, you and that article's text are comparing the 5900x, not the 5950x, so I won't bother replying anymore, SINCE YOU CAN'T READ.
 

TheELF

Diamond Member
Dec 22, 2012
3,623
528
126
AMD could in fact "simulate" the Big/Little concept with Zen 3. Take the 5950X. If the Thread Director can be used by AMD to "figure out" that an app is really only using 6 cores then clock those 6 cores up as high as possible while clocking down the rest of the cores, performance would increase and power might not go through the roof.
That is very basic turbo functionality and has been used for years and years now by both Intel and AMD.
Ryzen Master even tells you which cores are "the best": the ones reaching the highest clocks with the least Vcore.
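On Linux, a similar per-core quality ranking is exposed through ACPI CPPC in sysfs, which is roughly the information Ryzen Master surfaces on Windows. A minimal sketch, assuming a kernel and platform that actually populate these files (they are simply absent otherwise):

```python
# Rank logical CPUs by their CPPC "highest_perf" rating, the per-core
# quality ranking that tools like Ryzen Master present as the "best"
# cores. Assumes Linux with ACPI CPPC exposed in sysfs.
from pathlib import Path

ratings = []
for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    perf_file = cpu / "acpi_cppc" / "highest_perf"
    if perf_file.exists():
        ratings.append((int(perf_file.read_text()), cpu.name))

for perf, name in sorted(ratings, reverse=True):
    print(f"{name}: highest_perf={perf}")
```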
 

igor_kavinski

Diamond Member
Jul 27, 2020
3,759
2,222
106
It's fair. I haven't had an Intel rig of my own since 1997.
I only owned a Sempron once, overclocked from 2GHz to 3GHz. Sadly, I didn't get to spend as much time enjoying it as I could have; I had to go out into the cruel world and make a living. My brothers used it for a year, until the overclock messed up the mobo's USB ports and the whole thing got unstable. They upgraded to some Intel system. I've bought AMD laptops for my friends and they are happy with them. But generally, I've noticed people being hesitant towards AMD because they don't want second best. They want THE best. Convincing people didn't prove fruitful even when AMD was the absolute best with the 5000 series. My IT guy, for example, refused to get AMD laptops for our company, even when they offered a better price/performance ratio. He didn't trust them to be as good as Intel.
 

Hulk

Diamond Member
Oct 9, 1999
3,379
877
136
Have you tried changing the power plan to Best Performance?
Yes, I did. No difference. Power plans really only change how fast various components ramp up, not how CPUs are utilized. The easiest way to adjust things is with Process Lasso. It's pretty easy actually, and I can tailor it to my workflow. For example, I have DxO PureRaw set to use 6 P's regardless. So when I'm editing in Photoshop I have 2 P's and 4 E's, which is plenty. Meanwhile DxO PureRaw is processing the RAW images with the 6 P's.

Honestly, the 4 E's processing RAW files in DxO PureRaw can almost keep up with me as I edit in PS; 4 P's would probably be enough. By "keeping up" I mean the time for DxO PureRaw to process a RAW is about the same as for me to edit one in PS. 8 E's would probably have done it. I shouldn't have cheaped out; I should have gone for the 12900K!
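What Process Lasso does here is ordinary CPU-affinity management, which can also be scripted. A minimal sketch using the third-party psutil package (works on Windows and Linux); the assumption that logical CPUs 0-11 are the hyperthreaded P-cores matches a 12600K's usual enumeration, but check your own topology before using real numbers:

```python
# Pin a heavy batch job (here: anything with "DxO" in its name) to the
# P-core threads, leaving the remaining CPUs free for interactive apps.
# Assumes psutil, and assumes logical CPUs 0-11 are the 6 P-cores with
# HT (typical 12600K layout) -- verify on your own machine.
import psutil

P_CORE_CPUS = list(range(12))  # assumption: 6 P-cores x 2 threads = CPUs 0-11

def pin_by_name(fragment, cpus):
    for proc in psutil.process_iter(["name"]):
        if fragment.lower() in (proc.info["name"] or "").lower():
            proc.cpu_affinity(cpus)  # restrict to these logical CPUs
            print(f"pinned {proc.info['name']} (pid {proc.pid}) to {cpus}")

pin_by_name("DxO", P_CORE_CPUS)
```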
 
  • Like
Reactions: lightmanek

uzzi38

Platinum Member
Oct 16, 2019
2,235
4,591
116

An i3-12100 review. It manages to be faster than the 3600 in the games they tested.
I don't particularly doubt the results but:

All benchmarks were recorded in 1080p at medium-to-high graphical settings. Credit goes to Testing Games. Ryzen 5 3600 benchmarks are included for comparison (at identical settings and with an Asus ROG X570 Crosshair VIII motherboard).
I'm going to wait until pretty much anyone else does the testing.
 

AtenRa

Lifer
Feb 2, 2009
13,784
2,956
136
It is either performance divided by power (perf unit/W) or work divided by energy (work unit/Wh); perf/Wh makes no sense with respect to power efficiency.
You are correct, but since we don't have moving parts in computers, performance is the job/work done in a unit of time: for example, how much time it takes to finish a render, or how many jobs the computer can finish per unit of time, such as how many different transactions it can do per hour.
Both of the above are the performance/work done by the computer.

So perf/Wh is perfectly fine.


For example, if the Ryzen 5600X takes 10 hours to finish a render in Blender using 60Wh, and the Ryzen 5950X takes 4 hours to finish the same render (job/work) using 120Wh, then the Ryzen 5950X is more efficient because it finished the same job/work in less than half the time using double the energy.

Or let's take the example of transactions per unit of time: the Ryzen 5600X can do 1000 transactions per hour using 60Wh, while the 5950X can do 2500 transactions per hour using 120Wh. Again, the Ryzen 5950X is more efficient because it can do more than double the work using only double the energy.
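Taking the transaction numbers above at face value, here is the work-per-energy arithmetic spelled out:

```python
# Efficiency as work done per unit of energy, using the transaction
# figures quoted above (illustrative numbers from the post).
cpus = {
    "5600X": {"transactions": 1000, "energy_wh": 60},
    "5950X": {"transactions": 2500, "energy_wh": 120},
}

for name, d in cpus.items():
    print(f"{name}: {d['transactions'] / d['energy_wh']:.1f} transactions/Wh")
# 5600X: 16.7/Wh, 5950X: 20.8/Wh -> more than double the work for
# double the energy, so the 5950X comes out ahead on this metric.
```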
 

TheELF

Diamond Member
Dec 22, 2012
3,623
528
126
Because they just dumped billions (probably $10 billion or more) on N3 capacity. Plus an unknown amount of cash on N6, N5, and possibly N4. And they're building out fabs in AZ and OH. They're begging the Biden administration for more money so they can afford to pay off TSMC and still carry out node research and build fab capacity. Alder Lake's successes won't make up for all of that.
They have close to 4 years of doubled net income to make up for and pay for that.
Alder will not sell any better or worse than previous gens; all their fabs are running at capacity anyway, which is why they are expanding and buying supply from outside in the meantime, so they can produce more in the future and make even more money.
 

Carfax83

Diamond Member
Nov 1, 2010
6,189
978
126
Okay, but the 5600X also loses to the 5800X, which at least as of 2020 was kind of abnormal in a lot of game benchmarks (in some game benches at Vermeer's release, the 5600X was the fastest of the lot). Look at the minimums; something's going on there when moving to the 5800X. And we still don't know what clockspeed the 5600X is running either, so . . . what conclusions can we draw?
It's not just the 5800X and the 5600X that show unexpected abnormalities; the 10600K is also a bit faster than the 10700K despite having fewer cores, less cache, and lower frequency. That said, I did some digging, and in a Reddit AMA thread for the in-house Schmetterling engine used in the game, the developer claims the engine can scale to 64 threads provided there's no GPU bottleneck.

Source

The only reviewer that mentioned CPU scaling was DSOGaming, and it appears the game might use up to 8 threads but doesn't like hyperthreading beyond a certain point. AMD's inter-core latency for the 5600X and 5800X should be lower than that of Intel's Comet Lake CPUs due to a much larger L3 cache, which may explain why the 5800X gains slightly compared to the 5600X while the opposite is true for the 10700K and 10600K, which have much less L3 cache per core than both Zen 3 and Golden Cove.

That's the only explanation I can think of for the results. It may also explain why Alder Lake scales much better than the other CPUs, given the varying amounts of L3 cache between them.
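There's no way to probe a game engine's scheduler from the outside, but the general shape of the developer's claim, throughput scaling with worker count up to some knee, is easy to demonstrate. A minimal sketch with a stand-in CPU-bound task; it uses processes rather than threads so Python's GIL doesn't mask the scaling:

```python
# Probe how a CPU-bound workload scales with worker count. busy() is a
# stand-in for real engine work; processes are used so the GIL doesn't
# serialize everything.
import time
from concurrent.futures import ProcessPoolExecutor

def busy(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [2_000_000] * 32
    for workers in (1, 2, 4, 8, 16):
        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=workers) as pool:
            list(pool.map(busy, jobs))
        print(f"{workers:2d} workers: {time.perf_counter() - start:.2f}s")
```

On real hardware the wall time stops improving once workers exceed the cores the workload can actually use, which is the knee reviewers look for.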

Also, this game loves fast memory and benefits greatly from DDR5, which shows that it accesses main memory frequently; hence it may also love large L3 caches:

 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
23,087
11,812
136
DC applications generally don't like more cores? That's certainly news in this forum. And the 12700KF does more than excel at gaming. It excels in a lot more areas than its Zen 3 competitors at $300, so let's sweep that under the carpet and talk about 300W AVX-512 consumption, even though that feature can be disabled in the BIOS. Moreover, if Zen 3 is better than ADL in DC applications, stick to Zen 3 and spare us the constant bashing, especially if you're not going to be a fair referee in your tests or do apples-to-apples comparisons @Markfw
First, I believe the gaming tests done by websites; they are the experts. As for other desktop use, again, listen to website testing. As for DC, I am the ONLY one I know of that has done any testing, and I am somewhat of an expert in that field; since I run a DC setup, I give my feedback. As for AVX-512: when you disable the E-cores it turns on, and 300 watts is what it takes for 8 cores. That's fact, not conjecture (measured from the wall). As I said, in one case it is faster per core, but not with E-cores enabled. The case igor showed was with them disabled, and it lost. Everybody knows that E-cores are not as fast as P-cores. For DC work, because of this, Zen 3, specifically the 5950X, is better.

Why is it that facts bother you so much? Alder Lake has no flaws or weak areas in your opinion? Do you have one?

And lastly, as I said above, it DOES win in PrimeGrid 321 on a per-core basis. But at $313 vs $580 (current pricing) you have to buy 2 of them to beat a 5950X. More expensive, plus components, plus an OS license (if you don't run Linux). Right now it's losing even on a core-to-core basis, and on average the 5950X is faster (2:10 for the 5950X vs 2:22 for the 12700F).

This must be taken with a grain of salt; the only way to truly know the results is at 100% completion, but for your enjoyment, here's how it stacks up against some others. Note the highlighted line vs the green line. Note the 64-core EPYC: an 8:42 completion estimate per task! But it does 8 at a time. When running all 32 threads, and on Windows, the 5950X is 3:44! 2:43 for a 3950X.
[Attached: PrimeGrid task completion screenshot]
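The cost math in the post, spelled out; prices and per-task times are as quoted above (interpreting the quoted times as hours:minutes), with the second machine's board, RAM, and OS deliberately excluded:

```python
# Per-task times (h:mm, lower is better) and street prices as quoted.
def minutes(hmm):  # "2:10" -> 130
    h, m = hmm.split(":")
    return int(h) * 60 + int(m)

r5950x = {"price": 580, "task_min": minutes("2:10")}
i12700f = {"price": 313, "task_min": minutes("2:22")}

ratio = i12700f["task_min"] / r5950x["task_min"]
print(f"5950X is {ratio:.2f}x faster per task")  # ~1.09x per core
print(f"two 12700Fs: ${2 * i12700f['price']} vs ${r5950x['price']} for one 5950X")
# ...and that's before counting the second board, RAM, and OS license.
```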
 
Last edited:
  • Like
Reactions: Drazick
