With the release of Alder Lake less than a week away, and the "Lakes" thread having turned into a nightmare to navigate, I thought it might be a good time to start a discussion thread solely for Alder Lake.
You are just grasping at straws here, dude, and it looks really bad on you.

10% is significant on its own. In addition, it produces fewer fps compared to Intel competitors.
Oh, and not to derail this thread more than you already have, but those graphs do NOT take into account that some systems could be getting more FPS, and the power draw could be the video card working harder for the better CPU. So you can NOT conclude ANYTHING about CPU power draw while gaming from those graphs. This part of your post is totally wrong: "Throughout the advent of the Zen architecture and subsequent iterations, those chips always consume more power in gaming, even when churning out inferior fps."
In the first place, Zen 3 wins in almost all cases in games. And as for the rest, I don't remember them being power hogs, so I don't know where you get that. Looking at the 3900x, all the systems are within a few watts of each other. Not that I believe that, but it's your own slide!
And why are we talking about Zen power consumption in an Alder Lake thread? There are no reviews to compare to yet...
I reiterate what you wrote: One could argue that the problem isn't Intel's technology. The actual problem is their insistence on being on top and their refusal and self-denial in accepting defeat. This is what is forcing them to create power-hogging products. If they simply accept that they have a good product and that their product is good at doing certain stuff, then they can simply market it as such and keep power levels in check, because then they don't have to prove anything to anyone. People who see value in their product will still buy it.
Case in point: their ARC GPU. They know it can't compete on performance with the established players so they are being creative in their marketing and showing users what they would gain from their product. This same approach on the CPU side would benefit them too, if they can somehow overcome their massive ego.
You don't see ARM running ads showing how their CPU designs are better than Intel or AMD CPUs. They created their own niche and became successful as a result.
First, I mentioned nothing about DC. Second, I never said anything about the 12600k, but at the moment, it is good for gaming and average desktop use. Quit twisting my words.
Best processors 2022: the best CPUs for your PC from Intel and AMD (www.techradar.com)
The Best CPUs: Productivity and Gaming (www.techspot.com)
The Best CPUs for 2023 (www.pcmag.com)
Your problem is that you think desktop use is about WCG, PrimeGrid, Folding@home, etc. A 12600k is more suited to the average desktop user because of its superior few-thread performance over the 5950x in the majority of desktop usage scenarios. The only scenario where a 5950x makes sense is when you need that many cores. How many desktop PC users do rendering and encoding all day? Yet people game all day, and here even the 12600k is the better gaming processor. You claim the 12900k is "useless" because the 5800X3D is about to dethrone it. So you've drawn your conclusion now, even though no one can buy a 5800X3D yet. Meanwhile, other users have been enjoying the 12900k and other excellent ADL chips in gaming and productivity for months now. Yet you think you're not biased?
For 6+ months now, you've been recommending potential buyers wait for the 5xxxX3D release because it was going to be released soon, at the end of the year, and yet what happened? We're in the 2nd quarter of '22 and even the watered-down version of Zen 3D, the 5800X3D, is nowhere in sight. You need to tone down your bias a bit and adopt a wait-and-see approach like you do with Intel releases. Oh, and be a bit objective too. You seem to be on a crusade to bash everything Intel, on sight. You're the reason this thread is being derailed, because you can't keep your bias to yourself. Keeping it to yourself is the only decent thing to do at this point.
PS: What's the position of the 5950x on that list? What do you think accounts for that?
I don't disagree that the 5900x is a better value than the 12900K for productivity. I was speaking purely from a performance perspective. The 12900K isn't much further behind the 5950x despite having significantly fewer threads and performance cores.

"I disagree; the 12900K is only 10% faster than the 5900X in productivity, and in gaming the 12600K is hot on its heels for half the price. If you're mainly gaming, then it makes no sense to go for the 12900K; if you're into productivity, the 5900X is better value."
I'll take 10% slower performance for a 17% cheaper price any day. And that's only accounting for the CPU price alone.

"It's also a good bit slower."
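To put rough numbers on that 10%-slower / 17%-cheaper trade-off, here's a quick back-of-the-envelope calculation. It uses only the relative deltas quoted above; the absolute prices and scores are placeholders, not real figures:

```python
# Back-of-the-envelope value comparison using only the relative deltas above:
# the cheaper chip is assumed 10% slower and 17% cheaper than the baseline.
baseline_perf, baseline_price = 1.00, 1.00
cheaper_perf,  cheaper_price  = 0.90, 0.83

print(baseline_perf / baseline_price)  # 1.00 "performance per price unit"
print(cheaper_perf  / cheaper_price)   # ~1.08 -> roughly 8% more performance per dollar
```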
Link: Not everything ran perfectly, though. In several of our tests, the workload got scheduled onto the wrong cores. We did use Windows 11 for all our testing, which has proper support for the big.LITTLE architecture of Alder Lake and includes the AMD L3 cache fix, too. Intel allocated extra silicon real estate for "Thread Director," an AI-powered network in the CPU that's optimized to tell the OS where to place threads. However, several of our tests still showed very low performance. While wPrime as an old synthetic benchmark might not be a big deal, I'm puzzled by the highly popular MySQL database server not getting placed onto the P cores. Maybe the logic is "but it's a server background process"? In that case, that logic is flawed. If a process is bottlenecked by around half (!) and it's the only process on the machine using the vast majority of processor resources, doesn't it deserve to go onto the high-performance cores instead? I would say so. Higher performance would not only achieve higher throughput and faster answers to user requests, but would also reduce power consumption because queries would be completed much faster. Other reviewers I've talked to have seen similar (few) placement issues with other software, so it seems Intel and Microsoft still have work to do. On the other hand, for gaming, Thread Director works pretty much perfectly. We didn't have time to test Alder Lake on Windows 10 yet; that article is coming next week.
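Until the scheduler gets this right, a mis-placed workload like that MySQL example can be pinned to the P-cores by hand. Here's a minimal sketch of that idea, assuming Python with the psutil package and assuming logical CPUs 0-15 are the hyperthreaded P-cores on this particular chip; check your own core layout first, none of this comes from the review itself:

```python
# Hedged sketch: restrict a mis-scheduled process to the P-cores.
# Assumes psutil is installed and that logical CPUs 0-15 are the P-core
# threads on this CPU -- verify the topology on your own system first.
import psutil

P_CORE_CPUS = list(range(0, 16))  # assumed P-core logical CPU indices

def pin_to_p_cores(process_name: str) -> None:
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if process_name.lower() in name:
            proc.cpu_affinity(P_CORE_CPUS)  # only schedule this process on the P-cores
            print(f"Pinned PID {proc.pid} ({proc.info['name']}) to CPUs {P_CORE_CPUS}")

pin_to_p_cores("mysqld")  # hypothetical target, matching the review's MySQL example
```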
So, do people like you still prefer 3DMark2001 for testing GPUs??? Do you like this juvenile game? We could play some more...

I said "People Like You"...
Well, you don't know me very well... That's the exception. I don't care about primes. It spends 24/7/365 at 100% load trying to cure cancer. That was an example, since people know Prime stresses a CPU. From the multiple replies, it's obvious nobody believes me that it's a hot CPU and would throttle down quickly and stay there under sustained load at 241 watts, where its optimal performance is supposed to be.

"Finding large primes isn't exactly something that 99.99% of people would consider meaningful. Source: I'm a math student."
This is it here. I don't have a link handy, but I saw where one review site did exactly that: tested Cinebench R20 (I think) at a fixed frequency to compare relative IPC. With Alder Lake P cores as the 100% performance reference, Zen 3 cores were at 99%, so effectively in Cinebench the IPC is identical. The performance gains seem to come largely from the higher frequencies.
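For anyone who wants to sanity-check that kind of claim with their own numbers, relative IPC is just the score normalized by clock speed. A small sketch with placeholder scores (these are not the review's actual figures):

```python
# Relative IPC estimate: score per GHz, normalized to a reference core.
# The scores below are placeholders, not the review's real results.
def relative_ipc(score, freq_ghz, ref_score, ref_freq_ghz):
    return (score / freq_ghz) / (ref_score / ref_freq_ghz)

# Hypothetical single-thread Cinebench R20 scores with both cores locked to 4.0 GHz
adl_p_score, zen3_score = 600, 594
print(relative_ipc(zen3_score, 4.0, adl_p_score, 4.0))  # ~0.99 -> about 99% of the P-core's IPC
```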
AGAIN, you and that article's text are comparing the 5900x, not the 5950x, so I won't bother replying anymore, SINCE YOU CAN'T READ.

There are 8 charts there, and the 5950x only came out ahead in two instances, by a small margin. Yet somehow you couldn't figure out how to declare a clear winner even in this clearly open-and-shut instance. This is why you get accused of bias. If the numbers fit your bias they're good; if they don't, they're bad.
Then there's also this:
That is a very basic functionality of turbo and has been used for years and years now by both Intel and AMD.

"AMD could in fact "simulate" the Big/Little concept with Zen 3. Take the 5950X. If something like Thread Director could be used by AMD to "figure out" that an app is really only using 6 cores, and those 6 cores were then clocked up as high as possible while the rest were clocked down, performance would increase and power might not go through the roof."
I only owned a Sempron once, overclocked from 2 GHz to 3 GHz. Sadly, I didn't get to spend as much time enjoying it as I could have. Had to go out into the cruel world and make a living. My brothers used it for a year, until the overclock messed up the mobo USB ports and the whole thing got unstable. They upgraded to Intel something. I've bought AMD laptops for my friends and they are happy with them. But generally, I've noticed people being hesitant towards AMD because they don't want second best. They want THE best. Convincing people even when AMD was the absolute best with the 5000 series didn't prove fruitful. My IT guy, for example, refused to get AMD laptops for our company, even when they offered a better price-performance ratio. He didn't trust them to be as good as Intel.

"It's fair. I haven't had an Intel rig of my own since 1997."
Yes I did. No difference. Power plans really only change how fast various components ramp up, not how CPUs are utilized. The easiest way to adjust things is with Process Lasso. It's pretty easy, actually, and I can tailor it to my workflow. For example, I have DxORaw set to use 6 P's regardless. So when I'm editing in Photoshop I have 2 P's and 4 E's, which is plenty. Meanwhile, DxORaw is processing the RAW images with the 6 P's.

"Have you tried changing the power plan to Best Performance?"
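For anyone without Process Lasso, a standing affinity rule like the one described above can be roughed out with a small script. This is only a sketch under a few assumptions: Python with the psutil package, a hypothetical "DxORaw" process name, and a layout where logical CPUs 0-11 are the hyperthreaded P-cores; none of those specifics come from the post, so adjust them for your own machine:

```python
# Rough Process Lasso-style standing rule: whenever the (hypothetical) DxORaw
# process is running, keep it on the P-cores. The process name and the CPU
# numbering are illustrative assumptions, not taken from the post above.
import time
import psutil

P_CORE_CPUS = list(range(0, 12))  # assumed: logical CPUs 0-11 are the hyperthreaded P-cores

def apply_rule() -> None:
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        try:
            if "dxoraw" in name:               # hypothetical process name
                proc.cpu_affinity(P_CORE_CPUS)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

while True:
    apply_rule()     # re-apply periodically so newly launched instances pick up the rule
    time.sleep(10)
```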
I don't particularly doubt the results, but:
Intel Core i3-12100 Review: The New Ultra-Budget Gaming King - Art of PC (artofpc.com)
i3-12100 review. It manages to be faster than the 3600 in the games that they tested.
I'm going to wait until pretty much anyone else does the testing.

"All benchmarks were recorded in 1080p at medium-to-high graphical settings. Credit goes to Testing Games. Ryzen 5 3600 benchmarks are included for comparison (at identical settings and with an Asus ROG X570 Crosshair VIII motherboard)."
You are correct, but since we don't have moving parts in computers, performance is the job/work done in a unit of time: for example, how much time it takes to finish a render, or how many jobs the computer can finish per unit of time (how many transactions it can do per second, say).

"It is either performance divided by power (perf unit/W) or work divided by energy (work unit/Wh) - perf/Wh makes no sense with respect to power efficiency."
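A quick worked example of those two ways of expressing efficiency, using made-up numbers for two CPUs finishing the same render (placeholder values, not from any review). For a fixed amount of work, performance/power and work/energy differ only by a constant factor, so they rank the chips the same way:

```python
# Made-up numbers: two CPUs each complete the same render (1 job).
# Efficiency as performance/power (jobs per second per watt) and as
# work/energy (jobs per watt-hour) give the same ranking.
cpus = {"CPU A": {"seconds": 100, "watts": 200},
        "CPU B": {"seconds": 120, "watts": 150}}

for name, d in cpus.items():
    perf_per_watt = (1 / d["seconds"]) / d["watts"]          # (jobs/s) per W
    jobs_per_wh   = 1 / (d["watts"] * d["seconds"] / 3600)   # jobs per Wh of energy
    print(f"{name}: {perf_per_watt:.2e} (jobs/s)/W, {jobs_per_wh:.3f} jobs/Wh")
# CPU B comes out more efficient on both metrics despite being slower.
```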
They are closing in on 4 years of double the net income to make up for and pay for that.

"Because they just dumped billions (probably $10 billion or more) on N3 capacity. Plus an unknown amount of cash on N6, N5, and possibly N4. And they're building out fabs in AZ and OH. They're begging the Biden administration for more money so they can afford to pay off TSMC and still carry out node research and build fab capacity. Alder Lake's successes won't make up for all of that."
It's not just the 5800x and the 5600x that show unexpected abnormalities; the 10600K is also a bit faster than the 10700K despite having fewer cores, less cache, and lower frequency. That said, I did some digging, and in a Reddit AMA thread for the in-house Schmetterling engine used in the game, the developer claims the engine can scale to 64 threads provided there's no GPU bottleneck.

"Okay, but the 5600X also loses to the 5800X, which at least as of 2020 was kind of abnormal in a lot of game benchmarks (in some game benches at Vermeer's release, the 5600x was the fastest of the lot). Look at the minimums; something's going on there when moving to the 5800X. And we still don't know what clockspeed the 5600X is running either, so . . . what conclusions can we draw?"
First, I believe the gaming tests done by websites; they are the experts. As far as other desktop use, again, listen to website testing. As for DC, I am the ONLY one that I know of that has done any testing, and I am somewhat of an expert in that field, and since I run one, I give my feedback. As for AVX-512, when you disable the e-cores it turns on, and 300 watts is what it takes for 8 cores; that's fact, not conjecture (measured from the wall). As I said, in one case it is faster per core, but not with e-cores enabled. The case Igor showed was with them disabled, and it lost. Everybody knows that e-cores are not as fast as p-cores. For DC work, due to this, Zen 3, specifically the 5950x, is better.

"DC applications generally don't like more cores? That's certainly news in this forum. And the 12700kf does more than excel at gaming. It excels in a lot of other areas over its Zen 3 competitors at $300, so let's sweep that under the carpet and talk about 300 W AVX-512 consumption, even though that feature can be disabled in the BIOS. Moreover, if Zen 3 is better than ADL in DC applications, stick to Zen 3 and spare us the constant bashing, especially if you're not going to be a fair referee in your tests or do apples-to-apples comparisons @Markfw"