> You forgot: "Competition is good for the consumer." lolz

That didn't sound like you automatically assume malevolent intent, or at least dishonest generalization, behind my comment at all. Imagine if it did!
> That didn't sound like you automatically assume malevolent intent behind my comment at all. Imagine if it did!

What if the lead in this particular instance is this large? Is it automatic rigging for you, too?
> What if the lead in this particular instance is this large? Is it automatic rigging for you, too?

literally wth...
> literally wth...

Excellent point. This is why, at this point, I can't wait until ADL is released. There is conflicting data, though the trend appears to be a significant performance uplift, and there are conflicting opinions on what a given benchmark means. And then we go down a rabbit hole of pure misery, speculating on the motives, accuracy, and meaning of each and every benchmark leak. I really should ignore this thread until reviewers have silicon in hand.
I meant: no matter how ridiculous the rather specifically selected 16C/24T configuration of the new version of the benchmark is, a ~35-40% jump over last gen is so substantial that the whole argument loses its meaning, at least in relation to the actual performance increase.
In simpler words, even IF AotS is a vast outlier (which it may or may not be), ADL seems to finally be a REALLY big step forward, which Intel has totally lacked in the past years.
So may I please ask, what on Earth are you on about?
> In simpler words, even IF AotS is a vast outlier (which it may or may not be), ADL seems to finally be a REALLY big step forward, which Intel has totally lacked in the past years.

Sorry for the misinterpretation, dude. Cheers!
I'm not sure AotS is relevant for real-world game performance at all.
It is relevant for the half-dozen of us that still play the game. 🤣
If Alder Lake destroys Zen 3 by 40% in an AMD-sponsored gaming benchmark, it would be hilarious, and I would expect Intel to put it in their presentation.
6th to 9th gen are all based on the original Skylake architecture, even though DDR3 was pretty much dead starting with the Kaby Lake refresh. Raptor Lake isn't Golden Cove, apparently.
> If Alder Lake destroys Zen 3 by 40% in an AMD-sponsored gaming benchmark

What happened to striving to be the former?
A reasonable person would say that the AotS benchmark is extremely branchy and has an unusually high dependency on memory performance, and so should not be taken too seriously as a pre-release leak.
A less reasonable person comes to the forums and posts conspiracy theories that can be sanity checked by looking at a few benchmarks.
Strive to be the former.
> Sorry for the misinterpretation, dude. Cheers!

Well, then my comment about you assuming wrong intentions in mine was unnecessary as well; now I understand why you wrote it. Cheers indeed.
> About as funny as the AMD fanbase accusing an AMD-sponsored dev of taking bribes from Intel. If Alder Lake destroys Zen 3 by 40% in an AMD-sponsored gaming benchmark, it would be hilarious and I would expect Intel to put it on their presentation.

In all fairness, it was sponsored to showcase the heavier use of asynchronous compute in GPUs. 360 years ago. Then the game itself came out a full year before even the first Ryzen was released. Now the company is in a partnership with AMD to develop cloud GPU computing. If you honestly lack any doubt after seeing such an arbitrary numerical change, that is your choice, but can you quit insulting people's intellect just because you fail to make a connection? I also don't think those devs were bribed by Intel; I think they simply like their CPUs.
> Excellent point. This is why, at this point, I can't wait till ADL is released. There is conflicting data, though the trend appears to be that of a significant performance uplift, and there are conflicting opinions on what a given benchmark means.

We all love a good misery, don't we?
> its a multi-core test and score, but the utilization chart shows only 1 core was utilized

Best practice is to run the validation apps AFTER running the main benchmark.
Could it be that the last 60s were spent opening CPU-Z and Task Manager? It seems like a perfectly legit CB23 score to me, right where I expect it to be. CB23 doesn't care about memory latency or bandwidth, and as long as the working set fits in L2 with some L3 spills, it will run just fine.
One minute to open CPU-Z and Task Manager?
On my PC, CPU-Z opens in ~7s; since he has three instances open, that's already ~30s if we allow some time to open the tabs, return to the desktop to launch the next one, etc.
And if we go full Area 51 investigator mode: the spikes of CPU load in that graph actually start and end consistently with launching those four apps, and they do start well into that final minute.
The fact remains that the CB23 score is consistent with previous leaks and right where I expect it to land. How it will score where the memory subsystem matters remains to be seen, both stock and OC.
The thing is, we don't know at which frequency this CPU ran during the bench; here it displays 5.3 GHz even with the cores idling, so it could have been benched at 5.3 as well.
Assuming the small cores' contribution to the score is 33%, the 8 P-cores should land around 20,000 pts at whatever frequency; FTR, a 5800X is around 15,800 @ 4.5 GHz.
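A quick back-of-the-envelope check of that split. The ~30.5k total and the 33% small-core share are the figures discussed in this thread; everything else is plain arithmetic, so treat it as a sanity check of the estimate, not measured data:

```python
# Sanity check of the claimed score split (assumption from the thread:
# 33% of the leaked ~30,500-pt CB23 multi-core score comes from the E-cores).
total_score = 30_500              # leaked Alder Lake Cinebench R23 MT score
e_core_share = 0.33               # assumed small-core (E-core) contribution
p_core_score = total_score * (1 - e_core_share)
print(round(p_core_score))        # 20435 -- in line with the "~20,000 pts" estimate

# For reference: Ryzen 7 5800X (8 cores) ~ 15,800 pts at ~4.5 GHz,
# so the 8 P-cores alone would be roughly 29% ahead.
ratio = p_core_score / 15_800
print(f"{(ratio - 1) * 100:.0f}% ahead of 5800X")   # 29% ahead of 5800X
```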
Can't really say I'm impressed with either Alder Lake or Intel "7nm" IF it took a 255 W PL2 to achieve those 30.5k points in Cinebench R23.