It's funny that you don't read your own sources thoroughly, because if you did you'd know that the 9900K in the TPU review was not maintaining boost clocks, which makes the gap to the 2700X look closer than it really is under a proper turbo configuration. You can bet your shirt that they selected the combination of hardware that made the 9900K look as weak as possible, just as they did at the original presentation of Zen.
What I'm saying is that being on par in Cinebench doesn't mean much: the 2700, and especially the 2700X, was already pretty much on par, only about 5% behind the 9900K at stock.
https://www.techpowerup.com/reviews/Intel/Core_i9_9900K/9.html
So you are saying the benchmarks that were shown today at CES are fake? That the new cores are NOT on par with the 9900K?
But a proper (CES-spirited) test for a new chip would have been some CPU-limited gaming scenario. AMD has just moved the I/O and memory controller further away from the cores, and that's where THE real questions are.
Cinebench mimics a real-world workload and scales well enough that the results can be extrapolated to other CPU-heavy tasks pretty reliably. It's also well known and has a massive database of results. You propose they instead use a CPU-limited gaming scenario, where the results are by nature extremely application-specific, a huge pain to verify, and relevant only to a very small subset of consumers.
Might as well ask them for JavaScript benches.
I also want a pony for my birthday, but expecting a full suite of benchmarks 5-6 months ahead of launch is a bit unrealistic when they don't even have the final silicon yet.
Why not show them all? I have nothing against CB, but I feel it plays to AMD's core strength too much, while hiding memory latency/bandwidth effects. Why not add some gaming and Geekbench 4 into the mix? And JavaScript is fine by me as well; I'm certain Zen 2 will shine in it thanks to its great cache subsystem.
I mean, according to AMD the first Zen was already supposed to blow Intel out of the water; they made it look sooooooo gooooood against Intel HEDT in their presentation.
Why not show them all?
They have used CB for all demos since the release of Zen; how are people not expecting them to continue this trend?
General performance does not equal gaming.
This is a silly both-sides-ism. Nor does CB.
I have to wonder if they are sandbagging here too? They did say it wasn't running at final clocks. And whatever clock they chose was just enough to pass the 9900K. We shall see when the parts hit the market.
This is AMD showing that they can beat Intel with one hand behind their back.
I believe those were system power draw, not CPU power draw.
Based on the numbers Anandtech gave us (75W), and assuming 15W for the I/O die, they could probably fit the second chiplet with only a ~5% drop in clocks for this workload.
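For what it's worth, the chiplet-doubling arithmetic can be sketched numerically. Every figure here is an assumption taken from the comment (75W package, 15W I/O die, 125W envelope) plus a crude first-order CMOS power model; none of it is measured data:

```python
# Back-of-the-envelope power budget for a hypothetical two-chiplet part.
# Assumed split (from the comment, not measured): 75 W package total,
# ~15 W for the I/O die, leaving ~60 W for one 8-core chiplet.
io_power = 15.0       # W, assumed I/O die draw
chiplet_power = 60.0  # W, assumed draw of one chiplet at demo clocks
budget = 125.0        # W, hypothetical package envelope

naive = io_power + 2 * chiplet_power  # two chiplets at unchanged clocks

# First-order model: dynamic power ~ f * V^2, and V tracks f roughly
# linearly near the top of the curve, so P scales ~ f^3. Solve for the
# clock scale s such that io_power + 2 * chiplet_power * s**3 = budget.
s = ((budget - io_power) / (2 * chiplet_power)) ** (1 / 3)

print(f"two chiplets at demo clocks: {naive:.0f} W (over {budget:.0f} W)")
print(f"clock scale to fit budget:   {s:.3f} (~{(1 - s) * 100:.0f}% drop)")
```

Under this toy model the required clock drop comes out around 3%, the same ballpark as the ~5% the comment estimates.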
Well I think that was CPU power.
Anandtech said 170W CPU power for the i9-9900K at 4.7GHz. They are basically contradicting themselves with the AMD article.
Very much so. That's exactly what's happening: the Matisse sample was running at roughly 35W lower power than nominal values for the AM4 TDP. We have every reason to believe this can translate into another 10% performance at launch, unless they unleash the power beast (ignore TDP on X570 boards) like Intel did with the 9900K, in which case we're likely to be bound by 7nm max frequencies and can expect more than +10% performance.
Even if it did, we're bound to see performance gains there as well. Even if it's just the equivalent of a 2700X running at 4.7GHz, it still means a healthy ~10% gain.
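Taking ~4.3GHz as a 2700X reference clock (an assumption for illustration, not an official figure), the frequency-only uplift works out like this:

```python
# Frequency-only uplift if Zen 2 were just "a 2700X at 4.7 GHz".
# 4.3 GHz is an assumed 2700X reference clock, not an official figure.
old_clock_ghz = 4.3
new_clock_ghz = 4.7
gain_pct = (new_clock_ghz / old_clock_ghz - 1) * 100
print(f"{gain_pct:.1f}% uplift at identical IPC")  # ~9.3%
```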
The thing with this CES demo is AMD showed us they either have a very strong IPC gain or a very healthy frequency gain. It can be either one, or a mix of the two, but it cannot be neither.
The CB15 demo may be less relevant for overall performance than we'd like (considering we have yet to learn arch specifics in relation to latency-bound loads), but what it does tell us is that AMD has a strong advantage in manufacturing and looks poised to bring a significant all-round performance increase through brute force if they have to.
The other aspect is that they were likely running increased voltages for stability, as it was a live demo. It's possible that production SKUs consume even less power. That being said, the ES was not at final clocks, so there's a bit of give and take in terms of what production SKUs might be capable of.
Direct quote from the Anandtech article: "The Intel system, during Cinebench, ran at 180W."
Quite literally, if those power consumption and performance figures are accurate.
Wonder how much of the CPU power consumption is in the chiplet and how much is in the I/O. Could they double the core count and stay within a 125W envelope with frequencies very similar to this?
"By contradicting" I mean their own test, with the i9-9900K eating just below 170W on HWiNFO64. That's like a ~6% uptick in power usage. Hardly contradictory when you haven't verified they're using the same tool to measure power usage.
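The arithmetic behind that uptick, using the two figures quoted in this thread (180W from the AnandTech piece, just under 170W from the HWiNFO64 run):

```python
# Comparing the two 9900K Cinebench power figures quoted above.
anandtech_w = 180.0  # W, "Intel system, during Cinebench" per AnandTech
hwinfo_w = 170.0     # W, i9-9900K reading just below 170 W in HWiNFO64
uptick_pct = (anandtech_w - hwinfo_w) / hwinfo_w * 100
print(f"{uptick_pct:.1f}% difference")  # ~5.9%
```

That gap is easily explained by system-level vs CPU-package measurement, let alone different tools.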
The article you linked measures maximum power draw with POV-Ray, which uses AVX2 instructions, while CB doesn't. Intel has always let AVX2/512 workloads run out of spec, and those power draw numbers aren't indicative of general use.
Wow, I was just playing Cinebench last night. Really had a lot of fun. Can't wait to get home from work tonight and play it again. Guess I can ignore my whole Steam library now. I have no problem with Cinebench as a preliminary benchmark, but it *is* pretty much a best-case scenario for AMD. We will just have to wait for more benchmarks, but gaming is still a question, and a primary use case for many (most) users. Final clocks may be better, but actually the only thing I'm impressed with so far is the power consumption. By the time it comes out it will be almost a year after the 9900K, and to only match it in a best-case artificial benchmark is underwhelming to me.