Discussion: Geekbench 6 released and calibrated against Core i7-12700

It's not hitting even 2 GHz consistently.

I know, but the OPN states 3.1 GHz, so it looks like AMD has delivered chips running at that frequency even if this run seems to be below spec. It doesn't mean that all tests will be at 2 GHz; the scores of the other submission are more in line with 3.1 GHz than with anything else. Besides, it's not even certain that GB tracks frequencies accurately.
 
I always find "low" frequency testing of ES chips to be... Inconclusive. When your cores are running less than half of their target frequency, cache load pressure is substantially different from what it's like at full tilt. It's great for functional testing, and reasonable for micro benches that don't step out of L1, but beyond that, it's not going to tell you a whole lot worth comparing otherwise.
 
I always find "low" frequency testing of ES chips to be... Inconclusive. When your cores are running less than half of their target frequency, cache load pressure is substantially different from what it's like at full tilt. It's great for functional testing, and reasonable for micro benches that don't step out of L1, but beyond that, it's not going to tell you a whole lot worth comparing otherwise.
Well, no: cache pressure is in full effect, but memory pressure is lower.
Consider GB to have roughly 0.9 frequency-to-performance scaling.
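Back-of-envelope (my own arithmetic, reading the 0.9 figure as perf ∝ f^0.9), that predicts roughly a 1.5x score uplift going from the observed 2 GHz to the rated 3.1 GHz:

/* freq_scale.c: back-of-envelope check. My own arithmetic, reading the
 * "0.9 freq to perf scaling" claim as perf ∝ f^0.9. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double f_run = 2.0, f_rated = 3.1;          /* GHz, from the thread */
    const double uplift = pow(f_rated / f_run, 0.9);  /* expected score ratio */
    printf("expected score uplift at 3.1 GHz: %.2fx\n", uplift);  /* ~1.48x */
    return 0;
}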
 

Binary Optimization seems to be able to prop up GB6 1T and MT scores for Arrow Lake (would be interesting to see the same binary running on Zen 5, not sure if that can be hacked in any way).
 
(would be interesting to see the same binary running on Zen 5, not sure if that can be hacked in any way).
Or Raptor Lake. Adding a dynarec with a whitelist for benchmark wankery, exclusive to chips with nearly identical ISA support, is a new low for Intel, but no one seems to care. icc with SPEC-specific optimizations was the previous low, but they've gone dirtier.
 
Or Raptor Lake. Adding a dynarec with a whitelist for benchmark wankery, exclusive to chips with nearly identical ISA support, is a new low for Intel, but no one seems to care. icc with SPEC-specific optimizations was the previous low, but they've gone dirtier.
Well, it's one thing to try to get THE shipping code to prop up your processor, or even break performance on the competition.

But when you do an optimizer pass or binary swap altering the shipping code on the owner's PC, that sounds fair to me (well, it could get out of hand - imagine a position of market dominance where you abuse the fact that the ecosystem is willing to optimize for you while skipping optimization for your smaller competitor - but that is going to happen anyway, sadly).

Still got to see the specifics though. Does the binary optimizing involve the software vendor (basically paying them to produce an Intel-optimized alternative binary)? Or is it a standalone binary optimizer that runs completely without dev involvement and could potentially be used by any software freely (in theory; in practice it's unlikely to be "unlockable").

It sounds like a potentially interesting approach to use generally (obviously I'd like AMD to have it, though).
It is true that you lose one of the benefits of central binary software distribution - reasonable protection against miscompilation errors and fringe bugs due to library/toolchain differences between users.
 
But when you do an optimizer pass or binary swap on the owner's PC, that sounds fair to me
It would be "fair" to show the performance difference of the same binary on previous-gen and competitor chips. And so far Intel refuses to make that comparison. Their optimizations are basically equivalent to recompiling targeting x86-64-v4 and then comparing against other chips running whatever msvc garbage is shipped as the baseline these days. This is relevant in the Geekbench thread given the craptastic Windows build.
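For context (my sketch; the builtin below is a real GCC/Clang feature, the framing is mine): x86-64-v4 is roughly the AVX-512 feature level, so a v4 build won't even run on baseline hardware, which is what makes the cross-chip comparison lopsided:

/* v4_check.c: my illustrative sketch. x86-64-v4 roughly means the
 * AVX-512 feature level, so a v4-targeted binary can't even run on
 * baseline CPUs; this checks for one of the required features. */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();                      /* GCC/Clang builtin */
    if (__builtin_cpu_supports("avx512f"))
        printf("v4-class CPU: an x86-64-v4 build would run here\n");
    else
        printf("baseline CPU: an x86-64-v4 build would crash here\n");
    return 0;
}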
 
Unless they do something crazy, I'd say it might be possible to intercept and dump the optimised files for use on another PC.
 
Still got to see the specifics though.

It's a service that replaces unoptimized code paths of supported executables with optimized ones in real time. My guess is that since some instruction latencies are better on Arrow Lake, the code is being reorganized to match those latencies and finish the work faster.
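As a toy picture of what latency-aware reorganization can look like (my sketch of the general idea, not Intel's actual pass):

/* My sketch of the general idea, not Intel's actual optimizer.
 * Both functions compute the same sum; the second breaks the serial
 * dependency chain into four independent chains, so a wide core with
 * multiple FP add units can overlap them instead of stalling on each
 * add's latency. */
double sum_serial(const double *x, long n) {
    double s = 0.0;
    for (long i = 0; i < n; ++i)
        s += x[i];                 /* every add waits on the previous one */
    return s;
}

double sum_split(const double *x, long n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    long i = 0;
    for (; i + 4 <= n; i += 4) {   /* four independent add chains */
        s0 += x[i];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    for (; i < n; ++i)
        s0 += x[i];                /* leftover elements */
    return (s0 + s1) + (s2 + s3);
}

Note the split version also changes the floating-point summation order - exactly the kind of subtle behavior change that makes rewriting binaries behind the user's back a little unnerving.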

It sounds like a potentially interesting approach to use generally (obviously I'd like AMD to have it, though).
They are kind of declaring war, and if AMD takes it seriously, they could create a tool that lets end users do the same for Ryzen/TR/Epyc CPUs. AMD usually does right by the consumer in such matters.
 
It's a service that replaces unoptimized code paths of supported executables with optimized ones in real time. My guess is that since some instruction latencies are better on Arrow Lake, the code is being reorganized to match those latencies and finish the work faster.
Now here's a contradiction: https://forums.anandtech.com/threads/arrow-lake-builders-thread.2622775/post-41586860

That marketing slide is showing that actual binaries are generated.
 
Now here's a contradiction: https://forums.anandtech.com/threads/arrow-lake-builders-thread.2622775/post-41586860

That marketing slide is showing that actual binaries are generated.

When you're compiling code you can select an ISA target (basically, the oldest CPU you want the binary to be able to run on) and a scheduling target (the CPU you want instruction scheduling optimized for).

While you can select them separately, most developers won't, because you'd prefer the optimal scheduling to happen on the oldest/slowest CPUs you support. So if you are targeting everything Skylake and newer, for example, it would generate code for, and schedule optimally for, Skylake.

It could run faster using newer instructions available only on subsequent CPUs, or by being scheduled for the width/execution resources of newer CPUs, but at the cost of running even slower on those old CPUs. Engineers make tradeoffs like this all the time - stairs are designed for everyone, so they are only "ideal" for the stride of people of a certain leg length.

They have to be designed for children to be able to navigate, which is why I more efficiently walk up two steps at a time - but I'd probably do better with steps that are slightly less than double the rise/run, while that 7'9" basketball player on Florida's team might want something closer to triple.

If you recompile the binary (machine code to machine code translation) you can make those optimizations exact to the CPU you are running on. Sort of like having steps that could check how tall you are (or, more specifically, how long your legs are) and adjust their rise and run to match you perfectly - the speed/efficiency gain you get walking up stairs would be greater the longer your stride, just like the benefit of these optimizations would be greater the newer your CPU (and the older the 'target' the code was originally compiled for).
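To make the ISA-target vs. scheduling-target split concrete (the flags below are real GCC options; the tiny function is just my example):

/* isa_vs_tune.c: my toy example. -march picks the ISA target (oldest
 * CPU the binary can run on); -mtune picks the scheduling target (the
 * CPU the instruction scheduling is optimized for). For example:
 *
 *   gcc -O2 -march=x86-64    -mtune=generic   isa_vs_tune.c  # runs anywhere, scheduled generically
 *   gcc -O2 -march=x86-64    -mtune=skylake   isa_vs_tune.c  # still runs anywhere, scheduled for Skylake
 *   gcc -O2 -march=x86-64-v3 -mtune=alderlake isa_vs_tune.c  # needs AVX2-era CPUs, scheduled for Alder Lake
 */
#include <stddef.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n) {
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];   /* with -march=x86-64-v3 this can vectorize to AVX2 + FMA */
}

Picking the two knobs separately, like the second line, is what developers rarely bother with - which is presumably the gap a per-machine binary optimizer exploits.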
 
Maybe AMD can make their own, suited to Zen 5. Windows could really use it.
Apple received a lot of flak for their failure to leverage AI, but I would argue Microsoft's failure is an order of magnitude bigger. Not only did MS sabotage their own OS, they also sabotaged new-gen hardware by effectively enforcing that NPU tax.

For a sense of scale, the NPU in AMD's 300-series chips takes up roughly the same area as 4x Zen 5 cores + L2, or 4x Zen 5c cores + L2 + 8 MB of L3. That may not seem like much for a premium chip like Strix Point, but it sure is a lot for a cost-oriented chip like Krackan Point.

Cost-oriented chips are the ones that will end up powering laptops competing with Apple's Neo. So Microsoft somehow managed to handicap x86 on both sides, software and hardware. Even the Copilot keyboard button is completely worthless; we can't even remap the thing properly. It will stay there as a reminder of complete organizational failure.
 
Wow. It's like John Poole JUST found out about this! Flagging all results for iBOT-supported CPUs is a bit extreme.
It seems he hasn't received the binaries. GB can't see whether the optimizations actually removed the work the benchmark is supposed to measure. Optimizing compilers have a habit of doing that in general, but PGO on a fixed-work benchmark in particular often makes optimizations that don't generalize.
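The classic failure mode (my toy below, not Geekbench code) is dead-code elimination: if a benchmark's result never escapes, the optimizer is free to delete the work being timed:

/* dce_demo.c: my own toy, not Geekbench code. The loop's result never
 * escapes, so at -O2 an optimizing compiler can delete the whole loop
 * and the "benchmark" ends up timing nothing. */
#include <stdio.h>
#include <time.h>

static void fake_work(void) {
    double acc = 0.0;
    for (long i = 0; i < 100000000L; ++i)
        acc += (double)i * 1.000001;   /* dead: acc is never read */
}

int main(void) {
    clock_t t0 = clock();
    fake_work();                       /* likely optimized away entirely */
    clock_t t1 = clock();
    printf("elapsed: %f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}

An opaque binary optimizer could do the same thing silently, which is presumably why the results get flagged.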
 