
Discussion Geekbench 6 released and calibrated against Core i7-12700

then conveniently forget to optimize anything else...
They will get serious with Nova Lake iBOT optimizations for APX. Unless AMD has been developing APX support for Zen 6 in stealth mode, they will find themselves at a serious disadvantage if gains from automatic APX optimization of legacy code are properly realized.
 
Help me understand: What production-level software is out there that has actual compiled binaries with APX support? I know there are compilers that can emit APX binaries, but is anything actually shipping that's been compiled for it? I don't believe so, and if that's true, AMD is missing nothing with Zen 6 because they appear to be on board for Zen 7. So in 2+ years' time, once there are a handful of programs out there that actually support APX, they'll have a product for it.
 

Ok this explains it better. With the amount of x86 apps and games, this is just a marketing gimmick for now.

So wait, if this "happens in Intel's labs not your PC" and "the original binary on disk is never modified", it must be shipping an optimized binary that comes from "Intel's labs" to run on your PC, right? This "user mode service" that watches for the relevant binaries causes the new binary you downloaded from Intel to be run, instead of "the original binary on disk".

By their nature, benchmarks don't always use the most efficient algorithm possible. Let's say Geekbench included quicksort as one of its benchmarks. Intel's labs could replace that with a faster sort algorithm which generates the same answer but it is NOT measuring the same thing.
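To make the quicksort scenario concrete, here is a toy sketch (hypothetical code, not anything from Geekbench or Intel) showing how a "lab-optimized" replacement can return the identical answer while measuring a completely different algorithm:

```python
import random
import time

def quicksort(xs):
    # Textbook quicksort: the algorithm the benchmark deliberately uses.
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort(left) + mid + quicksort(right)

def optimized_sort(xs):
    # A "lab-optimized" substitute: same answer, different algorithm
    # (Timsort via the built-in), so the score no longer measures quicksort.
    return sorted(xs)

data = [random.randint(0, 1_000_000) for _ in range(50_000)]

for fn in (quicksort, optimized_sort):
    start = time.perf_counter()
    fn(data)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")

# A naive correctness check passes, yet the two runs timed different things.
assert quicksort(data) == optimized_sort(data)
```

Any output-comparison validation would wave this through, which is exactly why the binary itself has to be inspected.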

I think John is 100% correct to blanket ban use of these results for Geekbench until he can inspect the binary and see what is being done. The problem is even if he inspects it today and decides "OK, this is just some better optimization, I'm OK with it" Intel could later update the "improved" binaries they are distributing with ones that go further and change the algorithms being used.

There are a lot of games you can play with a benchmark, without even "optimizing" anything. Imagine you're an OEM selling Android phones using basically the same SoC everyone else is using. It is hard to raise yourself above the pack. Some have done it by overclocking those SoCs a little. It looks better in a specs comparison and if people believe it is faster they might choose that phone between other similar ones that are not overclocked. But if that overclocking doesn't help Geekbench because the phone is overheating and throttling, well here's how you fix it. You do something like Intel's service, providing an "optimized" Geekbench binary that makes the brief pauses between subtests a little longer to give the SoC more time to cool down. Now you get the benefit of the overclocking and customers may be more likely to choose your phone - even though it hasn't become any faster in real world tasks!

Now for actual applications and games, maybe it's a different story. If it runs faster for you, you may not care whether you're running the original binary or one Intel has optimized. But what happens if you experience problems? I'm gonna bet the first thing the application vendor tells you is to disable Intel's optimization for it and see if you can reproduce the issue.

I'd also be worried about potential hacking of this service. If Intel (with a USER MODE service) can override what binary you run, maybe a hacker can leverage it to replace the binary you expected to run with one that includes some malware. Your malware scanners wouldn't catch it, because "the original binary on disk is never modified". I can't imagine corporations that care about security would permit this to be enabled.
 
Help me understand:
The theory/speculation is that Intel will aggressively work with its ecosystem partners to compile new binaries with APX support. Furthermore, they may use dynamic code rewriting tricks to get older code to run with APX support. Obviously, I don't know the specifics of their plan but if they don't go this route, APX will just end up being as useless as AVX-512 has been due to less than stellar adoption rates from software developers.
 
So wait, if this "happens in Intel's labs not your PC" and "the original binary on disk is never modified", it must be shipping an optimized binary that comes from "Intel's labs" to run on your PC, right? This "user mode service" that watches for the relevant binaries causes the new binary you downloaded from Intel to be run, instead of "the original binary on disk".

Based on that chat, the new silicon has specific hardware hooks for this purpose. I don't believe that there are binaries shipping with the package since it's barely 100MB in size. More like detecting an executable by name, waiting for it to call a function that executes poorly on Intel silicon and executing the rewritten function instead. And other tricks too like avoiding eviction of cache data that might not be immediately required but will be called frequently seconds later based on Intel's internal code profiling.
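That name-based interception idea can be sketched in miniature (everything here is hypothetical; the real mechanism would redirect machine code, not Python callables): a "service" keeps a table of known-poor functions keyed by name and, when one is about to run, dispatches to a pre-profiled rewrite while the original definition stays untouched:

```python
def slow_hot_loop(n):
    # Stands in for a routine that "executes poorly" on some silicon.
    total = 0
    for i in range(n):
        total += i
    return total

def rewritten_hot_loop(n):
    # Pre-profiled replacement: same result, different implementation.
    return n * (n - 1) // 2

# The "service's" rewrite table, keyed by name (purely illustrative).
REWRITES = {"slow_hot_loop": rewritten_hot_loop}

def dispatch(func, *args):
    # If the function is on the rewrite list, run the replacement instead.
    # The original function, like "the original binary on disk", is never
    # modified; it is simply never the thing that executes.
    return REWRITES.get(func.__name__, func)(*args)

print(dispatch(slow_hot_loop, 1_000_000))  # silently runs the rewrite
```

This also illustrates the validation burden: the vendor must re-verify the table every time the target application updates, since a rewrite keyed to a stale function is at best dead weight and at worst wrong.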
 
I think John is 100% correct to blanket ban use of these results for Geekbench until he can inspect the binary and see what is being done.
Well, that would help I guess. You can probably get those changed binaries if you have a system with iBOT installed, they are somewhere in the data Intel installs/stores on your SSDs.

By their nature, benchmarks don't always use the most efficient algorithm possible. Let's say Geekbench included quicksort as one of its benchmarks. Intel's labs could replace that with a faster sort algorithm which generates the same answer but it is NOT measuring the same thing.
I really doubt Intel would even try that. It gets messy quickly, and just the testing to verify results are unchanged would be a lot of work.
They state they reorder instructions or branches to improve prediction success rates on the given microarchitecture, that sort of thing (they explicitly state they don't use the software's source code, they don't decompile and don't reverse engineer). And that's legit.

I can understand people who want to know bare-metal performance with exactly the bad (suboptimal) binary, but I think a score that involves an automatic LOSSLESS optimizer is completely valid. The only real issue I can see is if the optimization requires human engineering work to get the more efficient code and it's not just an automatic tool. Even that would be useful for productivity and games, of course, but not legit for benchmarking.

After all, you hear people harp about benefits of hardware-software cooptimization all the time, for example in Apple camp...
 
Hmm, they say that future application updates may break iBOT.
This is what makes it so weird: it requires continuous updates and validation from Intel. Their effort also grows quadratically as they release new generations from now on (linearly more CPUs × linearly more apps), and apparently they have to profile all SKUs in a family. For example, right now the U9 386H has GB "acceleration" while the U9 388H does not.

In the video they also mention the reason why only ARL Plus and Panther Lake can be used with iBOT: they claim vanilla ARL lacks some feature(s) enabled in the newer chips. If I had to make a blind guess, I'd say they need those features to gather the profiling data (assuming it's true, ofc).
 
It is not.
They're all the same stepping. No silicon change. Just differently configured. What feature is disabled in e.g. the 265K that is now enabled in the 270K? It is simply marketing.
My interpretation was that the excuse was the different NGU and D2D link clocks.
 
It is not.
They're all the same stepping. No silicon change. Just differently configured. What feature is disabled in e.g. the 265K that is now enabled in the 270K? It is simply marketing.
I'll quote from the video so everyone can make their own mind:
architecture-wise they are very, very, very similar; from the code that's running on the CPU side, there are certain changes that were made that allow iBOT to work. So there are certain things that we're doing and have access to on the 200S Plus processors that wouldn't necessarily work on prior generations, including 200S
 
I have a feeling that they are telling the truth. Remember, the original Arrow Lake launch was rushed on Pat's orders so he could save his neck in front of the board and investors. A delay would've looked bad for Intel, especially after the Raptor degradation fiasco. The silicon probably has the required features but disabled in firmware because they never got the chance to properly validate it or there was some showstopping bug in that feature that prevented them from enabling it later.

Also, they were desperate enough to do heavy price cuts on the 265K. If the feature could've been enabled to prevent doing that, they would've done that instead of waiting for the Refresh.
 
I'll quote from the video so everyone can make their own mind:
So that likely confirms that these optimisations won’t work on any AMD CPUs.

I guess this might be one way for Intel to claim leadership in gaming and content creation for the upcoming Nova Lake. If AMD can’t make it work on their CPUs, they might be forced to make their own optimisations for future Zen CPUs.
 
I have a feeling that they are telling the truth.
You are wrong. Exact same silicon. Just different clocks here and there. Same errata.

But the feeling is why marketing is trying so hard to not say what they're actually doing: shipping alternative binaries to sell their new hotness.

These binaries will run perfectly fine on ARL. There are no differences between the chips in their own technical documents.

In any case, GB6 worked around this by marking unverified binaries as vendor wankery sitewide. As they should.
 
find evidence of those binaries to confirm that this is really what's happening
The mechanism isn't particularly important. The lack of documented changes is proof enough that the marketing arm is lying as usual. There is no technical difference between identical silicon that prohibits these binaries from running on earlier ARL chips. What firmware changes are needed in order to run a different binary? There are none. Intel's simply working on the tools to substitute alternate binaries into the closed-source, binary-focused Windows world.

Whatever magic change the marketing team implies exists here would also show up in statically compiled SPEC runs, and yet it won't, because these optimizations are already performed when code is compiled with semi-reasonable settings.
 
Reminds me of SafeDisc where the stub exe would decrypt the real one if your CD was legit
We're going to be pirating iBOT binaries now
 
Reminds me of SafeDisc where the stub exe would decrypt the real one if your CD was legit
We're going to be pirating iBOT binaries now
No, the future is worse than mere piracy. The pessimist sees a future where AMD, Qualcomm, Intel, ARM, and Nvidia are all substituting binaries on Windows, and the poor developers will have to sort this all out somehow. The optimist sees this as short-term idiocy that will disappear as soon as Intel gets their act together and produces a competitive chip.

And the clown thinks that Microsoft will do it in the operating system (not just for x64->ARM64 but also x64->APX) instead so there is only one tool with a known set of optimizations it can perform.
 