
Discussion Arrow Lake Builder's thread

It's akin to letting Intel get away with bogus SPEC-specific optimizations in their compiler. But gamers are a peculiarly self-sabotaging bunch (RIP Radeon) so why not let it descend into AMD and Intel both "working with" (paying) different game developers to distribute optimized versions of their games to win in some benchmarks.
You can't optimize games that are no longer maintained. Some reviewer needs to properly test the feature enabled and disabled across a range of games. This feature is also a problem for games with anti-cheat.
 
Excellent presentation skills from Robert Hallock, so refreshing to no longer have to listen to corporate robot emulation. Thanks Steve!

The BOT situation will be a lot of fun though. I can see it skewing results in mainstream reviews by targeting titles commonly used in benchmark suites. Finally some drama! 😎

My view on this is that both APO and BOT can only be "win more" tools, and they may actually hurt Intel until they get back to "native" performance parity with AMD. If a gamer sees a bigger Intel bar with BOT enabled that only matches AMD's, they will just focus on the performance gap that remains when vendor optimization is not available.

In other words, we'll see people saying "buy ADL+ and you'll get X3D performance with Intel optimization enabled" and average gamers will hear "you want X3D to have optimization everywhere".

That being said, I'm open to seeing how all of this plays out in the long run. Can I get my "game" called Firefox optimized too? :smilingimp:
 
I guess in a way Microsoft is already doing dynamic code optimization when emulating x64 (still exclusive to Qualcomm at this point), so why not.
Apple has been doing that in production since the PowerPC to x86 transition. And even before that, Digital did it to run x86 Windows apps on Alpha (look up FX!32).
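For flavor, the core trick shared by all these translators can be sketched in a few lines. This is purely a toy model (real systems like FX!32 and Rosetta emit actual native machine code and handle self-modifying code, exceptions, and much more): translate each guest code block once, cache the result, and run the cached version from then on.

```python
# Toy dynamic binary translation loop. Guest "instructions" are tuples and
# translated blocks are cached Python closures; real translators emit
# native machine code instead. Illustrative only.

GUEST_PROGRAM = {
    0: [("add", 5), ("add", 7), ("jmp", 1)],   # guest block at address 0
    1: [("mul", 2), ("halt", None)],           # guest block at address 1
}

def translate(block):
    """'Compile' one guest block into a host closure, done only once."""
    def run(acc):
        for op, arg in block:
            if op == "add":
                acc += arg
            elif op == "mul":
                acc *= arg
            elif op == "jmp":
                return acc, arg        # continue at guest address `arg`
            elif op == "halt":
                return acc, None       # stop execution
        return acc, None
    return run

def execute(entry=0):
    cache = {}                         # translation cache: addr -> closure
    acc, pc = 0, entry
    while pc is not None:
        if pc not in cache:            # translate each block only once
            cache[pc] = translate(GUEST_PROGRAM[pc])
        acc, pc = cache[pc](acc)       # hot blocks run from the cache
    return acc

print(execute())  # (5 + 7) * 2 = 24
```

The translation cache is the whole point: the cost of translating a block is paid once, so code that loops (i.e. most code) runs at near-translated speed rather than interpreter speed.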
 
It's akin to letting Intel get away with bogus SPEC-specific optimizations in their compiler. But gamers are a peculiarly self-sabotaging bunch (RIP Radeon) so why not let it descend into AMD and Intel both "working with" (paying) different game developers to distribute optimized versions of their games to win in some benchmarks.

I guess in a way Microsoft is already doing dynamic code optimization when emulating x64 (still exclusive for Qualcomm at this point) so why not.
At first I was "why would you not allow this?" but I guess it may be a legit position, to not use it for regular review benchmarking.

But the recompilation or binary replacement (which could even be done in cooperation with devs) as a general usage concept, I don't think it's a bad thing. GPU drivers already do ridiculous things, so why not allow it for apps. There should probably be some distinction between content-consumption and critical apps, so that, say, a banking app is not hijacked; that's too sensitive. But developing a framework with MS to do this with games? Sounds like fair game.
 
But the recompilation or binary replacement (which could even be done in cooperation with devs) as a general usage concept, I don't think it's a bad thing.
Yet it introduces yet another compiler (with all its bugs) into the loop.
Would you accept GB6 scores as valid if Intel did dynamic binary substitution? It's just silly. They're playing very stupid games now.
 
Yet it introduces yet another compiler (with all its bugs) into the loop.
Would you accept GB6 scores as valid if Intel did dynamic binary substitution? It's just silly. They're playing very stupid games now.
GB6 is a benchmark, so I'd say no.
But games and apps are another story. I've heard horror stories from game devs that convinced me their releases were just junk from an optimization point of view: they don't use aggressive optimization during development because it lengthens build times too much, so when it's time to release they often prefer to ship the binaries they've been working with for months, since it's safer (which ironically brings us back to your original point of introducing bugs with another layer).
 
Yet it introduces yet another compiler (with all its bugs) into the loop.
Would you accept GB6 scores as valid if Intel did dynamic binary substitution? It's just silly. They're playing very stupid games now.
I would not, but I would like to see the score lol
 
GB6 is a benchmark, so I'd say no.

I guess I would be interested in seeing what the app can do with performance, though (assuming it would be a tool you could take and apply to any exe you find). Were there any binary optimiser tools before, for x86 code?

But games and apps are another story. I've heard horror stories from game devs that convinced me their releases were just junk from an optimization point of view: they don't use aggressive optimization during development because it lengthens build times too much, so when it's time to release they often prefer to ship the binaries they've been working with for months, since it's safer (which ironically brings us back to your original point of introducing bugs with another layer).
 