I'm not sure any company now gives any detail about branch prediction beyond vague terms such as TAGE. I don't think even BTB sizes are known (but I might be wrong on this). This has become so tricky that it looks like black magic with a lot of secret sauce, some covered by patents, some not. Giving too many details might give away a competitive advantage and open the door to patent trolls.
Snap 8G4 is known to cost $250 - 300. So that is a healthy revenue increase for QCOM.
IBM is historically decent about this, at least - there was a whole paper at ISCA about z15's branch prediction structures, for instance, and IBM Systems Journal had an in-depth article about z13 uarch that covered branch prediction as well.
There could be another way to improve branch prediction: store branch prediction information provided by the developer. The developer runs a tool from the CPU manufacturer through common use cases with their software. Once enough data is collected, it can be "compressed" into a sort of branch prediction "fingerprint" of that particular executable, so instead of using a general branch predictor, the CPU can look up the developer-populated custom BTB structure rather than relying on a one-size-fits-all approach. When the developer doesn't populate the custom BTB structure, the CPU goes with the general BTB. The developer-populated BTB can also be much smaller.
As to how the developer will tell the OS to use the custom BTB, it will require some OS-level change: the OS learns before starting execution (maybe via some flag in the EXE header) that the executable has its own "hints", then loads them from the "BTB hints" portion of the EXE into the CPU's custom BTB structure, which will hopefully yield a higher hit rate.
Sorry if I wasn't able to explain that too well.
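To make the shape of the scheme concrete, here is a minimal C sketch. Everything in it is hypothetical: the header flag, the record layout, and the cpu_load_custom_btb() interface are invented for illustration, not any real executable format or CPU feature.

```c
/* Hypothetical sketch of developer-supplied BTB hints. */
#include <stdint.h>

#define EXE_FLAG_HAS_BTB_HINTS 0x0001u  /* hypothetical EXE header flag */

/* One profiled branch: where it is, where it usually goes, and how
 * strongly it was biased across the profiled use cases. */
typedef struct {
    uint64_t branch_pc;      /* address of the branch instruction */
    uint64_t likely_target;  /* most common target seen while profiling */
    uint8_t  taken_bias;     /* 0..255 = fraction of times taken */
} btb_hint_t;

/* The "BTB hints" section the vendor tool embeds in the executable. */
typedef struct {
    uint32_t   count;
    btb_hint_t hints[];      /* sorted by branch_pc for fast lookup */
} btb_hint_section_t;

/* Stand-in for whatever system-register interface such hardware
 * would expose; purely hypothetical stub here. */
static void cpu_load_custom_btb(const btb_hint_t *hints, uint32_t count)
{
    (void)hints;
    (void)count;  /* real hardware would be programmed here */
}

/* Loader step: if the header flag is set, hand the hints to the CPU
 * before the process starts; otherwise the CPU falls back to its
 * general-purpose BTB, exactly as today. */
void load_process_btb_hints(uint16_t exe_flags, const btb_hint_section_t *sec)
{
    if ((exe_flags & EXE_FLAG_HAS_BTB_HINTS) && sec != 0)
        cpu_load_custom_btb(sec->hints, sec->count);
}
```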
Arm already defines a hint instruction along these lines, BC.cond: "Branch Consistent conditionally to a label at a PC-relative offset, with a hint that this branch will behave very consistently and is very unlikely to change direction."
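On the software side, a coarse version of this already exists today: GCC and Clang let developers encode branch bias at compile time with __builtin_expect, so the compiler can lay out the hot path contiguously (and pick hinted branch instructions on targets that have them). A small example; likely/unlikely are just the conventional wrapper macros:

```c
#include <stddef.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int process(const unsigned char *buf, size_t len)
{
    if (unlikely(buf == NULL))  /* error path: almost never taken */
        return -1;

    int sum = 0;
    for (size_t i = 0; i < len; i++)  /* hot path */
        sum += buf[i];
    return sum;
}
```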
That’s not the point. Multi-day battery life is a good thing.
The X Elite and X Plus chips, used for Windows on ARM (WOA), will reach about 2 million unit shipments in 2024, with expected year-on-year growth of at least 100–200% in 2025.

Sounds good.
The X Elite and X Plus will have modified versions in 2025, with a reduction in end product prices.

What kind of modification can we expect? Also, when is X Elite G2/Oryon V2 landing then?

Additionally, Qualcomm plans to launch a low-cost WOA processor codenamed Canim for mainstream models (priced between $599 and $799) in 4Q25. This low-cost chip, manufactured on TSMC’s N4 node, will retain the same AI processing power as the X Elite and X Plus (40 TOPS).

Sounds good.
Having better chip efficiency will also prolong the battery lifespan (fewer recharge cycles).

Why?
I can only assume you feel having "too much" battery life at first is good because when it is degraded after a few years it'll still provide one day of battery life? Because otherwise I see no point in having a battery that still has 60% charge left when you've used it all day.

I wonder if we'd see smaller battery sizes if batteries could be made so that degradation happened 10x slower than it does now? I'm sure part of the battery size in devices limited by expected battery life (i.e. phones and laptops like the MBA, as opposed to laptops like the MBP that have batteries just under the FAA size limit) is there to account for the expected decline in battery capacity over time.
Oryon also has hardware accommodations for x86’s unique memory store architecture – something that’s widely considered to be one of Apple’s key advancements in achieving high x86 emulation performance on their own silicon.
Qualcomm has implemented hardware acceleration for x86 emulation?
Anandtech calling the x86 memory model a "unique memory store architecture" is really weird.

Good to see, though. Should make life easier for the WoA emulation folks.

Weird as in, “what does he mean, so many other CPU designs have the same architecture”, or as in, “x86 so dominates the industry, even things only they implement should be referenced as typical”?
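For anyone wondering why the memory model matters so much for emulation: x86 guarantees total store ordering (TSO), meaning stores become visible to other cores in program order, while Arm's default model is weaker, so a translator without hardware TSO support has to strengthen every guest store. A minimal C11 sketch of the classic message-passing pattern that breaks without this; variable names are illustrative:

```c
#include <stdatomic.h>

atomic_int data;
atomic_int ready;

void producer(void)
{
    /* On native x86, two plain stores already reach other cores in
     * this order. To reproduce that on standard Arm, a translator
     * must emit release stores (STLR) or barriers for every guest
     * store, a per-store cost that a hardware TSO mode avoids. */
    atomic_store_explicit(&data, 42, memory_order_release);
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consumer(void)
{
    /* Wait for the flag, then read the payload; under TSO (or with
     * these acquire/release annotations) the consumer can never see
     * ready == 1 but stale data. */
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;
    return atomic_load_explicit(&data, memory_order_relaxed);
}
```

A hardware TSO mode, which is what Apple shipped and what the article says Oryon accommodates, makes the plain guest stores safe, so the emulator doesn't pay that per-store penalty.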
I can only assume you feel having "too much" battery life at first is good because when it is degraded after a few years it'll still provide one day of battery life?

Yes! You get it!
Interestingly, the Snapdragon X’s 4 core cluster configuration is not even as big as an Oryon CPU cluster can go. According to Qualcomm’s engineers, the cluster design actually has all the accommodations and bandwidth to handle an 8 core design, no doubt harking back to its roots as a server processor. In the case of a consumer processor, multiple smaller clusters offer more granularity for power management and serve as a better fundamental building block for making lower-end chips (e.g. Snapdragon mobile SoCs), but they come with a trade-off: slower core-to-core communication when cores are in separate clusters (and thus have to go over the bus interface unit to reach another core). It’s a small but notable distinction, since both Intel and AMD’s current designs place 6 to 8 CPU cores inside the same cluster/CCX/ring.

4P vs 8P cluster, which is better?
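The cross-cluster penalty the article describes is easy to observe with a cache-line ping-pong microbenchmark: two threads bounce a flag back and forth, and the round trip gets noticeably slower once the threads sit in different clusters. A rough Linux/C11 sketch; the two core IDs are assumptions you'd adjust for the actual topology:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000
static atomic_int flag;  /* the contended cache line */

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    sched_setaffinity(0, sizeof(set), &set);  /* pin calling thread */
}

static void *ponger(void *arg) {
    pin_to_core(*(int *)arg);
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load(&flag) != 1) ;  /* wait for ping */
        atomic_store(&flag, 0);            /* pong */
    }
    return NULL;
}

int main(void) {
    int core_a = 0, core_b = 4;  /* same vs. different cluster: adjust */
    pthread_t t;
    pthread_create(&t, NULL, ponger, &core_b);
    pin_to_core(core_a);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        atomic_store(&flag, 1);            /* ping */
        while (atomic_load(&flag) != 0) ;  /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("round trip: %.0f ns\n", ns / ITERS);
    return 0;
}
```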
4P cluster
• Better power efficiency

4 cores in 2024? No thanks. Fewer cores would get overwhelmed more easily by heavily threaded applications or heavy multitasking, so I don't see the point unless you just want to do light tasks. On a tablet or maybe a netbook such a cluster might be fine, but in a work laptop? Get the hell out of my work laptop!
If the battery is not easily replaceable, I would prefer that it last longer, and one way to make it last longer is to charge it LESS, which a bigger battery enables you to do.

Plus, if you were to do an experiment where you offer people two laptops for the same price, one with a smaller battery and one with a larger battery, people would overwhelmingly choose the one with the larger battery despite the extra weight, because if there is no difference in price, why go for less?

It's Apple that started this trend of "small battery is good" with the MacBook Air, because for some weird reason balancing a stupid laptop on the palm of our hand somehow makes us wizards of computing. I was fine when phones were fat with replaceable batteries. Now I worry that the phone I use daily and don't want to get rid of will some day start having battery issues, and then I will have to take it to some mobile repair shop for "open surgery", and who knows if the phone will work the same after that. I much preferred replacing the battery myself.
It isn’t a weird reason, it is a straightforward one. It makes sense to be willing to endure the greater effort and reduced flexibility of thicker, bulkier, heavier devices. I’ve done it myself and probably will again.
But it always has been, and will always be a trade-off, never a feature.