Not anymore, at least in Microsoft's or Sony's case. They are determined to stay on AMD CPUs/GPUs for future console platforms. Maintaining backwards compatibility at the hardware level also benefits complex projects like AAA games, since publishers will be able to pull off faster release schedules for new hardware ...
We shall wait and see. If the difference in 5 years is anywhere near what Intel vs M1 currently is, it's all but guaranteed to change.
Past decisions have not been a very good indication of future ones for consoles.
I remember people being hostile on this very forum when I mentioned, sometime in 2013, that this generation we'd probably see something akin to the PS4 Pro, since it's x86 and the PS4 would be too weak in ~5 years (and much easier to upgrade). At least some posters got angry at such a "stupid idea": "everybody knows console perf is fixed for the entire generation" (it always had been!).
CPU emulation isn't going to be the main problem. It's the GPU emulation that's going to be the issue with modern consoles, depending on how many exotic hardware features games are going to be using ...
Not a problem if it uses AMD's GPU + ARM cores.
Well, first of all we aren't comparing a like-for-like scene/area, and second of all Metro Exodus is just one sample, so you're going to have to give more data than this. Lastly, the M1 also has a process advantage, so I wouldn't be surprised if the M1's GPU came out ahead down to that fact alone ...
You are grasping at straws here. No way it's the CPU:
1. It's running in x86 emulation mode, guaranteed to be a 20-30% hit at least.
2. It's running against Tiger Lake (which is faster in Geekbench ST than the M1 under Rosetta).
3. On top of that it's using MoltenVK to convert Vulkan calls to Metal, adding another layer of abstraction (and slowdowns).
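To make the stacking of those penalties concrete, here's a minimal sketch of how independent slowdowns compound. The 20-30% emulation hit is the figure from the post; the 10% translation-layer penalty is purely my placeholder assumption, not a measured MoltenVK number:

```python
# Hedged sketch: compounding the overheads described above.
# emulation_hit comes from the post's 20-30% claim (midpoint-ish);
# translation_hit is an illustrative placeholder, not a measurement.

def effective_perf(emulation_hit=0.25, translation_hit=0.10):
    """Multiply independent slowdowns to estimate remaining throughput."""
    return (1 - emulation_hit) * (1 - translation_hit)

print(round(effective_perf(), 3))  # 0.675 -> roughly two thirds of native speed
```

The point being: two modest-looking layers already leave only ~2/3 of native performance on the table before the GPU is even considered.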
And it's not because of different scenes either. I posted a 20-minute M1 gameplay video of the Moscow levels, with both indoor and outdoor scenes, and the framerate hovering around 30FPS near constantly. The framerate in these levels is quite stable (meaning the scene doesn't matter that much; I know, as I have played it). In at least two YouTube videos Xe is only able to hit 30FPS at 720p Low (and this is backed by notebookcheck), in scenes taking place in the first few levels.
And the process excuse is also getting tiresome. Apple's A13 was still in the same performance ballpark as the A14, and it was made on TSMC 7nm (perf was down ~20%, mostly due to clocks, which might not have been an issue in a laptop). And yes, Apple's GPU benefits more from 5nm, but Tiger Lake is pulling 28-50W in challenging games. The Air is on passive cooling and has at least 1/3 better performance despite emulation. That is way more than one shrink alone would deliver.
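The perf-per-watt gap implied above can be sketched with rough numbers. The 28-50W range for Tiger Lake and the "at least 1/3 better" figure are from the post; the ~10W draw I assign to the passively cooled Air is my own assumption for illustration only:

```python
# Hedged back-of-envelope: performance per watt under stated assumptions.
# tgl_watts = midpoint of the post's 28-50W range; m1_watts is my guess
# for a passively cooled MacBook Air, not a measured value.

def perf_per_watt(relative_perf, watts):
    return relative_perf / watts

tgl = perf_per_watt(1.00, 35)  # Tiger Lake baseline, mid-range wattage
m1  = perf_per_watt(1.33, 10)  # "at least 1/3 better" despite emulation

print(round(m1 / tgl, 1))  # ~4.7x better perf/watt under these assumptions
```

Even if the Air's real draw were double my guess, the ratio would still land well beyond what a single node shrink typically buys.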
As for licensing, there's a relatively simple legal fix. Why doesn't Apple broker another agreement with ARM Ltd so that it can share its CPU designs with other ARM vendors? A legal excuse is just an easy way out for Apple to not share its technology ...
And why should ARM be interested in changing the licence just to lose a huge chunk of their core licencing income (as Apple's cores are better)?
Even if they were, it would mean the licence would be that much more expensive for Apple, so why would they want it? Just to generate more work for themselves making their designs licencable and selling them?