Personally, I can't wait until all the legacy cruft has been recompiled and x86 can be shown the door.
Theoretically, it would be appropriate to design a new ISA freed from the constraints and limitations of the '70s and '80s (CISC, cumbersome instruction encoding, few registers, in-order execution, legacy memory management and coherency models, etc.), perhaps with explicit management of ILP (à la EPIC).
I mean that, at least on paper, many of the fundamental ideas behind Merced/Itanium are certainly reasonable.
Today's technology could likely allow the development and proliferation of alternatives to the classic von Neumann architecture, such as Warren Abstract Machines, Lisp machines, or maybe Turing machines…
However, it's very complicated to develop and optimize compilers and development tools for new architectures, especially very complex ones...
Intel wanted to leave x86 behind years ago.
But Intel, above all, wanted to cut out the competition: when Merced was in gestation, besides AMD there were also Cyrix, IDT's Centaur Division, Rise, Transmeta...
But AMD came along with AMD64 and kept us captive, since Microsoft had zero interest in anything that wasn't x86 back then. So it was AMD and MS that held back innovation, not Intel as the EU stated.
AMD's Hammer was a good move in that scenario: a good fit for legacy performance (16/32-bit x86), modern computation (AMD64, later renamed x64), energy efficiency (Cool'n'Quiet, a sane max TDP), security (the NX bit), and topology (on-die northbridge, HyperTransport).
It would have been better if they had included more registers or refined other details, but in any case x64 ensured a smoother transition from 32-bit to 64-bit for servers, workstations, and PCs.