I believe that the biggest battle going forward in the server space is going to be rack density and power. As more and more applications move to a thin-client model where, essentially, your entire application is hosted in a web browser, it matters less and less what architecture is living on the other end of that internet connection. It could be x86, it could be ARM, it could be RISC-V, it could be POWER. In almost every situation where a rack-dense server might exist to host them, it doesn't matter at all to the end user. What's going to matter to them? The cost! How much is that company going to have to pay, per worker, to provide the part or service that they are in the business of creating?
If ARM can prove that they can host more virtual servers, or Docker container hosting instances, or whatever other unit is relevant in that time frame, than x86 or any of the other platforms can, at a usable performance level and with a lower power draw per unit of performance, then they will get business. The better they are at that, the more business they will get. It's really that simple.
In most cases, it's nearly trivial to compile a given program for a different architecture on the back end, so long as that program isn't explicitly asking for very specific features (such as AVX-512). Yes, there are some highly specialized programs that need very particular things, but those are far rarer in practice than many might believe, and most can be rearchitected to run on any CPU platform their author chooses, just with differences in performance that may or may not be important. Not surprisingly, most of those applications have moved on to using add-in cards: compute ASICs, customized GPUs, etc. Again, with appropriate drivers, those cards can be installed in servers of almost any CPU architecture.
The only things holding back a wholesale shift to ARM platforms are purchase cycles and the investment it would take to provide software to monitor, manage, and deploy the bare metal, as well as any software work that will need to be done on the operating systems that will run on it. So, even assuming that the Neoverse V1 or N2 is the most amazing thing to ever grace a server rack, you're still looking at a year or more before production hits enough volume to even begin to make a dent in the installed base in server farms around the world. And that assumes that neither of the two leading x86 vendors does anything to, at the very least, keep the performance gap from getting large enough to make such a migration anywhere near cost effective.