It's hilarious that there was even a thought that Nvidia could somehow get a stranglehold over x86 vendors just by purchasing ARM Ltd ...
CUDA offloading on Arm was only introduced under a year ago with the CUDA 11 SDK, so the vast majority of CUDA customers still run their host code on x86 CPUs, and Nvidia would be screwing over many of their own customers if they tried to lock CUDA to their own Arm-based systems. If developers suddenly found they couldn't use CUDA on x86 systems, most would sooner drop CUDA entirely and stick to pure C++, optimizing their kernels with AVX or even AVX-512, than rewrite their host code to run on Arm. Some developers, if they're brave enough, would attempt a transition to other heterogeneous compute platforms like ROCm or oneAPI if they're looking for more performance ...
Most developers don't hand-optimize their CUDA kernels with low-level PTX assembly; they rely heavily on NVCC to deliver the big speedups that make porting their C/C++ kernels worthwhile in the first place. By comparison, far more code is optimized for x86 because its ISA is stable and far more ubiquitous ...
CUDA GPUs come with tons of undesirable limitations: an unstable ISA (PTX changes every generation!), an inability to run arbitrary C++ kernels, and no way to accelerate every parallel algorithm. x86 CPUs have none of these drawbacks, which makes it far easier for programmers to maintain long-term projects on them, and they're much more widespread ...