What would be something better? No real idea; I've just heard Linus Torvalds' opinion, and I also think that trying to make a CPU act like a GPU is probably a bad idea. It looks like Intel's kludge before they finally gave up and decided to design an actual GPU. Also, if they were going to support it, I would have expected it with the new architecture (Zen 3). I suppose it's plausible they are waiting for Zen 4 (5 nm and chip stacking to deliver more bandwidth).
So you basically just admitted to making up a soundbite and then rolled with it?
There would be some special programming required to make use of a GPU-style compute unit, but I have to wonder how much AVX-512 is actually just compiled from plain C++ via auto-vectorization. I would expect most AVX-512 use comes through hand-tuned low-level libraries, or HPC code that is likewise hand-tuned. Making use of a cache-coherent GPU compute unit wouldn't be much different as long as the necessary libraries are supplied. A cache-coherent chiplet GPU compute unit probably opens up other possibilities too: there's no unnecessary copying, and latency could be very low.
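To make the auto-vectorization vs. hand-tuning point concrete, here's a minimal sketch in C++ (the function names and flags are illustrative, not from any particular library): the first loop is the kind of thing GCC/Clang can auto-vectorize to AVX-512 when built with something like -O3 -march=skylake-avx512, while the second is what a hand-tuned library routine might look like using intrinsics.

```cpp
#include <immintrin.h>
#include <cstddef>

// Plain C++: compilers will typically auto-vectorize this loop with
// AVX-512 instructions when the target supports them.
void saxpy_auto(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Hand-tuned equivalent with AVX-512 intrinsics: 512 bits = 16 floats per
// iteration. Real library code would also worry about alignment, prefetch, etc.
void saxpy_avx512(float a, const float* x, float* y, std::size_t n) {
    __m512 va = _mm512_set1_ps(a);
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);
        __m512 vy = _mm512_loadu_ps(y + i);
        _mm512_storeu_ps(y + i, _mm512_fmadd_ps(va, vx, vy));
    }
    for (; i < n; ++i)  // scalar tail for leftover elements
        y[i] = a * x[i] + y[i];
}
```

The point being: if most AVX-512 use looks like the second version anyway, wrapping a cache-coherent GPU compute unit behind the same kind of library interface wouldn't be much extra burden on application code.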
Libraries are fine. What's not OK is needing to rely on another compiler for a really unstable platform. At least with AVX-512, you deal with only one compiler with minor microarchitecture-specific backends. With GPUs, you see new compilers sprouting up all the damn time because each new hardware generation is incompatible with the previous generation's ISA ...
Most people in the HPC sector who are in their right minds don't want to deal with GPUs, because it's a hopeless game of micro-optimizing for current hardware when new hardware will inevitably OBSOLETE all of their old code, since the hardware vendor keeps chasing higher efficiency with ever more incompatible ISAs ...
CUDA, a leading-edge GPU compute platform, requires massively more maintenance effort than x86 CPUs do. A GPU gets maybe 7 years of support (sometimes even less) before it becomes a total paperweight, with future releases of the compute platform deprecating it ...
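A sketch of what that lifecycle means in practice (assuming the CUDA runtime API; the cutoff value below is purely illustrative, not an official one): binaries built with a newer toolkit simply contain no code for architectures that toolkit no longer targets, so applications end up detecting and rejecting old GPUs at runtime.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaDeviceProp prop{};
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::fprintf(stderr, "no usable CUDA device\n");
        return 1;
    }
    // A fat binary only carries code for the architectures it was built for
    // (nvcc -gencode arch=compute_XX,code=sm_XX), so GPUs older than the
    // toolkit's supported range have to be rejected here.
    if (prop.major < 7) {  // illustrative cutoff; the real one depends on the toolkit
        std::fprintf(stderr, "compute capability %d.%d is no longer supported\n",
                     prop.major, prop.minor);
        return 1;
    }
    std::printf("running on %s (sm_%d%d)\n", prop.name, prop.major, prop.minor);
    return 0;
}
```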
Somehow programmers are supposed to believe that the future is: constantly rewriting tens of thousands of lines of code every year for optimal performance, a platform that deprecates their hardware after only a few years, vendors who discourage micro-optimizing for specific architectures, and then possibly buggy toolchains for the hardware that did get deprecated?
Sheesh, it's no wonder most programmers are cranky about GPUs, and why they'll always be a distant second in the HPC sector ...