It's not the one-way ticket you think it is. Architecting for graphics correlates strongly with architecting for GPU compute. Graphics IS the stepping stone for GPU compute, since programmable shaders weren't originally designed for professional compute applications; they were designed for graphics operations. If Intel's discrete graphics can't show any promise of being performant in high-end graphics, then what do you think the prospects are of Intel succeeding in professional GPU compute, where the kernels are far more complex?
CUDA really wasn't originally designed for server workloads. Heck, you can't even run TensorFlow, PyTorch or any other machine learning framework on anything older than Kepler-based GPUs, and Kepler is the cornerstone of GPU compute. OpenCL is rubbish for the most part, since it's not even an equivalent to CUDA. OpenCL isn't even single source like CUDA! It's a totally different compute API, very much designed around the limitations of a graphics API. AMD's main compute APIs are now HIP and OpenMP, since they've clearly given up hope on OpenCL being truly competitive in the near future ...
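To make the single-source point concrete, here's a minimal sketch of my own (a generic saxpy, not taken from any vendor's docs): in CUDA the device kernel and the host code live in the same .cu file, nvcc compiles both together, and the launch is a language construct. The OpenCL equivalent would ship that kernel as a string of OpenCL C that the host builds at runtime via clCreateProgramWithSource and clBuildProgram, after manually setting up a platform, device, context and queue; that host-side plumbing is exactly the graphics-API heritage I'm complaining about.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Single source: the device kernel sits in the same file as the host
// code, and nvcc compiles both together.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Managed memory keeps the host-side plumbing to a minimum
    // (and, fittingly, requires a Kepler-or-newer GPU).
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // The launch is a language construct, not a runtime API call
    // against a kernel string compiled on the fly.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

For what it's worth, HIP deliberately copies this single-source model (hipcc accepts essentially the same kernel syntax), which is presumably a big part of why AMD pivoted to it.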
It has everything to do with their integrated graphics, because they have a really awful development environment. Almost no graphics developers put any effort into their integrated GPUs, and you can be sure no one is interested in their OpenCL implementation or their other compute solutions. They may as well start out focused on high-end graphics like the others did.

What would even be the point of introducing a high-end compute solution so early, when they still don't have a single-source compute API? Using your strategy might even harm Intel's potential growth early on, while they're building a developer community; for developers it's cheaper to replace hardware later on than to rewrite and maintain code. So do you truly think that Intel, even with all of its resources, is prepared to tackle high-end GPU compute from the beginning, alone? For reference, it's taken AMD nearly 3 years to get close to upstreaming their GPU compute acceleration support into the most popular machine learning frameworks, so can you imagine how long it would take Intel to do the same without community support, and while they still don't even have a single-source compute API?
It is when the discussion starts with Intel tossing out everything compute-related in order to compete as a discrete GPU.
Intel isn't going to design a new GPU from the ground up only to toss it almost immediately and build a whole new architecture that does include compute. Intel may understand that to succeed they have to give Nvidia in particular a hard time in the enthusiast market. But that won't come at the cost of having nothing for the market they really care about: the compute market. Personally I think it will be "discrete be damned", but I'll give them the benefit of the doubt here.

Intel also has to realize that slower and slower process development means they have to hit the market (servers) running. They aren't going to have a whole lot of opportunity to stay ahead (if they can get ahead at all) if they're stuck on a process for 4+ years. We're almost done with the ebb and flow of one company making a process change and taking the lead for a bit, then the other shrinking two years later and taking the lead back. AMD blew it last time. Whoever has the best architectures will keep a lead for a lot longer, and a lot more effort across the board will go into prepping for the next node shift.

I know people want Intel to come in and shake up gaming, and they may. But they aren't going to tie their hands behind their backs when what they really want is those high-margin $2k-$10k server compute card sales.
As far as the APIs are concerned: do we know when Intel is launching this hardware? What the specs are? What their software guys are doing? Have we never seen Intel put the cart before the horse?