That'll work great for compute tasks. The big question is how to do it for GFX without killing performance.
Interconnect issues are a problem in compute too; it's a huge deal in the HPC world, which is why so much effort goes into it there. Once you want thousands of GPUs, even a 1% loss in scaling efficiency compounds into a significant performance hit for the end system.
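A rough back-of-the-envelope sketch of that compounding effect (my own illustrative numbers, not measurements from any real system): suppose each doubling of GPU count costs 1% of parallel efficiency.

```python
# Back-of-the-envelope: compound effect of a small per-doubling scaling loss.
# Hypothetical assumption: 99% scaling efficiency retained each time the
# node count doubles (not a figure from any real machine).
eff_per_doubling = 0.99

for nodes in [2, 16, 256, 1024, 4096]:
    doublings = nodes.bit_length() - 1          # log2(nodes) for powers of two
    efficiency = eff_per_doubling ** doublings  # fraction of ideal speedup kept
    speedup = nodes * efficiency
    print(f"{nodes:5d} nodes: {efficiency:.1%} efficiency, "
          f"speedup {speedup:.0f} vs ideal {nodes}")
```

At a few thousand GPUs, that "only 1%" per doubling is already throwing away hundreds of GPUs' worth of compute, which is why HPC vendors obsess over interconnects.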
On the client side, games are moving away from CrossFire and SLI setups. Game developers can make multi-GPU work, but it's their choice to do so, and supporting it adds development time. It's the latency and bandwidth penalties introduced by an off-die solution that cause all the nasty micro-stutter and compatibility issues that neither gamers nor devs want to deal with.
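To see why added link latency shows up as micro-stutter rather than just lower FPS, here's a toy alternate-frame-rendering (AFR) simulation with made-up timings: both GPUs render equally fast, but every other frame pays an off-die transfer cost, so the frame-to-frame cadence becomes uneven even though average throughput is unchanged.

```python
# Toy AFR micro-stutter illustration (timings are invented, not measured).
# Two GPUs alternate frames; frames from the second GPU cross an off-die
# link, adding transfer latency before they can be presented.
render_ms = 10.0   # hypothetical per-frame render time on either GPU
link_ms = 4.0      # hypothetical extra latency for frames from the remote GPU

present_times = []
for frame in range(8):
    t = frame * render_ms          # AFR: frames start back-to-back
    t += render_ms                 # rendering completes
    if frame % 2 == 1:             # odd frames come from the remote GPU
        t += link_ms               # pay the off-die transfer cost
    present_times.append(t)

intervals = [b - a for a, b in zip(present_times, present_times[1:])]
print("frame-to-frame intervals (ms):", intervals)
# Alternates 14.0 / 6.0 ms instead of a smooth 10 ms cadence: the average
# frame rate looks fine, but the pacing is what you feel as micro-stutter.
```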
Nvidia's CUDA is the de facto standard in the GPU compute world. Software is the ultimate differentiator: once the momentum is there, it doesn't matter if your hardware is better. The software work has already covered all the little details (like dealing with scalability issues on a particular GPU architecture), and nobody wants to do it all over again for another vendor.
There's simply no guarantee Intel will even reach AMD's level of volume share in GPU compute. Even among x86 CPUs, software still needs optimization to extract maximum performance; never mind GPUs, where the architectures are completely different.
I think Intel has a better chance of succeeding in client dGPUs if they have a good product; at least that market already has a big user base. If the product scales well, they put serious effort into software development, and it delivers great perf/watt/$, people will go for it. It won't be an overnight success, things never are, but it can be enough to build the foundation for one.
This thread isn't about GPUs, though. We can move the discussion to a GPU thread.