@Exist50: "In a modern context, CPUs are more latency sensitive than GPUs. Moreover, there are marketing, political, and manufacturing problems with putting the memory controller on the GPU die."

It's precisely because of the iGPU's latency and bandwidth requirements that Arrandale/Clarkdale kept the GMCH (the graphics and memory controller hub) as a separate chiplet. If anything, modern iGPUs are even more sensitive, not less. Sure, you will lose some CPU performance compared to a monolithic setup, which is why chiplets are a compromise, not a magic bullet like some believe.
In terms of marketing, Intel sells its products primarily based on CPU benchmarks. Compromising the CPU to boost the GPU is thus a poor marketing tradeoff.
In terms of politics, the Core team is much more closely linked to the SoC's development than the GPU team is (both the Alder Lake and Core teams are part of IDC), so their desires will factor in first.
Finally, one of the primary benefits of chiplets would be the ability to swap out dies instead of needing to tape out new products, and the GPU would likely see the greatest variation. It's easy to imagine a "GT1" variant for desktop and low-end mobile, a mainstream "GT2", and a flagship/premium "GT3". The GT1 market would certainly not care about any latency benefit to the GPU if it meant taking away from the CPU.
"Let me propose a theory: what if, instead of describing the end-user configurations available, that slide described the dies they were going to tape out? An 8+8 for the high-end/benchmark winner, and a cheaper 6+0 config they could throw into a shit-ton of i5 OEM and gaming systems."

I agree with this too, but the leak showed 8+8+1 125W, 8+8+1 80W, and 6+0+1 80W.
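To make the dies-versus-SKUs distinction concrete, here is a toy sketch in Python. The die names and the assumption that both 80W parts share silicon with the 125W part are my own illustration, not something from the leak; only the three SKU configurations are taken from it.

```python
# Toy illustration: two hypothetical tape-outs covering the three leaked SKUs.
# Die names ("ADL-8+8+GT1", "ADL-6+0+GT1") are made up for this sketch.

DIES = {
    "ADL-8+8+GT1": {"p_cores": 8, "e_cores": 8, "gpu": "GT1"},
    "ADL-6+0+GT1": {"p_cores": 6, "e_cores": 0, "gpu": "GT1"},
}

# The three configurations from the leak, each mapped to an assumed source die.
LEAKED_SKUS = [
    {"config": "8+8+1", "tdp_w": 125, "die": "ADL-8+8+GT1"},  # high end / benchmark winner
    {"config": "8+8+1", "tdp_w": 80,  "die": "ADL-8+8+GT1"},  # same die, lower power target
    {"config": "6+0+1", "tdp_w": 80,  "die": "ADL-6+0+GT1"},  # cheap i5 OEM / gaming volume part
]

for sku in LEAKED_SKUS:
    die = DIES[sku["die"]]
    print(f'{sku["config"]} @ {sku["tdp_w"]}W -> {sku["die"]} '
          f'({die["p_cores"]}P + {die["e_cores"]}E, {die["gpu"]})')
```

The point of the sketch: three retail configurations do not require three tape-outs; two dies plus different power targets would cover the leaked list, which is consistent with reading the slide as a list of dies rather than end-user SKUs.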