Maybe it will go the other way. Perhaps we will all be running dGPUs with little CPUs inside. Isn't Nvidia heading that way?
I guess I just don't see any way around the laws of physics. Given a 300W power supply, sufficient cooling, and a given process technology, I just don't see IGPs catching up to the performance of dGPUs. Ever.
But perhaps Linus knows some clever hacks around the laws of physics. I'm sure that he's smarter than I am...
Intel has an ever-increasing manufacturing lead, and density is very important for GPUs. TSMC's 16nm won't offer any density improvement over 20nm, so after 20nm in 2015, dGPUs won't see another real node until ~2019. In the meantime, Intel is rapidly improving both density and performance per watt. I'm not sure how a 7nm IGP with germanium channels in 2018 will have any difficulty competing against a 20nm chip with FF+ transistors.
There is also price. Nvidia is a fabless company, so it pays TSMC's foundry tax. And there doesn't seem to be any improvement in price per transistor at 20nm or 16nm, and probably not at 10nm either, so the price of GPUs will keep increasing.
Meanwhile Intel, which doesn't have to pay a foundry tax, plans to scale density aggressively over the coming nodes, and at 7nm it will get a further reduction in cost per transistor from 450mm wafers.
Nvidia's and AMD's cost-per-transistor disadvantage could be as much as 4-10x from the lower density, 2x from the foundry tax, and 1.5x from the lack of 450mm wafers, for a grand total of at least a 12x higher price per transistor (16FF+ vs. 7nm in 2018). (On the density figure: 16FF+ is 6x less dense than 7nm, but if ARM's slide is accurate, its price per transistor would be below 28nm's, and 28nm is 13x less dense than 7nm; divide that by 1.5 to compensate for wafer costs roughly 1.5x higher than 20nm's, and you land in the 4-10x range.)
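As a sanity check on the arithmetic above, here is a minimal sketch that multiplies the three claimed factors together. All the numbers are the post's own rough estimates, not measured data:

```python
# Back-of-envelope check of the claimed cost-per-transistor gap
# (16FF+ vs. Intel 7nm in 2018). Factors are the post's estimates.
density_low, density_high = 4.0, 10.0  # density disadvantage range
foundry_tax = 2.0                      # foundry margin Intel avoids
wafers_450mm = 1.5                     # lack of 450mm wafers

low = density_low * foundry_tax * wafers_450mm
high = density_high * foundry_tax * wafers_450mm
print(f"combined disadvantage: {low:.0f}x to {high:.0f}x")  # 12x to 30x
```

Taking the low end of the density range gives the "at least 12x" figure; the high end would put the gap closer to 30x.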
Sure, Intel won't release a competitor to a GTX Titan, but it doesn't need to in order to heavily shrink the dGPU market.