Again, it has more to do with how well they can get the nodes working (and to an extent how much money they can weasel out of the US/Europe) than anything special about being a foundry.
No, they MUST become a foundry if they want to keep their fabs, because they don't sell enough CPUs to stay competitive in the future. Every new process generation is more expensive to develop, the fabs are more expensive to build, and the equipment is more expensive to buy. This math is why the foundry industry has consolidated down to two players at the leading edge (three once Intel is in it for real).
You can only manage those ever escalating fixed costs if you run more wafers every year, and while the PC market was growing Intel did. Unfortunately the PC market isn't growing anymore, so if they stay in client/server CPUs those fixed costs become a larger and larger portion of their per wafer cost and thus per CPU cost. Meanwhile TSMC is making leading edge chips for the large majority of everyone else in the entire world and thus has a lot more wafers to spread out their roughly similar fixed costs.
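That amortization math can be made concrete with some made-up numbers (these are purely illustrative, not actual Intel or TSMC figures): the same fixed node cost spread over fewer wafers makes every wafer, and thus every CPU, more expensive.

```python
# Hypothetical illustration of the fixed-cost argument above.
# None of these dollar figures come from Intel or TSMC disclosures.

def cost_per_wafer(fixed_cost, variable_cost_per_wafer, wafers_per_year):
    """Amortize node R&D + fab buildout (fixed) over annual wafer volume."""
    return fixed_cost / wafers_per_year + variable_cost_per_wafer

fixed = 10_000_000_000   # assumed: process development + fab construction, $
variable = 5_000         # assumed: materials/labor per wafer, $

# Same fixed costs, different volumes: a CPU-only business in a flat
# PC market vs. a foundry running wafers for "everyone else".
low_volume = cost_per_wafer(fixed, variable, 500_000)      # $25,000/wafer
high_volume = cost_per_wafer(fixed, variable, 2_000_000)   # $10,000/wafer
print(low_volume, high_volume)
```

With four times the volume, the fixed-cost share per wafer drops from $20,000 to $2,500, which is the whole advantage TSMC's breadth of customers buys it.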
To run more wafers, Intel's first thought was to develop more "Intel inside" lines of business, i.e. move beyond client/server CPUs. They tried to enter the mobile SoC business; that flopped. They talked about "IoT", and people laughed at the idea anyone would want x86 for IoT. They've done NAND, then Optane, but those are commodity products that don't get the per-wafer pricing CPUs do. They bought an FPGA company, which did bump their wafer count and let them carve out a niche for FPGA capabilities in servers. Lately they've been dabbling in discrete GPUs (too soon to tell) and crypto (they got in right as the bubble burst).
In short, they've tried every strategy under the sun to push out more Intel-branded silicon, but nothing has worked, at least not enough to increase wafer runs to the degree they need. So they are forced into selling non-Intel-branded silicon, i.e. becoming a true foundry.