Looks like many of them are tightly coupled to memory, so probably not possible. Maybe the encryption stuff, though?
Training, largely true, but inference is surprisingly dominated by CPUs. The GPU advantage diminishes under tight latency constraints or at small batch sizes (there isn't enough parallel work to keep the GPU busy), and CPUs are obviously more flexible if you're not running inference all the time.
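You can see this for yourself with a minimal sketch (assuming PyTorch; the toy MLP, batch sizes, and iteration count are arbitrary illustrations, not a rigorous benchmark) that times the same forward pass on CPU and GPU across batch sizes. At batch 1 the gap is usually far smaller than at batch 256:

```python
import time
import torch

# Toy model, purely for illustration.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).eval()

def time_inference(device, batch_size, iters=20):
    m = model.to(device)
    x = torch.randn(batch_size, 1024, device=device)
    with torch.no_grad():
        m(x)  # warm-up (triggers CUDA init, lazy allocations)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            m(x)
        if device == "cuda":
            torch.cuda.synchronize()  # wait for queued kernels to finish
    return (time.perf_counter() - start) / iters

for bs in (1, 8, 256):
    cpu_t = time_inference("cpu", bs)
    line = f"batch={bs:>4}  cpu={cpu_t * 1e3:7.2f} ms"
    if torch.cuda.is_available():
        gpu_t = time_inference("cuda", bs)
        line += f"  gpu={gpu_t * 1e3:7.2f} ms  speedup={cpu_t / gpu_t:4.1f}x"
    print(line)
```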
For training, GPUs are great if your model fits in VRAM (or is amenable to being streamed in), but for really large models you sometimes have to fall back to CPUs.
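The "streamed in" trick looks roughly like this. A hand-rolled sketch, assuming PyTorch, with made-up layer sizes: keep the weights in host RAM and move one layer at a time into VRAM for its forward pass. Real tooling (e.g. Hugging Face accelerate's `device_map="auto"` or DeepSpeed's ZeRO-Offload) does the same thing with prefetching and is what you'd actually use:

```python
import torch

# Weights live in host RAM; only one layer at a time occupies VRAM.
layers = [torch.nn.Linear(4096, 4096) for _ in range(8)]

@torch.no_grad()
def offloaded_forward(x, device):
    x = x.to(device)
    for layer in layers:
        layer.to(device)   # stream this layer's weights into VRAM
        x = layer(x)
        layer.to("cpu")    # evict it to make room for the next one
    return x

device = "cuda" if torch.cuda.is_available() else "cpu"
out = offloaded_forward(torch.randn(1, 4096), device)
```

The obvious cost is that every forward pass pays a full PCIe transfer of the weights, which is why this only makes sense when the model genuinely can't fit and why prefetching the next layer while the current one computes matters so much in the real implementations.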
Jim Keller remarked on this in a talk once. I've skipped to the relevant part, but the whole thing is worth a watch. The timestamp is 42:47, in case the media embed messes it up.
TLDW: He estimated at the time (about 3 years ago) that AI compute was something like 80% CPU, 20% GPU, 0% other, and said that if things moved quickly it would be something like 1/3 each within 5 years, but that things probably wouldn't move that quickly.