Not now, no, because most of them learned. However, that is exactly what the prior console generation was, and the Toshiba/Sony part of it was allegedly going to revolutionize everything.
While Cell failed, current-gen console parts have essentially done the same thing all over again, only worse: instead of one "general purpose" core with several (6-8, depending on how many were fused off) SPEs, now we have 8 "general purpose" cores with, what, 1024 shader units? Those shader units can be used for computation just as easily as they can be used for graphics, too. Whatever lesson Cell was meant to teach the industry as a whole, the one that firms like AMD and Intel don't seem to have learned is "parallelism in consumer hardware is bad, mmmkay".
That being said, those parts aren't necessarily the defining hardware in what will be the next generation of processors. Cell was meant to be used in darn near everything, while those custom AMD parts are pretty niche.
What is Skylake going to do that's any different? Skylake is another fat core design, doing even more fat core things.
Skylake is going to have Gen9 graphics, which will be just another step in the evolution toward APU-like behavior by mainstream Intel products. Sure, it'll have "fat cores," for the same reason that Cell and AMD's custom Jaguar chip (among others) have them. The fact is that a Skylake owner who runs software capable of GPGPU on Gen8/Gen9 graphics will get much more out of their CPU than someone who does not... sort of like a Kaveri owner today.
What you will also see are more forum users on sites like this complaining that the IPC improvements for Skylake aren't so great. It remains to be seen what Intel can do to goose up "fat core" performance as they move forward. Judging by the early Broadwell results (yeah, I know, it's still early and that's a low-power part), I'm not holding my breath waiting for +20% IPC over Haswell or anything like that.
If you read it, it should be obvious: if it takes a great deal of effort to parallelize 10% of the work for some arbitrary speedup, you're still taking >90% of the original time, so why not speed everything up by a smaller amount instead of getting such a paltry speedup for one thing? And, if that one thing is common enough, then fully offload it to special hardware and be done, instead of getting weak CPU cores for the job.
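To put rough numbers on that argument, here's a quick Amdahl's law sketch (the 10% figure comes from the post above; the speedup factors are just illustrative picks of mine):

```python
# Amdahl's law: overall speedup when a fraction "p" of the runtime is
# parallelized/offloaded and that fraction alone runs "s" times faster.
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

if __name__ == "__main__":
    p = 0.10  # only 10% of the program benefits, per the argument above
    for s in (2, 8, 1024, float("inf")):
        print(f"that 10% running {s}x faster -> whole program {amdahl_speedup(p, s):.2f}x")
    # Even infinite acceleration of that 10% caps out around 1.11x overall,
    # because the untouched 90% still runs at its original speed.
```

Even an infinitely fast accelerator for that 10% only nets about a 1.11x overall gain, while a modest across-the-board improvement of, say, 15% beats it outright, which is exactly the "speed everything up by a smaller amount" point.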
See, there's the rub: "offloading it to special hardware" is exactly what AMD and (eventually) Intel are going to want people to do. It's not like they're asking people to use add-on cards exclusively here (in Intel's case, they only want/expect a very small niche to use their Phi products). Intel is spending perfectly good die real estate on Gen8/Gen9 iGPUs that are more than just an afterthought or a convenience to the user.
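For anyone who hasn't touched GPGPU on an iGPU, here's roughly what using that die real estate looks like today via OpenCL (my own toy example, not anything Intel prescribes; it assumes the pyopencl package and an OpenCL runtime for the graphics part are installed):

```python
# Minimal sketch of "offloading to the iGPU" through OpenCL, one common
# GPGPU route on Intel/AMD integrated graphics. Device selection is left
# to create_some_context(), so this is illustrative, not production code.
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

def vadd_on_gpu(a, b):
    ctx = cl.create_some_context()   # picks an available OpenCL device
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)
    prog = cl.Program(ctx, KERNEL).build()
    prog.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    return out

if __name__ == "__main__":
    a = np.random.rand(1 << 20).astype(np.float32)
    b = np.random.rand(1 << 20).astype(np.float32)
    assert np.allclose(vadd_on_gpu(a, b), a + b)
```

All that boilerplate for a simple vector add is precisely the friction that better compilers and tooling would have to hide before everyday coders bother, which is where the next point comes in.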
If you really want to know why Intel is going this route (aside from the possibility that they just want to irritate Torvalds), go ask them. I will tell you this much, though: if anyone can provide the necessary software/compiler tools to make GPGPU accessible to everyday coders, it'll be Intel. Not so sure that their drivers will be up to snuff, but the compilers? Yeah, Intel will do great there.