The technology is intriguing and has many applications, but consumer imaging probably isn't the best one, at least as far as it has been developed so far. The problem is that with the different layers of the chip being sensitive to different wavelengths, when it comes time to put the layers back together as an image, the bottom layer will have received less light than the layers above it. To overcome this you either have to boost that layer's gain, which adds chroma noise in that channel, or use software to bring its levels back up to match the top layer. Either way, the channels end up clipping at different points of under- and overexposure, and shadows and highlights take on color casts. The way they've chosen to deal with that so far has been to clip all the colors whenever any one of them clips, leaving overexposed areas in black and white. These seem to be fundamental flaws in the technology that can't be overcome with what we know today.
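Here's a rough numerical sketch of that argument. The transmission fractions, noise level, and gain model are made up for illustration and aren't real Foveon figures; it just shows why gain-matching a dimmer layer amplifies its noise, why the channels clip at different levels, and what the "clip everything together" workaround does to highlights.

```python
import numpy as np

rng = np.random.default_rng(0)

FULL_WELL = 1.0                                               # per-layer saturation level
TRANSMISSION = {"top": 1.00, "middle": 0.60, "bottom": 0.35}  # assumed light reaching each layer
READ_NOISE = 0.01                                             # per-layer read noise (assumed)

def capture(scene_level, n=1):
    """Simulate n exposures; return raw and gain-matched per-layer values."""
    raw, matched = {}, {}
    for layer, t in TRANSMISSION.items():
        signal = scene_level * t + rng.normal(0.0, READ_NOISE, n)
        raw[layer] = np.clip(signal, 0.0, FULL_WELL)   # each layer saturates on its own
        matched[layer] = raw[layer] / t                # gain needed to match the top layer
    return raw, matched

# Shadows: the bottom layer needs ~3x gain, so its noise comes up with it.
_, shadow = capture(0.05, n=10_000)
for layer, vals in shadow.items():
    print(f"{layer:>6}  mean={vals.mean():.3f}  noise={vals.std():.4f}")

# Highlights: the top layer saturates first, so the gain-matched channels
# disagree and blown areas pick up a color cast.
raw_hi, hi = capture(1.5)
print({layer: round(float(v[0]), 2) for layer, v in hi.items()})

# The workaround described above: clip every channel whenever any layer has
# clipped, which renders blown highlights neutral (black and white) instead.
if any(v[0] >= FULL_WELL for v in raw_hi.values()):
    hi = {layer: FULL_WELL for layer in hi}
print(hi)
```

Running it, the bottom channel's shadow noise is roughly three times the top channel's, and in the highlight case the three channels land at different values until the all-or-nothing clip flattens them to the same gray.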
Applications where true color isn't necessarily the goal might benefit greatly from Foveon though. Imagine a Foveon chip on the Hubble that could take photographs of stars and such in UV, IR, and various other non-visible bands, all simultaneously.
Something else to think about is that the eyes of living things have mosaic light-sensitive organs, like contemporary CCD and CMOS sensors, rather than layered ones like the Foveon.
I don't think that Foveon chips will replace contemporary CCD or CMOS sensors any time soon.