NVIDIA will definitely have trouble competing if on-die GPUs remove the need for a low-to-midrange standalone graphics card. That's why they're pushing Fermi as a GPGPU for gaming and scientific research. I'm not sure how big that niche market is, but it might pay off for them as the software tools for GPGPU mature.
nVidia's best hope, IMO, of making GPGPU work is to get AMD/Intel/etc. to improve their OpenCL drivers. CUDA was first, but it's nVidia-only, which puts developers off using it. If they encourage the use and advancement of OpenCL, it would be better for them in the long run. It could make adding a GPGPU card a genuine performance boost, which would save them from getting kicked out of the mid-range market (the low end might still give them the boot).
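Just to show what the vendor-neutral path looks like, here's a rough sketch of mine (not from anyone's shipping code) of a trivial OpenCL kernel plus the host boilerplate; error checking is stripped to keep it short, but the same source should build against NVIDIA, AMD, or Intel drivers alike:

```c
/* Minimal OpenCL sketch: the host only talks to the vendor-neutral API,
 * so the same kernel source gets compiled by whichever driver is installed.
 * Error checking omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v, float s) { "
    "    int i = get_global_id(0); "
    "    v[i] = v[i] * s; "
    "}";

int main(void)
{
    float data[256];
    for (int i = 0; i < 256; i++) data[i] = (float)i;

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);               /* first vendor found */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL); /* vendor driver compiles it */
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float s = 2.0f;
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &s);

    size_t global = 256;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]);                /* expect 20.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
```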
I can't think of a reason why PCI-e shouldn't replace PCI altogether, since it's a faster and more flexible interface. PCI might stick around for a while, though, just like PS/2 ports did alongside USB.
Honestly, the reason PCI slots have stuck around for as long as they have is that they're relatively easy to write drivers and programs for. PCI is a parallel connection, which means interfacing with it is as easy as saying "in portnum, inVar" and "out portnum, output". PCI-E, on the other hand, uses serial connections, which in and of itself makes it harder to work with. Sixteen serial lanes all transferring data at the same time is a lot to manage efficiently.
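To illustrate the in/out point, here's a minimal sketch (Linux on x86, needs root for iopl(); the bus/slot numbers are made-up examples) of reading a device's vendor and device IDs through the legacy PCI configuration ports. The whole transaction really is just one out and one in:

```c
/* Sketch of direct port I/O against PCI config space via the legacy
 * 0xCF8/0xCFC ports. Linux/x86 only; bus/slot values are placeholders. */
#include <stdio.h>
#include <sys/io.h>

int main(void)
{
    if (iopl(3) != 0) {                 /* raise I/O privilege so in/out work */
        perror("iopl");
        return 1;
    }

    unsigned bus = 0, slot = 2, func = 0, reg = 0;      /* example device */
    unsigned addr = 0x80000000u | (bus << 16) | (slot << 11)
                  | (func << 8) | (reg & 0xFC);

    outl(addr, 0xCF8);                  /* "out portnum, output" */
    unsigned id = inl(0xCFC);           /* "in  portnum, inVar"  */

    printf("vendor 0x%04x, device 0x%04x\n", id & 0xFFFF, id >> 16);
    return 0;
}
```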
There are similar reasons why PS/2 stuck around for so long: USB is a NIGHTMARE to write drivers and build hardware for. It's a wonder any mice and keyboards actually moved from PS/2 to USB, to be honest.
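For contrast, here's a rough libusb-1.0 sketch of the hoops user-space code jumps through just to read a single HID report from a USB mouse or keyboard. The vendor/product IDs and endpoint address are placeholders for illustration, not a specific product:

```c
/* Rough sketch of "just read the device" over USB with libusb-1.0:
 * find it, detach the kernel driver, claim an interface, poll an
 * interrupt endpoint. Compare with PS/2, which is two fixed I/O ports. */
#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_init(&ctx);

    /* Placeholder vendor/product IDs, for illustration only. */
    libusb_device_handle *h = libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
    if (!h) { fprintf(stderr, "device not found\n"); libusb_exit(ctx); return 1; }

    libusb_detach_kernel_driver(h, 0);  /* take interface 0 from the OS driver */
    libusb_claim_interface(h, 0);

    unsigned char report[8];
    int got = 0;
    /* Endpoint 0x81 (interrupt IN) is typical for HID boot keyboards/mice. */
    if (libusb_interrupt_transfer(h, 0x81, report, sizeof(report), &got, 5000) == 0)
        printf("read %d-byte HID report\n", got);

    libusb_release_interface(h, 0);
    libusb_attach_kernel_driver(h, 0);
    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}
```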
That's not to say PCI-E couldn't replace PCI in the future; I'm just trying to give the reasons PCI has stuck around for as long as it has (and may persist further into the future).