There is nothing technical about the chips that would prevent them from being used as system memory; all that is required is a memory controller. However, they aren't drop-in compatible with standard DDR, and they aren't optimized for the general-purpose workloads that main system memory handles.
Per Wikipedia, "This memory uses internal terminators, enabling it to better handle certain graphics demands. To improve bandwidth, GDDR3 memory transfers 4 bits of data per pin in 2 clock cycles."
It still runs at a higher voltage (2.5V) than DDR2 (1.8V, itself reduced from DDR1's 2.5V): the designers accepted a somewhat higher heat output for video memory in exchange for those extreme clock speeds.
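The reason voltage maps to heat: dynamic switching power scales roughly with the square of the supply voltage (and linearly with clock), so the relative cost of the higher rail is easy to estimate:

```python
# Dynamic (switching) power scales roughly as P ~ C * V^2 * f,
# so at the same clock, moving from a 1.8V rail to a 2.5V rail
# nearly doubles the switching power, and therefore the heat.
v_ddr2, v_gddr = 1.8, 2.5
print((v_gddr / v_ddr2) ** 2)   # -> ~1.93x the switching power
```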
A GPU must be designed to support the memory it's connected to. Current GPUs are designed for GDDR3, and some might also support plain DDR2 or even DDR1. Their memory controllers are integrated directly on the GPU.
A socketed GPU would need a memory controller designed to communicate over whatever bus the socket uses. For an AMD system that would be HyperTransport: the GPU would send memory access requests over HyperTransport to the system memory controller (integrated into the CPU), which would then perform the actual access to memory. This is what already happens when a PCI-Express or AGP video card needs to store data in main memory. The only big difference is that ATI would need to design a GPU with a HyperTransport interface rather than a PCI-Express or AGP one, though technically that could even be done with a bridge chip.
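To make that extra hop explicit, here is a toy sketch of the request path. Every class name and the overall shape are invented for illustration; real HyperTransport transactions are far more involved than this:

```python
# Toy model of the indirection a socketed GPU would go through.
# All names here are made up for illustration, not a real API.

class SystemMemoryController:
    """Lives in the CPU; the only component that touches DRAM."""
    def __init__(self, dram):
        self.dram = dram

    def read(self, addr):
        return self.dram[addr]

class HyperTransportLink:
    """The GPU's only path to the CPU's memory controller."""
    def __init__(self, controller):
        self.controller = controller

    def read_request(self, addr):
        # Every GPU memory access crosses this link first;
        # the CPU's controller then performs the real access.
        return self.controller.read(addr)

dram = {0x1000: b"texture data"}
link = HyperTransportLink(SystemMemoryController(dram))
print(link.read_request(0x1000))  # GPU-side view of a read
```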
With a socketed GPU there would be no need for GDDR in the system at all. However, such memory access would still be slower than dedicated video memory, just as main-memory access over PCIe or AGP is slower today.