Indeed, so far it's only been used for post-processing. Why is this? Is it simpler and less time-consuming to implement these effects in DC than with traditional methods (like an in-house shader algorithm, I assume)?
I think it's a case of low-hanging fruit.
Post-processing is usually little more than a simple filter. Games were already using conventional shaders for HDR bloom, motion blur, depth-of-field and such... In fact, some of these effects can even be implemented without shaders at all.
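To give an idea of how simple such a filter is, here's a minimal sketch of a bright-pass (the first stage of an HDR bloom) written as a CUDA kernel purely for illustration. In a real engine this would typically be a pixel shader; the kernel name, threshold parameter and image layout (one float4 per pixel, row-major) are assumptions made for this example, not anyone's actual implementation.

```cuda
#include <cuda_runtime.h>

// Keep only the pixels bright enough to contribute to the bloom.
// Each thread processes exactly one pixel.
__global__ void brightPass(const float4* input, float4* output,
                           int width, int height, float threshold)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    int idx = y * width + x;
    float4 c = input[idx];

    // Simple luminance estimate (Rec. 709 weights).
    float luma = 0.2126f * c.x + 0.7152f * c.y + 0.0722f * c.z;
    float keep = (luma > threshold) ? 1.0f : 0.0f;

    output[idx] = make_float4(c.x * keep, c.y * keep, c.z * keep, c.w);
}
```

The bloom itself would then just blur this output and add it back onto the frame, which is why these effects were within reach even before compute APIs existed.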
But with GPGPU instead of conventional graphics shaders, you have a bit more flexibility in memory addressing and such. So you can implement more advanced filters, or implement the same filters in fewer render passes, which improves performance. A rough sketch of that idea follows below.
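As an illustration of that flexibility, here's a small box blur written as a CUDA kernel that stages a tile of the image in on-chip shared memory, so each pixel is fetched from global memory only once instead of once per filter tap. A conventional pixel shader has no shared memory, which is one reason wide filters are usually split into separate horizontal and vertical passes there. The tile size, radius and image layout are assumptions for this sketch.

```cuda
#include <cuda_runtime.h>

#define RADIUS 2
#define TILE   16   // launch with a 16x16 thread block

__global__ void boxBlurShared(const float* input, float* output,
                              int width, int height)
{
    // Tile plus a RADIUS-wide halo on every side.
    __shared__ float tile[TILE + 2 * RADIUS][TILE + 2 * RADIUS];

    int gx = blockIdx.x * TILE + threadIdx.x;   // pixel this thread writes
    int gy = blockIdx.y * TILE + threadIdx.y;

    // Cooperatively load the tile and its halo, clamping at the image edges.
    for (int ty = threadIdx.y; ty < TILE + 2 * RADIUS; ty += TILE) {
        for (int tx = threadIdx.x; tx < TILE + 2 * RADIUS; tx += TILE) {
            int sx = (int)(blockIdx.x * TILE) + tx - RADIUS;
            int sy = (int)(blockIdx.y * TILE) + ty - RADIUS;
            sx = min(max(sx, 0), width  - 1);
            sy = min(max(sy, 0), height - 1);
            tile[ty][tx] = input[sy * width + sx];
        }
    }
    __syncthreads();

    if (gx >= width || gy >= height)
        return;

    // Average the (2*RADIUS+1)^2 neighbourhood entirely out of shared memory.
    float sum = 0.0f;
    for (int dy = -RADIUS; dy <= RADIUS; ++dy)
        for (int dx = -RADIUS; dx <= RADIUS; ++dx)
            sum += tile[threadIdx.y + RADIUS + dy][threadIdx.x + RADIUS + dx];

    float taps = (2 * RADIUS + 1) * (2 * RADIUS + 1);
    output[gy * width + gx] = sum / taps;
}
```

The point isn't this particular filter, it's that scatter/gather addressing and shared memory let you collapse work that would otherwise need multiple render passes and extra render targets.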
Physics is much more difficult to do. Most game developers don't even write their own physics code; they use third-party libraries instead (Havok, PhysX). id Software and Crytek are the exceptions to the rule, but neither has announced support for GPU acceleration so far. So I don't think there's much of a chance that game developers will write their own physics code for DirectCompute or OpenCL either.
And since Havok is owned by Intel and PhysX by nVidia, I don't see DirectCompute or OpenCL support coming from that direction either. Neither company has any reason to support it. Intel doesn't have any GPGPU hardware yet, and nVidia already has support through Cuda.