It may sound silly at first, but it is not that far-fetched. After all, both light and sound are waves. Like light, sound can reflect and refract, so it is conceivable that many of the lighting models developed for the VPU could be applied to an APU, and vice versa.
In such an implementation, 3D audio positioning could work like this: the vertex shader does the geometry processing and sound-map calculations, the rasterizer converts sound waves into a 'raster' sound format, and the pixel shaders mix sound effects into the different sound samples, much as texturing does for pixels. Finally, the 'display controller' mixes all these sound channels down to 2-8 'pixels' (speakers) at 16/32-bit resolution with a 44.1-192 kHz 'refresh' (sampling) rate. Best of all, geometry data and even some shader programs could be shared, increasing system efficiency.
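To make the analogy concrete, here is a minimal CPU-side sketch of the 'pixel shader' mixing stage described above. Everything in it is hypothetical and chosen purely for illustration: the channel counts, the gain table (standing in for the 'sound map' the vertex-shader stage would compute), and the function name mix. On a real VPU the inner loop would run as a shader program over the sample buffer rather than as C on the host.

```c
#include <stddef.h>
#include <stdio.h>

#define SPEAKERS 2   /* 2-8 output 'pixels' (speakers) in the analogy   */
#define SOURCES  3   /* hypothetical number of positioned sound sources */
#define FRAMES   4   /* samples per channel in this toy buffer          */

/* Per-source gain for each speaker, e.g. derived from 3D position.
   This stands in for the 'sound map' the geometry stage would produce. */
static const float gain[SOURCES][SPEAKERS] = {
    {0.8f, 0.2f},   /* source 0 panned left  */
    {0.2f, 0.8f},   /* source 1 panned right */
    {0.5f, 0.5f},   /* source 2 centered     */
};

/* 'Pixel shader' stage: blend every source into every speaker,
   one sample ('pixel') at a time, much like multi-texturing. */
static void mix(const float src[SOURCES][FRAMES],
                float out[SPEAKERS][FRAMES])
{
    for (size_t f = 0; f < FRAMES; ++f)
        for (size_t s = 0; s < SPEAKERS; ++s) {
            float acc = 0.0f;
            for (size_t i = 0; i < SOURCES; ++i)
                acc += gain[i][s] * src[i][f];
            out[s][f] = acc;  /* one 'display controller' output channel */
        }
}

int main(void)
{
    const float src[SOURCES][FRAMES] = {
        { 1.0f, 0.5f, -0.5f, -1.0f},
        { 0.3f, 0.3f,  0.3f,  0.3f},
        {-0.2f, 0.2f, -0.2f,  0.2f},
    };
    float out[SPEAKERS][FRAMES];

    mix(src, out);
    for (size_t s = 0; s < SPEAKERS; ++s) {
        for (size_t f = 0; f < FRAMES; ++f)
            printf("%6.2f ", out[s][f]);
        printf("\n");
    }
    return 0;
}
```

The point of the sketch is that the per-sample accumulate-and-blend loop is structurally identical to blending texels into a pixel, which is exactly the workload pixel shaders are built for.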
As for which VPU would have that kind of power, one possible candidate is NV40. Since NVIDIA has increased the vertex and pixel shader capabilities of NV40, it may have just enough processing power to do this. After all, the bandwidth cost of processing audio is minuscule compared to its video counterpart.
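To put a rough number on that claim, here is a quick back-of-the-envelope calculation. The audio figures (8 channels, 32-bit samples, 192 kHz) are the upper end of what was described above; the 1600x1200, 32-bit, 60 Hz framebuffer used for comparison is an assumption picked as a typical high-end display of the period.

```c
#include <stdio.h>

int main(void)
{
    /* Audio worst case from the figures above:
       8 speaker channels * 4 bytes/sample * 192000 samples/s */
    double audio_bps = 8.0 * 4.0 * 192000.0;

    /* Video comparison point (assumed, for illustration):
       1600x1200 pixels * 4 bytes/pixel * 60 frames/s */
    double video_bps = 1600.0 * 1200.0 * 4.0 * 60.0;

    printf("audio: %6.1f MB/s\n", audio_bps / 1e6);  /* ~  6.1 MB/s */
    printf("video: %6.1f MB/s\n", video_bps / 1e6);  /* ~460.8 MB/s */
    return 0;
}
```

Even in this worst case, the audio stream is roughly two orders of magnitude smaller than the video framebuffer traffic alone, before textures and geometry are even counted.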