
Integrating audio functions with VPU?

quanta

Member
It may sound silly at first, but it is not very far-fetched. After all, both light and sound are waves. Like light, sound can reflect and refract, so it is conceivable that many lighting models developed for the VPU could be applied to an APU, and vice versa.

In such an implementation, 3D audio positioning might use the vertex shader for geometry processing and sound-map calculations, the rasterizer would convert sound waves into a 'raster' sound format, and pixel shaders would mix sound effects into different sound samples, much as texturing does for pixels. Finally, the 'display controller' would mix all these sound channels into 2-8 'pixels' (speakers) at 16/32-bit resolution with a 44-192 kHz 'refresh' (sampling) rate. Best of all, geometry data and even some shader programs could be shared, increasing system efficiency.
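To make the analogy concrete, here is a minimal sketch in plain C of that final 'display controller' stage, assuming a fixed set of positioned channels with precomputed per-speaker gains; the names and the channel/speaker counts are hypothetical, not anything NV40 actually exposes. Each output sample is a weighted sum over all channels, the way a pixel is composited from layers.

/* Hypothetical sketch: mix NUM_CHANNELS positioned sound sources
 * ("layers") into NUM_SPEAKERS outputs ("pixels"). All names and
 * counts are illustrative, not a real driver or hardware API. */
#include <stddef.h>

#define NUM_CHANNELS 8   /* positioned sound sources */
#define NUM_SPEAKERS 2   /* output "pixels" (stereo here) */

/* One output sample per speaker: a weighted sum of all channels,
 * analogous to blending layers into a framebuffer pixel. */
void mix_frame(const float in[NUM_CHANNELS],
               const float gain[NUM_CHANNELS][NUM_SPEAKERS],
               float out[NUM_SPEAKERS])
{
    for (size_t s = 0; s < NUM_SPEAKERS; s++) {
        float acc = 0.0f;
        for (size_t c = 0; c < NUM_CHANNELS; c++)
            acc += in[c] * gain[c][s];   /* per-speaker "blend" */
        /* clamp to [-1, 1], like saturating a pixel value */
        out[s] = acc > 1.0f ? 1.0f : (acc < -1.0f ? -1.0f : acc);
    }
}

Called once per sample period, this loop would run at the 44.1-192 kHz 'refresh' rate rather than a display's 60-85 Hz, which is part of why the bandwidth cost stays small.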

As for which VPU will have that kind of power, one possible candidate is NV40. Since NVIDIA has increased the vertex and pixel shader features on NV40, it may have just enough processing power to do this. After all, the bandwidth cost of processing audio is minuscule compared to its video counterpart.
 
The problem is the wavelength: sound waves have typical sizes on the order of your head up to the size of a room.
I am quite sure most of these techniques use some type of "ray" model, which does not work for this reason.

An example: it is relatively easy to calculate how different parts of a room will be illuminated, but if you replace the lamp with a speaker and try to calculate the sound pressure level at various positions, it is not as easy.
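To put rough numbers on the wavelength point, using the standard speed of sound in air, c ≈ 343 m/s, and wavelength = c / frequency: at 1 kHz the wavelength is about 0.34 m (head-sized), and at 100 Hz it is about 3.4 m (room-sized). Visible light, by comparison, has a wavelength around 0.5 micrometres, vastly smaller than any scene feature, which is why ray/geometric models work for rendering while diffraction dominates for audible sound.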
 