Integrating audio functions with VPU?

quanta

Member
Dec 1, 1999
It may sound silly at first, but it is not very far-fetched. After all, both light and sound are waves. Like light, sound can reflect and refract, so it is conceivable that many lighting models developed for the VPU could be applied to an APU and vice versa.

In such an implementation, 3D audio positioning might work like this: the vertex shader does the geometry processing and sound-map calculations, the rasterizer converts sound waves into a 'raster' sound format, and the pixel shaders mix sound effects into the individual sound samples the way texturing does for pixels. Finally, the 'display controller' mixes all these sound channels down to 2-8 'pixels' (speakers) at 16/32-bit resolution with a 44-192 kHz 'refresh' (sampling) rate. Best of all, geometry data and even some shader programs could be shared, increasing system efficiency.
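
Just to make the mixing stage concrete, here is a very rough sketch written as a CUDA-style kernel. Treat it strictly as pseudocode for the concept (no driver exposes anything like this, and every structure, gain, and delay value below is invented): each thread plays the role of one pixel-shader invocation, computing one output sample of one speaker channel and blending in every active source the way multitexturing blends texture layers into a pixel.

// Sketch only: one thread = one output "pixel" (one sample of one
// speaker channel). All names and values are invented for illustration.
#include <cstdio>
#include <cuda_runtime.h>

#define NUM_SOURCES  4   // active 3D sources ("texture layers")
#define NUM_CHANNELS 2   // output speakers ("pixels")

struct SourceParams {
    float gain[NUM_CHANNELS];   // attenuation from the source's 3D position
    int   delay[NUM_CHANNELS];  // per-ear delay, in samples
};

__global__ void mixKernel(const float* src,        // [NUM_SOURCES][n], flattened
                          const SourceParams* p,
                          float* out,              // interleaved [n][NUM_CHANNELS]
                          int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // output sample index
    if (i >= n) return;
    for (int ch = 0; ch < NUM_CHANNELS; ++ch) {
        float acc = 0.0f;
        for (int s = 0; s < NUM_SOURCES; ++s) {     // blend every "layer"
            int j = i - p[s].delay[ch];             // delayed read position
            if (j >= 0) acc += p[s].gain[ch] * src[s * n + j];
        }
        out[i * NUM_CHANNELS + ch] = acc;
    }
}

int main() {
    const int n = 48000;                            // one second at 48 kHz
    float *src, *out;
    SourceParams *p;
    cudaMallocManaged(&src, NUM_SOURCES * n * sizeof(float));
    cudaMallocManaged(&out, n * NUM_CHANNELS * sizeof(float));
    cudaMallocManaged(&p, NUM_SOURCES * sizeof(SourceParams));
    for (int k = 0; k < NUM_SOURCES * n; ++k) src[k] = 0.0f;
    for (int s = 0; s < NUM_SOURCES; ++s) {
        src[s * n + 100] = 1.0f;                    // an impulse in each source
        for (int ch = 0; ch < NUM_CHANNELS; ++ch) {
            p[s].gain[ch]  = 1.0f / (s + 1);        // fake distance falloff
            p[s].delay[ch] = ch * 20;               // fake interaural delay
        }
    }
    mixKernel<<<(n + 255) / 256, 256>>>(src, p, out, n);
    cudaDeviceSynchronize();
    printf("mixed sample 100, left channel: %f\n", out[100 * NUM_CHANNELS]);
    cudaFree(src); cudaFree(out); cudaFree(p);
    return 0;
}

The inner loop over sources is the 'multitexturing' step: the per-source gain stands in for distance attenuation computed upstream (the 'vertex shader' stage of the analogy), and the per-channel delay stands in for interaural time difference.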

As for which VPU might have that kind of power, one possible candidate would be the NV40. Since NVIDIA has increased the vertex and pixel shader capabilities of the NV40, it may have just enough processing power to do it. After all, the bandwidth cost of processing audio is minuscule compared to its video counterpart: even 8 channels at 192 kHz and 32 bits per sample comes to only about 6 MB/s, next to the multiple GB/s of memory bandwidth a modern VPU devotes to video.

f95toli

Golden Member
Nov 21, 2002
The problem is the wavelength: sound waves have a typical size that ranges from about the size of your head up to the size of the room.
I am quite sure most of these techniques use some type of "ray" model, which does not work for exactly this reason: ray approximations assume the wavelength is tiny compared to the scene geometry, which holds for light but not for audible sound.

An example: it is relatively easy to calculate how different parts of a room will be illuminated, but if you replace the lamp with a speaker and try to calculate the sound pressure level at various positions, it is not nearly as easy, because interference and diffraction dominate at these wavelengths.
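
To put a rough number on that, here is a toy calculation (my own sketch, not taken from any acoustics package; I've written it in CUDA only to match the thread's GPU theme, and the 1 kHz frequency and all distances are made up). One point source plus a single wall reflection: a proper wave sum, which keeps phase, is compared against a ray-style intensity sum at listener positions a centimetre apart.

// Toy demo of why ray models fail for audible sound: direct path plus
// one wall reflection, evaluated with phase (wave) and without (ray).
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__global__ void levelKernel(float* waveDb, float* rayDb, int nPos,
                            float freq, float wallDist)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPos) return;

    const float PI = 3.14159265f;
    const float c  = 343.0f;                 // speed of sound, m/s
    float k = 2.0f * PI * freq / c;          // wavenumber
    float x = 1.0f + 0.01f * i;              // listener position, metres from source

    float d1 = x;                            // direct path
    float d2 = 2.0f * wallDist - x;          // path via a wall behind the listener

    // Wave model: sum complex pressures e^{ikd}/d, then take the magnitude.
    float re = cosf(k * d1) / d1 + cosf(k * d2) / d2;
    float im = sinf(k * d1) / d1 + sinf(k * d2) / d2;
    waveDb[i] = 20.0f * log10f(sqrtf(re * re + im * im));

    // Ray model: just add intensities (1/d^2), no phase. Varies smoothly.
    rayDb[i] = 10.0f * log10f(1.0f / (d1 * d1) + 1.0f / (d2 * d2));
}

int main() {
    const int n = 32;
    float *waveDb, *rayDb;
    cudaMallocManaged(&waveDb, n * sizeof(float));
    cudaMallocManaged(&rayDb,  n * sizeof(float));
    levelKernel<<<1, 64>>>(waveDb, rayDb, n, 1000.0f, 1.5f);  // 1 kHz, wall at 1.5 m
    cudaDeviceSynchronize();
    for (int i = 0; i < n; ++i)
        printf("x = %.2f m   wave: %6.1f dB   ray: %6.1f dB\n",
               1.0f + 0.01f * i, waveDb[i], rayDb[i]);
    cudaFree(waveDb); cudaFree(rayDb);
    return 0;
}

Moving the listener a few centimetres swings the wave column through interference nulls well below the ray prediction (more than 10 dB down in this setup), while the ray column barely moves. With light, at nanometre wavelengths, the two columns would be indistinguishable, which is why the same trick works for illumination but not directly for sound.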