Originally posted by: Beachboy
I've always believed that one day computers will evolve to the point where the processor can handle
all of the tasks like sound and video on its own. Hopefully, with enough cores and properly written programs, this will happen someday.
I remember my first "3D" card, and it wasn't a whole lot better than running in software mode. Bring back software mode and let me throw multiple cores at it!
You are much more likely to see the emergence of multi-core products wherein some of the cores are dedicated video and/or audio processors. AMD's Fusion is intended to meld general-purpose processing and graphics processing onto a single piece of silicon. The reason hardware acceleration is such a big deal is that a GPU is architected specifically for that kind of highly parallel work, and it wildly outperforms a modern CPU when used for that purpose.
To the OP's question: another of AMD's projects, Torrenza, is moving toward the use of specialized coprocessors (which might include audio processing) that simply drop into a socket. So in a literal sense, that type of solution (if the idea takes off) might be the eventual doom of the actual sound card. The functionality could also eventually be integrated into the processor, if the trend toward centralization really gains traction.
The sticking point for sound is the actual output hardware. One of the things that may distinguish a good sound card from a bad one is the DAC hardware, which converts the digital signal the computer has been working with into its final, analog form. And I would imagine the noise picked up by running an analog signal very far across a motherboard (or the need to amplify the signal adequately) could create some issues. That is, unless a digital output such as S/PDIF becomes the norm. With the current push by content providers to keep as much of the playback chain as possible in digital form, it could very well happen.
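If it helps to picture what the DAC's job actually is, here's a toy Python sketch of the idea. Purely illustrative: the 1.0 V full-scale swing is an assumption I made up for the example, and a real DAC does this in dedicated hardware (resistor ladders or delta-sigma modulators), not in software.

    FULL_SCALE_VOLTS = 1.0  # assumed output swing for illustration; real cards vary

    def sample_to_voltage(sample, bits=16):
        # Map a signed PCM sample code to a voltage in roughly [-1.0, +1.0) volts.
        max_code = 2 ** (bits - 1)   # 32768 for 16-bit audio
        return (sample / max_code) * FULL_SCALE_VOLTS

    print(sample_to_voltage(0))        # 0.0  -> silence
    print(sample_to_voltage(16384))    # 0.5  -> half of full scale
    print(sample_to_voltage(-32768))   # -1.0 -> negative full scale

The quality difference between sound cards is everything this toy version ignores: how precisely those voltage steps are reproduced, and how much noise leaks in along the way.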