Why no real time digital encoding?

Apr 25, 2004
44
0
0
I am trying to get to the bottom of this question so I figured I would ask around...

I knew when I purchased my Athlon 64 and nForce3 board, I would be leaving the world of Soundstorm behind. However, I figured that surely someone would have a PCI card that would do real-time Dolby Digital encoding. Of course, no such thing exists. Every time someone asks about such a solution, they are told "not enough bandwidth." I always bought this and went on my merry way. But how is there not enough bandwidth?

PCM 16-bit/44.1k stereo -> (((16 x 44100) / 1024) / 8) x 2 = 172 KB/s -- "CD quality sound"
That gives us two channels of audio. Well, Dolby Digital is 5.1, and the .1 is a low-bandwidth channel.

So, even if we take that times three, we are at 516 KB/s. That is how much raw bandwidth six channels of sound at "CD quality" would take up - nowhere near the PCI bus limit.
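To sanity-check the arithmetic above, here is a quick sketch that redoes the math and compares it against the theoretical peak of a 32-bit/33 MHz PCI bus (133 MB/s - my figure, not from the post):

```python
# Redo the post's bandwidth math: 16-bit, 44.1 kHz PCM, six full channels
# (a worst case; the .1 channel actually needs less than a full channel).
BITS_PER_SAMPLE = 16
SAMPLE_RATE = 44100   # Hz, "CD quality"
CHANNELS = 6

per_channel_kb_s = BITS_PER_SAMPLE * SAMPLE_RATE / 8 / 1024  # bits -> KB/s
total_kb_s = per_channel_kb_s * CHANNELS

# 32-bit/33 MHz PCI theoretical peak (assumed figure, not from the post)
pci_peak_kb_s = 133 * 1024

print(f"per channel: {per_channel_kb_s:.1f} KB/s")           # ~86.1 KB/s
print(f"six channels: {total_kb_s:.1f} KB/s")                # ~516.8 KB/s
print(f"share of PCI peak: {total_kb_s / pci_peak_kb_s:.3%}")
```

Even granting that real PCI throughput falls well short of the theoretical peak under contention, six raw channels come to well under 1% of it.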

And then I realized something - all this audio makes it to my sound card just fine today. Obviously there is enough bandwidth to get the raw audio channels TO the sound card, right? I think this is where I am getting confused about WHAT the bandwidth limitation is.

Because once the raw audio signals hit the sound card, then it should be left up to the hardware to take the regular positional audio signals and turn them into a single Dolby Digital stream that can then be passed out via optical or coax to my receiver.

Soundstorm had to have the same PCI bus limitations, even if it was tied to the south bridge, so how is it any different? I know it used HyperTransport to gain more bandwidth, but bandwidth from where to where?

Or are we talking about limitations of the PCI bus itself? I mean, you aren't guaranteed access to the bus every clock cycle, but doesn't it cycle fast enough that this shouldn't be an issue? Is this tied back to the ancient roots of PCI?

Also, is the AC-3 codec so poorly designed? I mean, it came out in the early 90s. Are you telling me that in the last 10+ years our CPUs have not become powerful enough to do real-time AC-3 encoding? Think of the next run of CPUs - dual core, 64-bit, many MANY floating-point extensions - and they cannot do AC-3 encoding efficiently enough to also play a game at the same time?!



So what is the straight dope? Is it not wanting to pay the licensing fees needed? Is it a bandwidth limitation but in something else? Is it just the thought that there is no market for such hardware? What is going on here?

Thank you :):confused:

EDIT -> http://www.cmedia.com.tw/product/CMI9780.htm Apparently C-Media has a chip that can do this, albeit in software (so there is going to be a CPU hit). However, it does not seem that any separate sound cards are based on this chipset. A Soyo Socket A board based on the KT880 chipset uses this sound chip.

So again, what is stopping someone from taking C-Media's chip and slapping it on a sound card?!

EDIT AGAIN-> http://www.cmedia.com.tw/product/CMI8768_plus.htm
"Valuable software technology:
- Dolby Digital Live 5.1 (AC-3) Real-time Encoder (only 8768+)"

So - it looks like the AC-3 encoding is done in software, yet it is still done in real time. Last time I looked at my CPU usage while running HL2, I had some cycles to spare. It was my 6800 that was coming up short, not my 3500+.

And a Google search for that chipset brings me back to AnandTech, where I quote:

"C-Media, makers of the CMI8768+ 8CH High-Performance PCI Audio Single Chip, has stated in C-Media salesperson emails that Hitec were most likely going to be the first to release a card with the CMI8768+ chip. (Only the 8768+ actually encodes DD 5.1, and it is just a chip; the 8768 does not encode DD 5.1, as reported by card users.)"
 

gryfon

Member
Dec 4, 2003
120
0
0
May I ask why you want real-time digital encoding? AFAIK the digital output from nForce is not for 'better quality sound' but for cable simplicity's sake (i.e., no need to have separate outputs for digital and analog). Also, if I remember correctly, the nForce doesn't seem to use a usual bitrate, and it cuts off frequencies of 17 kHz and above.

 
Apr 25, 2004
44
0
0
Every analog sound card I have had in every system since Socket A first came out has had problems with noise pollution on its analog outputs. From hearing crackling when I scrolled the mouse with my old SB Live! (first-gen card) to an Audigy I plugged into my nForce3 board crackling when I turned the analog audio up a notch, I have had bad luck with analog audio.

This is why I want positional digital audio encoded in real time.
 

Auric

Diamond Member
Oct 11, 1999
9,591
2
71
PCs are inherently noisy environs (RFI/EMI), and the DACs on mass-market cards pre-Audigy were less than average. The scrolling crackling stuff is probably from improper installation, like shared IRQs. Other noise is generally from poor amplifiers. RT encoding would be nice, but it would have to be a lot better than nFarce SS was.
 
Aug 29, 2004
74
0
66
So if analog out from the sound card is a problem, that still doesn't necessarily mean you need to compress the audio into AC-3. Can't you send all six channels in digital PCM format (uncompressed) to an HT receiver? It's only a handful of megabits per second at most.
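A rough figure for that "handful of megabits," assuming six channels of 48 kHz, 16-bit PCM (my assumed parameters, not from the post), compared against the audio payload of an ordinary stereo S/PDIF stream:

```python
# Uncompressed multichannel PCM bitrate (assumed parameters, for scale):
channels, sample_rate, bits = 6, 48000, 16

multichannel_bps = channels * sample_rate * bits
stereo_spdif_bps = 2 * sample_rate * bits  # audio payload of a stereo link

print(multichannel_bps / 1e6)               # 4.608 Mbit/s
print(multichannel_bps / stereo_spdif_bps)  # 3x a stereo link's payload
```

So the data rate itself is trivial; the catch is that a standard two-channel S/PDIF connection only carries a stereo PCM payload, which is exactly why compressed formats like AC-3 get used over that cable.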
 

Low Roller

Junior Member
Dec 10, 2004
10
0
0
As an HTPC'er, nothing compares to SS for HTPC gaming. The fact is, if you want to game in 5.1, you need a Creative card and a game with EAX or D3D support. Even then, this solution is in no way superior to SS in terms of surround-sound positioning effects. Then comes the issue of running the analog 5.1 outputs of an Audigy or whatever into the 5.1 analog inputs of my receiver/preamp. In my experience, Creative cards have a lot of noise or low-level hum that's not associated with SS via the SPDIF connection. If I want to use those 5.1 analog inputs of my receiver/preamp with both my PC and my DVD-A player, I need to fork over money for a switcher of some kind (most HT receivers only have a single set of 5.1 analog inputs). Then throw in the fact that MANY receivers and preamps have no bass-management features available on their 5.1 analog inputs, so the bass is not routed to the sub. Depending on your system, this can have a BIG impact on sound quality. By outputting DD over SPDIF, you get access to the receiver/preamp's DD crossover to properly route the low frequencies to the subwoofer.
 

TerryMathews

Lifer
Oct 9, 1999
11,464
2
0
Originally posted by: rosewood
PCM 16-bit/44.1k stereo -> (((16 x 44100) / 1024) / 8) x 2 = 172 KB/s -- "CD quality sound"
That gives us two channels of audio. Well, Dolby Digital is 5.1 and the .1 is a low bandwidth channel.

EDIT: nvm, my math is waaay off. :)
 

imported_halcyon

Junior Member
Aug 30, 2004
17
0
0
One reason I'd like to get a DD realtime output from all games is this:

DD 5.1 output -> Dolby headphone virtualizer -> headphone amp -> quality headphones (with 'virtual 5.1 surround')

Much better than those el cheapo "5.1 headphones for $29.99"

You can pick any world-class headphone amp and headphones to suit your taste AND still have great positional audio in games.

SoundStorm on nf4 would be a real killer, imho.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
So what is the straight dope? Is it not wanting to pay the licensing fees needed? Is it a bandwidth limitation but in something else? Is it just the thought that there is no market for such hardware? What is going on here?

It's A and C (although anyone who says there's no market is nuts; anyone with an HTPC would buy one in an instant if it was available as a discrete card, possibly even if it was software encoding).

Bandwidth is not a problem, even over PCI (you can easily transmit encoded DD 5.1 streams through your soundcard's optical out, which goes through the PCI bus).
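For scale (my figures from the AC-3 format's published bitrates, not from the post): an encoded DD 5.1 stream tops out at 640 kbit/s, with 448 kbit/s typical on DVDs, which is a vanishingly small slice of even a plain 32-bit/33 MHz PCI bus:

```python
# Encoded Dolby Digital 5.1 bitrate vs. PCI (assumed figures, for scale):
dd_bitrate_bps = 448_000            # typical DVD rate; the format caps at 640k
dd_kb_s = dd_bitrate_bps / 8 / 1024
pci_peak_kb_s = 133 * 1024          # 32-bit/33 MHz PCI theoretical peak

print(f"{dd_kb_s:.1f} KB/s")        # ~54.7 KB/s
print(f"{dd_kb_s / pci_peak_kb_s:.4%} of PCI peak")
```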

Dolby licensing apparently is somewhat of a hassle and expense (Microsoft paid a lot of the costs of developing SoundStorm, as it was used in the Xbox as well).