
Extigy vs. Audigy

RolyL

Senior member
Has anyone seen a comparative review of the two? I gather the Extigy is missing ASIO support but does include a hardware DD decoder. Any other feature-set differences? Given that a few years ago Creative insisted the bandwidth and latency penalties &c. of ISA were unacceptable, and that PCI was necessary for decent performance, how on earth have they managed to persuade a USB connection to suffice?
 
In reading the specs on the Extigy (because I was interested in one for my lappy), I couldn't understand why anyone would want one for their desktop. Unless of course they come out with a USB 2.0 version 🙂
Creative themselves, right on their site, mention that because of USB 1.1 bandwidth limitations the Extigy is not a good choice for gamers or people who use a lot of USB devices. Sorry, too lazy to link the page; just search CL's site and you'll find it in their FAQ.
j
 
CD-quality audio only uses about 172 kbyte/s, roughly 1.4 Mbit/s. USB 1.x (12 Mbit/s) can handle that.

Say you have front and rear speakers + sub + center: three stereo pairs, so 3 × 1.4 Mbit/s = 4.2 Mbit/s. USB can still handle that. The sub actually uses less bandwidth, so it's a bit under 4.2 Mbit/s.
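A quick sanity check of the arithmetic above, as a Python sketch. The 12 Mbit/s figure is USB 1.1's full-speed bus ceiling; real isochronous throughput is lower, so this is an upper-bound comparison, not a guarantee.

```python
# Raw PCM bitrate, no protocol overhead: channels * sample rate * bit depth.
def pcm_mbit(channels, rate_hz=44100, bits=16):
    return channels * rate_hz * bits / 1_000_000  # Mbit/s

USB11_MBIT = 12.0  # USB 1.1 full-speed bus ceiling

stereo = pcm_mbit(2)        # one CD-quality stereo pair
print(f"one stereo pair: {stereo:.2f} Mbit/s")
three_pairs = 3 * stereo    # front, rear, center+sub as three pairs
print(f"three pairs: {three_pairs:.2f} Mbit/s, under the ceiling: {three_pairs < USB11_MBIT}")
```

So three uncompressed CD-quality pairs come to roughly 4.2 Mbit/s, comfortably under the bus ceiling, though protocol overhead and other devices sharing the bus eat into that margin.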

 
Don't your figures correspond to a quantised waveform? The signal sent over USB will be unprocessed audio data, for example the raw information dumped by a game engine, which the audio card needs to decode and convert into such quantised (or analogue, for that matter) data. Does anyone know the typical bandwidth requirement of such unprocessed output? It has to include enough information to fully describe the audible output, since the sound card can't generate anything it doesn't know about. Having said that, as we're not necessarily dealing with a waveform yet, the sound model transmitted may actually be considerably smaller. Thoughts?
 
I calculated the thing in digital form.
I think that SHOULD be the format being transferred over to the Extigy via USB.


 
I don't know how sound cards work, but I'm pretty sure that can't be the case. Your figures correspond to a waveform, but this is generated by the sound card once it's received information from the audio engine. The USB cable to an Extigy will carry the engine's output: if this wasn't true, there'd be no need for a sound card as we could plug speakers directly into a header on the motherboard and still hear sound. The only situation where this doesn't hold is in direct pass-through. In only this case (and only digital at that), I suppose such a bandwidth calculation for 44.1 kHz @ 16 bit would hold. We're still not taking into account the difference in protocol between SPDIF and USB though. Basically, I don't think the figure holds at all.
 
The original 1x CD-ROM drives (= same speed as an audio CD) had 150 kbyte/s transfer rates. This is indeed waveform data, but that's as complicated as it gets. A soundcard is fed either waveform data or MIDI, where MIDI needs much less bandwidth...

A game, however, may play several waveforms at the same time, and then things need more bandwidth...
This is why it will work for "normal" work, CD playback and such, but a game with music, gunshots, etc. will have problems. Unless the driver uses the CPU to compress data before streaming it over the USB... Which I find unlikely.

I guess you are both right.
 
Straight CD audio only needs bandwidth of roughly 172 kilobytes per second.

[(16 x 44100) / 1024] / 8 x 2 = 172

16-bit word length, multiplied by 44,100 samples per second, divided by 1024 to get kilobits instead of bits, divided by 8 to get kilobytes instead of kilobits, multiplied by 2 for a stereo (binaural) signal.

EDIT: Assuming the WAVs have the same characteristics as PCM 16/44.1, we would need 172 x 2 = 344KBps of bandwidth for 4-channel playback, and then we need an independent signal for LFE and possibly a center.
 
24/96 audio (which the Extigy is supposedly capable of playing) would require a bandwidth of [(24 x 96000) / 1024] / 8 = 281KBps of bandwidth for a monaural (mono) signal. A 5.1 24/96 signal would require 1406KBps of bandwidth, not including the LFE channel. Now if the Extigy were capable of playing DVD-Audio, you would have to have USB bandwidth of [(24 x 192000) / 1024] / 8 x 5 = 2813KBps of bandwidth, not including the LFE channel, and then you might have to contend with the 6.1/7.1 format DVD-A may be capable of in the future.
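The figures above can be reproduced mechanically. A sketch in Python; "KBps" here means kibibytes per second, matching the /1024 convention used in the post:

```python
# Per-channel PCM bandwidth: bits * samples/s gives bits/s, /8 gives
# bytes/s, /1024 gives KB/s; multiply by the channel count.
def kb_per_s(bits, rate_hz, channels):
    return bits * rate_hz / 8 / 1024 * channels

print(kb_per_s(16, 44100, 2))    # CD stereo: ~172 KBps
print(kb_per_s(24, 96000, 1))    # 24/96 mono: ~281 KBps
print(kb_per_s(24, 96000, 5))    # 24/96, 5 channels (no LFE): ~1406 KBps
print(kb_per_s(24, 192000, 5))   # 24/192, 5 channels (DVD-A): ~2812.5 KBps
```

The 24/192 five-channel case alone is already well past what USB 1.1 can carry.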
 
A game, however, may play several waveforms at the same time, and then things need more bandwidth...

Surely this isn't the full story at all? Let's take an engine involving extensions (EAX, A3D etc.) for example. The sound chip has processing to do, else all cards would sound alike (which they clearly don't), excepting physical factors such as s/n ratio etc. The EMU10K2 on the Audigy and the CS4630 on the GXTP are different chips capable of different tasks: it's not simply a case of them applying their DACs to their input signals. An EMU10K2 can receive an EAX HD signal and deal with it; a CS4630 cannot. The sound card does work. For want of a better word, it can receive code, not just a waveform. It is for this reason I think all the figures we throw around are meaningless, unless someone who has actually coded for a modern soundcard can tell us precisely what is sent to it.
 
RolyL, where would they get the sound samples from in the first place to do the processing? Either RAM or the hard drive. That has to go through the PCI bus, and now through the USB port as well.

Typically sound files in games are recorded in either mono or stereo, then they are sent to the soundcard for 'processing' by the API that the soundcard supports in hardware. Otherwise they go to the CPU, which then uses the soundcard as a passthrough device.

So, say a gunshot is recorded for a game. The quality IMHO would be 16-bit 44100 Hz or so, possibly in mono, since a stereo sample would seemingly confuse the issue even more... don't quote me on that though. On top of that you have people's voices, people running around and jumping, other types of weapons being fired... an FPS game seems to use a lot of audio samples. Many games do, in fact.

So the only way for the Extigy to get away with the USB port and play games is to compress (losslessly?) the audio, but I don't think they're doing that... it would eat EVEN MORE CPU time.

BTW, the bitrate of an audio file in PCM format (like a WAV file or a CD audio track) is simple to calculate: bitrate = number of channels * number of samples per second * number of bits per sample. So 2 * 44100 Hz * 16 bit = 1,411,200 bits/second.

Surely this isn't the full story at all? Let's take an engine involving extensions (EAX, A3D etc.) for example. The sound chip has processing to do, else all cards would sound alike (which they clearly don't), excepting physical factors such as s/n ratio etc.

They don't sound alike because most soundcards use different APIs. Aureal's Vortex 2 used the well-supported A3D 2.0, but soundcards not based on that chip had to use DirectSound (like the Creative Labs chips), which uses inferior HRTF.

An EMU10K2 can receive an EAX HD signal and deal with it; a CS4630 cannot.

Why do you think that is? Not because the CS4630 isn't powerful enough, for it can run an API made by Sensaura that is just as capable as EAX Advanced HD but with more accurate HRTF (due to better work by the engineers), meaning better 3D positional audio.

The CS4630 can't do it because Creative won't let anyone else use that API. It gives them an 'advantage' over other cards, because it has the potential to catch on as the major sound API for gaming in the PC industry.
 
I'm astonished this has spawned a discussion 🙂

All I suppose I'm claiming is that I assume a soundcard can be fed more than a wavefile. CPU usage wouldn't vary to the degree it does between different cards if this wasn't the case, right?

Note: I wasn't trying to insinuate that the 10K2 was a superior unit to the Crystal BTW. Anyway, the focus of the APIs seems different. People say the years old A3D has barely been surpassed (even though Creative own the patents) for positional accuracy, but the feel of immersion on cards utilising EAX effectively is greater.
 
All I suppose I'm claiming is that I assume a soundcard can be fed more than a wavefile. CPU usage wouldn't vary to the degree it does between different cards if this wasn't the case, right?

That's a tough one for me to figure out.

Say I was running a benchmark with two cards (the Audigy and the Live); both must use the same API, but one of them gets a lower CPU usage score. Why? It seems to me the CPU would do the same type of work as before, only with the slower DSP it might have to wait around longer than with the faster one. It's tough to answer because, unlike video cards, audio cards are measured in CPU usage. Video cards could theoretically be measured that way too, I think (which is why some cards speed up more with a newer CPU than others: because they're more CPU dependent, or because they're fast enough not to be bottlenecked by themselves, but rather by the CPU).

So perhaps with soundcards it's the other way around: the CPU has to wait for the DSP to do its work? I find that hard to believe though, because with a 15% CPU hit or more some cards don't necessarily sound slow or bad, they just slow a game down more than another.

Finally, what I find most likely is simply that the Live depended on the CPU to do a lot of its work in the benchmark (a perfect example was Dolby Digital decoding, which has to be done in software), whereas the newer Audigy was fast enough to do more work at the same time, thus keeping CPU usage to a low level where the other card failed. You might equate this to upping the resolution for 3D graphics cards: some cards (I think like the Voodoo 5 with HSR) weren't fast at high resolution, so the CPU was sitting around doing nothing. Instead, drivers were written to use the wasted CPU time to do HSR before the T&L data was sent off to the video card, resulting in a speed increase at high resolution. Because 3dfx didn't get to work with it for long, we didn't see it come to fruition and work properly at more extreme levels.

Anyway, the focus of the APIs seems different. People say the years old A3D has barely been surpassed (even though Creative own the patents) for positional accuracy, but the feel of immersion on cards utilising EAX effectively is greater.

I don't know; I haven't tried both in games that have both EAX Advanced HD and A3D 2.0 (because games like that don't exist, and because I don't have an Audigy). But I'd have to say that most people who listened to their audio and had both the Live and the Vortex 2 agree that the Vortex 2 (assuming it used A3D 2.0 or higher) had superior 3D audio to the Live, which used either EAX 1 or 2 (through DirectSound or through EAX).
 
<<Say I was running a benchmark with two cards (the Audigy and the Live); both must use the same API, but one of them gets a lower CPU usage score. Why? It seems to me the CPU would do the same type of work as before, only with the slower DSP it might have to wait around longer than with the faster one. It's tough to answer because, unlike video cards, audio cards are measured in CPU usage.>>

It's true that the quality of a sound card is often measured by the percentage of available CPU time that must be used when performing certain operations (playing games, listening to MP3s, etc.). However, the reason the CPU time is being used is not because the CPU has to "wait around" for the DSP to complete its work, but because the DSP has handed some of its work back to the CPU, either because the DSP is not fast enough or because the particular functions being called are not implemented in the DSP's hardware. In other words, if you have an advanced DSP (like the Santa Cruz in my desktop) which is faster and capable of processing a greater variety of tasks than a less-advanced DSP (like the ESS Solo in my Dell laptop), then the CPU does not have to be used as heavily to process the sound stream. Think about it: an FPS wants to generate a gunshot, so the call gets sent to the DSP to process and output that sound. If the DSP is already too busy or does not have the proper functions in hardware to process the sound, then it offloads the request to the CPU -- this is often known as doing something "in software."
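The hardware/software fallback described above can be sketched as a toy dispatcher. The effect names and slot count here are purely illustrative, not any real driver API: the driver hands a voice to the DSP when the effect is implemented in hardware and a voice slot is free, and otherwise falls back to the host CPU.

```python
# Illustrative only: which effects this hypothetical DSP implements in
# hardware, and how many voices it can mix at once.
HARDWARE_EFFECTS = {"reverb", "occlusion"}
DSP_VOICE_SLOTS = 2

def route_voice(effect, voices_on_dsp):
    """Return where a new voice gets processed."""
    if effect in HARDWARE_EFFECTS and voices_on_dsp < DSP_VOICE_SLOTS:
        return "dsp"   # done in hardware, near-zero CPU cost
    return "cpu"       # done "in software": the host CPU mixes it

print(route_voice("reverb", 0))   # dsp
print(route_voice("eax_hd", 0))   # cpu: effect not implemented in hardware
print(route_voice("reverb", 2))   # cpu: DSP already saturated
```

This is why a benchmark shows higher CPU usage on a weaker card: more of its voices take the second branch.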

For a further explanation, just look at the difference between onboard AC97 audio and a PCI sound card like the Audigy or the Santa Cruz. Why do you think that the onboard sound takes up more CPU time processing the sound, not to mention sounding like a POS? It's because the AC97 sound has to send many of the requests it gets back to the CPU to be processed there; the AC97 sound has neither the speed nor all the hardware to process the required sound streams in real time.

Hope this helps!
Nick
 
Oh yeah... if the DSP isn't fast enough it has to hand things off to the CPU. I forgot about that 🙂

And the hardware abilities of the DSP (as well as its speed) do matter, like I mentioned in my post above. Specifically:

Finally, what I find most likely is simply that the Live depended on the CPU to do a lot of its work in the benchmark (a perfect example was Dolby Digital decoding, which has to be done in software), whereas the newer Audigy was fast enough to do more work at the same time, thus keeping CPU usage to a low level where the other card failed. You might equate this to upping the resolution for 3D graphics cards: some cards (I think like the Voodoo 5 with HSR) weren't fast at high resolution, so the CPU was sitting around doing nothing. Instead, drivers were written to use the wasted CPU time to do HSR before the T&L data was sent off to the video card, resulting in a speed increase at high resolution. Because 3dfx didn't get to work with it for long, we didn't see it come to fruition and work properly at more extreme levels.

I find audio hardware very interesting because I don't understand it quite as well as graphics cards, but your explanation is better than mine 🙂
 