Originally posted by: chizow
Originally posted by: Peter
"Conflict", your favorite word? It's getting annoying. Are you Creative's chief apologist?
Nope, but conflict is appropriate as that's what's happening, and I'm certainly not going to make asinine and irresponsible comments like "only products that begin with C from a certain company have IRQ conflicts". The fact remains that Creative products can and do work flawlessly in most systems, and even within the same system with different hardware or software configurations. But we already know your position on the topic, which is laughable coming from a professional BIOS writer: that the specifications (which change daily) are perfect and that everything should work perfectly in a perfect world. Except that's nothing like the end-user experience.
If you'd listened instead of desperately hunting for mis-quotables, you'd have figured out that my main claim is that IRQ conflicts do not exist, except in sad excuses meant to distract customers from the real problem: halfbakery. And there, I said it again.
The PCI specification hasn't changed its original rules on interrupts, ever.
www.pcisig.com - membership required. You're not a member, and you haven't read any of it? Then what are you basing your claims on?
VGA cards, TV cards, sound cards, all those are the LEAST demanding things when it comes to interrupt load - less than 100 per second, ridiculous compared to true high bandwidth components. And yes, the PCI specification MANDATED shared-IRQ support "even" for these, from day 1 of PCI's existence.
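For a sense of scale, here's back-of-envelope arithmetic on a sound card's interrupt rate; the 48 kHz rate and 10 ms DMA period are assumed round figures for illustration, nothing vendor-specific:

    /* Rough interrupt-rate arithmetic for a typical PCI sound card.
     * The assumed figures (48 kHz stream, 10 ms DMA period) are illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        const double sample_rate_hz = 48000.0;  /* assumed playback rate    */
        const double period_ms      = 10.0;     /* assumed DMA buffer slice */

        double irqs_per_second = 1000.0 / period_ms;
        double samples_per_irq = sample_rate_hz * period_ms / 1000.0;

        printf("%.0f interrupts/s, %.0f samples handled per interrupt\n",
               irqs_per_second, samples_per_irq);
        return 0;
    }

Even at a short 10 ms period that's only 100 interrupts a second; a true high-bandwidth component posts orders of magnitude more.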
Again, pointing back to my PCI VGA example: you absolutely would run into problems if you placed the VGA card in a slot that shared an IRQ with multiple other devices. This used to be one of the single biggest troubleshooting queries on any tech board; ignoring it is just laughable. Instead of continuing to work within the constraints of a limited and poorly thought-out bus system, they forced a change in the industry with AGP and now PCI-E.
AGP and PCIE didn't change any of the rules regarding interrupts, and shared interrupts weren't a problem there either. People thought they were, because support idiots kept (and still keep) droning on about it.
And there's only one single reason for something "causing IRQ problems": Sh*t drivers. Not bandwidth requirements, no other pathetic excuses or passing the blame on evil chipset vendors that won't give our lovely chip enough bus time, blah blah blah.
Being a BIOS engineer by profession, I've had $15k business flights around half the world to debug "IRQ conflicts" on site with the customer only to prove this point, so please, no more apologetics. Given the market segment I work in, I've seen 17 gigabit NICs share a single interrupt - working. 1.3µs extra latency per handler, totally irrelevant even then. Buy yourself a license of an IRQ profiling tool (e.g. PCIScope, just software, inexpensive) to see for yourself.
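And if you want to see what correct sharing looks like at the driver level, here's a minimal Linux-style sketch. The device register offset and bit name are hypothetical; the whole trick is the pattern: read your own status register, and if your device didn't assert the line, return IRQ_NONE so the next handler on the line gets its turn. That one status read is the entire per-handler cost of sharing.

    /* Minimal sketch of a well-behaved handler on a *shared* interrupt line
     * (Linux-style API; the MYDEV_* register names are hypothetical). */
    #include <linux/interrupt.h>
    #include <linux/io.h>

    #define MYDEV_STATUS     0x04       /* hypothetical status register offset */
    #define MYDEV_STATUS_IRQ (1u << 0)  /* hypothetical "I raised an IRQ" bit  */

    struct mydev {
        void __iomem *regs;             /* BAR0 mapping of the device */
    };

    static irqreturn_t mydev_isr(int irq, void *dev_id)
    {
        struct mydev *dev = dev_id;
        u32 status = readl(dev->regs + MYDEV_STATUS);

        if (!(status & MYDEV_STATUS_IRQ))
            return IRQ_NONE;            /* not ours: next handler on the line runs */

        writel(MYDEV_STATUS_IRQ, dev->regs + MYDEV_STATUS); /* ack our device */
        /* ... service the DMA buffers here ... */
        return IRQ_HANDLED;
    }

    /* Registered with IRQF_SHARED so other devices may sit on the same line:
     *   request_irq(pdev->irq, mydev_isr, IRQF_SHARED, "mydev", dev);
     */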
Wait, so you've taken $15k business trips to troubleshoot a Creative product? Oh didn't think so.
No ... and I didn't say so, fanboi. Read again what I /actually/ wrote, not what you want to read into it. Twelve years in this business and a lot of FAE work come and gone, and I have yet to see a perceived "IRQ conflict" that, after an hour of debugging, doesn't boil down to a poorly written driver. The "IRQ conflict" myth is just that, a myth.
Could've sworn you insisted earlier that only Creative products had IRQ conflicts. Oh wait, you're tacitly acknowledging the FACT that other peripherals from various vendors are subject to the same problems on a hit-and-miss basis depending on hardware and software configuration. For a second I forgot we were talking about the PC industry lmao.
No I didn't, and the record is there for everyone. One more time: "IRQ conflicts" do not exist, except in desperate apologetics like Creative's.
As for purchasing/licensing an IRQ profiling tool, there's no need to, as Creative has done just that and more. Feel free to take it up with them, but I have no reason to doubt what they're saying.
Funny, because what they're saying is, "We have observed peak holdoffs of up to 2 milliseconds in some cases. This is unusual chipset behavior that is beyond the ability of a hardware audio accelerator to compensate for in its internal buffering. The SoundBlaster X-Fi tolerance for these PCI holdoffs is approximately 120 microseconds peak holdoff, with a 1 microsecond average holdoff."
120 microseconds' worth of bus buffering, 1 microsecond expected holdoff. 1 microsecond is a mere 33 clock cycles, so what do you think will happen if something (like an IDE controller) on the same bus transfers data? By the rules of PCI, devices are allowed to burst up to (roughly) 1000 clock cycles in one go - and in the case of storage controllers, for example, you definitely want them to, because throughput won't happen with short bursts.
So how on earth is /any/ realistic PCI setup going to fulfil Creative's "requirements"? Not. Going. To. Happen.
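To put numbers on it, here's a quick back-of-envelope calculation using the figures above: a 33 MHz PCI clock, a roughly 1000-cycle burst, and Creative's own 1 microsecond / 120 microsecond numbers.

    /* Back-of-envelope holdoff arithmetic for a 33 MHz PCI bus, using the
     * burst length and tolerance figures quoted above. Illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        const double pci_clock_hz   = 33.0e6;             /* conventional 33 MHz PCI */
        const double cycle_us       = 1e6 / pci_clock_hz; /* ~0.03 us per clock      */

        const double burst_cycles   = 1000.0;  /* rough per-grant burst limit  */
        const double avg_tolerance  = 1.0;     /* us, per Creative's statement */
        const double peak_tolerance = 120.0;   /* us, per Creative's statement */

        double burst_us = burst_cycles * cycle_us;

        printf("1 us equals ~%.0f PCI clocks\n", avg_tolerance / cycle_us);
        printf("one %.0f-cycle burst holds the bus for ~%.1f us,\n",
               burst_cycles, burst_us);
        printf("i.e. %.0fx the 1 us average budget, and %.0f%% of the %.0f us peak tolerance\n",
               burst_us / avg_tolerance, 100.0 * burst_us / peak_tolerance, peak_tolerance);
        return 0;
    }

One ordinary storage burst blows through the 1 microsecond average budget thirty times over and eats a quarter of the 120 microsecond peak tolerance all by itself.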
What is it then? Pathetic misjudgment of reality whilst designing the product. Sound familiar? Heard it on the previous page of this dispute, possibly?
The bottom line is simple: Design your product with realistic expectations, write drivers for it that don't attempt to rule the world, and it'll work. Simple, isn't it.
But it's not that simple. If you could find or design a sound card that was as capable as an SB, you'd have a point. If you can't, and the X-Fi demands more of the PCI bus, it comes down to a simple decision: use it or don't. Stalling innovation for the sake of conforming to an antiquated legacy spec makes no sense. When confronted with similar decisions for more demanding mission-critical systems, the industry just designs new specs and interfaces.
You're proving my point yet again. They've tried to do something that cannot realistically be done on PCI.
And yes, the next card from Creative will most likely be PCI-E; they've already come out with a cheaper version in PCI-E. But of course there are business decisions involved, as 1x PCI-E slots are still rather new and absent on all but the most recent boards on the market.
They're not fighting for bus bandwidth on PCIE, but due to PCIE's packetized traffic, response latencies from the chipset are inherently worse, easily an order of magnitude worse than on PCI. I think they're long done building the card; what's taking so long is typing up the excuses sheet.
Besides, as elaborated above, the BIOS you are so keen to put the rest of the blame on doesn't even "assign" interrupts anymore these days, and for PCIE, not even the mainboard design has a choice. Wake up and smell the coffee. Reality is that Creative's cards are a huge headache in a large percentage of systems, while you'll have a truly hard time finding even ONE other example that comes close to being similarly problem-ridden.
Sure they do. Even if the chipset (which I do blame in the case of NV chipsets) is controlling IRQ assignments and timings, mainboard designers decide which pin-outs serve which components and can change any timings via the BIOS.
You should stop trying to prove me wrong by saying exactly what I say. Chipsets make the rules, mainboard design has a couple of choices, and that's it. BIOS not involved in IRQ routing. Simple.
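For the slots hanging directly off the southbridge, the board designer picks which INTA#..INTD# wire goes to which chipset input, and that's the extent of the "choice". Behind any bridge, and on PCIE, even that is fixed by the spec's device-number swizzle. A small sketch of that swizzle rule, purely illustrative:

    /* The conventional PCI/PCIE interrupt-pin "swizzle": for a device behind a
     * bridge, the upstream INTx line is determined by the device number alone.
     * pin: 0 = INTA#, 1 = INTB#, 2 = INTC#, 3 = INTD#. */
    #include <stdio.h>

    static unsigned swizzle(unsigned device, unsigned pin)
    {
        return (device + pin) % 4;
    }

    int main(void)
    {
        /* Example: three devices behind a bridge, each wired to its own INTA#. */
        for (unsigned dev = 0; dev < 3; dev++)
            printf("device %u INTA# -> upstream INT%c#\n",
                   dev, 'A' + swizzle(dev, 0));
        return 0;
    }

Neither the BIOS nor the driver gets a vote in that mapping; as said, chipsets make the rules, the mainboard design has a couple of wiring choices, and that's it.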
More than likely they're going to ensure their own integrated components receive priority over PCI expansion slots, which is what I've been seeing with assigned interrupts on recent boards. It'll be IDE/RAID, SATA/RAID, USB controllers and onboard sound sitting pretty on their own IRQs, and then everything else you install afterwards sitting on 1-2 interrupts. Because that makes sense... well, I suppose it does from a business standpoint, but certainly not from a practical standpoint.
With the PCI bus being an appendix on the southbridge, that's how it is (and I said that already ...). If you've started a PCI card design after 1999 or so, you should know this and deal with it.
Interrupt service takes time. The bus request/grant procedure takes time. Read data from RAM takes time to arrive, particularly when other things (like the CPU and graphics card) are serving themselves from the same RAM at many orders of magnitude higher traffic rates.
So there's quite a few large factors of non-predictability. If you want your add-on card design to work, you need to account for these. Creative have failed to achieve that, and not just once in their track record.
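Accounting for them isn't rocket science either; it's buffer-sizing arithmetic along the lines of the sketch below. The stream parameters are assumptions picked for illustration, not X-Fi internals; the holdoff figures are the ones quoted earlier.

    /* Sketch of worst-case buffer sizing: how much on-card FIFO an audio
     * device needs to ride out a bus holdoff without underrunning.
     * Stream parameters are illustrative assumptions. */
    #include <stdio.h>

    int main(void)
    {
        const double sample_rate = 192000.0;  /* Hz, assumed worst-case mode   */
        const double channels    = 8.0;       /* assumed multichannel playback */
        const double bytes_per_s = sample_rate * channels * 4.0; /* 32-bit samples */

        const double holdoff_s[] = { 120e-6, 2e-3 };  /* tolerated vs. observed peak */

        for (int i = 0; i < 2; i++)
            printf("a %.0f us holdoff needs at least %.0f bytes of on-card buffering\n",
                   holdoff_s[i] * 1e6, bytes_per_s * holdoff_s[i]);
        return 0;
    }

Size the FIFO for the holdoff you actually see on shipping chipsets rather than the one you'd like to see, and the "conflict" vanishes.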
Let's see how they do on PCIE.