Speaking of IRQ sharing/possible problems with such

mrblotto

Golden Member
Jul 7, 2007
1,639
117
106
I was just reading the 'X-Fi' Squeal of Death thread, and I remembered I have an IRQ-sharing 'dilemma' which results in said 'SOD'. Well, maybe not directly results, but it probably doesn't help matters.....

sound card: IRQ 16
Vid card 01: IRQ 16
Vid card 02: IRQ 16

I mean, come on.....in Device Manager (view resources by connection), IRQs 1, 2, 3, 4, 5, and 7 aren't even listed.......and the *driver/OS/BIOS/peripheral* has to pick one that's already being used......wtf?

I am curious: where exactly does the IRQ assignment come from - the peripheral hardware (i.e. the sound card), the driver for said hardware, the OS itself, the BIOS? I HATE the fact that one cannot manually change interrupts, memory addresses, DMA, etc......

All I wanna do is change my sound card to a different interrupt. Alas, I cannot simply move it to another slot as the only one accessible is between the 2 vid cards....... *shakes fists at sky*

/rant off

 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
IRQ sharing capability has been mandatory ever since the PCI specification existed, which is easily 15 years by now.

You're probably not old enough to remember what a painful pile of sh*t "doing everything manually" really was.

IRQ routing is decided by the chipset and mainboard design; if you want to change it, you'll have to make your own. Back when the "advanced" interrupt controllers didn't exist in desktop machines, PCI interrupts had to be routed back onto legacy ISA interrupts - and that brought a bit of software into the game. But that routeback produced MORE sharing, not less, plus a hideous penalty in interrupt latency.
Today, PCI devices have their own exclusive set of interrupt lines (typically numbered 16 and above), and how they're spread there is up to the mainboard design like I said. PCI Express cards natively use no interrupt lines at all, but so called "message signalled interrupts" ... and if they have to fall back to line interrupt emulation (yes emulation!), they often enough all end up on the 1st PCI interrupt line (thanks to chipset design).
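The sharing rule being described here - that on a shared line the OS calls every registered handler in turn, and each driver must check its own device's status and claim only its own interrupts - can be sketched as a toy model. This is purely illustrative Python; all the names are invented, and real drivers do this against hardware status registers, not Python objects:

```python
# Toy model of PCI line-interrupt sharing: several devices sit on one IRQ
# line; each driver's handler checks its own device's "interrupt pending"
# status and services it only if that device actually asserted the line.
# (All names are hypothetical, for illustration only.)

class Device:
    def __init__(self, name):
        self.name = name
        self.irq_pending = False   # models the device's interrupt status bit

    def raise_irq(self):
        self.irq_pending = True

def make_handler(dev, serviced_log):
    def handler():
        if not dev.irq_pending:    # not ours -> decline, next handler runs
            return False
        dev.irq_pending = False    # acknowledge at the device
        serviced_log.append(dev.name)
        return True
    return handler

def dispatch_shared_irq(handlers):
    """OS side: on a shared line, call every handler; each claims only its own."""
    return [h() for h in handlers]

serviced = []
sound = Device("sound")
vga0 = Device("vga0")
vga1 = Device("vga1")
handlers = [make_handler(d, serviced) for d in (sound, vga0, vga1)]

vga1.raise_irq()                   # only one device actually interrupts
dispatch_shared_irq(handlers)
print(serviced)                    # -> ['vga1']
```

A driver that instead assumes every interrupt on "its" line belongs to it - the failure mode discussed later in this thread - would misbehave exactly when the line is shared.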
Want MSI? Get Vista. XP is too retarded to even know it.

In short, you don't have a point, and you didn't have one for about six years. Direct your anger into some reading up ;)
 

mrblotto

Golden Member
Jul 7, 2007
1,639
117
106
Thank you for your response. And, yes, I'm old enough to remember the ISA days, switching jumpers, etc........it required research and patience, both of which are painfully in short supply among many folks nowadays (even myself sometimes). If I read your post correctly, then it's primarily up to the mobo ("...and how they're spread there is up to the mainboard design....")

"In short, you don't have a point, and you didn't have one for about six years. Direct your anger into some reading up ..." - I wasn't trying to make a point, just ask a question: where do the IRQ assignments come from? Evidently they originate from the mobo.

At any rate, thanx for the info
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
I confirm again ... now that we're using native APIC connections, it's all up to mainboard design for legacy PCI and chipset-integrated stuff. For PCIE devices, their line interrupt emulation is serviced by the chipset, so it's not even something the mainboard designer can influence. Because the PCI system has a number of its own interrupt line inputs, you'll never see a PCI(-ish) design use one of the IRQ lines 0-15. This may look a bit silly on boards that don't have many legacy ISA(-ish) devices left, but that's how it is today.

On top of that, most chipsets don't bother trying to spread legacy interrupts from PCIE evenly, for this mechanism is just a crutch for old operating systems. Message signalled interrupts (straight from device to CPU) are a mandatory feature for PCIE devices, and modern OSes make good use of them.

I was being a bit miffed because so many people still think "reallocating IRQs" and moving cards to other slots magically solves the actual problem they have - which is sh*t drivers that still can't get the basic rules of PCI correct, 15 years down the tube. Unfortunately the majority of these people appear to work in end user support ... just look at how many troubleshooting FAQs still recommend that bull. Waste of time, and if anything, it screws users' systems up more.

So have I been suspecting correctly, your machine doesn't actually have a problem? All sorted here then? By the way, three device instances sitting on the same line IRQ adds roughly 2.5µs of interrupt entry latency (worst case) ... and at the low interrupt rate of graphics and sound cards (less than 100 per second) you wouldn't ever notice.
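The back-of-the-envelope claim above is easy to verify. Taking the two figures from the post - roughly 2.5 µs of worst-case extra entry latency per interrupt, and an interrupt rate under 100 per second for sound and graphics cards - the total cost of the sharing is a vanishing fraction of CPU time:

```python
# Worst-case cost of three devices sharing one IRQ line, using the
# figures quoted in the post above: ~2.5 us extra entry latency per
# interrupt, at fewer than 100 interrupts per second.

extra_latency_s = 2.5e-6      # worst-case added entry latency per interrupt
irq_rate_hz = 100             # generous rate for sound/graphics line IRQs

wasted_per_second = extra_latency_s * irq_rate_hz   # seconds lost per second
cpu_fraction = wasted_per_second / 1.0

print(f"time lost per second: {wasted_per_second * 1e6:.0f} us")  # 250 us
print(f"CPU fraction: {cpu_fraction:.4%}")                        # 0.0250%
```

A fortieth of a percent of one CPU, in the absolute worst case - which is why the sharing is imperceptible in practice.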

cheers,
Peter
 

mrblotto

Golden Member
Jul 7, 2007
1,639
117
106
Unfortunately, my system is having a problem: the infamous 'Screech of Death'. You're right - so many times the suggestion is to 'move the card to another slot, therefore to another IRQ'. It just seems too logical. But, alas, I cannot do that, as the only accessible PCI slot is right between the 2 vid cards. I would try adjusting the latency on the sound card, but Mr. Vista x64 doesn't seem to like the tool.
But if it boils down to the mobo calling the shots, I doubt anything would be able to remedy this. Just bad luck on my part I spose. For now I've just taken out 1 of the cards, and am mulling over purchasing a different brand board.

Have a good one!
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
The only thing the "move to other slot" manoeuvre gets you is a fresh clean driver installation (because XP remembers stuff by location). So it does /something/, just not what people like to read into it.

Now you can't do that (and latency adjustment is just another flavor of snake oil), so all you can do - and what everybody should in fact be doing - is get a driver update for the offending piece of hardware, and if that doesn't help, return it to the shop. Better yet, return it straight away. This is a level of product quality nobody would accept if it were anything but a computer item.

One certain sound card maker is particularly notorious for this kind of halfbakery. Cards designed to rely on getting the last bit of bus bandwidth margin and top priority at everything, and drivers of unspeakable initial quality.
Yes Creative, I'm looking at you. Now if they'd stop making a neat set of excuses for every generation of card they bring out, and start making stuff that actually complies to standards and works wherever you plug it, fine. (Other sound chip makers do manage ...)

And no, getting a different mainboard will not get you anywhere. Get a different sound card.
 

mrblotto

Golden Member
Jul 7, 2007
1,639
117
106
Good info to know, Peter. I didn't realize that moving to another slot wouldn't affect much. I still think I'm off to get a different mainboard; after trying 3 different brands of sound card, the problem still remains. I did heed your advice and return the last one I purchased. The previous 2 are past return status. Meh.....time for a change anyhow! I must agree, CL has been sorely lacking in driver development as of late.

Cya on the flipside........
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Peter
This is a level of product quality nobody would accept if it were anything but a computer item.
You've described anything and everything in the PC industry yet only make a distinction for Creative products? That's hilarious. The reality of it is the peripheral has no control over "the hand it's dealt" when it comes to IRQs and system resources, and if there's a conflict, it existed before the driver was even loaded. You might actually have a point if Creative's products suffered from systematic failures on all OSes and platforms, but the reality of it is they can and do work flawlessly from system to system and even on the same platform with different hardware and software configurations.

Personally I think the opposite: that mainboard, chipset, and BIOS support on the PC is by far the weakest link in the industry. Poor quality control, rushed products, short product and support life cycles, cheap components used, etc. As cryptic as it seems, there is some truth to the "snake oil" of plugging components into different slots on a mainboard (DIMM slots are a good historical example of this...."don't use the [insert color] DIMM slots!"). Less is always better, as there's less chance of running into a half-baked, unsupported feature, a conflict with another device, or worse yet, a 15-year-old legacy device placeholder that you don't and would never use.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: mrblotto
Good info to know, Peter. I didn't realize that moving to another slot wouldn't affect much. I still think I'm off to get a different mainboard; after trying 3 different brands of sound card, the problem still remains. I did heed your advice and return the last one I purchased. The previous 2 are past return status. Meh.....time for a change anyhow! I must agree, CL has been sorely lacking in driver development as of late.

Cya on the flipside........

It's certainly a trade-off worth considering, just like anything else in the PC industry. Sure, Creative can run into some show-stopping bugs in some configurations just like any other piece of hardware or software; one just has to ask oneself if it's worth it or not. As a heavy gamer I do find it worth it, and when I have problems I pull the Creative card or the offending part until it's fixed. Most recently, I pulled the X-Fi to accommodate 4GB of RAM on my current board. The X-Fi worked flawlessly in both XP and Vista 64 with 2GB of RAM; 4GB resulted in problems with the board until a BIOS release fixed the problem but subsequently broke the X-Fi. For me, the improvement in gaming and overall system performance from 4GB outweighed the significant degradation in sound from onboard.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Originally posted by: chizow
Originally posted by: Peter
This is a level of product quality nobody would accept if it were anything but a computer item.
You've described anything and everything in the PC industry yet only make a distinction for Creative products? That's hilarious. The reality of it is the peripheral has no control over "the hand it's dealt" when it comes to IRQs and system resources, and if there's a conflict, it existed before the driver was even loaded. You might actually have a point if Creative's products suffered from systematic failures on all OSes and platforms, but the reality of it is they can and do work flawlessly from system to system and even on the same platform with different hardware and software configurations.

And finally someone throws in the word "conflict". Sharing an IRQ is not a conflict, and has never been a conflict ever since PCI was first invented. IRQ sharing capability has always and ever been mandatory here.
It's been a long time since, 15 years ... want to take a guess at which company is amongst the very very few remaining whose support workers still insist you fsck about to give their card an exclusive IRQ? Hint: Starts with a C.

You know, you've sort of hit the point with Creative cards anyhow. They're not designed to work in anyone's and everyone's PCI slots. They need top notch response speed from the chipset, else they don't ever work right.

From the symptoms so many people have, the X-Fi suffers from two problems. One is hardware design, the thing appears to have way too little internal buffer space, and thus it tends to suffer data starvation when it doesn't have top priority all areas. (And seeing how the PCI bus these days is only a legacy appendix on the far end of the food chain, this isn't going to happen.)

The other one is totally pathetic bugs in their drivers, like this 4GB failure. So your hardware device can't fetch data from "high" RAM addresses, for it cannot do 64-bit addressing in hardware? Then the driver for it better allocate the audio buffers in 32-bit space only. Just like anyone else with a 32-bit-addressing device does. Duh.
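The rule Peter is invoking - a device that can only drive 32 address bits must never be handed a buffer above the 4 GB line - boils down to a simple range check at allocation time. A schematic model follows; the function name is invented for illustration, and real drivers express this constraint through their OS's DMA API (e.g. declaring a 32-bit DMA mask and letting the allocator honour it):

```python
# Schematic of the 32-bit DMA addressing rule: a device with only 32
# address lines cannot reach physical memory at or above 4 GB, so any
# buffer handed to it must lie entirely below that boundary.
# (Function name invented for illustration; real drivers set a DMA mask.)

DMA_32BIT_LIMIT = 1 << 32      # the 4 GB boundary

def reachable_by_32bit_device(phys_addr, size):
    """True if the whole buffer lies below the 4 GB line."""
    return phys_addr + size <= DMA_32BIT_LIMIT

# A buffer at 3.5 GB is fine for such a device...
print(reachable_by_32bit_device(0xE000_0000, 0x10000))    # True
# ...one landing at 5 GB (possible on a machine with 4+ GB of RAM) is not:
print(reachable_by_32bit_device(0x1_4000_0000, 0x10000))  # False
```

With less than 4 GB installed, every allocation trivially passes the check, which is consistent with the "works with 2GB, breaks with 4GB" symptom described in this thread.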
 

zig3695

Golden Member
Feb 15, 2007
1,240
0
0
hmmm. peter you seem knowledgeable. thank you for backing up my 10-year rebellion on creative products.. LAME company!
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Peter
Originally posted by: chizow
Originally posted by: Peter
This is a level of product quality nobody would accept if it were anything but a computer item.
You've described anything and everything in the PC industry yet only make a distinction for Creative products? That's hilarious. The reality of it is the peripheral has no control over "the hand it's dealt" when it comes to IRQs and system resources, and if there's a conflict, it existed before the driver was even loaded. You might actually have a point if Creative's products suffered from systematic failures on all OSes and platforms, but the reality of it is they can and do work flawlessly from system to system and even on the same platform with different hardware and software configurations.

And finally someone throws in the word "conflict". Sharing an IRQ is not a conflict, and has never been a conflict ever since PCI was first invented. IRQ sharing capability has always and ever been mandatory here.
It's been a long time since, 15 years ... want to take a guess at which company is amongst the very very few remaining whose support workers still insist you fsck about to give their card an exclusive IRQ? Hint: Starts with a C.
That's not true at all. If you remember, VGA adapters once used PCI slots, and they were as problematic and demanding as a Creative card ever was, if not more so. TV tuner adapters, SCSI/RAID add-in cards, various NICs and a few other highly demanding peripherals were also known to cause IRQ problems as well.

The reasons behind IRQ sharing are obvious, as they're forced to conform to a poorly thought out and antiquated design based on the flawed premise that all components and parts are equal and should be treated as such. Only in recent years have we seen divergence for the most demanding and necessary components with AGP and now PCI-E.

It is a conflict however if your chipset/BIOS is too stupid/oblivious to recognize the differences between more/less demanding components and decides to throw all of the most demanding peripherals on the same interrupt, causing problems that could otherwise be avoided by simply placing them on different and available interrupts.

You know, you've sort of hit the point with Creative cards anyhow. They're not designed to work in anyone's and everyone's PCI slots. They need top notch response speed from the chipset, else they don't ever work right.
I don't disagree with that; it's a niche product, and given the competition and forces in the industry working against them, they're expecting concessions that simply aren't going to be made. The same could be said for a 3D accelerator, except you need video for your PC to function, but you don't need sound.

As for needing top notch response, I don't know if I'd go that far. From Creative's white papers on the topic, the biggest problem observed was on NV's chipsets because they weren't giving Creative's parts enough priority on the PCI bus. Given the SB DSP's need to encode "on-the-fly", having to wait too long or not being able to control the bus long enough could lead to problems. It's not the only problem I've had or seen with NV chipsets, so I have no reason to doubt what they're saying.

From the symptoms so many people have, the X-Fi suffers from two problems. One is hardware design, the thing appears to have way too little internal buffer space, and thus it tends to suffer data starvation when it doesn't have top priority all areas. (And seeing how the PCI bus these days is only a legacy appendix on the far end of the food chain, this isn't going to happen.)
Again, see above, but I highly doubt internal buffers are the problem. X-Fis have 2MB onboard, with some going as high as 64MB with X-RAM. Personally I think the X-RAM itself can be a problem.

The other one is totally pathetic bugs in their drivers, like this 4GB failure. So your hardware device can't fetch data from "high" RAM addresses, for it cannot do 64-bit addressing in hardware? Then the driver for it better allocate the audio buffers in 32-bit space only. Just like anyone else with a 32-bit-addressing device does. Duh.
The driver did allocate audio buffers in the 32-bit range ofc, but when moving to 4GB+ and 64-bit addressing they just got plowed over by the OS, causing shared address space and conflicts. The updated driver probably got the okey-doke from MS and moved the buffers out of contested ranges, which is fine until MS plows them over again. This isn't anything new though, as there have been multiple reports of one of MS' 2GB+ hotfixes breaking working X-Fi drivers. The way I see it, MS doesn't like Creative and doesn't see them as a necessity in the PC landscape, and as such isn't going to respect "their space."

The other problems with the X-Fi aren't even on the driver/OS level, they're with the BIOS/ACPI and have specific BIOS fixes for them. Asus P5N-E is a board with a specific fix for the X-Fi, although I don't know the details about it. Probably another 4GB compatibility issue though since my X-Fi worked fine on that board with 2GB and XP.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
"Conflict", your favorite word? It's getting annoying. Are you Creative's chief apologist?

VGA cards, TV cards, sound cards, all those are the LEAST demanding things when it comes to interrupt load - less than 100 per second, ridiculous compared to true high bandwidth components. And yes, the PCI specification MANDATED shared-IRQ support "even" for these, from day 1 of PCI's existence.

And yes, there are plenty of vendors who get that perfectly right and always have. Funnily enough by your world view, it's been exactly the people behind the true high bandwidth devices - SCSI, server class network, multichannel video recorders, etc. pp. Why? Because these people always had customers who'd never accept even a fraction of the hassle Creative is putting on their customers still today.

The reason behind IRQ sharing and support for it being mandatory is quite simple: Even back in 1992 when these things were conceived, it was already very clear that regardless of the number of interrupt input lines, you won't ever be able to give everything its own unique vector as long as interrupts are line driven. ISA, and its exclusive-IRQ requirement, had already proven to be way too restrictive to people. (And this is where the term "IRQ conflict" came from and where its realm ends - ISA slots.) Message signalled interrupts solve that once and for all, btw. But oh evil, Vista and Linux only, XP no can do. Too old.

And there's only one single reason for something "causing IRQ problems": Sh*t drivers. Not bandwidth requirements, no other pathetic excuses or passing the blame on evil chipset vendors that won't give our lovely chip enough bus time, blah blah blah. Being a BIOS engineer by profession, I've had $15k business flights around half the world to debug "IRQ conflicts" on site with the customer only to prove this point, so please, no more apologetics. Given the market segment I work in, I've seen 17 gigabit NICs share a single interrupt - working. 1.3µs extra latency per handler, totally irrelevant even then. Buy yourself a license of an IRQ profiling tool (e.g. PCIScope, just software, inexpensive) to see for yourself.

The bottom line is simple: Design your product with realistic expectations, write drivers for it that don't attempt to rule the world, and it'll work. Simple, isn't it.

Besides, as elaborated above, the BIOS you are so keen to put the rest of the blame on doesn't even "assign" interrupts anymore these days, and for PCIE, not even the mainboard design has a choice. Wake up and smell the coffee. Reality is that Creative's cards are a huge headache in a large percentage of systems, while you'll have a truly hard time in finding just ONE other example that even comes close in being similarly problem ridden.
 

nerp

Diamond Member
Dec 31, 2005
9,865
105
106
I'm saving this thread to quote in the future. Thanks Peter.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Peter
"Conflict", your favorite word? It's getting annoying. Are you Creative's chief apologist?
Nope, but conflict is appropriate as that's what's happening, and I'm certainly not going to make asinine and irresponsible comments like "only products that begin with C from a certain company have IRQ conflicts". Fact remains, Creative products can and do work flawlessly in most systems and even within the same system with different hardware or software configurations. But we already know your position on the topic, which is laughable since you're a BIOS writer by profession: that the specifications (which change daily) are perfect and that everything should work perfectly in a perfect world. Except that's nothing like the end-user experience.

VGA cards, TV cards, sound cards, all those are the LEAST demanding things when it comes to interrupt load - less than 100 per second, ridiculous compared to true high bandwidth components. And yes, the PCI specification MANDATED shared-IRQ support "even" for these, from day 1 of PCI's existence.

And yes, there are plenty of vendors who get that perfectly right and always have. Funnily enough by your world view, it's been exactly the people behind the true high bandwidth devices - SCSI, server class network, multichannel video recorders, etc. pp. Why? Because these people always had customers who'd never accept even a fraction of the hassle Creative is putting on their customers still today.
Uh no, it's because the factors at work in the industry give enterprise-level parts priority. This is nothing new; both hardware and software vendors are going to bend over backwards to make sure enterprise-level and their own integrated system devices work correctly. They expect it, and they know they won't run into any such conflicts as they'll be given priority. Again, pointing back to my PCI VGA example, you absolutely would run into problems if you placed the VGA card in a slot that had multiple shared devices on the same IRQ. This used to be one of the single biggest troubleshooting queries on any tech board; ignoring it is just laughable. Instead of continuing to work within the constraints of a limited and poorly thought out bus system, they forced a change in the industry with AGP and now PCI-E.

And there's only one single reason for something "causing IRQ problems": Sh*t drivers. Not bandwidth requirements, no other pathetic excuses or passing the blame on evil chipset vendors that won't give our lovely chip enough bus time, blah blah blah.

Being a BIOS engineer by profession, I've had $15k business flights around half the world to debug "IRQ conflicts" on site with the customer only to prove this point, so please, no more apologetics. Given the market segment I work in, I've seen 17 gigabit NICs share a single interrupt - working. 1.3µs extra latency per handler, totally irrelevant even then. Buy yourself a license of an IRQ profiling tool (e.g. PCIScope, just software, inexpensive) to see for yourself.
Wait, so you've taken $15k business trips to troubleshoot a Creative product? Oh, didn't think so. Could've sworn you insisted earlier only Creative products had IRQ conflicts. Oh wait, you're tacitly acknowledging the FACT that other peripherals from various vendors are subject to the same problems on a hit-and-miss basis depending on hardware and software configuration. For a second I forgot we were talking about the PC industry lmao.

As for purchasing/licensing an IRQ profiling tool, there's no need to, as Creative has done just that and more. Feel free to take it up with them, but I have no reason to doubt what they're saying. It's not a problem I've encountered personally, but it was widespread and doesn't surprise me, as NV chipsets have been garbage for me since the NF2. They can't even get their own IDE and USB problems worked out (horrible SW IDE and USB 2.0 problems until MS wardened both), so who's to blame there? It starts with a flaky chipset ofc.

The bottom line is simple: Design your product with realistic expectations, write drivers for it that don't attempt to rule the world, and it'll work. Simple, isn't it.
But it's not that simple. If you could find or design a sound card that was as capable as an SB, you'd have a point. If you can't, and the X-Fi demands more of the PCI bus, it comes down to a simple decision: use or don't use. Stalling innovation for the sake of conforming to an antiquated legacy spec makes no sense. When confronted with similar decisions for more demanding mission-critical systems, they just design new specs and interfaces.

And yes, the next card from Creative will most likely be PCI-E; they've already come out with a cheaper version in PCI-E, but of course there are business decisions involved, as 1x PCI-E slots are still rather new and absent on all but the most recent boards on the market.

Besides, as elaborated above, the BIOS you are so keen to put the rest of the blame on doesn't even "assign" interrupts anymore these days, and for PCIE, not even the mainboard design has a choice. Wake up and smell the coffee. Reality is that Creative's cards are a huge headache in a large percentage of systems, while you'll have a truly hard time in finding just ONE other example that even comes close in being similarly problem ridden.
Sure they do. Even if the chipset (which I do blame in the case of NV chipsets) is controlling IRQ assignments and timings mainboard designers decide which pin-outs are serving which components and can change any timings via the BIOS. More than likely they're going to ensure their own integrated components receive priority over PCI expansion slots, which is what I've been seeing with assigned interrupts on recent boards. It'll be IDE/RAID, SATA/RAID, USB controllers and onboard sound sitting pretty on their own IRQs and then everything else you install afterwards sitting on 1-2 interrupts. Because that makes sense.....well I suppose it does from a business standpoint but certainly not from a practical standpoint.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Originally posted by: chizow
Originally posted by: Peter
"Conflict", your favorite word? It's getting annoying. Are you Creative's chief apologist?
Nope, but conflict is appropriate as that's what's happening, and I'm certainly not going to make asinine and irresponsible comments like "only products that begin with C from a certain company have IRQ conflicts". Fact remains, Creative products can and do work flawlessly in most systems and even within the same system with different hardware or software configurations. But we already know your position on the topic, which is laughable since you're a BIOS writer by profession: that the specifications (which change daily) are perfect and that everything should work perfectly in a perfect world. Except that's nothing like the end-user experience.
If you'd listened instead of desperately searching for mis-quotables, you'd have figured out that my main claim is that IRQ conflicts do not exist - except in sad excuses to distract customers from the real problem, halfbakery. And there, I said it again.

The PCI specification hasn't changed its original rules on interrupts, ever. www.pcisig.com - membership required. You're not one, and haven't read any of it? You are basing your claims on what, then?

VGA cards, TV cards, sound cards, all those are the LEAST demanding things when it comes to interrupt load - less than 100 per second, ridiculous compared to true high bandwidth components. And yes, the PCI specification MANDATED shared-IRQ support "even" for these, from day 1 of PCI's existence.


Again, pointing back to my PCI VGA example, you absolutely would run into problems if you placed the VGA card in a slot that had multiple shared devices on the same IRQ. This used to be one of the single biggest troubleshooting queries on any tech board, ignoring it is just laughable. Instead of continuing to work within the constraints of a limited and poorly thought out bus system they forced a change in the industry with AGP and now PCI-E.

AGP and PCIE didn't change any of the rules regarding interrupts, and shared interrupts weren't a problem there either. People thought they were, because support idiots kept (and still keep) droning on about it.

And there's only one single reason for something "causing IRQ problems": Sh*t drivers. Not bandwidth requirements, no other pathetic excuses or passing the blame on evil chipset vendors that won't give our lovely chip enough bus time, blah blah blah.

Being a BIOS engineer by profession, I've had $15k business flights around half the world to debug "IRQ conflicts" on site with the customer only to prove this point, so please, no more apologetics. Given the market segment I work in, I've seen 17 gigabit NICs share a single interrupt - working. 1.3µs extra latency per handler, totally irrelevant even then. Buy yourself a license of an IRQ profiling tool (e.g. PCIScope, just software, inexpensive) to see for yourself.
Wait, so you've taken $15k business trips to troubleshoot a Creative product? Oh didn't think so.

No ... and I didn't say so, fanboi. Read again what I /actually/ wrote, not what you want to read into it. 12 years in this business and a lot of FAE work come and gone, and I have yet to see a perceived "IRQ conflict" that, after an hour of debugging, doesn't boil down to a poorly written driver. The "IRQ conflict" myth is just that, a myth.

Could've sworn you insisted earlier only Creative products had IRQ conflicts. Oh wait, you're tacitly acknowledging the FACT that other peripherals from various vendors are subject to the same problems on a hit-and-miss basis depending on hardware and software configuration. For a second I forgot we were talking about the PC industry lmao.
No I didn't, and the record is there for everyone. One more time: "IRQ conflicts" do not exist, except in desperate apologetics like Creative's.

As for purchasing/licensing an IRQ profiling tool, there's no need to, as Creative has done just that and more. Feel free to take it up with them, but I have no reason to doubt what they're saying.

Funny, because what they're saying is, "We have observed peak holdoffs of up to 2 milliseconds in some cases. This is unusual chipset behavior that is beyond the ability of a hardware audio accelerator to compensate for in its internal buffering. The SoundBlaster X-Fi tolerance for these PCI holdoffs is approximately 120 microseconds peak holdoff, with a 1 microsecond average holdoff."

120 microseconds worth of bus buffer, 1 microsecond expected holdoff. 1 microsecond is a mere 33 clock cycles, so what do you think will happen if something (like an IDE controller) on the same bus transfers data? By the rules of PCI, devices are allowed to burst up to (roughly) 1000 clock cycles in one go - and in case of storage controllers for example, you definitely want them to, because throughput won't happen with short bursts.

So how on earth is /any/ realistic PCI setup going to fulfil Creative's "requirements"? Not. Going. To. Happen.
What is it then? Pathetic misjudgment of reality whilst designing the product. Sound familiar? Heard it on the previous page of this dispute, possibly?

The bottom line is simple: Design your product with realistic expectations, write drivers for it that don't attempt to rule the world, and it'll work. Simple, isn't it.
But it's not that simple. If you could find or design a sound card that was as capable as an SB, you'd have a point. If you can't, and the X-Fi demands more of the PCI bus, it comes down to a simple decision: use or don't use. Stalling innovation for the sake of conforming to an antiquated legacy spec makes no sense. When confronted with similar decisions for more demanding mission-critical systems, they just design new specs and interfaces.

You're proving my point yet again. They've tried to do something that cannot realistically be done on PCI.

And yes, the next card from Creative most likely will be PCI-E; they've already come out with a cheaper version in PCI-E, but of course there are business decisions involved, as x1 PCI-E slots are still rather new and absent on all but the most recent boards on the market.

They're not fighting for bus bandwidth on PCIE, but due to the packet traffic nature of PCIE, answer latencies from the chipset are inherently worse, easily an order of magnitude worse than on PCI. I think they're long done building the card, what's taking so long is typing up the excuses sheet.

Besides, as elaborated above, the BIOS you are so keen to put the rest of the blame on doesn't even "assign" interrupts anymore these days, and for PCIE, not even the mainboard design has a choice. Wake up and smell the coffee. Reality is that Creative's cards are a huge headache in a large percentage of systems, while you'll have a truly hard time in finding just ONE other example that even comes close in being similarly problem ridden.
Sure they do. Even if the chipset (which I do blame in the case of NV chipsets) is controlling IRQ assignments and timings, mainboard designers decide which pin-outs are serving which components, and can change timings via the BIOS.

You should stop trying to prove me wrong by saying exactly what I say. Chipsets make the rules, mainboard design has a couple of choices, and that's it. BIOS not involved in IRQ routing. Simple.

More than likely they're going to ensure their own integrated components receive priority over PCI expansion slots, which is what I've been seeing with assigned interrupts on recent boards. It'll be IDE/RAID, SATA/RAID, USB controllers and onboard sound sitting pretty on their own IRQs and then everything else you install afterwards sitting on 1-2 interrupts. Because that makes sense.....well I suppose it does from a business standpoint but certainly not from a practical standpoint.

With the PCI bus being an appendix on the southbridge, that's how it is (and I said that already ...). If you've started a PCI card design after 1999 or so, you should know this and deal with it.

Interrupt service takes time. The bus request/grant procedure takes time. Read data from RAM takes time to arrive, particularly when other things (like the CPU and graphics card) are serving themselves from the same RAM at many orders of magnitude higher traffic rates.
So there's quite a few large factors of non-predictability. If you want your add-on card design to work, you need to account for these. Creative have failed to achieve that, and not just once in their track record.
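To make "account for these" concrete, a rough buffer-sizing sketch (hypothetical stream parameters: 48 kHz, 16-bit stereo; the holdoff figures are the ones Creative quotes above):

```python
# How much FIFO a sound card needs to ride out a bus holdoff without underrunning.
# Hypothetical stream: 48 kHz sample rate, 16-bit (2-byte) samples, 2 channels.
SAMPLE_RATE_HZ = 48_000
BYTES_PER_FRAME = 2 * 2  # 16-bit stereo

def buffer_bytes_for_holdoff(holdoff_us):
    """Bytes of audio consumed during a holdoff of the given length in microseconds."""
    frames = SAMPLE_RATE_HZ * holdoff_us / 1_000_000
    return frames * BYTES_PER_FRAME

print(buffer_bytes_for_holdoff(120))    # the 120 us tolerance Creative states
print(buffer_bytes_for_holdoff(2000))   # the 2 ms worst case they observed
```

Surviving even the observed 2 ms worst case costs only a few hundred bytes of on-card buffer per stereo stream, which is the crux of the "design for reality" argument.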

Let's see how they do on PCIE.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Peter
If you'd listened instead of your desperate search of mis-quotables, you'd figured that my main claim is that IRQ conflicts do not exist, only in sad excuses to distract customers from the real problem, halfbakery. And there, I said it again.
You seem to be hung up on the use of "conflict"; if you have such a problem with it, why not take it up with Microsoft? When they put up a big yellow ! that says IRQ Conflict, that's what people are going to call it. It certainly sounds better than "halfbakery", lmao. But back to the original point: if there were no such thing as IRQ conflicts, why bother having different interrupts? If all PCI devices are supposed to play nicely together and the spec allows for it, why not just have one interrupt and let all the peripherals sort it out? Oh right, you'd run into IRQ conflicts. And there, I said it again.

The PCI specification hasn't changed its original rules on interrupts, ever. www.pcisig.com - membership required. You're not one, and haven't read any of it? You are basing your claims on what, then?
Right, and I never said otherwise: they're using the same half-baked, poorly-thought-out rules they implemented from day 1. As a result it's no surprise they're trying to bury it going forward, given its long history of conflicts and problems. They just can't do it fast enough, due to another lingering problem plaguing the industry: "legacy support".

AGP and PCIE didn't change any of the rules regarding interrupts, and shared interrupts weren't a problem there either. People thought they were, because support idiots kept (and still keep) droning on about it.
Sure they did: they were given interrupt priority. Back in the earliest days of AGP, putting a VGA card (or anything else, really) in PCI slot 1 would often mean sharing an interrupt with the AGP card, leading to problems. This gradually became less of a problem as the use of an additional PCI VGA card grew less common, AGP cards began using dual-slot coolers that made it impossible, and board makers changed the interrupt for that first slot.

No ... and I didn't say so, fanboi. Read again what I /actually/ wrote, not what you want to read into it. 12 years in this business and a lot of FAE work come and gone, I have yet to see a perceived "IRQ conflict" that after an hour of debugging doesn't boil down to a poorly written driver. The "IRQ conflict" myth is just that, a myth.
You completely missed the point, hater. My point was that singling out Creative as "particularly notorious for this kind of halfbakery" is irresponsible when you have clients paying more to service a non-Creative product than a lifetime's worth of Creative products. What part were you servicing? Ya, didn't think you'd say. I mean $15k for a single-instance of support doesn't deserve mention, but Creative does. And what was your recommendation in that case? Pull the product? Write a new driver? You probably just switched PCI slots to solve the conflict. $15k well spent! LMAO.

No I didn't, and the record is there for everyone. One more time: "IRQ conflicts" do not exist, except in desperate apologetics like Creative's.
Yes, the visual cues in Windows and the IRQL_NOT_LESS_OR_EQUAL BSOD are purely imagined.

Funny, because what they're saying is, "We have observed peak holdoffs of up to 2 milliseconds in some cases. This is unusual chipset behavior that is beyond the ability of a hardware audio accelerator to compensate for in its internal buffering. The SoundBlaster X-Fi tolerance for these PCI holdoffs is approximately 120 microseconds peak holdoff, with a 1 microsecond average holdoff."

120 microseconds worth of bus buffer, 1 microsecond expected holdoff. 1 microsecond is a mere 33 clock cycles, so what do you think will happen if something (like an IDE controller) on the same bus transfers data? Devices are allowed to burst up to (roughly) 1000 clock cycles in one go, so how on earth is /any/ realistic PCI setup going to fulfil Creative's "requirements"? Not. Going. To. Happen.
What is it then? Pathetic misjudgment of reality whilst designing the product. Sound familiar? Heard it on the previous page of this dispute, possibly?
Except. It. Does. Happen. On. Different. Chipsets. Again, unless it were an endemic problem with all chipsets and systems, it's clearly possible and a realistic expectation. If that's what's required to operate within normal parameters without crackling/popping, it's pretty simple: use the part or don't.

You're proving my point yet again. They've tried to do something that cannot realistically be done on PCI.
Seems to be working fine on my rig (and the majority of others as well). But some of us do actually use these products in reality and don't just quote tech specs in perfect scenarios in a perfect world.

They're not fighting for bus bandwidth on PCIE, but answer latencies from the chipset are inherently WORSE than on PCI. I think they're long done building the card, what's taking so long is typing up the excuses sheet.
Yep higher bus latencies were cited as the reason the X-Fi with 20k1 processor was not made available at launch and to-date. But at least they won't have to worry about fighting off 30 other peripherals on the same interrupt and PCI bus.

So? It's not like any single interrupt line had any priority over the others. In IOAPIC land, all interrupt inputs are created equal. There are also no "interrupt timings" to be adjusted. Interrupt service takes time. The bus request/grant procedure takes time. Both have large factors of non-predictability. If you want your add-on card design to work, you need to account for these. Creative have failed to achieve that, and not just once in their track record.
Swap IOAPIC with "Fantasy" and you'd be accurate. In reality, the number of devices sharing an IRQ does impact compatibility, whether it's due to software or hardware, and the "snake oil" of doing whatever is necessary to change IRQ assignments (switching slots, setting jumpers, BIOS options, disabling devices, etc.) does fix problems. It's certainly easy to say integrated components can share the PCI bus without issue when all known variables are accounted for. Expecting the same of Creative with unknown and constantly changing variables is unfair, yet they've managed to do it for the majority of systems anyway.
 

Old Hippie

Diamond Member
Oct 8, 2005
6,361
1
0
I learned a lot in this discussion, fellows. :thumbsup:

Doesn't look like you guys are ever gonna agree, but Peter's points about Creative's products sure answer a lotta questions! :thumbsup:
 
Nov 26, 2005
15,188
401
126
My Adaptec SCSI gets assigned the same IRQ as Onboard SATA Raid (IRQ 11)
The SCSI is my boot drive and I want to use the onboard SATA RAID for storage space, but I get the 'Not enough space to copy PCI Option ROM [00:1F:01] Checking NVRAM' error. Anyways, subscribing to this thread :)

Cheers
 

mrblotto

Golden Member
Jul 7, 2007
1,639
117
106
Yikes.......didn't mean to cause such a ruckus, guys. I, too, have learned a lot from this discussion. Perhaps only time will tell what the solution to my and others' problems turns out to be: newer/modded drivers, or BIOS updates/hardware replacements.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
chizow, digging an ever deeper hole for yourself, nice. The more you write, the more your complete lack of actual technical knowledge becomes obvious. Do keep on, you're entertaining.

AGP has no interrupt priority over PCI, and never had. Technically impossible, since AGP hooks to the same set of interrupt inputs as the rest of PCI does. In fact, you're confirming that (thereby contradicting yourself) in the next sentence, when you say AGP and PCI share interrupt lines. Duh.

IRQL_NOT_LESS_OR_EQUAL constitutes an interrupt conflict? No it doesn't. "But it says IRQL in it, it must be an IRQ conflict. I've heard soooo much about this on the internet." What is it then? It's the BSOD you get when a driver crashes while servicing an interrupt. Research it on Microsoft's developer websites (these are public). Poorly written drivers, remember? Duh.
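For the curious, the way line sharing is supposed to work can be shown with a toy model (illustrative Python, not real kernel code): the OS calls every handler registered on the line, and each handler checks its own device's status before doing anything.

```python
# Toy model of PCI shared-interrupt dispatch (illustrative only, not kernel code).
# On a shared line, the OS calls every registered handler; each must read its
# own device's status and bail out if the interrupt isn't its own.

class Device:
    def __init__(self, name):
        self.name = name
        self.pending = False   # stands in for a status-register bit in hardware
        self.serviced = 0

def handler(dev):
    """A well-behaved handler: claim the interrupt only if our device raised it."""
    if not dev.pending:        # not our interrupt: say so and touch nothing
        return False
    dev.pending = False
    dev.serviced += 1
    return True

def dispatch(line, devices):
    """Walk the chain of handlers sharing the line until one claims the interrupt."""
    for dev in devices:
        if handler(dev):
            return dev.name
    return None  # nobody claimed it: a spurious interrupt

sound = Device("sound")
nic = Device("nic")
nic.pending = True
print(dispatch(16, [sound, nic]))   # prints 'nic'; the sound device is untouched
```

A driver that claims interrupts it didn't raise, or that crashes inside its own handler, produces exactly the symptoms people file under "IRQ conflict".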

No, I haven't "just switched PCI slots" in that cited support case. I already said in my original posting that a driver got updated as the result, end of. Since IRQ conflicts don't actually exist, there was none to solve here either. Just a poorly written driver. $16m business case, btw.

Etc. etc. etc.

It's exactly dim, self-taught pseudo-experts like you who keep this mythology alive - simply for lack of proper troubleshooting skills, replaced by inflating themselves in endless quote wars, wherein, when cornered, you just keep on mis-quoting to hide your own lack of substance, like you've just done again.

Top notch trolling there. Luckily, the less rabid people here have, without exception, understood.

I have better things to do. Keep hugging your X-Fi. Bye.
 

nerp

Diamond Member
Dec 31, 2005
9,865
105
106
Originally posted by: chizow
But its not that simple. If you could find or design a sound card that was as capable as a SB you'd have a point.

Many people have designed such cards. Take a look at M-Audio, Turtle Beach, Creamware, Echo; heck, even Chaintech made a decent Envy24-based card a while ago (blows Creative's fakery out of the water).

The fact that I can power a complete digital recording studio with 4 M-Audio cards sharing the same driver, with massive cross-routing, ins and outs and S/PDIFs going nuts, controlling MIDI, all perfectly in sync, all in perfect harmony, no problems, no clicks, pops, static, lag, delay, or latency, means that either Creative can't make a card worth a damn or someone is confusing a consumer card for listening to MP3s with professional recording solutions. Nobody in their right mind with professional requirements would use Creative.
 

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
Originally posted by: nerp
Originally posted by: chizow
But its not that simple. If you could find or design a sound card that was as capable as a SB you'd have a point.

Many people have designed such cards. Take a look at M-Audio, Turtle Beach, Creamware, Echo; heck, even Chaintech made a decent Envy24-based card a while ago (blows Creative's fakery out of the water).

The fact that I can power a complete digital recording studio with 4 M-Audio cards sharing the same driver, with massive cross-routing, ins and outs and S/PDIFs going nuts, controlling MIDI, all perfectly in sync, all in perfect harmony, no problems, no clicks, pops, static, lag, delay, or latency, means that either Creative can't make a card worth a damn or someone is confusing a consumer card for listening to MP3s with professional recording solutions. Nobody in their right mind with professional requirements would use Creative.

Yet I've never seen a motherboard box that said, "Creative sound cards might not work with this motherboard." Or did I miss that somewhere?
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Peter
chizow, digging an ever deeper hole for yourself, nice. The more you write, the more your complete lack of actual technical knowledge becomes obvious. Do keep on, you're entertaining.

AGP has no interrupt priority over PCI, and never had. Technically impossible, since AGP hooks to the same set of interrupt inputs as the rest of PCI does. In fact, you're confirming that (thereby contradicting yourself) in the next sentence, when you say AGP and PCI share interrupt lines. Duh.

IRQL_NOT_LESS_OR_EQUAL constitutes an interrupt conflict? No it doesn't. "But it says IRQL in it, it must be an IRQ conflict. I've heard soooo much about this on the internet." What is it then? It's the BSOD you get when a driver crashes while servicing an interrupt. Research it on Microsoft's developer websites (these are public). Poorly written drivers, remember? Duh.

No, I haven't "just switched PCI slots" in that cited support case. I already said in my original posting that a driver got updated as the result, end of. Since IRQ conflicts don't actually exist, there was none to solve here either. Just a poorly written driver. $16m business case, btw.

Etc. etc. etc.

It's exactly dim, self-taught pseudo-experts like you who keep this mythology alive - simply for lack of proper troubleshooting skills, replaced by inflating themselves in endless quote wars, wherein, when cornered, you just keep on mis-quoting to hide your own lack of substance, like you've just done again.

Top notch trolling there. Luckily, the less rabid people here have, without exception, understood.

I have better things to do. Keep hugging your X-Fi. Bye.

Lol ya, as opposed to bible-thumping, spec-quoting experts with no practical knowledge or understanding. It's really simple: do the X-Fi's demands on the PCI bus fall within spec or not? Now, you've already made a weak attempt at showing its demands are unrealistic (by throwing out meaningless bandwidth numbers when latency is the issue), based on what? Your own technical experience and limited practical experience? But what does the PCI-SIG bible say? If it falls within working parameters and a conflict arises, then the spec is broken. Conflicts of this nature have been part of the end-user experience for as long as PCI has existed; only your perfect-world scenario and PCI-spec-referencing revisionist history paint a different picture.

But of course you didn't answer the real question: if the PCI spec and devices/drivers are designed to share IRQs without issue ad nauseam, then why bother having different interrupts? Which brings me back to the point about the AGP bus. I never said it had any signaling priority; it gets priority by design, as other devices will not share an interrupt with it, and for the same reason there are conflicts to begin with. PCI devices, whether designed to do so or not, can run into CONFLICTS when forced to share the same interrupt. It's that simple. But according to you, mainboard designers have no control over IRQs, which again is another half-truth you insist on when that's clearly not the case.

But once again, thanks for proving that a high degree of technical knowledge applied in a vacuum tube yields no practical benefit. People like you are so out of touch with the end-user experience and reality that it's no surprise IHVs and ISVs sit on their asses pointing the finger at each other when there's a conflict, instead of fixing the problem. But keep clinging to your PCI-SIG bible and thinking it's perfect in your perfect world; you've once again confirmed mainboard/chipset/BIOS makers are the weakest link in the PC industry.