Originally posted by: Peter
That extra capacitor is not for extra _signal_ quality, it's about smooth _power_ to that particular PCI slot. If you've seen the incoming ripple from the average cheapo PSU, and if you've also seen how many cheap-to-midrange TV and sound cards lack proper bulk decoupling themselves, you might actually see an improvement in audio or video noise if you plug that stuff there.
Absolutely correct.

Someone might want to tell OCWorkbench that, though.
"A OS-con quality capacitor sits near right next to the yellow PCI slot. With the help of this capacitor, the slot can provide ultra signal quality for outstanding autio and video card performance"
I guess what they meant to say is "ultra low-noise power quality". For analog-input devices, voltage drops during heavy power loads can disturb things like the reference voltages used by the comparators in A/D converters, and thus affect the signals the device measures.
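Just to put some rough numbers on that (back-of-envelope figures I'm making up for illustration, not anything measured on an actual board): if a card pulls a 1 A transient for 1 us off a rail with only 100 uF of nearby bulk capacitance, the droop is roughly

dV = I * dt / C = 1 A * 1 us / 100 uF = 10 mV

On a 10-bit A/D converter with a 2 V reference, one LSB is about 2 mV, so a 10 mV wiggle on the reference is already several counts of error. That's exactly the sort of thing extra bulk decoupling right at the slot helps soak up.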
I mean, it is kind of marketing to tout that feature on a single PCI slot - on a quality board, all of the PCI slots should have that level of power decoupling, so the problem never becomes an issue no matter which slot you plug your cards into. But the real world isn't always that kind...

I'm a bit miffed at my MSI KT4V-L board for precisely that reason: when I got it, there were nearly zero extra caps between the PCI slots, even though the board layout has all of them clearly marked out, and the pics of the KT4 Ultra model boards that I saw online before my purchase showed those spots properly filled with components. It's kind of egregious because this board has six PCI slots, which implies it was designed to be filled to the brim with cards. So MSI is just as "bad" in this regard as ECS - all of the major low-cost/high-volume mobo mfgs tend to have this sort of issue, unfortunately. The number/size of extra decoupling/filtering caps is generally the first thing to go when cost-reducing a mass-market board.
Originally posted by: Peter
In general, more capacitors doesn't mean better (voltage regulation) quality.
True, but it allows for a greater level of transient power loading before the power to other components starts to degrade, doesn't it? IOW, as long as I don't heavily load the CPU's VRMs, the DRAM array, and the PCI slots all at once, things will be fine, but once I start loading them all down, if the board doesn't have enough power-stability "headroom", everything will start to flake out. Kind of like trying to upgrade a name-brand OEM PC - one whose PSU is rated just high enough to cover the pre-installed factory components - with a high-power item like a 6800 Ultra or something. But in general, the system should run fine using a
I'm guessing that's why the ECS boards offer only limited overclocking features compared to other "enthusiast" boards - things like big voltage adjustments for vcore, vagp, vdimm, vchipset, etc. - because the increased power loading would push the board beyond what it was designed to support, namely nominal voltages and nominally-rated components.
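(Rough rule of thumb, not specific to any of these boards: a CMOS chip's dynamic power scales roughly with voltage squared at a given clock, so even a modest vcore bump from, say, 1.50 V to 1.65 V works out to (1.65/1.50)^2 = ~1.21, i.e. about 20% more power through the same VRM and decoupling before you've raised the clock at all. The numbers here are just made-up examples and this ignores leakage and temperature, but it shows why generous voltage adjustments imply beefier power circuitry than a nominal-voltage design needs.)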
Originally posted by: Peter
Once you've put enough of the right capacitor mix on, extra ones will not make it better anymore, just drive cost up. Reference boards are designed before the actual chips, cost is not an issue, and the properties of the yet-to-be-made chips aren't known at that point.
I would be surprised if they were designed before actually even receiving test chipset silicon from the mfgs - they have to build and do real-world testing/characterization on those test boards before putting them into mass production, in order to get the bugs out, don't they?
I agree that the reference design is engineered more-or-less independently of the mfg-cost side of the equation. That's sort of the yin and yang of the mobo business, isn't it? Balancing the designs coming out of engineering against the cost realities of mass production, and getting a board to market, on time, that is both stable and cheap enough for market acceptance. Given those constraints, I'd have to say that ECS actually does pretty well there.
(Forgive me if that seemed a bit presumptuous - I know you work in that biz whereas I don't - but I do know people on the engineering staff in other, similar industries, and they're always telling me how their designs get "compromised" by the mfg-cost guys, more or less. I assume the mobo biz is pretty similar, although cost-reduction is even more critical there than in, say, the defense industry, where 10K units is considered a "massive" production run in many cases.)
Originally posted by: Peter
They're - amongst many other purposes - made to figure out how much power and decoupling the final chip actually requires. On CPU regulation in particular, when you're using faster regulators and MOSFETS, you need a lot fewer of the big capacitors. Design choice.
That makes some sense. At least they're using a three-phase design for the CPU; most MSI boards seemingly use two-phase, which, as I understand it, is a bit harder on the components and requires bigger caps than a three-phase design. (And then Gigabyte has their over-the-top six-phase/dual three-phase CPU VRM option on some of their boards. That seems like flashy overkill to me, although it might not be once dual-core Prescott chips come out - but I doubt those will ever be released for S478.)
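(My simplified understanding of why, with illustrative numbers rather than anything from a real datasheet: with N interleaved phases, each phase only carries roughly 1/N of the CPU current, and the combined output ripple shows up at N times the per-phase switching frequency. So a 60 A load is ~30 A per phase on a two-phase design but only ~20 A per phase on a three-phase one, and if each phase switches at, say, 250 kHz, the output caps see ripple at 500 kHz vs. 750 kHz. Smaller, faster ripple means the bulk caps can be smaller, while fewer phases means hotter MOSFETs and bigger caps. That ignores ESR, transient response, and all the real design details, of course.)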
Originally posted by: Peter
ECS "Top Hat" is a separate BIOS chip on top of an inverted socket. You plug that onto the BIOS chip on the board when you screwed that one's contents up. This is all different to Gigabyte's approach that places two chips onto the board and uses a software (!) switch to use the spare if the primary one is b0rked.
I assumed that Gigabyte used a hardware jumper to switch the #CE signals on the chips or something. If they're really using software... that's bizarre. They could have just used a single chip with twice the capacity instead. (Maybe a jumper to swap the high address bit?)
I wonder how all of these sorts of "dual BIOS" solutions will pan out once Intel's EFI firmware gets into full swing. Don't those essentially require you to "upgrade the drivers in your BIOS" for new hardware? Wouldn't that effectively make dual-BIOS solutions more hassle than they're worth? I'd take a socketed BIOS any day, really.