That's more or less what I was thinking when I asked about the registry. I'd like to know if there's any reason for the registry other than hiding the complexity, which also makes it harder to fix problems.
Well, the registry dates back to NT 3.1, so you've got to think about what was going on in the 1990-1993 timeframe. People screaming about open standards weren't nearly as prevalent, and interoperability between systems was an afterthought at best. So it makes sense that when someone said "A central configuration database would be a good idea," the developers were left to go off and create their own completely new, closed database. As long as the API was available to 3rd party devs, they were happy for the most part. If they were creating it fresh today, one would hope they'd start with something more open, like XML.
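That API is still how everyone gets at it, for what it's worth. Here's a minimal sketch of reading a value through the Win32 registry functions; the key path and value name are made up for illustration:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    char buf[256] = {0};
    DWORD size = sizeof(buf) - 1;
    DWORD type = 0;

    /* Hypothetical key/value names, just to show the shape of the API. */
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\ExampleVendor\\ExampleApp",
                      0, KEY_READ, &hKey) != ERROR_SUCCESS) {
        printf("key not found\n");
        return 1;
    }

    if (RegQueryValueExA(hKey, "InstallDir", NULL, &type,
                         (LPBYTE)buf, &size) == ERROR_SUCCESS
        && type == REG_SZ) {
        printf("InstallDir = %s\n", buf);
    }

    RegCloseKey(hKey);
    return 0;
}
```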
I also think the "complexity" of a PC is being overstated.
Generally, yes. The combinations of hardware are almost infinite, but as long as devices follow the specs it shouldn't really matter. However, lots of problems stem from hardware that doesn't follow the specs properly and from drivers that just suck. ACPI is a great example of this: lots of manufacturers only tested their hardware against Windows, and Windows didn't have complete ACPI support until Vista. That let manufacturers ship partial implementations that made ugly assumptions based on how the Windows ACPI drivers behaved. Now that Linux devs have been forcing the issue there's an ACPI test suite released on a Linux LiveCD, and with Vista/W7 having more complete ACPI support this is getting better.
Just take a look at some of the Linux device drivers; there are tons of workarounds for hardware bugs. Hell, hardware bugs are the primary reason "chipset drivers" even exist for Windows.
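To make that concrete, quirk handling usually boils down to a table of known-broken device IDs and a fixup that gets applied when one is matched at probe time. This is just an illustrative sketch in plain C; the IDs and fixups are invented, not lifted from any real driver:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical PCI vendor/device IDs paired with a fixup routine. */
struct quirk {
    uint16_t vendor;
    uint16_t device;
    void (*fixup)(void);
};

static void disable_msi(void)     { printf("quirk: MSI broken, falling back to legacy IRQs\n"); }
static void limit_dma_32bit(void) { printf("quirk: 64-bit DMA corrupts data, capping at 32-bit\n"); }

static const struct quirk quirks[] = {
    { 0x1234, 0x0001, disable_msi },      /* made-up chipset that advertises MSI but never delivers it */
    { 0x1234, 0x00a2, limit_dma_32bit },  /* made-up controller that mangles 64-bit DMA addresses */
};

/* At probe time the driver checks whether the device it found needs a workaround. */
static void apply_quirks(uint16_t vendor, uint16_t device)
{
    for (size_t i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++) {
        if (quirks[i].vendor == vendor && quirks[i].device == device)
            quirks[i].fixup();
    }
}

int main(void)
{
    apply_quirks(0x1234, 0x0001);  /* pretend we just probed the broken chipset */
    return 0;
}
```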
My understanding is that the work of creating the operating system is divided up, each group creates its own piece, and then it's all glommed together into a giant program.
It depends on how far you're stretching that. For the kernel itself that's pretty much true, because all of the drivers get loaded into the kernel directly and essentially become part of it.
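On the Linux side you can see that directly: a driver is just a module that gets linked into the running kernel when it loads. A bare-bones sketch, assuming the usual kernel build headers are available (the names here are made up):

```c
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>

/* Runs when the module is loaded; from here on this code is part of the kernel. */
static int __init example_init(void)
{
    pr_info("example driver loaded into the kernel\n");
    return 0;
}

/* Runs when the module is unloaded. */
static void __exit example_exit(void)
{
    pr_info("example driver unloaded\n");
}

module_init(example_init);
module_exit(example_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustrative skeleton of a loadable kernel module");
```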
For userland stuff, each application is a separate binary and runs as a normal process. So Explorer, IE, Paint, etc. are all unique entities, but they all use facilities presented by the kernel and by libraries such as MFC, .NET, etc. As long as the exported APIs are maintained and work as described there shouldn't be any problems. But once again, those APIs are created by people, so there are going to be bugs and things that aren't designed as well as they could've been.
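As a trivial example of what "using facilities presented by the system" means in practice, here's a minimal sketch of a userland program calling an exported Win32 API. As long as user32 keeps exporting MessageBoxA with the documented behavior, a binary like this keeps working across Windows versions:

```c
#include <windows.h>

int main(void)
{
    /* MessageBoxA is exported by user32.dll; the program just calls the
     * documented API and lets the OS do the rest. */
    MessageBoxA(NULL, "Hello from a plain userland process", "Demo", MB_OK);
    return 0;
}
```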
I assume there's a group that integrates the stuff together, but that group probably gets rushed every time and has to throw stuff together any way it can, rather than having time to come up with a more sensible, easier-to-work-with concept.
From that perspective, Windows and OS X have the highest levels of integration with the core system because their vendors develop the whole thing, from Explorer/Finder down to the kernel. Linux is different in that GNOME, KDE, XFCE, etc. are developed separately from the kernel, the C library, etc., and then distributions like Debian, Ubuntu, and Fedora package them all up and do QA on the system as a whole.
As with every program, how well it works is going to depend on the developers' skills and priorities. But IMO the more separated systems like Linux tend to work better, because the separation forces the people working on exported APIs to be conservative with changes and to think harder about the design; they know they'll be stuck with that API for a while once it starts getting used.
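One common way that conservatism shows up is hiding implementation details behind an opaque handle, so the internals can change later without breaking existing callers. A made-up example of the pattern (none of these names come from a real library):

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* The public header would expose only the opaque type and the functions;
 * callers never see the struct layout, so it can change freely later. */
typedef struct session session_t;

session_t  *session_open(const char *user);
const char *session_user(const session_t *s);
void        session_close(session_t *s);

/* Private definition: fields can be added or reordered in a future
 * version without breaking any existing caller. */
struct session {
    char user[64];
    int  refcount;
};

session_t *session_open(const char *user)
{
    session_t *s = calloc(1, sizeof(*s));
    if (!s)
        return NULL;
    strncpy(s->user, user, sizeof(s->user) - 1);
    s->refcount = 1;
    return s;
}

const char *session_user(const session_t *s) { return s->user; }

void session_close(session_t *s) { free(s); }

int main(void)
{
    session_t *s = session_open("alice");
    printf("session for %s\n", session_user(s));
    session_close(s);
    return 0;
}
```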